1 If a service exists, relocate it to a healthy node first; otherwise the problem node cannot be removed. There is no service in this environment, so it is not demonstrated here; an example command:
[oracle@*Node1* ~]$ srvctl modify service -d *RAC DB* -s *RAC SERVICE1* -n -i *RAC INST 1* -f
Rebuild the RDBMS configuration by removing the problem node's instance. Here, because the installation files were deleted outright and the OS reinstalled, that path is no longer available.
Normally the instance would be deleted with DBCA.
Since DBCA cannot be used, remove the instance resource recorded in the cluster directly, deleting the database instance:
[root@rac1 bin]# srvctl remove instance -db prod -i prod2
Remove instance from the database prod? (y/[n]) y
Verify:
[root@rac1 bin]# srvctl config database -d prod
Database unique name: prod
Database name: prod
Oracle home: /oracle/db/product/12.2
Oracle user: oracle
Spfile: +DATA/PROD/PARAMETERFILE/spfile.268.1055338697
Password file: +DATA/PROD/PASSWORD/pwdprod.256.1055338365
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances: prod1
Configured nodes: rac1
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed
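The removal can be confirmed programmatically by checking that neither the dropped instance nor its node still appears in the `srvctl config database` output. A minimal sketch; the here-doc stands in for the live command output shown above (on a real cluster use `cfg=$(srvctl config database -d prod)`):

```shell
# Simulated srvctl output for illustration -- replace with the real command.
cfg=$(cat <<'EOF'
Database instances: prod1
Configured nodes: rac1
EOF
)
# Fail loudly if the removed instance or node is still registered.
if echo "$cfg" | grep -Eq 'prod2|rac2'; then
  echo "instance still registered"
else
  echo "instance removed"
fi
```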
3 Remove the node from the RAC cluster
[oracle@rac1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac1 bin]$ echo $ORACLE_HOME
/oracle/db/product/12.2
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/oracle/db/product/12.2 "CLUSTER_NODES={rac1}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 33654 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
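Note that `CLUSTER_NODES` must list every surviving node, not just the local one. A small sketch of building that argument from a space-separated node list (the variable name `remaining` is illustrative):

```shell
# Surviving nodes after removal; e.g. "rac1 rac3" in a three-node cluster.
remaining="rac1"
# Join with commas to form the CLUSTER_NODES value.
nodes=$(echo "$remaining" | tr ' ' ',')
echo "CLUSTER_NODES={${nodes}}"
# Then (not executed here):
# ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={$nodes}"
```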
4 Remove the node from the Grid cluster
[root@rac1 ~]# srvctl stop vip -i rac2
[root@rac1 ~]# srvctl remove vip -i rac2 -f
[root@rac1 ~]# olsnodes -s -t
rac1 Active Unpinned
rac2 Inactive Unpinned
[root@rac1 ~]# crsctl delete node -n rac2
CRS-4661: Node rac2 successfully deleted.
Check the current cluster resource status:
[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE rac1 STABLE
ora.DATA.dg
ONLINE ONLINE rac1 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE rac1 STABLE
ora.OCR.dg
ONLINE ONLINE rac1 STABLE
ora.chad
ONLINE ONLINE rac1 STABLE
ora.net1.network
ONLINE ONLINE rac1 STABLE
ora.ons
ONLINE ONLINE rac1 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE rac1 169.254.160.165 192.
168.28.101,STABLE
ora.asm
1 ONLINE ONLINE rac1 Started,STABLE
2 ONLINE OFFLINE STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE rac1 STABLE
ora.mgmtdb
1 ONLINE ONLINE rac1 Open,STABLE
ora.prod.db
1 ONLINE ONLINE rac1 Open,HOME=/oracle/db
/product/12.2,STABLE
2 ONLINE OFFLINE STABLE
ora.qosmserver
1 ONLINE ONLINE rac1 STABLE
ora.rac1.vip
1 ONLINE ONLINE rac1 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac1 STABLE
--------------------------------------------------------------------------------
However, the cluster database configuration still records the removed node:
[root@rac1 ~]# srvctl config database -d prod
Database unique name: prod
Database name: prod
Oracle home: /oracle/db/product/12.2
Oracle user: oracle
Spfile: +DATA/PROD/PARAMETERFILE/spfile.268.1055338697
Password file: +DATA/PROD/PASSWORD/pwdprod.256.1055338365
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances: prod1,prod2
Configured nodes: rac1,rac2 <<<<<< both nodes listed; rac2 is still recorded
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed
5 Clean up the node information recorded in the GI inventory
[grid@rac1 asm]$ cd $ORACLE_HOME/oui/bin
[grid@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/oracle/asm "CLUSTER_NODES={rac1}" CRS=TRUE -silent
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 33655 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
Verify:
[root@rac1 ~]# olsnodes -s -t
rac1 Active Unpinned
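A scripted version of this check flags any node that `olsnodes` still reports as Inactive. A sketch; the sample string stands in for live output (on a real cluster use `out=$(olsnodes -s -t)`):

```shell
# Simulated olsnodes -s -t output -- replace with the real command.
out="rac1   Active   Unpinned"
# Column 2 is the node state; collect any node still marked Inactive.
inactive=$(echo "$out" | awk '$2=="Inactive"{print $1}')
if [ -z "$inactive" ]; then
  echo "no stale nodes"
else
  echo "stale: $inactive"
fi
```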
6 Pre-installation environment check
[grid@rac1 ~]$ cd $ORACLE_HOME
[grid@rac1 asm]$ cd bin
[grid@rac1 bin]$ ls -lrt clu*
-rwxr-xr-x 1 grid dba 1398 Sep 13 2012 cluvfyrac.sh
-rwxr-xr-x 1 grid dba 3739 Nov 4 15:11 cluutil
-rwxr-xr-x 1 root dba 9838 Nov 4 15:11 cluvfy
[grid@rac1 bin]$ ./cluvfy stage -pre nodeadd -n rac2 -verbose
Verifying Physical Memory ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 8.3038GB (8707208.0KB) 8GB (8388608.0KB) passed
rac1 8.3038GB (8707208.0KB) 8GB (8388608.0KB) passed
Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 7.7602GB (8137172.0KB) 50MB (51200.0KB) passed
rac1 4.2968GB (4505552.0KB) 50MB (51200.0KB) passed
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 32.999GB (3.4601976E7KB) 8.3038GB (8707208.0KB) passed
rac1 32.999GB (3.4601976E7KB) 8.3038GB (8707208.0KB) passed
Verifying Swap Size ...PASSED
Verifying Free Space: rac2:/usr,rac2:/var,rac2:/etc,rac2:/sbin,rac2:/tmp ...
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr rac2 / 21.0928GB 25MB passed
/var rac2 / 21.0928GB 5MB passed
/etc rac2 / 21.0928GB 25MB passed
/sbin rac2 / 21.0928GB 10MB passed
/tmp rac2 / 21.0928GB 1GB passed
Verifying Free Space: rac2:/usr,rac2:/var,rac2:/etc,rac2:/sbin,rac2:/tmp ...PASSED
Verifying Free Space: rac2:/oracle/asm ...
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/oracle/asm rac2 /oracle 41.8887GB 6.9GB passed
Verifying Free Space: rac2:/oracle/asm ...PASSED
Verifying Free Space: rac1:/usr,rac1:/var,rac1:/etc,rac1:/sbin,rac1:/tmp ...
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr rac1 / 16.8457GB 25MB passed
/var rac1 / 16.8457GB 5MB passed
/etc rac1 / 16.8457GB 25MB passed
/sbin rac1 / 16.8457GB 10MB passed
/tmp rac1 / 16.8457GB 1GB passed
Verifying Free Space: rac1:/usr,rac1:/var,rac1:/etc,rac1:/sbin,rac1:/tmp ...PASSED
Verifying Free Space: rac1:/oracle/asm ...
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/oracle/asm rac1 /oracle 23.6094GB 6.9GB passed
Verifying Free Space: rac1:/oracle/asm ...PASSED
Verifying User Existence: oracle ...
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed exists(300)
rac1 passed exists(300)
Verifying Users With Same UID: 300 ...PASSED
Verifying User Existence: oracle ...PASSED
Verifying User Existence: grid ...
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed exists(301)
rac1 passed exists(301)
Verifying Users With Same UID: 301 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying User Existence: root ...
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed exists(0)
rac1 passed exists(0)
Verifying Users With Same UID: 0 ...PASSED
Verifying User Existence: root ...PASSED
Verifying Group Existence: dba ...
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed exists
rac1 passed exists
Verifying Group Existence: dba ...PASSED
Verifying Group Membership: dba ...
Node Name User Exists Group Exists User in Group Status
---------------- ------------ ------------ ------------ ----------------
rac2 yes yes yes passed
rac1 yes yes yes passed
Verifying Group Membership: dba ...PASSED
Verifying Run Level ...
Node Name run level Required Status
------------ ------------------------ ------------------------ ----------
rac2 5 3,5 passed
rac1 5 3,5 passed
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac2 hard 65536 65536 passed
rac1 hard 65536 65536 passed
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac2 soft 1024 1024 passed
rac1 soft 1024 1024 passed
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac2 hard 16384 16384 passed
rac1 hard 16384 16384 passed
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac2 soft 2047 2047 passed
rac1 soft 2047 2047 passed
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac2 soft 10240 10240 passed
rac1 soft 10240 10240 passed
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Architecture ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 x86_64 x86_64 passed
rac1 x86_64 x86_64 passed
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 4.14.35-1902.3.2.el7uek.x86_64 3.8.13 passed
rac1 4.14.35-1902.3.2.el7uek.x86_64 3.8.13 passed
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 250 250 250 passed
rac2 250 250 250 passed
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 32000 32000 32000 passed
rac2 32000 32000 32000 passed
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 100 100 100 passed
rac2 100 100 100 passed
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 128 128 128 passed
rac2 128 128 128 passed
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 8589934592 8589934592 4458090496 passed
rac2 8589934592 8589934592 4458090496 passed
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 4096 4096 4096 passed
rac2 4096 4096 4096 passed
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 2097152 2097152 870720 passed
rac2 2097152 2097152 870720 passed
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 6815744 6815744 6815744 passed
rac2 6815744 6815744 6815744 passed
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: ip_local_port_range ...
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 between 9000 & 65500 between 9000 & 65500 between 9000 & 65535 passed
rac2 between 9000 & 65500 between 9000 & 65500 between 9000 & 65535 passed
Verifying OS Kernel Parameter: ip_local_port_range ...PASSED
Verifying OS Kernel Parameter: rmem_default ...
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 262144 262144 262144 passed
rac2 262144 262144 262144 passed
Verifying OS Kernel Parameter: rmem_default ...PASSED
Verifying OS Kernel Parameter: rmem_max ...
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 4194304 4194304 4194304 passed
rac2 4194304 4194304 4194304 passed
Verifying OS Kernel Parameter: rmem_max ...PASSED
Verifying OS Kernel Parameter: wmem_default ...
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 262144 262144 262144 passed
rac2 262144 262144 262144 passed
Verifying OS Kernel Parameter: wmem_default ...PASSED
Verifying OS Kernel Parameter: wmem_max ...
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 1048586 1048586 1048576 passed
rac2 1048586 1048586 1048576 passed
Verifying OS Kernel Parameter: wmem_max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 1048576 1048576 1048576 passed
rac2 1048576 1048576 1048576 passed
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 1 1 1 passed
rac2 1 1 1 passed
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 binutils-2.27-41.base.0.1.el7 binutils-2.23.52.0.1 passed
rac1 binutils-2.27-41.base.0.1.el7 binutils-2.23.52.0.1 passed
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 compat-libcap1-1.10-7.el7 compat-libcap1-1.10 passed
rac1 compat-libcap1-1.10-7.el7 compat-libcap1-1.10 passed
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libgcc(x86_64)-4.8.5-39.0.1.el7 libgcc(x86_64)-4.8.2 passed
rac1 libgcc(x86_64)-4.8.5-39.0.1.el7 libgcc(x86_64)-4.8.2 passed
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libstdc++(x86_64)-4.8.5-39.0.1.el7 libstdc++(x86_64)-4.8.2 passed
rac1 libstdc++(x86_64)-4.8.5-39.0.1.el7 libstdc++(x86_64)-4.8.2 passed
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libstdc++-devel(x86_64)-4.8.5-39.0.1.el7 libstdc++-devel(x86_64)-4.8.2 passed
rac1 libstdc++-devel(x86_64)-4.8.5-39.0.1.el7 libstdc++-devel(x86_64)-4.8.2 passed
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 sysstat-10.1.5-18.el7 sysstat-10.1.5 passed
rac1 sysstat-10.1.5-18.el7 sysstat-10.1.5 passed
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 missing ksh failed
rac1 missing ksh failed
Verifying Package: ksh ...FAILED (PRVF-7532)
Verifying Package: make-3.82 ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 make-3.82-24.el7 make-3.82 passed
rac1 make-3.82-24.el7 make-3.82 passed
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 glibc(x86_64)-2.17-292.0.1.el7 glibc(x86_64)-2.17 passed
rac1 glibc(x86_64)-2.17-292.0.1.el7 glibc(x86_64)-2.17 passed
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 glibc-devel(x86_64)-2.17-292.0.1.el7 glibc-devel(x86_64)-2.17 passed
rac1 glibc-devel(x86_64)-2.17-292.0.1.el7 glibc-devel(x86_64)-2.17 passed
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libaio(x86_64)-0.3.109-13.el7 libaio(x86_64)-0.3.109 passed
rac1 libaio(x86_64)-0.3.109-13.el7 libaio(x86_64)-0.3.109 passed
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 missing libaio-devel(x86_64)-0.3.109 failed
rac1 missing libaio-devel(x86_64)-0.3.109 failed
Verifying Package: libaio-devel-0.3.109 (x86_64) ...FAILED (PRVF-7532)
Verifying Package: nfs-utils-1.2.3-15 ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 nfs-utils-1.3.0-0.65.0.1.el7 nfs-utils-1.2.3-15 passed
rac1 nfs-utils-1.3.0-0.65.0.1.el7 nfs-utils-1.2.3-15 passed
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 smartmontools-7.0-1.el7 smartmontools-6.2-4 passed
rac1 smartmontools-7.0-1.el7 smartmontools-6.2-4 passed
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 net-tools-2.0-0.25.20131004git.el7 net-tools-2.0-0.17 passed
rac1 net-tools-2.0-0.25.20131004git.el7 net-tools-2.0-0.17 passed
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...
Node Name Status
------------------------------------ ------------------------
rac2 passed
rac1 passed
Verifying Root user consistency ...PASSED
Verifying Package: cvuqdisk-1.0.10-1 ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 cvuqdisk-1.0.10-1 cvuqdisk-1.0.10-1 passed
rac1 cvuqdisk-1.0.10-1 cvuqdisk-1.0.10-1 passed
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Node Addition ...
Verifying CRS Integrity ...PASSED
Verifying Clusterware Version Consistency ...PASSED
Verifying '/oracle/asm' ...PASSED
Verifying Node Addition ...PASSED
Verifying Node Connectivity ...
Verifying Hosts File ...
Node Name Status
------------------------------------ ------------------------
rac1 passed
rac2 passed
Verifying Hosts File ...PASSED
Interface information for node "rac1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
enp0s3 192.168.15.101 192.168.15.0 0.0.0.0 192.168.28.1 08:00:27:6A:FC:3A 1500
enp0s3 192.168.15.201 192.168.15.0 0.0.0.0 192.168.28.1 08:00:27:6A:FC:3A 1500
enp0s3 192.168.15.200 192.168.15.0 0.0.0.0 192.168.28.1 08:00:27:6A:FC:3A 1500
enp0s8 192.168.28.101 192.168.28.0 0.0.0.0 192.168.28.1 08:00:27:1E:FD:C6 1500
Interface information for node "rac2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
enp0s3 192.168.15.102 192.168.15.0 0.0.0.0 192.168.28.1 08:00:27:0F:D3:57 1500
enp0s8 192.168.28.102 192.168.28.0 0.0.0.0 192.168.28.1 08:00:27:0C:BF:04 1500
Check: MTU consistency on the private interfaces of subnet "192.168.28.0"
Node Name IP Address Subnet MTU
---------------- ------------ ------------ ------------ ----------------
rac1 enp0s8 192.168.28.101 192.168.28.0 1500
rac2 enp0s8 192.168.28.102 192.168.28.0 1500
Check: MTU consistency of the subnet "192.168.15.0".
Node Name IP Address Subnet MTU
---------------- ------------ ------------ ------------ ----------------
rac1 enp0s3 192.168.15.101 192.168.15.0 1500
rac1 enp0s3 192.168.15.201 192.168.15.0 1500
rac1 enp0s3 192.168.15.200 192.168.15.0 1500
rac2 enp0s3 192.168.15.102 192.168.15.0 1500
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac1[enp0s3:192.168.15.101] rac1[enp0s3:192.168.15.201] yes
rac1[enp0s3:192.168.15.101] rac1[enp0s3:192.168.15.200] yes
rac1[enp0s3:192.168.15.101] rac2[enp0s3:192.168.15.102] yes
rac1[enp0s3:192.168.15.201] rac1[enp0s3:192.168.15.200] yes
rac1[enp0s3:192.168.15.201] rac2[enp0s3:192.168.15.102] yes
rac1[enp0s3:192.168.15.200] rac2[enp0s3:192.168.15.102] yes
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac1[enp0s8:192.168.28.101] rac2[enp0s8:192.168.28.102] yes
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying subnet mask consistency for subnet "192.168.15.0" ...PASSED
Verifying subnet mask consistency for subnet "192.168.28.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...
Checking subnet "192.168.28.0" for multicast communication with multicast group "224.0.0.251"
Verifying Multicast check ...PASSED
Verifying ASM Integrity ...
Verifying Node Connectivity ...
Verifying Hosts File ...
Node Name Status
------------------------------------ ------------------------
rac2 passed
Verifying Hosts File ...PASSED
Interface information for node "rac1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
enp0s3 192.168.15.101 192.168.15.0 0.0.0.0 192.168.28.1 08:00:27:6A:FC:3A 1500
enp0s3 192.168.15.201 192.168.15.0 0.0.0.0 192.168.28.1 08:00:27:6A:FC:3A 1500
enp0s3 192.168.15.200 192.168.15.0 0.0.0.0 192.168.28.1 08:00:27:6A:FC:3A 1500
enp0s8 192.168.28.101 192.168.28.0 0.0.0.0 192.168.28.1 08:00:27:1E:FD:C6 1500
Interface information for node "rac2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
enp0s3 192.168.15.102 192.168.15.0 0.0.0.0 192.168.28.1 08:00:27:0F:D3:57 1500
enp0s8 192.168.28.102 192.168.28.0 0.0.0.0 192.168.28.1 08:00:27:0C:BF:04 1500
Check: MTU consistency on the private interfaces of subnet "192.168.28.0"
Node Name IP Address Subnet MTU
---------------- ------------ ------------ ------------ ----------------
rac1 enp0s8 192.168.28.101 192.168.28.0 1500
rac2 enp0s8 192.168.28.102 192.168.28.0 1500
Check: MTU consistency of the subnet "192.168.15.0".
Node Name IP Address Subnet MTU
---------------- ------------ ------------ ------------ ----------------
rac1 enp0s3 192.168.15.101 192.168.15.0 1500
rac1 enp0s3 192.168.15.201 192.168.15.0 1500
rac1 enp0s3 192.168.15.200 192.168.15.0 1500
rac2 enp0s3 192.168.15.102 192.168.15.0 1500
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac1[enp0s3:192.168.15.101] rac1[enp0s3:192.168.15.201] yes
rac1[enp0s3:192.168.15.101] rac1[enp0s3:192.168.15.200] yes
rac1[enp0s3:192.168.15.101] rac2[enp0s3:192.168.15.102] yes
rac1[enp0s3:192.168.15.201] rac1[enp0s3:192.168.15.200] yes
rac1[enp0s3:192.168.15.201] rac2[enp0s3:192.168.15.102] yes
rac1[enp0s3:192.168.15.200] rac2[enp0s3:192.168.15.102] yes
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac1[enp0s8:192.168.28.101] rac2[enp0s8:192.168.28.102] yes
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying subnet mask consistency for subnet "192.168.15.0" ...PASSED
Verifying subnet mask consistency for subnet "192.168.28.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying ASM Integrity ...PASSED
Verifying Device Checks for ASM ...
Verifying ASM device sharedness check ...
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Shared Storage Accessibility:/dev/sdd1,/dev/sdb1,/dev/sdf1,/dev/sdc1 ...PASSED
Device Device Type
------------------------------------ ------------------------
/dev/sdb1 Disk
/dev/sdc1 Disk
/dev/sdd1 Disk
/dev/sdf1 Disk
Verifying ASM device sharedness check ...PASSED
Verifying Access Control List check ...
Checking consistency of device owner across all nodes...
Consistency check of device owner for "/dev/sdb1" PASSED
Consistency check of device owner for "/dev/sdc1" PASSED
Consistency check of device owner for "/dev/sdd1" PASSED
Consistency check of device owner for "/dev/sdf1" PASSED
Checking consistency of device group across all nodes...
Consistency check of device group for "/dev/sdb1" PASSED
Consistency check of device group for "/dev/sdc1" PASSED
Consistency check of device group for "/dev/sdd1" PASSED
Consistency check of device group for "/dev/sdf1" PASSED
Checking consistency of device permissions across all nodes...
Consistency check of device permissions for "/dev/sdb1" PASSED
Consistency check of device permissions for "/dev/sdc1" PASSED
Consistency check of device permissions for "/dev/sdd1" PASSED
Consistency check of device permissions for "/dev/sdf1" PASSED
Verifying Access Control List check ...PASSED
Verifying Device Checks for ASM ...PASSED
Verifying Database home availability ...PASSED
Verifying OCR Integrity ...PASSED
Verifying Time zone consistency ...PASSED
Verifying Network Time Protocol (NTP) ...
Verifying '/etc/ntp.conf' ...
Node Name File exists?
------------------------------------ ------------------------
rac2 no
rac1 no
Verifying '/etc/ntp.conf' ...PASSED
Verifying '/etc/chrony.conf' ...
Node Name File exists?
------------------------------------ ------------------------
rac2 no
rac1 no
Verifying '/etc/chrony.conf' ...PASSED
Verifying '/var/run/ntpd.pid' ...
Node Name File exists?
------------------------------------ ------------------------
rac2 no
rac1 no
Verifying '/var/run/ntpd.pid' ...PASSED
Verifying '/var/run/chronyd.pid' ...
Node Name File exists?
------------------------------------ ------------------------
rac2 no
rac1 no
Verifying '/var/run/chronyd.pid' ...PASSED
Verifying Network Time Protocol (NTP) ...PASSED
Verifying User Not In Group "root": grid ...
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed does not exist
rac1 passed does not exist
Verifying User Not In Group "root": grid ...PASSED
Verifying resolv.conf Integrity ...
Verifying (Linux) resolv.conf Integrity ...PASSED
Verifying resolv.conf Integrity ...PASSED
Verifying DNS/NIS name service ...PASSED
Verifying User Equivalence ...
Node Name Status
------------------------------------ ------------------------
rac2 passed
Verifying User Equivalence ...PASSED
Verifying /boot mount ...PASSED
Verifying zeroconf check ...PASSED
Pre-check for node addition was unsuccessful on all the nodes.
Failures were encountered during execution of CVU verification request "stage -pre nodeadd".
Verifying Package: ksh ...FAILED
rac2: PRVF-7532 : Package "ksh" is missing on node "rac2"
rac1: PRVF-7532 : Package "ksh" is missing on node "rac1"
Verifying Package: libaio-devel-0.3.109 (x86_64) ...FAILED
rac2: PRVF-7532 : Package "libaio-devel(x86_64)" is missing on node "rac2"
rac1: PRVF-7532 : Package "libaio-devel(x86_64)" is missing on node "rac1"
CVU operation performed: stage -pre nodeadd
Date: Nov 5, 2020 2:46:51 PM
CVU home: /oracle/asm/
User: grid
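The two package failures reported above (ksh and libaio-devel) can be extracted from the cluvfy output and installed with yum on every node. A sketch; the `failures` variable holds the PRVF-7532 lines copied from the report, and the yum line is left commented since it must run as root on each node:

```shell
# The PRVF-7532 lines from the cluvfy report above.
failures='rac2: PRVF-7532 : Package "ksh" is missing on node "rac2"
rac1: PRVF-7532 : Package "libaio-devel(x86_64)" is missing on node "rac1"'
# Pull the bare package name out of each quoted "pkg" or "pkg(arch)" token.
pkgs=$(echo "$failures" | sed -n 's/.*Package "\([^"(]*\).*/\1/p' | sort -u | tr '\n' ' ')
echo "missing: $pkgs"
# yum install -y $pkgs    # run as root on rac1 and rac2 (not executed here)
```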
7 Add the node back
[grid@rac1 asm]$ cd $ORACLE_HOME/addnode
[grid@rac1 addnode]$ pwd
/oracle/asm/addnode
[grid@rac1 addnode]$ export IGNORE_PREADDNODE_CHECKS=y
[grid@rac1 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}" "CLUSTER_NEW_NODE_ROLES={hub}"
The run fails with the following errors:
[WARNING] [INS-06009] SSH performance is detected to be slow, which may impact performance during remote node operations like copying the software and executing prerequisite checks.
ACTION: Consider optimizing the ssh configuration.
[FATAL] [INS-40915] The installer has detected the presence of Oracle Clusterware on the following nodes: [rac2].
CAUSE: Oracle Clusterware may be running on the listed nodes, or previous installation of Oracle Clusterware is not completely deinstalled.
ACTION: For each node listed, ensure that existing Oracle Clusterware is completely deinstalled. Alternatively, do not to include the listed nodes.
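A minimal sketch of locating the stale Grid files on rac2 before deleting them. The path list is an assumption based on this environment (`/oracle/asm`, `/oracle/oraInventory`, `/etc/oraInst.loc` appear in the logs above; `/etc/oracle` and `/etc/oratab` are typical additional leftovers) -- verify each path before removing anything:

```shell
# Candidate leftovers from the old Grid installation on rac2 (assumed
# locations -- confirm each one manually before deleting).
leftovers="/oracle/asm /oracle/oraInventory /etc/oracle /etc/oraInst.loc /etc/oratab"
for p in $leftovers; do
  echo "would remove: $p"   # replace echo with 'rm -rf' only after checking
done
```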
To fix this, completely remove the leftover Grid Infrastructure files on the listed node.
If the failure is instead an unmet prerequisite, read the log hints: items flagged as critical must be satisfied. In fact the failed items were already summarized during the cluvfy stage -pre check; they just were not labeled as ignorable or critical there.
After resolving the above problem:
Rerun the node addition, adding the flags that ignore all check failures -- provided you have confirmed the failed items will not affect the installation (for example, we once found a sharedness problem with Veritas shared disks, which Oracle classified as critical):
[grid@rac1 addnode]$ ./addnode.sh -silent -ignoreSysPrereqs -ignorePrereq "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}" "CLUSTER_NEW_NODE_ROLES={hub}"
[WARNING] [INS-06009] SSH performance is detected to be slow, which may impact performance during remote node operations like copying the software and executing prerequisite checks.
ACTION: Consider optimizing the ssh configuration.
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
CAUSE: Some of the optional prerequisites are not met. See logs for details. /oracle/oraInventory/logs/addNodeActions2020-11-05_02-54-08-PM.log
ACTION: Identify the list of failed prerequisite checks from the log: /oracle/oraInventory/logs/addNodeActions2020-11-05_02-54-08-PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
Prepare Configuration in progress.
Prepare Configuration successful.
.................................................. 7% Done.
Copy Files to Remote Nodes in progress.
.................................................. 12% Done.
.................................................. 17% Done.
..............................
Copy Files to Remote Nodes successful.
You can find the log of this install session at:
/oracle/oraInventory/logs/addNodeActions2020-11-05_02-54-08-PM.log
Instantiate files in progress.
Instantiate files successful.
.................................................. 49% Done.
Saving cluster inventory in progress.
.................................................. 83% Done.
Saving cluster inventory successful.
The Cluster Node Addition of /oracle/asm was successful.
Please check '/oracle/asm/inventory/silentInstall2020-11-05_2-53-56-PM.log' for more details.
Setup Oracle Base in progress.
Setup Oracle Base successful.
.................................................. 90% Done.
Update Inventory in progress.
Update Inventory successful.
.................................................. 97% Done.
As a root user, execute the following script(s):
1. /oracle/oraInventory/orainstRoot.sh
2. /oracle/asm/root.sh
Execute /oracle/oraInventory/orainstRoot.sh on the following nodes:
[rac2]
Execute /oracle/asm/root.sh on the following nodes:
[rac2]
The scripts can be executed in parallel on all the nodes.
.................................................. Done.
Successfully Setup Software.
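The INS-13014 warning above defers to the addNodeActions log for the list of failed checks. A minimal sketch of scanning such a log from a script follows; the log path and its contents here are invented stand-ins for illustration:

```shell
# Build a stand-in log file (contents are hypothetical, for illustration only)
cat > /tmp/addNodeActions_sample.log <<'EOF'
Verifying Physical Memory ...PASSED
Verifying Shared Storage Accessibility ...FAILED (Critical)
Verifying Swap Size ...FAILED (Ignorable)
EOF

# List every failed check, then only the critical ones that must be fixed
grep 'FAILED' /tmp/addNodeActions_sample.log
echo '--- critical only ---'
grep 'Critical' /tmp/addNodeActions_sample.log
```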
8 Run the root scripts on rac2 as prompted
[root@rac2 ~]# /oracle/asm/root.sh
Check /oracle/asm/install/root_rac2_2020-11-05_15-15-13-759265151.log for the output of root script
[root@rac2 ~]# cat /oracle/asm/install/root_rac2_2020-11-05_15-15-13-759265151.log
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/asm
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
... (output elided) ...
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2020/11/05 15:20:10 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2020/11/05 15:20:19 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
The node was added successfully.
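Successful completion of root.sh is signalled by the CLSRSC-325 message shown in the log above. A small sketch of checking for it from a script; the log path here is a hypothetical stand-in for the real file under /oracle/asm/install:

```shell
# Stand-in for the real root script log (path is hypothetical)
LOG=/tmp/root_rac2_sample.log
cat > "$LOG" <<'EOF'
2020/11/05 15:20:19 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
EOF

# CLSRSC-325 marks successful completion of the root script
if grep -q 'CLSRSC-325' "$LOG"; then
  echo 'root.sh completed successfully'
else
  echo 'root.sh failed; review the log' >&2
fi
```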
Verify the cluster status:
[root@rac2 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.DATA.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.OCR.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.chad
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.net1.network
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.ons
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE rac1 169.254.218.17 192.1
68.28.101,STABLE
ora.asm
1 ONLINE ONLINE rac1 Started,STABLE
2 ONLINE ONLINE rac2 Started,STABLE <<<<<<<< back online
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE rac1 STABLE
ora.mgmtdb
1 ONLINE ONLINE rac1 Open,STABLE
ora.prod.db
1 ONLINE ONLINE rac1 Open,HOME=/oracle/db
/product/12.2,STABLE
ora.qosmserver
1 ONLINE ONLINE rac1 STABLE
ora.rac1.vip
1 ONLINE ONLINE rac1 STABLE
ora.rac2.vip
1 ONLINE ONLINE rac2 STABLE <<<<<<<< back online
ora.scan1.vip
1 ONLINE ONLINE rac1 STABLE
--------------------------------------------------------------------------------
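Rather than eyeballing the crsctl output for the two annotated lines, the returned node's resources can be counted mechanically. The sample below hard-codes an abridged copy of the output above:

```shell
# Abridged copy of the `crsctl stat res -t` output shown above
cat > /tmp/crs_stat_sample.txt <<'EOF'
ora.asm
      2        ONLINE  ONLINE       rac2             Started,STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2             STABLE
EOF

# Count resource instances reported ONLINE on the readded node
awk '/ONLINE +ONLINE +rac2/ {n++} END {print n " resource(s) ONLINE on rac2"}' /tmp/crs_stat_sample.txt
```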
Add the database node via the GUI installer (environment checks and file copy)
[oracle@rac1 ~]$ cd $ORACLE_HOME/addnode
[oracle@rac1 addnode]$ xhost +
xhost: unable to open display ""
[oracle@rac1 addnode]$ export DISPLAY=:0
[oracle@rac1 addnode]$ xhost +
access control disabled, clients can connect from any host
[oracle@rac1 addnode]$ ls -lrt add*
-rw-r----- 1 oracle dba 2106 Jan 26 2017 addnode_oraparam.ini.sbs
-rwxr-x--- 1 oracle dba 3577 Nov 4 16:30 addnode.sh
-rw-r--r-- 1 oracle dba 2098 Nov 4 16:30 addnode_oraparam.ini
[oracle@rac1 addnode]$ ./addnode.sh
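If no X display is available, the database-home addnode.sh can presumably also be driven in silent mode, mirroring the grid addnode invocation earlier. A sketch only; verify the flags against your release:

```
./addnode.sh -silent "CLUSTER_NEW_NODES={rac2}"
```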
Add an instance resource to the cluster via the GUI (dbca)
[oracle@rac1 ~]$ export DISPLAY=:0
[oracle@rac1 ~]$ xhost +
access control disabled, clients can connect from any host
[oracle@rac1 ~]$ dbca
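The dbca run can likewise be done silently. A hedged sketch using this environment's names; verify the flag spelling against `dbca -help` for your release, and substitute the real SYS password:

```
dbca -silent -addInstance -gdbName prod -nodeName rac2 -instanceName prod2 \
     -sysDBAUserName sys
```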
Does inventory.xml need to be modified?
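For reference, the registered node list lives in oraInventory's ContentsXML/inventory.xml. A hypothetical entry for the database home once both nodes are registered; the home name and index are assumptions, not taken from this system:

```xml
<!-- Hypothetical excerpt of /oracle/oraInventory/ContentsXML/inventory.xml -->
<HOME NAME="OraDB12Home1" LOC="/oracle/db/product/12.2" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="rac1"/>
      <NODE NAME="rac2"/>
   </NODE_LIST>
</HOME>
```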
After adding the node, the cluster status is as follows (to be on the safe side, the clusterware on the newly added node was restarted once):
[root@rac2 ContentsXML]# crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.DATA.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.USERDATA.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.net1.network
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.ons
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1 STABLE
ora.MGMTLSNR
1 OFFLINE OFFLINE STABLE
ora.asm
1 ONLINE ONLINE rac1 Started,STABLE
2 ONLINE ONLINE rac2 Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE rac1 STABLE
ora.mgmtdb
1 OFFLINE OFFLINE STABLE
ora.prod.db
1 ONLINE ONLINE rac1 Open,HOME=/oracle/db
/base/product/12.2,S
TABLE
2 ONLINE ONLINE rac2 Open,HOME=/oracle/db
/base/product/12.2,S
TABLE
ora.qosmserver
1 ONLINE ONLINE rac1 STABLE
ora.rac1.vip
1 ONLINE ONLINE rac1 STABLE
ora.rac2.vip
1 ONLINE ONLINE rac2 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac1 STABLE
--------------------------------------------------------------------------------
2020-11-18: Adding the instance through the dbca GUI failed, due to problems creating the undo tablespace and the redo log groups. The workaround tested is below.
1 Drop node 2's redo log groups. (Adding an instance manually with srvctl, or through the GUI, removes these automatically; here we are simulating an abnormal node removal, such as rm-ing the installation directory or reinstalling the OS.) If the node had been removed normally, this step should not be necessary.
SQL> alter database disable thread 2;
Database altered.
SQL> select group#,thread#,sequence#,status from v$log;
GROUP# THREAD# SEQUENCE# STATUS
---------- ---------- ---------- --------------------------------
1 1 45 CURRENT
2 1 44 INACTIVE
3 2 9 INACTIVE
4 2 8 INACTIVE
SQL> alter database drop logfile group 3;
Database altered.
SQL> alter database drop logfile group 4;
Database altered.
SQL> select group#,thread#,sequence#,status from v$log; <<<<<< only one instance's log groups remain now
GROUP# THREAD# SEQUENCE# STATUS
---------- ---------- ---------- --------------------------------
1 1 45 CURRENT
2 1 44 INACTIVE
2 Handle undo: drop instance 2's undo tablespace.
SQL> select TABLESPACE_NAME,FILE_NAME,STATUS from dba_data_files;
TABLESPACE_NAME FILE_NAME STATUS
-------------------- -------------------------------------------------- ------------------
SYSTEM +USERDATA/PROD/DATAFILE/system.260.1055783229 AVAILABLE
SYSAUX +USERDATA/PROD/DATAFILE/sysaux.261.1055783241 AVAILABLE
UNDOTBS1 +USERDATA/PROD/DATAFILE/undotbs1.262.1055783247 AVAILABLE
UNDOTBS2 +USERDATA/PROD/DATAFILE/undotbs2.264.1055783263 AVAILABLE
USERS +USERDATA/PROD/DATAFILE/users.265.1055783265 AVAILABLE
TEST +USERDATA/PROD/DATAFILE/test.272.1056186547 AVAILABLE
6 rows selected.
SQL> drop tablespace undotbs2 including contents and datafiles;
Tablespace dropped.
SQL> select TABLESPACE_NAME,FILE_NAME,STATUS from dba_data_files;
TABLESPACE_NAME FILE_NAME STATUS
-------------------- -------------------------------------------------- ------------------
SYSTEM +USERDATA/PROD/DATAFILE/system.260.1055783229 AVAILABLE
SYSAUX +USERDATA/PROD/DATAFILE/sysaux.261.1055783241 AVAILABLE
UNDOTBS1 +USERDATA/PROD/DATAFILE/undotbs1.262.1055783247 AVAILABLE
USERS +USERDATA/PROD/DATAFILE/users.265.1055783265 AVAILABLE
TEST +USERDATA/PROD/DATAFILE/test.272.1056186547 AVAILABLE
Rerun the add-instance operation; it now completes successfully, and the new instance's parameter settings are reasonable. Below is an excerpt of the undo-related operations from the log.
INFO: Init param detected: name:undo_tablespace value:UNDOTBS1 instanceName:prod1
INFO: Tablespace detected: tablespaceName: UNDOTBS1 contentType: UNDO
[pool-1-thread-1] [ 2020-11-19 08:49:47.102 CST ] [SQLEngine.doSQLSubstitution:2542] The substituted sql statement:=SELECT contents||'|'||logging||'|'||extent_management||'|'||block_size||'|'||segment_space_management||'|'||bigfile FROM sys.cdb_tablespaces WHERE tablespace_name ='UNDOTBS1' AND NVL(CON_ID,1) = 0
[pool-1-thread-1] [ 2020-11-19 08:49:47.102 CST ] [TableSpaceGenerator.initGeneral:142] sqlQuery SELECT contents||'|'||logging||'|'||extent_management||'|'||block_size||'|'||segment_space_management||'|'||bigfile FROM sys.cdb_tablespaces WHERE tablespace_name ='UNDOTBS1' AND NVL(CON_ID,1) = 0
[pool-1-thread-1] [ 2020-11-19 08:49:47.113 CST ] [SQLEngine.doSQLSubstitution:2542] The substituted sql statement:=SELECT d.file_name||'|'||v.status||'|'||TO_CHAR((d.bytes / 1024 / 1024), '99999990.000')||'|'||d.autoextensible||'|'|| d.increment_by||'|'||d.maxblocks FROM sys.cdb_data_files d, v$datafile v, (SELECT file_id, SUM(bytes) bytes FROM sys.cdb_free_space WHERE tablespace_name = 'UNDOTBS1' AND NVL(CON_ID,1) = 0 GROUP BY file_id) s WHERE (s.file_id (+)= d.file_id) AND (d.tablespace_name = 'UNDOTBS1') AND (d.file_name = v.name) AND (NVL(d.con_id,1) = NVL(v.con_id,1)) AND (NVL(d.con_id,1) = 0)
[pool-1-thread-1] [ 2020-11-19 08:49:47.220 CST ] [SQLEngine.doSQLSubstitution:2542] The substituted sql statement:=SELECT block_size FROM sys.cdb_tablespaces WHERE tablespace_name ='UNDOTBS1' AND NVL(CON_ID,1) = 0
[pool-1-thread-1] [ 2020-11-19 08:49:47.225 CST ] [SQLEngine.doSQLSubstitution:2542] The substituted sql statement:=SELECT (initial_extent /1024)||'|'||nvl((next_extent / 1024), '0')||'|'||nvl(min_extents, '0')||'|'||nvl(max_extents, '-1')||'|'||nvl(pct_increase, '0')||'|'||(min_extlen / 1024)||'|'||allocation_type FROM sys.cdb_tablespaces WHERE tablespace_name='UNDOTBS1' AND NVL(CON_ID,1) = 0
- undo_tablespace (prod2) : UNDOTBS2
- UNDOTBS1 : Type - UNDO, Extent Management - DICTIONARY
- UNDOTBS2 : Type - UNDO, Extent Management - LOCAL
Parameter List: [[cluster_database,TRUE,4020],[db_create_file_dest,+USERDATA,5500],[db_name,prod,5507],[db_domain,,5507],[dispatchers,(PROTOCOL=TCP) (SERVICE=prodXDB),4015],[audit_file_dest,/oracle/db/base/admin/prod/adump,4018],[compatible,12.2.0,4042],[remote_login_passwordfile,EXCLUSIVE,4018],[sga_target,2684354560,5501],[spfile,+USERDATA/PROD/PARAMETERFILE/spfile.268.1055787691,4042],[dml_locks,4000,5505],[processes,300,4002],[undo_tablespace,UNDOTBS2,5506],[control_files,+USERDATA/PROD/CONTROLFILE/current.257.1055783219,5500],[listener_networks,,4042],[diagnostic_dest,/oracle/db/base,4042],[audit_trail,DB,4018],[nls_territory,AMERICA,4019],[db_block_size,8192,4006],[open_cursors,300,4001],[nls_language,AMERICAN,4019],[remote_listener,rac-scan:1521,4026],[pga_aggregate_target,891289600,5504],[undo_tablespace,UNDOTBS1,4020],[undo_tablespace,UNDOTBS2,4020],[instance_number,1,4020],[instance_number,2,4020],[local_listener,(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.15.201)(PORT=1521)),4020],[thread,1,4020],[thread,2,4020],]
Tablespace Name: UNDOTBS2
Type: UNDO
[Thread-234] [ 2020-11-19 08:53:59.189 CST ] [AddInstanceStep.executeImpl:247] Undo tablespace SQL = CREATE SMALLFILE UNDO TABLESPACE "UNDOTBS2" DATAFILE SIZE 875M AUTOEXTEND ON NEXT 5120K MAXSIZE 32767M BLOCKSIZE 8192
[Thread-234] [ 2020-11-19 08:54:22.038 CST ] [AddInstanceStep.executeImpl:374] SQL=ALTER SYSTEM SET undo_tablespace = UNDOTBS2 SCOPE=SPFILE SID = 'prod2'
Below are the new instance's settings from the add-instance operation:
Add Instance Settings
--------------------------------------------------------------------------------
- Instance Name : prod2
- Node Name : rac2
--------------------------------------------------------------------------------
Initialization Parameters
--------------------------------------------------------------------------------
- instance_number (prod2) : 2
- thread (prod2) : 2
- undo_tablespace (prod2) : UNDOTBS2
--------------------------------------------------------------------------------
Tablespaces
--------------------------------------------------------------------------------
- SYSAUX : Type - PERMANENT, Extent Management - DICTIONARY
- SYSTEM : Type - PERMANENT, Extent Management - DICTIONARY
- TEMP : Type - TEMPORARY, Extent Management - DICTIONARY
- TEST : Type - PERMANENT, Extent Management - DICTIONARY
- UNDOTBS1 : Type - UNDO, Extent Management - DICTIONARY
- USERS : Type - PERMANENT, Extent Management - DICTIONARY
- UNDOTBS2 : Type - UNDO, Extent Management - LOCAL
--------------------------------------------------------------------------------
Redo Log Groups
--------------------------------------------------------------------------------
- Group 3, File - (OMF_3_REDOLOG_MEMBER_0), Size(K) : 204800
- Group 4, File - (OMF_4_REDOLOG_MEMBER_0), Size(K) : 204800
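If dbca could not be used at all, the settings above could presumably be applied by hand in SQL*Plus. A sketch only, with sizes and group/thread numbers taken from the dbca output above; treat it as illustrative, not verified against this database:

```sql
-- Recreate thread 2's redo groups (OMF; 200 MB each, per the dbca output)
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 3 SIZE 200M;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 4 SIZE 200M;
ALTER DATABASE ENABLE PUBLIC THREAD 2;

-- Recreate the undo tablespace (statement as logged by dbca above)
CREATE SMALLFILE UNDO TABLESPACE "UNDOTBS2" DATAFILE SIZE 875M
  AUTOEXTEND ON NEXT 5120K MAXSIZE 32767M BLOCKSIZE 8192;

-- Instance-specific parameters for prod2
ALTER SYSTEM SET instance_number = 2 SCOPE=SPFILE SID='prod2';
ALTER SYSTEM SET thread = 2 SCOPE=SPFILE SID='prod2';
ALTER SYSTEM SET undo_tablespace = UNDOTBS2 SCOPE=SPFILE SID='prod2';

-- Then register the instance with the cluster (from the shell):
--   srvctl add instance -db prod -instance prod2 -node rac2
```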
Current cluster status:
[grid@rac1 asm]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.DATA.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.USERDATA.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.net1.network
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.ons
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1 STABLE
ora.MGMTLSNR
1 OFFLINE OFFLINE STABLE
ora.asm
1 ONLINE ONLINE rac1 Started,STABLE
2 ONLINE ONLINE rac2 STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE rac2 STABLE
ora.mgmtdb
1 OFFLINE OFFLINE STABLE
ora.prod.db
1 ONLINE ONLINE rac1 Open,HOME=/oracle/db
/base/product/12.2,S
TABLE
2 ONLINE ONLINE rac2 Open,HOME=/oracle/db
/base/product/12.2,S
TABLE
ora.qosmserver
1 ONLINE ONLINE rac2 STABLE
ora.rac1.vip
1 ONLINE ONLINE rac1 STABLE
ora.rac2.vip
1 ONLINE ONLINE rac2 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac1 STABLE
--------------------------------------------------------------------------------
If there are services, migrate them back to the original instance (recreate them, or use relocate). Recovery is complete!
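A sketch of bringing a service back onto the restored instance; the service name is a placeholder, and the flags should be checked against your srvctl version:

```
# Relocate a running service back to the restored instance (SERVICE1 is a placeholder)
srvctl relocate service -db prod -service SERVICE1 -oldinst prod1 -newinst prod2

# Or add the instance to the preferred list and start the service there
srvctl modify service -db prod -service SERVICE1 -modifyconfig -preferred "prod1,prod2"
srvctl start service -db prod -service SERVICE1 -instance prod2
```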
This concludes the complete test walkthrough of restoring a cluster node, through a series of operations, after it was removed abnormally (i.e., not through the normal deletion procedure).