
D. Clusterware Installation

After the various preliminary settings are done, log in with the oracle account.

Then open a new terminal as root and run fdisk -l; the shared disks that have not yet been partitioned will be listed.

[root@ocm1 ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1912    15358108+  83  Linux
/dev/sda2            1913        2294     3068415   83  Linux
/dev/sda3            2295        2610     2538270   82  Linux swap

Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/sdf: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdf doesn't contain a valid partition table
[root@ocm1 ~]#

Of these, /dev/sdb will be used to create the OCR and the voting disk.

[root@ocm1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-130, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-130, default 130):
Using default value 130

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@ocm1 ~]#
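The new partition table also has to be visible on the second node. A quick check, assuming the shared disk appears under the same device name on ocm2 (partprobe re-reads the partition table without a reboot):

[root@ocm2 ~]# partprobe /dev/sdb
[root@ocm2 ~]# fdisk -l /dev/sdb

fdisk -l on ocm2 should now list /dev/sdb1 exactly as it was created on ocm1.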

Now open another terminal as the oracle user, extract the Clusterware installation media, and run runInstaller.
[oracle@ocm1 clusterware]$ ls
cluvfy  doc  install  response  rpm  runInstaller  stage  upgrade  welcome.html
[oracle@ocm1 clusterware]$ ./runInstaller
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be redhat-3, SuSE-9, redhat-4, UnitedLinux-1.0, asianux-1 or asianux-2
                                      Passed


All installer requirements met.

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2010-09-12_08-28-25PM. Please wait ...[oracle@ocm1 clusterware]$ Oracle Universal Installer, Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved.
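(Before going through the GUI screens, the Cluster Verification Utility in the media's cluvfy directory can be used to pre-check both nodes. A minimal sketch, assuming the runcluvfy.sh wrapper shipped with the 10gR2 Clusterware media sits inside that cluvfy directory:)

[oracle@ocm1 clusterware]$ ./cluvfy/runcluvfy.sh stage -pre crsinst -n ocm1,ocm2 -verbose

Anything it flags (missing packages, kernel parameters, user equivalence) is easier to fix now than after the installer has started.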


1. Run runInstaller.



2. Specify the inventory location.


3. Specify the CRS home.



4. Specify the node names.


5. The node the installer is running on is listed by default; add the second node.


6. The cluster node list after adding the second node.


7. Specify the subnet interface types. Both are set to Private by default; change the 192.168.0.0 subnet, which is the public network, to Public.


7-1. The Public setting is complete.


8. Prerequisite checks for the software and other settings.

Checking operating system requirements ...
Expected result: One of redhat-3,redhat-4,SuSE-9,asianux-1,asianux-2
Actual Result: redhat-4
Check complete. The overall result of this check is: Passed
=======================================================================

Checking operating system package requirements ...
Checking for make-3.79; found make-1:3.80-7.EL4. Passed
Checking for binutils-2.14; found binutils-2.15.92.0.2-25. Passed
Checking for gcc-3.2; found gcc-3.4.6-11.0.1. Passed
Check complete. The overall result of this check is: Passed
=======================================================================

Checking physical memory requirements ...
Expected result: 922MB
Actual Result: 1008MB
Check complete. The overall result of this check is: Passed
=======================================================================

Checking for Oracle Home incompatibilities ....
Actual Result: NEW_HOME
Check complete. The overall result of this check is: Passed
=======================================================================

Checking Oracle Home path for spaces...
Check complete. The overall result of this check is: Passed
=======================================================================

Checking local Cluster Synchronization Services (CSS) status ...
Check complete. The overall result of this check is: Passed
=======================================================================

Checking whether Oracle 9.2 RAC is available on all selected nodes
Check complete. The overall result of this check is: Passed
=======================================================================

9. Specify the OCR location.



10. Specify the voting disk location.
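Both locations must be on shared storage that every node can write to. A quick sanity check before continuing, assuming the /ocfs mount point that appears later in the root.sh output (the rac_check file name is arbitrary; adjust the paths to your own setup):

[oracle@ocm1 ~]$ df -h /ocfs
[oracle@ocm1 ~]$ touch /ocfs/clusterware/rac_check
[oracle@ocm2 ~]$ ls -l /ocfs/clusterware/rac_check
[oracle@ocm2 ~]$ rm /ocfs/clusterware/rac_check

If a file created on ocm1 is not immediately visible on ocm2, the shared filesystem is not configured correctly and root.sh will fail later when it formats the voting disk.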


11. Start the installation.


12. Installation in progress.


13. Once the software is installed, the installer shows the scripts that must be run as root.
Run them on each node.


node 1 :

[root@ocm1 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@ocm1 ~]# /u01/app/oracle/product/crs/root.sh
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
assigning default hostname ocm1 for node 1.
assigning default hostname ocm2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: ocm1 ocm1-priv ocm1
node 2: ocm2 ocm2-priv ocm2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /ocfs/clusterware/votingdisk
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        ocm1
CSS is inactive on these nodes.
        ocm2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.


node 2 :

[root@ocm2 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@ocm2 ~]# /u01/app/oracle/product/crs/root.sh
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname ocm1 for node 1.
assigning default hostname ocm2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: ocm1 ocm1-priv ocm1
node 2: ocm2 ocm2-priv ocm2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        ocm1
        ocm2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.
[root@ocm2 ~]#

* Because the installation was run from the console, the VIP portion was not configured; root.sh on the second node ended with the "eth0 is not public" message shown above. In that case, rerun vipca from the installed CRS home.

Run /u01/app/oracle/product/crs/bin/vipca as root (on the second node).
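(Related aside: on some 10.2 systems the interface types can also be registered explicitly with oifcfg before rerunning vipca. A hedged sketch; the 192.168.0.0 public subnet comes from step 7, but the eth1 name and the 10.0.0.0 interconnect subnet below are only placeholders for your actual private network:)

[root@ocm2 ~]# /u01/app/oracle/product/crs/bin/oifcfg getif
[root@ocm2 ~]# /u01/app/oracle/product/crs/bin/oifcfg setif -global eth0/192.168.0.0:public
[root@ocm2 ~]# /u01/app/oracle/product/crs/bin/oifcfg setif -global eth1/10.0.0.0:cluster_interconnect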



After the summary screen appears, complete the configuration.






14. After vipca completes, come back to this dialog, click OK, and continue.


15. Installation complete.
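To confirm the stack really is up on both nodes, a quick post-install check can be run from either node. A minimal sketch using the CRS home from this install:

[oracle@ocm1 ~]$ /u01/app/oracle/product/crs/bin/crsctl check crs
[oracle@ocm1 ~]$ /u01/app/oracle/product/crs/bin/crs_stat -t

crsctl should report CSS, CRS, and EVM as healthy, and crs_stat -t should show the VIP, GSD, and ONS resources ONLINE on both ocm1 and ocm2.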

