2、Check the current private network information on the nodes
Node 2:
[grid@rac-two peer]$ oifcfg getif
eth0 192.168.4.0 global public
eth3 192.168.2.0 global cluster_interconnect
[grid@rac-two peer]$ oifcfg iflist -p -n
eth0 192.168.4.0 PRIVATE 255.255.255.0
eth3 192.168.2.0 PRIVATE 255.255.255.0
eth3 169.254.0.0 UNKNOWN 255.255.0.0
eth1 192.168.1.0 PRIVATE 255.255.255.0
eth2 192.168.1.0 PRIVATE 255.255.255.0
[grid@rac-two peer]$
Node 1:
[grid@rac-one peer]$ oifcfg getif
eth0 192.168.4.0 global public
eth3 192.168.2.0 global cluster_interconnect
[grid@rac-one peer]$ oifcfg iflist -p -n
eth0 192.168.4.0 PRIVATE 255.255.255.0
eth1 192.168.1.0 PRIVATE 255.255.255.0
eth2 192.168.1.0 PRIVATE 255.255.255.0
eth3 192.168.2.0 PRIVATE 255.255.255.0
eth3 169.254.0.0 UNKNOWN 255.255.0.0
[grid@rac-one peer]$
From this output we can see that the cluster interconnect currently uses the device eth3; the task is to move the cluster interconnect from eth3 to eth1 and eth2.
3、Add the new cluster interconnect interfaces
[grid@rac-one peer]$ oifcfg setif -global eth1/192.168.1.0:cluster_interconnect
[grid@rac-one peer]$ oifcfg setif -global eth2/192.168.1.0:cluster_interconnect
[grid@rac-one peer]$
4、Verify
Node 1:
[grid@rac-one peer]$ oifcfg getif
eth0 192.168.4.0 global public
eth3 192.168.2.0 global cluster_interconnect
eth1 192.168.1.0 global cluster_interconnect
eth2 192.168.1.0 global cluster_interconnect
[grid@rac-one peer]$
Node 2:
[grid@rac-two peer]$ oifcfg getif
eth0 192.168.4.0 global public
eth3 192.168.2.0 global cluster_interconnect
eth1 192.168.1.0 global cluster_interconnect
eth2 192.168.1.0 global cluster_interconnect
[grid@rac-two peer]$
5、Delete the 192.168.2.0 cluster interconnect:
[grid@rac-two peer]$ oifcfg delif -global eth3/192.168.2.0
[grid@rac-two peer]$
6、On all nodes, stop the cluster, disable the CRS service, and reboot
Node 1:
[root@rac-one cdgi]# ./crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac-one'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac-one'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac-one'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac-one'
CRS-2673: Attempting to stop 'ora.GIDG.dg' on 'rac-one'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac-one'
CRS-2673: Attempting to stop 'ora.DATADG.dg' on 'rac-one'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac-one'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac-one' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac-one'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac-one' succeeded
CRS-2673: Attempting to stop 'ora.rac-one.vip' on 'rac-one'
CRS-2677: Stop of 'ora.DATADG.dg' on 'rac-one' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'rac-one' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac-two'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac-one' succeeded
CRS-2677: Stop of 'ora.rac-one.vip' on 'rac-one' succeeded
CRS-2672: Attempting to start 'ora.rac-one.vip' on 'rac-two'
CRS-2676: Start of 'ora.scan1.vip' on 'rac-two' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac-two'
CRS-2676: Start of 'ora.rac-one.vip' on 'rac-two' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac-two' succeeded
CRS-2677: Stop of 'ora.GIDG.dg' on 'rac-one' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac-one'
CRS-2677: Stop of 'ora.asm' on 'rac-one' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac-one'
CRS-2677: Stop of 'ora.ons' on 'rac-one' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac-one'
CRS-2677: Stop of 'ora.net1.network' on 'rac-one' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac-one' has completed
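The log above shows only the stop on rac-one. The disable and reboot this step calls for, and the re-enable once the network changes are in place, would be run as root on every node; a sketch (the working directory is assumed to be the Grid home's bin directory, as in the prompt above):
[root@rac-one cdgi]# ./crsctl disable crs
[root@rac-one cdgi]# reboot
After the reboot and the network reconfiguration, re-enable and start the stack on every node:
[root@rac-one cdgi]# ./crsctl enable crs
[root@rac-one cdgi]# ./crsctl start crs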
Add the following statements, then run
[root@RAC1 ~]# source .bash_profile
to make the changes take effect.
Currently, on both nodes eth0 carries the public IP and eth1 the private IP. The goal is to bond eth0 and eth1 into bond0 for the public IP, and eth3 and eth4 into bond1 for the private IP, with the actual IP addresses unchanged.
Once the NIC bonding is changed, RAC can no longer start, and at that point the configuration can no longer be modified. So we first change the public and private interface configuration, and only then bond the NICs (see the oifcfg sketch below).
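Although this post omits them, the clusterware-side changes it refers to would follow the same oifcfg pattern as steps 3 to 5 earlier on this page. A sketch, with the subnets written as placeholders to replace with the real public and private subnets:
[grid@rac-one ~]$ oifcfg setif -global bond0/<public-subnet>:public
[grid@rac-one ~]$ oifcfg setif -global bond1/<private-subnet>:cluster_interconnect
[grid@rac-one ~]$ oifcfg delif -global eth0/<public-subnet>
[grid@rac-one ~]$ oifcfg delif -global eth1/<private-subnet>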
Delete the current configuration
Create the bond0 configuration file and write its contents
Create the bond1 configuration file and write its contents
Modify eth0
Modify eth1
Modify eth2
Modify eth3
Add the bonding module entries
(A sketch of the contents for each of these files follows this list.)
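The original omits the contents written in each step. Below is a minimal sketch of what these files typically look like on RHEL 5; the IP address, netmask, and bonding mode are assumptions to substitute with your own values, and only eth0 is shown as a slave (eth1, eth2, and eth3 follow the same pattern against their respective bond):
# /etc/sysconfig/network-scripts/ifcfg-bond0 (public bond; keep the node's original public IP)
DEVICE=bond0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.4.101        # placeholder; use the node's existing public IP
NETMASK=255.255.255.0
# /etc/sysconfig/network-scripts/ifcfg-eth0 (slave NIC; carries no IP of its own)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
# Appended to /etc/modprobe.conf (the "Add" steps above; mode=1, active-backup, is an assumption)
alias bond0 bonding
alias bond1 bonding
options bonding miimon=100 mode=1 max_bonds=2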
This completes the NIC bonding on RAC1.
The bonding steps on RAC2 are identical to RAC1; just change the corresponding bond0 and bond1 IP addresses.
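Not shown in the original, but a quick way to confirm the bonds on each node is to read the bonding driver's status files, which list each bond's mode and active slaves:
[root@RAC1 ~]# cat /proc/net/bonding/bond0
[root@RAC1 ~]# cat /proc/net/bonding/bond1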
一、Installation environment and network configuration
1. Installation environment:
Host operating system: Windows XP
Virtualization software: VMware Workstation 8.0
RAC node operating system: Red Hat Enterprise Linux 5 x86_64
Oracle Database software: Oracle 11gR2
Cluster software: Oracle Grid Infrastructure 11gR2
Shared storage: ASM + raw devices
2. Network configuration:
(Preliminary NIC plan. For installation it is enough that the public IP, virtual IP, and SCAN IP share one subnet, and that the private IP sits on its own subnet.)
Explanation: the public IP is generally used by administrators to make sure they are operating on the right machine; think of it as the host's real IP. The private IP is used for the heartbeat and interconnect; from the user's point of view it can simply be ignored — it exists to keep the two servers' data in sync. The virtual IP is used by client applications to support failover: if one node dies, the other takes over automatically and clients notice nothing. In 11gR2 the SCAN IP appears as a new address; the original CRS VIPs still exist, and SCAN mainly simplifies client connections.
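To make that last point concrete: with SCAN, a client no longer needs to list every node's VIP in its connect descriptor; the single SCAN name is enough. A hedged EZConnect example against the hosts plan below, where the service name orcl is purely illustrative:
sqlplus scott/tiger@racscan:1521/orcl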
3. Oracle software groups:
4. Nodes:
5. Storage components:
二、Install the Linux system
Installing Linux here mainly involves planning the dual NICs; everything else is the same as a standard installation.
三、Configure the Linux system
1. Groups and user accounts
1.1. As root, create the OS groups
# groupadd -g 501 oinstall
# groupadd -g 502 dba
# groupadd -g 504 asmadmin
# groupadd -g 506 asmdba
# groupadd -g 507 asmoper
1.2. Create the users that will install Oracle
# useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
# useradd -u 502 -g oinstall -G dba,asmdba oracle
1.3. Set passwords for the grid and oracle users
# passwd oracle
# passwd grid
2. Network settings
2.1 Define each node's public hostname
This is simply the machine's own hostname, e.g. rac01 and rac02.
2.2 Define the public virtual hostname; the usual convention is the hostname with a -vip suffix (or vip appended directly).
Here we use racvip01 and racvip02.
2.3 Edit /etc/hosts on all nodes:
127.0.0.1 localhost.localdomain localhost
192.168.5.111 rac01
192.168.5.112 rac02
192.168.5.113 racvip01
192.168.5.114 racvip02
17.1.1.1 racpri01
17.1.1.2 racpri02
#single client access name(scan)
192.168.5.115 racscan
3. Configure Linux kernel parameters
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
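The original does not say where these parameters live; per the standard 11gR2 prerequisites they are appended to /etc/sysctl.conf and loaded without a reboot:
[root@rac01 ~]# vi /etc/sysctl.conf    (append the parameters above)
[root@rac01 ~]# sysctl -p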
4. Set shell limits for the grid and oracle users
4.1 Edit /etc/security/limits.conf
[root@rac01 etc]# cd /etc/security/
[root@rac01 security]# vi limits.conf
grid soft nproc 2047
grid hard nproc 32768
grid soft nofile 1024
grid hard nofile 250000
oracle soft nproc 2047
oracle hard nproc 32768
oracle soft nofile 1024
oracle hard nofile 250000
4.2 Edit /etc/pam.d/login; if the following line is not present, add it:
session required pam_limits.so
4.3 Change the default shell startup file by adding the following lines to /etc/profile:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
4.4 Disable SELinux
Edit /etc/selinux/config and make sure it reads: SELINUX=disabled
5. Create the Oracle Inventory directory
[root@rac01 u01]# mkdir -p /u01/product/oraInventory
[root@rac01 u01]# chown -R grid:oinstall /u01/product/oraInventory
[root@rac01 u01]# chmod -R 775 /u01/product/oraInventory/
6. Create the Oracle Grid Infrastructure home directory
(Note: for 11g single instance, if ASM is needed, grid must also be installed, and it must live under ORACLE_BASE. For 11g RAC this is not allowed: the grid home must be placed somewhere else, e.g. /u01/grid.)
# mkdir -p /u01/grid
# chown -R grid:oinstall /u01/grid
# chmod -R 775 /u01/grid
Create the Oracle Base directory
# mkdir -p /u01/product/oracle
# mkdir /u01/product/oracle/cfgtoollogs    # ensures dbca can run after the software is installed
# chown -R oracle:oinstall /u01/product/oracle
# chmod -R 775 /u01/product/oracle
Create the Oracle RDBMS home directory
# mkdir -p /u01/product/oracle/11.2.0/db_1
# chown -R oracle:oinstall /u01/product/oracle/11.2.0/db_1
# chmod -R 775 /u01/product/oracle/11.2.0/db_1
7. Install the required packages
RAC installation rests on Grid Infrastructure (GI) and the RDBMS. The required packages are the same as for an Oracle RDBMS install; see the RDBMS installation guide, or simply install whatever the GI installer's prerequisite checks report as missing.
Check the packages with the following command and install any that are missing:
rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers kernel-headers ksh libaio libaio-devel libgcc libgomp libstdc++ libstdc++-devel make numactl-devel sysstat unixODBC unixODBC-devel
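A small convenience not in the original: since rpm -q prints "package ... is not installed" for anything missing, the same command can be filtered to show only the gaps:
# rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers kernel-headers ksh libaio libaio-devel libgcc libgomp libstdc++ libstdc++-devel make numactl-devel sysstat unixODBC unixODBC-devel | grep "not installed"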
四、Configure the second node rac02
Shut down node 1 and clone a new node with VMware: copy the rac1 directory to a rac2 directory, then edit the .vmx file and change every rac01-related path to rac02.
When you start RAC2, the system will pop up a prompt; choose "I copied it".
Power on RAC2 (the copy of RAC1) and modify its configuration.
1. Change the hostname
Change rac01 to rac02:
[root@node1 ~]# hostname rac02
[root@node1 ~]# vi /etc/sysconfig/network
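The edit itself is not shown; on RHEL 5 the HOSTNAME line in /etc/sysconfig/network would be changed to read:
HOSTNAME=rac02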
Update the corresponding entries in /etc/hosts.
After a reboot, the machine name will then be rac02.
NIC bonding itself has nothing to do with Oracle. Bonding virtualizes two physical NICs into one logical NIC for load balancing and failover, and the bond carries a single IP address; this is an operating-system matter, not a database one.
Shut down the virtual machine and add a virtual NIC.
After starting the virtual machine, the newly added NIC is not visible yet:
[root@slave2 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 08:00:27:04:05:16
inet addr:10.192.200.202 Bcast:10.192.200.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe04:516/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:321 errors:0 dropped:0 overruns:0 frame:0
TX packets:44 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:33576 (32.7 KiB) TX bytes:5508 (5.3 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0(0.0 b)
[root@slave2 ~]# cd /etc/sysconfig/network-scripts/
[root@slave2 network-scripts]# cp ifcfg-eth0 ifcfg-eth1
[root@slave2 network-scripts]# cat /etc/udev/rules.d/70-persistent-net.rules
# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.
# PCI device 0x8086:0x100e (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:04:05:16", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
# PCI device 0x8086:0x100e (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:3a:ec:3c", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
[root@slave2 network-scripts]# vi ifcfg-eth1
Change DEVICE, HWADDR, and IPADDR, and delete the GATEWAY line.
After the changes:
[root@slave2 network-scripts]# cat ifcfg-eth1
DEVICE=eth1
HWADDR=08:00:27:3a:ec:3c
TYPE=Ethernet
#UUID=a4610b15-fc38-4984-875d-208599054e37
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.0.0.2
NETMASK=255.255.255.0
Note: the NIC used for the interconnect must not have a gateway configured, otherwise outbound connectivity will break.
Note: the HWADDR MAC address must match the eth1 entry in /etc/udev/rules.d/70-persistent-net.rules, otherwise service network restart will fail with:
Bringing up interface eth1: Device eth1 does not seem to be present, delaying initialization.
[FAILED]
[root@slave2 network-scripts]# service network restart
Shutting down interface eth0: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth0: Determining if ip address 10.192.200.202 is already in use for device eth0...
[ OK ]
Bringing up interface eth1: Determining if ip address 10.0.0.2 is already in use for device eth1...
[ OK ]
[root@slave2 network-scripts]# ifconfig
eth0 Link encap:Ethernet HWaddr 08:00:27:04:05:16
inet addr:10.192.200.202 Bcast:10.192.200.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe04:516/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3303 errors:0 dropped:0 overruns:0 frame:0
TX packets:343 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:339469 (331.5 KiB) TX bytes:49274 (48.1 KiB)
eth1 Link encap:Ethernet HWaddr 08:00:27:3A:EC:3C
inet addr:10.0.0.2 Bcast:10.0.0.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe3a:ec3c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:636 (636.0 b)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
For enterprises preparing to adopt virtualization, common questions include: what services is virtualization mostly used to host? How should Hyper-V's physical NICs be set up? Should a virtual machine's NIC be configured as external, internal, or private?
In practice, any service that runs on physical hardware can be moved into a virtual machine, mirroring the real deployment and anticipating likely failure scenarios. Maintaining one host per service keeps things simple but is not cost-effective; virtualization lets a single physical server host multiple services, which adds environmental complexity but saves cost.
For example, if the physical DNS host sits on a public IP, then in the virtual environment it likewise needs a NIC with a real (external) IP; if the company's internal web host sits on the internal network, then in the virtual environment it can likewise be given a NIC with an internal IP. There is no need to overthink it: however a service was wired physically, wire it the same way on the virtual host.
As another example, a virtualization host can take three or more NICs (depending on the motherboard), subdivided into external, internal, DMZ, VLAN1, VLAN2, and so on. An external NIC can be assigned to virtual machines A and B to carry real external IPs, while an internal NIC can likewise be assigned to A and B to carry internal IPs.
As for whether everything belongs in the same domain: services that ought to be isolated should still get their own (virtual) host — that is the correct approach.