Hi, and welcome back to the second part. If you missed the first part, where we configured the TrueNAS, please look here. Now we will configure iSCSI, multipathing, and the ASM disks.
For the sake of simplicity, I will use a VM previously configured with Oracle Restart and a database already used in other tests. I only added two new network interfaces to the VM for multipathing.
I’ve added the TrueNAS VM IPs to /etc/hosts.
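For reference, the relevant /etc/hosts entries look like this. I’m assuming here that truenas1 maps to the 10.1.1.1 portal and truenas2 to 10.1.2.1, matching the discovery output below; adjust them to your addresses.
--example /etc/hosts entries for the two TrueNAS portals
10.1.1.1   truenas1
10.1.2.1   truenas2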
You must configure the new internal network interfaces with IPs in the same subnets as the TrueNAS portals (10.1.1.1 and 10.1.2.1).
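If you need a starting point for those interfaces, something like this does the job with NetworkManager. The connection names (enp0s8/enp0s9) and the host IPs (10.1.1.10 and 10.1.2.10) are assumptions for illustration, so adjust them to your VM.
--assign static IPs to the two new storage interfaces (names and IPs are examples)
root #> nmcli connection modify enp0s8 ipv4.method manual ipv4.addresses 10.1.1.10/24
root #> nmcli connection modify enp0s9 ipv4.method manual ipv4.addresses 10.1.2.10/24
root #> nmcli connection up enp0s8 && nmcli connection up enp0s9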
--discover the iscsi target
root #> iscsiadm -m discovery -t sendtargets -p truenas1
10.1.2.1:3260,1 iqn.2005-10.org.freenas.ctl:has03
10.1.1.1:3260,1 iqn.2005-10.org.freenas.ctl:has03
--enable automatic login during startup
root #> iscsiadm -m node --op update -n node.startup -v automatic
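To confirm the change took effect, you can dump the node records and filter the setting:
--verify the startup mode on the discovered nodes
root #> iscsiadm -m node -o show | grep node.startup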
--login to both network interfaces
root #> iscsiadm -m node -p truenas1 --login
root #> iscsiadm -m node -p truenas2 --login
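After the logins, each portal should hold one session. You can list them with:
--list the active iSCSI sessions (expect one per portal)
root #> iscsiadm -m session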
--enable and start the service
root@has03:/ $> systemctl enable multipathd
root@has03:/ $> systemctl start multipathd
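If the enable step fails because multipathd doesn’t exist on your box, install it first; on Oracle Linux and other RHEL-family distributions it ships in the device-mapper-multipath package.
--install the multipath tools if the service is missing
root@has03:/ $> dnf install -y device-mapper-multipath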
--create the multipath.conf
cat <<EOF >/etc/multipath.conf
defaults {
    find_multipaths yes
    user_friendly_names yes
    failback immediate
}
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
    devnode "^sda[0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*"
}
EOF
root@has03:/ $> ll /etc/multipath.conf
-rw-r--r--. 1 root root 849 Apr 30 10:24 /etc/multipath.conf
--reload the service
root@has03:/ $> systemctl reload multipathd
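If the new LUNs don’t appear after the reload, you can force the device maps to be rebuilt:
--force a reload of the device maps if needed
root@has03:/ $> multipath -r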
--check if the disks exist
root@has03:/ $> ll /dev/mapper
total 0
crw-------. 1 root root 10, 236 Apr 30 09:34 control
lrwxrwxrwx. 1 root root 7 Apr 30 10:24 mpathd -> ../dm-4
lrwxrwxrwx. 1 root root 7 Apr 30 10:24 mpathe -> ../dm-2
lrwxrwxrwx. 1 root root 7 Apr 30 10:24 mpathf -> ../dm-3
lrwxrwxrwx. 1 root root 7 Apr 30 09:34 ol-root -> ../dm-0
lrwxrwxrwx. 1 root root 7 Apr 30 09:34 ol-swap -> ../dm-1
Using the multipath command, you can check that each disk has a path over both networks.
root@has03:/ $> multipath -ll
mpathe (36589cfc000000580ddee277e9cda411e) dm-2 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active <--- network 1
| `- 7:0:0:2 sdf 8:80 active ready running
`-+- policy='service-time 0' prio=50 status=enabled <--- network 2
`- 8:0:0:2 sdi 8:128 active ready running
mpathd (36589cfc00000025b4072a1536ade2f9d) dm-4 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:1 sde 8:64 active ready running
`-+- policy='service-time 0' prio=50 status=enabled
`- 8:0:0:1 sdh 8:112 active ready running
mpathf (36589cfc000000f5feabc996aa1a12876) dm-3 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:3 sdg 8:96 active ready running
`-+- policy='service-time 0' prio=50 status=enabled
`- 8:0:0:3 sdj 8:144 active ready running
Let’s simplify our lives by adding aliases for the disks. I’ve updated the multipath.conf file and reloaded the service.
root@has03:/ $> cat /etc/multipath.conf
defaults {
    find_multipaths yes
    user_friendly_names yes
    failback immediate
}
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
    devnode "^sda[0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*"
}
multipaths {
    multipath {
        wwid 36589cfc000000580ddee277e9cda411e
        alias mpdata1
    }
    multipath {
        wwid 36589cfc00000025b4072a1536ade2f9d
        alias mpdata2
    }
    multipath {
        wwid 36589cfc000000f5feabc996aa1a12876
        alias mpfra1
    }
}
root@has03:/ $> systemctl reload multipathd
Now we have better names (aliases).
root@has03:/ $> multipath -ll
mpfra1 (36589cfc000000f5feabc996aa1a12876) dm-3 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:3 sdg 8:96 active ready running
`-+- policy='service-time 0' prio=50 status=enabled
`- 8:0:0:3 sdj 8:144 active ready running
mpdata2 (36589cfc00000025b4072a1536ade2f9d) dm-4 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:1 sde 8:64 active ready running
`-+- policy='service-time 0' prio=50 status=enabled
`- 8:0:0:1 sdh 8:112 active ready running
mpdata1 (36589cfc000000580ddee277e9cda411e) dm-2 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:2 sdf 8:80 active ready running
`-+- policy='service-time 0' prio=50 status=enabled
`- 8:0:0:2 sdi 8:128 active ready running
Nice, it worked. Now it’s time to use those disks in ASM. The VM has03 already has an Oracle Restart installation, so I already have three local ASM disks.
Let’s add our multipath disks to udev.
--find the disks' UUIDs
root@has03:/ $> udevadm info --query=all --name=/dev/mapper/mpdata1 | grep UUID
E: DM_UUID=mpath-36589cfc000000580ddee277e9cda411e
root@has03:/ $> udevadm info --query=all --name=/dev/mapper/mpdata2 | grep UUID
E: DM_UUID=mpath-36589cfc00000025b4072a1536ade2f9d
root@has03:/ $> udevadm info --query=all --name=/dev/mapper/mpfra1 | grep UUID
E: DM_UUID=mpath-36589cfc000000f5feabc996aa1a12876
--edit the udev rules file to add the new disks, using the UUIDs found above
root@has03:/ $> cat /etc/udev/rules.d/96-asm.rules
#multipath disks
KERNEL=="dm-*", SUBSYSTEM=="block", ENV{DM_UUID}=="mpath-36589cfc000000580ddee277e9cda411e", SYMLINK+="oracleasm/mpdata1", OWNER="grid", GROUP="asmadmin", MODE="0660", OPTIONS:="nowatch"
KERNEL=="dm-*", SUBSYSTEM=="block", ENV{DM_UUID}=="mpath-36589cfc00000025b4072a1536ade2f9d", SYMLINK+="oracleasm/mpdata2", OWNER="grid", GROUP="asmadmin", MODE="0660", OPTIONS:="nowatch"
KERNEL=="dm-*", SUBSYSTEM=="block", ENV{DM_UUID}=="mpath-36589cfc000000f5feabc996aa1a12876", SYMLINK+="oracleasm/mpfra1", OWNER="grid", GROUP="asmadmin", MODE="0660", OPTIONS:="nowatch"
#local disks
KERNEL=="sd*", SUBSYSTEM=="block", ENV{ID_SERIAL}=="VBOX_HARDDISK_VB647c8062-550861fd", SYMLINK+="oracleasm/data1", OWNER="grid", GROUP="asmadmin", MODE="0660", OPTIONS:="nowatch"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{ID_SERIAL}=="VBOX_HARDDISK_VB84084872-e9ff80ef", SYMLINK+="oracleasm/data2", OWNER="grid", GROUP="asmadmin", MODE="0660", OPTIONS:="nowatch"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{ID_SERIAL}=="VBOX_HARDDISK_VB9de588bd-fea7d130", SYMLINK+="oracleasm/fra1", OWNER="grid", GROUP="asmadmin", MODE="0660", OPTIONS:="nowatch"
--reload udev
root@has03:/ $> udevadm control --reload-rules && udevadm trigger
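If one of the rules doesn’t seem to apply, udevadm test prints the whole rule processing for a single device, which makes debugging easier. The dm-2 path below is just an example; point it at the device you’re troubleshooting.
--optional: trace the rule processing for one device
root@has03:/ $> udevadm test /sys/block/dm-2 2>&1 | grep oracleasm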
--check if the disks exist
root@has03:/ $> ll /dev/oracleasm
total 0
lrwxrwxrwx. 1 root root 6 Apr 30 10:58 data1 -> ../sdb
lrwxrwxrwx. 1 root root 6 Apr 30 10:58 data2 -> ../sdc
lrwxrwxrwx. 1 root root 6 Apr 30 10:58 fra1 -> ../sdd
lrwxrwxrwx. 1 root root 7 Apr 30 10:58 mpdata1 -> ../dm-2
lrwxrwxrwx. 1 root root 7 Apr 30 10:58 mpdata2 -> ../dm-4
lrwxrwxrwx. 1 root root 7 Apr 30 10:58 mpfra1 -> ../dm-3
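It’s also worth confirming that the ownership landed on the devices behind the symlinks, since ASM opens the device itself (the -L flag makes ls follow the links). Each one should show grid:asmadmin with mode 0660, as set in the rules.
--check the ownership behind the symlinks
root@has03:/ $> ls -lL /dev/oracleasm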
Now let’s swap the local disks for the multipath disks in the ASM diskgroups.
--check the candidate disks
grid@has03:[GRID]:/home/grid $> asmcmd lsdsk --candidate
Path
/dev/oracleasm/mpdata1
/dev/oracleasm/mpdata2
/dev/oracleasm/mpfra1
--check the current disk names
--I removed some columns from the result below
grid@has03:[GRID]:/home/grid $> asmcmd lsdsk -k
Total_MB Free_MB OS_MB Name Failgroup Path
5120 3468 5120 DATA_0000 DATA_0000 /dev/oracleasm/data1
5120 3464 5120 DATA_0001 DATA_0001 /dev/oracleasm/data2
5120 3323 5120 FRA_0000 FRA_0000 /dev/oracleasm/fra1
--swap the disks
grid@has03:[GRID]:/home/grid $> sqlplus / as sysasm
SQL*Plus: Release 19.0.0.0.0 - Production on Sat Apr 30 11:05:50 2022
Version 19.14.0.0.0
Copyright (c) 1982, 2021, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.14.0.0.0
SQL> alter diskgroup fra add disk '/dev/oracleasm/mpfra1' drop disk fra_0000 rebalance power 1024;
Diskgroup altered.
SQL> alter diskgroup data add disk '/dev/oracleasm/mpdata1', '/dev/oracleasm/mpdata2' drop disk data_0000, data_0001 rebalance power 1024;
Diskgroup altered.
SQL>
Now we wait until the rebalance ends and validate the new disks.
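You can watch the rebalance from the ASM instance: v$asm_operation shows the running operation and an estimate of the minutes left, and it returns no rows once everything is done.
--monitor the rebalance until the view comes back empty
SQL> select group_number, operation, state, power, est_minutes from v$asm_operation;
Once it’s empty, asmcmd lsdsk -k should list only the multipath disks.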
Voilà, everything is working like a charm!
That’s the end of the second part. In the next and last part, we will run some tests on our new multipath disks. If you missed the first part, where we configured the TrueNAS, please look here.
Until next time.