Playing With Oracle ASM and Multipath Disks — Benchmarking Your Performance

Written by lolima | Published 2023/09/04
Tech Story Tags: oracle | asm | multipath | hard-disk | multipath-disks-performance | testing-multipath-disks | running-tests-on-oracle-asm | home-server


Hello, and welcome back to the last part of our article. In this part, let’s run some tests on our multipath disks.

Testing the multipath disks

--check the multipath disks
root@has03:/ $> multipath -ll
mpdata1 (36589cfc000000580ddee277e9cda411e) dm-2 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:2 sdf 8:80  active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 8:0:0:2 sdi 8:128 active ready running
mpfra1 (36589cfc000000f5feabc996aa1a12876) dm-3 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:3 sdg 8:96  active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 8:0:0:3 sdj 8:144 active ready running
mpdata2 (36589cfc00000025b4072a1536ade2f9d) dm-4 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:1 sde 8:64  active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 8:0:0:1 sdh 8:112 active ready running
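
Before pulling any cables, a tiny helper to summarize path health can save some squinting at the output above. This is only a sketch: it parses the human-readable text of multipath -ll, so the grep patterns may need tweaking for your multipath-tools version.

#!/bin/bash
# path_health.sh - rough summary of dm-multipath path states
# (a sketch: it parses the text output of `multipath -ll`, so the patterns
#  may need adjusting for your multipath-tools version)
out=$(multipath -ll)
total=$(echo "${out}" | grep -c -E '[0-9]+:[0-9]+:[0-9]+:[0-9]+ sd[a-z]+')
failed=$(echo "${out}" | grep -c 'failed faulty')
echo "paths total : ${total}"
echo "paths failed: ${failed}"
if [ "${failed}" -gt 0 ]; then
    echo "WARNING: degraded paths detected"
fi

Run it as root before and after each test to watch the counters move.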

--shut down the interface enp0s8
root@has03:/ $> nmcli con down enp0s8
Connection 'enp0s8' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)

--check the disks: now each device has a "failed faulty" path
root@has03:/ $> multipath -ll
mpdata1 (36589cfc000000580ddee277e9cda411e) dm-2 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| `- 7:0:0:2 sdf 8:80  failed faulty running
`-+- policy='service-time 0' prio=50 status=active
  `- 8:0:0:2 sdi 8:128 active ready running
mpfra1 (36589cfc000000f5feabc996aa1a12876) dm-3 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| `- 7:0:0:3 sdg 8:96  failed faulty running
`-+- policy='service-time 0' prio=50 status=active
  `- 8:0:0:3 sdj 8:144 active ready running
mpdata2 (36589cfc00000025b4072a1536ade2f9d) dm-4 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| `- 7:0:0:1 sde 8:64  failed faulty running
`-+- policy='service-time 0' prio=50 status=active
  `- 8:0:0:1 sdh 8:112 active ready running

--the ASM disks still exist
root@has03:/ $> ll /dev/oracleasm/mp*
lrwxrwxrwx. 1 root root 7 Apr 30 11:53 /dev/oracleasm/mpdata1 -> ../dm-2
lrwxrwxrwx. 1 root root 7 Apr 30 11:53 /dev/oracleasm/mpdata2 -> ../dm-4
lrwxrwxrwx. 1 root root 7 Apr 30 11:53 /dev/oracleasm/mpfra1 -> ../dm-3
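
The symlinks being there is one thing; actually being able to read through the surviving path is another. A quick, read-only way to prove it is a direct read from each device. This sketch assumes the /dev/oracleasm/mp* naming from the udev rules we set up earlier in this series.

#!/bin/bash
# verify_asm_links.sh - confirm each /dev/oracleasm symlink still points to a
# readable device-mapper node (a sketch; read-only, 1 MB per device)
for link in /dev/oracleasm/mp*; do
    target=$(readlink -f "${link}")
    if dd if="${target}" of=/dev/null bs=1M count=1 iflag=direct status=none; then
        echo "${link} -> ${target} : readable"
    else
        echo "${link} -> ${target} : READ FAILED"
    fi
done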

--Oracle Restart is still up and running
root@has03:/ $> crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       has03                    STABLE
ora.FRA.dg
               ONLINE  ONLINE       has03                    STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       has03                    STABLE
ora.asm
               ONLINE  ONLINE       has03                    Started,STABLE
ora.ons
               OFFLINE OFFLINE      has03                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       has03                    STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       has03                    STABLE
ora.money.db
      1        ONLINE  ONLINE       has03                    Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             home_1,STABLE
--------------------------------------------------------------------------------
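
If you want a second opinion from ASM itself, v$asm_disk keeps per-disk error counters. A minimal sketch, to be run as the grid user with the +ASM environment already set:

#!/bin/bash
# asm_disk_errors.sh - ask ASM whether it noticed any I/O errors during the
# failover (a sketch: run as the grid user with ORACLE_HOME and ORACLE_SID=+ASM set)
sqlplus -S / as sysasm <<'EOF'
set linesize 120 pagesize 100
column path format a40
select path, mode_status, state, read_errs, write_errs
from   v$asm_disk
order  by path;
EOF

As long as read_errs and write_errs stay at zero, ASM never even noticed the path flapping.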

So, our multipath setup seems to be working fine. Let's bring enp0s8 back up and then try shutting down the other interface.

--bring the interface enp0s8 back up
root@has03:/ $> nmcli con up enp0s8
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)

--check the multipath disks
--no more faulty paths
root@has03:/ $> multipath -ll
mpdata1 (36589cfc000000580ddee277e9cda411e) dm-2 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:2 sdf 8:80  active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 8:0:0:2 sdi 8:128 active ready running
mpfra1 (36589cfc000000f5feabc996aa1a12876) dm-3 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:3 sdg 8:96  active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 8:0:0:3 sdj 8:144 active ready running
mpdata2 (36589cfc00000025b4072a1536ade2f9d) dm-4 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:1 sde 8:64  active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 8:0:0:1 sdh 8:112 active ready running

--shut down the interface enp0s9
root@has03:/ $> nmcli con down enp0s9
Connection 'enp0s9' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)

--check the multipath disks
--now, the other path is faulty, as expected
root@has03:/ $> multipath -ll
mpdata1 (36589cfc000000580ddee277e9cda411e) dm-2 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:2 sdf 8:80  active ready running
`-+- policy='service-time 0' prio=0 status=enabled
  `- 8:0:0:2 sdi 8:128 failed faulty running
mpfra1 (36589cfc000000f5feabc996aa1a12876) dm-3 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:3 sdg 8:96  active ready running
`-+- policy='service-time 0' prio=0 status=enabled
  `- 8:0:0:3 sdj 8:144 failed faulty running
mpdata2 (36589cfc00000025b4072a1536ade2f9d) dm-4 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:1 sde 8:64  active ready running
`-+- policy='service-time 0' prio=0 status=enabled
  `- 8:0:0:1 sdh 8:112 failed faulty running

--our HAS stack is still working like a charm
root@has03:/ $> crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       has03                    STABLE
ora.FRA.dg
               ONLINE  ONLINE       has03                    STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       has03                    STABLE
ora.asm
               ONLINE  ONLINE       has03                    Started,STABLE
ora.ons
               OFFLINE OFFLINE      has03                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       has03                    STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       has03                    STABLE
ora.money.db
      1        ONLINE  ONLINE       has03                    Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             home_1,STABLE
--------------------------------------------------------------------------------
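
By the way, if you prefer not to touch the network interfaces at all, multipathd can fail and reinstate individual paths on demand. A sketch, with the device names taken from the output above; the exact syntax may vary slightly between multipath-tools versions.

--fail one leg of mpdata1 by hand, check it, then bring it back
multipathd fail path sdf
multipath -ll mpdata1
multipathd reinstate path sdf
--give the path checker a few seconds before looking again
multipath -ll mpdata1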

Now, let’s go crazy and shut down both interfaces.

root@has03:/ $> nmcli con down enp0s8
Connection 'enp0s8' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)

root@has03:/ $> nmcli d
DEVICE  TYPE      STATE         CONNECTION
enp0s3  ethernet  connected     enp0s3
enp0s8  ethernet  disconnected  --
enp0s9  ethernet  disconnected  --
lo      loopback  unmanaged     --

--no miracle here: both paths are faulty
root@has03:/ $> multipath -ll
mpdata1 (36589cfc000000580ddee277e9cda411e) dm-2 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| `- 7:0:0:2 sdf 8:80  failed faulty running
`-+- policy='service-time 0' prio=0 status=enabled
  `- 8:0:0:2 sdi 8:128 failed faulty running
mpfra1 (36589cfc000000f5feabc996aa1a12876) dm-3 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| `- 7:0:0:3 sdg 8:96  failed faulty running
`-+- policy='service-time 0' prio=0 status=enabled
  `- 8:0:0:3 sdj 8:144 failed faulty running
mpdata2 (36589cfc00000025b4072a1536ade2f9d) dm-4 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| `- 7:0:0:1 sde 8:64  failed faulty running
`-+- policy='service-time 0' prio=0 status=enabled
  `- 8:0:0:1 sdh 8:112 failed faulty running
root@has03:/ $>

--the disks are still present
root@has03:/ $> ll /dev/oracleasm/mp*
lrwxrwxrwx. 1 root root 7 Apr 30 12:00 /dev/oracleasm/mpdata1 -> ../dm-2
lrwxrwxrwx. 1 root root 7 Apr 30 12:00 /dev/oracleasm/mpdata2 -> ../dm-4
lrwxrwxrwx. 1 root root 7 Apr 30 12:00 /dev/oracleasm/mpfra1 -> ../dm-3
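
Whether the database gets immediate I/O errors or simply hangs waiting for a path to come back depends on how the maps handle the no-path situation (the no_path_retry / queue_if_no_path settings). It is worth checking what your configuration actually does; a quick sketch:

--see how the maps behave when every path is gone: "fail" gives I/O errors right
--away, while a retry count or "queue" makes I/O wait for a path to return
multipathd show config | grep -E 'no_path_retry|queue_if_no_path'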

--but the diskgroups and the database "died", as expected
root@has03:/ $> crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  OFFLINE      has03                    STABLE
ora.FRA.dg
               ONLINE  OFFLINE      has03                    STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       has03                    STABLE
ora.asm
               ONLINE  ONLINE       has03                    Started,STABLE
ora.ons
               OFFLINE OFFLINE      has03                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       has03                    STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       has03                    STABLE
ora.money.db
      1        ONLINE  OFFLINE                               Instance Shutdown,ST
                                                             ABLE
--------------------------------------------------------------------------------
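
If you are curious about what ASM thought of all this, its alert log tells the story of the disks going offline and the disk groups being dismounted. A sketch, run as the grid user; the exact path depends on your Grid Infrastructure ORACLE_BASE and the usual ADR layout.

--tail the ASM alert log (path assumes the default ADR layout under the grid
--user's ORACLE_BASE; adjust to your environment)
tail -50 $ORACLE_BASE/diag/asm/+asm/+ASM/trace/alert_+ASM.log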

Now, let’s clean up the mess.

--let's bring the interfaces back online
root@has03:/ $> nmcli con up enp0s8
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)

root@has03:/ $> nmcli con up enp0s9
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/8)

--the disks are happy again
root@has03:/ $> multipath -ll
mpdata1 (36589cfc000000580ddee277e9cda411e) dm-2 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:2 sdf 8:80  active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 8:0:0:2 sdi 8:128 active ready running
mpfra1 (36589cfc000000f5feabc996aa1a12876) dm-3 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:3 sdg 8:96  active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 8:0:0:3 sdj 8:144 active ready running
mpdata2 (36589cfc00000025b4072a1536ade2f9d) dm-4 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:1 sde 8:64  active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 8:0:0:1 sdh 8:112 active ready running
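
In a scripted test you would want to wait until every path is really back before restarting the stack; a small polling sketch:

#!/bin/bash
# wait_for_paths.sh - poll until dm-multipath reports no faulty paths, or give up
# after two minutes (a sketch; adjust the timeout and interval to taste)
remaining=120
while [ "${remaining}" -gt 0 ]; do
    faulty=$(multipath -ll | grep -c 'failed faulty')
    if [ "${faulty}" -eq 0 ]; then
        echo "all paths healthy"
        exit 0
    fi
    echo "still ${faulty} faulty path(s), waiting..."
    sleep 5
    remaining=$((remaining - 5))
done
echo "timed out with ${faulty} faulty path(s)" >&2
exit 1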
  
--since Oracle Restart is not going to mount the disk groups again automatically, let's give it a hand
grid@has03:[GRID]:/home/grid $> crsctl stop has -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'has03'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'has03'
CRS-2673: Attempting to stop 'ora.evmd' on 'has03'
CRS-2673: Attempting to stop 'ora.asm' on 'has03'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'has03' succeeded
CRS-2677: Stop of 'ora.evmd' on 'has03' succeeded
CRS-2677: Stop of 'ora.asm' on 'has03' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'has03'
CRS-2677: Stop of 'ora.cssd' on 'has03' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'has03' has completed
CRS-4133: Oracle High Availability Services has been stopped.

grid@has03:[GRID]:/home/grid $> crsctl start has
CRS-4123: Oracle High Availability Services has been started.

--It's AAAALLLLLIVEEEEE!!!
grid@has03:[GRID]:/home/grid $> crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       has03                    STABLE
ora.FRA.dg
               ONLINE  ONLINE       has03                    STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       has03                    STABLE
ora.asm
               ONLINE  ONLINE       has03                    Started,STABLE
ora.ons
               OFFLINE OFFLINE      has03                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       has03                    STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       has03                    STABLE
ora.money.db
      1        ONLINE  ONLINE       has03                    Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             home_1,STABLE
--------------------------------------------------------------------------------
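
A lighter-touch alternative to bouncing the whole stack would have been to mount the disk groups and start the database by hand, along these lines (a sketch; the resource names match the ones used in this series):

--instead of restarting HAS: mount the disk groups as the grid user,
--then start the database as the oracle user
srvctl start diskgroup -diskgroup DATA
srvctl start diskgroup -diskgroup FRA
srvctl start database -db money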

That’s it, my friends. In this article, I hope I have shown you how to build a homemade environment to implement and test multipath disks.

Until next time.

