Exadata: Grid Infrastructure start hangs after DB node OS patching

Recently we patched an Exadata Eighth Rack at a customer site. The machine runs a RAC One Node configuration with 12.1 Grid Infrastructure and many 11.2 database instances. The OS patching of the first node with the dbnodeupdate.sh utility completed without any errors or warnings, but the CRS stack did not come up during the post-patch step:

  (*) 2016-11-05 13:14:27: Locking and starting Grid Infrastructure (/u01/app/12.1.0.2/grid)
  (*) 2016-11-05 13:16:27: Sleeping another 60 seconds while stack is starting (1/15)
  (*) 2016-11-05 13:17:27: Sleeping another 60 seconds while stack is starting (2/15)
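To rule out a transient problem in the post-patch step, we tried to bring the stack up by hand; a minimal sketch, run as root with the Grid Home shown in the log above:

/u01/app/12.1.0.2/grid/bin/crsctl start crs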

The manual start of the CRS stack did not work either. The output of crsctl stat res -t -init showed that the resource ora.storage was hanging during startup:

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        OFFLINE OFFLINE                               Instance Shutdown,ST
                                                             ABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       exaserv-dbadm-01         STABLE
ora.crf
      1        ONLINE  OFFLINE                               STABLE
ora.crsd
      1        ONLINE  OFFLINE                               STABLE
ora.cssd
      1        ONLINE  ONLINE       exaserv-dbadm-01         STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       exaserv-dbadm-01         STABLE
ora.ctssd
      1        ONLINE  ONLINE       exaserv-dbadm-01         OBSERVER,STABLE
ora.diskmon
      1        ONLINE  ONLINE       exaserv-dbadm-01         STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       exaserv-dbadm-01         STABLE
ora.drivers.oka
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  INTERMEDIATE exaserv-dbadm-01         STABLE
ora.gipcd
      1        ONLINE  ONLINE       exaserv-dbadm-01         STABLE
ora.gpnpd
      1        ONLINE  ONLINE       exaserv-dbadm-01         STABLE
ora.mdnsd
      1        ONLINE  ONLINE       exaserv-dbadm-01         STABLE
ora.storage
      1        ONLINE  OFFLINE      exaserv-dbadm-01         STARTING
--------------------------------------------------------------------------------
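The resource ora.storage is started by the orarootagent of the OHASD stack, so its trace file is the natural place to look for the reason of the hang. A minimal check, assuming the default Grid Infrastructure ORACLE_BASE of /u01/app/grid (the diagnostic path may differ on your system):

tail -f /u01/app/grid/diag/crs/exaserv-dbadm-01/crs/trace/ohasd_orarootagent_root.trc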

 

Another problem was that the target status of the ora.asm resource had been set to OFFLINE by the node patching. To fix this, we started the ora.asm resource manually during the post-patch procedure:

crsctl start res ora.asm -init
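Whether the repair worked can then be checked with the same -init view, which should show ora.asm with target and state ONLINE again:

crsctl stat res ora.asm -init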

The second node showed the same problem after its OS patching.
