The error occurred when switching Data Guard from MAXIMUM PERFORMANCE to MAXIMUM AVAILABILITY mode on an Exadata system. The primary RAC database, running version 19c, crashed with an ORA-600 error. The message indicated that the redo log for group 45, sequence 509, was incorrectly placed on DAX storage. The incident details pointed to internal error code [kfk_iodone_invalid_buffer], meaning an I/O buffer failed HARD checks. The error was raised by the LGWR background process.

Changes
The issue arose after changing the Data Guard mode to MAXIMUM AVAILABILITY; switching to MAXIMUM PROTECTION did not trigger the ORA-600 error. Interestingly, a 12c database in the same Exadata rack was not affected. The problem manifested when the online or standby redo log files were located on Extreme Flash or High Capacity cell nodes. The following patches and bugs are relevant: Patch 30165493, which fixes the log file fast sync parameters for PMEMLOG; Bug 31119057, associated with the ORA-600 error; and Bug 31305624, linked to instance crashes.

Solution
When transitioning to MAXIMUM AVAILABILITY Data Guard mode on Exadata, specific steps must be taken to resolve the ORA-600 [kfk_iodone_invalid_buffer] error.

First, the dynamic parameter _smart_log_threshold_usec must be set to 0 on all database instances.

For Exadata systems with PMEM, _exa_pmemlog_threshold_usec must also be set to 0.
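As a minimal sketch, both parameters can be set with ALTER SYSTEM; the scope and sid clauses below are assumptions for a typical RAC setup and should be verified against the MOS note for your environment:

-- On all RAC instances (scope/sid clauses are assumptions for a typical setup):
SQL> alter system set "_smart_log_threshold_usec"=0 scope=both sid='*';
-- Only on Exadata systems with PMEM:
SQL> alter system set "_exa_pmemlog_threshold_usec"=0 scope=both sid='*';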

Next, download and apply Patch 31305624 from support.oracle.com.

Finally, update the database to 19.6.0.0.200114DBRU or a later version that includes the bug fix.

The ORA-00600 [qeselupdpre_20] error is triggered when updating a column in a table that is also referenced by a correlated subquery on the same table. The issue was encountered in Oracle Database versions from 12.1 up to, but not including, 12.2, with 12.1.0.2 being the confirmed affected version.

Description:
The error was caused by a regression that prevented the compare column from being marked correctly when a correlation column comes from the same table but through a different view. This regression leads to the ORA-00600 [qeselupdpre_20] error when certain UPDATE statements are executed within the affected version range.
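For illustration, a statement of roughly this shape matches the description; the table and view names are hypothetical and not taken from the bug report:

-- Hypothetical names: a correlated UPDATE where the subquery reaches the
-- same table (EMP) through a view defined over it:
update emp e
set    e.salary = (select v.avg_sal
                   from   emp_summary_vw v
                   where  v.dept_id = e.dept_id);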

Version Affected:
The problem affects Oracle Database versions from 12.1 up to, but not including, 12.2, with version 12.1.0.2 (Server Patch Set) confirmed to experience this issue. It’s important to note that the problem does not extend to versions 12.2 and above.

Workaround:
Unfortunately, there is no available workaround for the ORA-00600 [qeselupdpre_20] error in the affected versions of the Oracle database. Users encountering this issue are advised to proceed directly to the fix provided by Oracle.

Fixed:
The fix for the bug causing the ORA-00600 [qeselupdpre_20] error is first included in Oracle database version 12.2.0.1 (Base Release). Users experiencing this problem should consider upgrading their Oracle database to version 12.2.0.1 or later to resolve this issue.

Bug Description

Bug 32043701 involves row cache lock waits for sequences in Real Application Clusters (RAC), caused by the S-optimization feature for the dc_sequences cache. Row cache locks are taken on data dictionary rows, primarily to serialize changes to the data dictionary, so sessions wait for a lock on the data dictionary cache.

How to Fix
The workaround for this bug is to turn off the S-optimization feature for the sequences (dc_sequences) cache. This is done by explicitly listing the S-optimized enqueue types in the _lm_share_lock_restype initialization parameter while skipping the enqueue type QN. The specific setting is _lm_share_lock_restype='LNQAQBQCQDQEQFQGQHQIQJQKQLQMQOQPQQQRQSQTQUQVQWQXQYQZ'. It is crucial that this parameter has the same value on all instances for the workaround to resolve the issue reported in bug 32043701 effectively.
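A hedged sketch of applying the workaround follows; whether the parameter can be changed without an instance restart is an assumption to verify, so setting it in the spfile and restarting the instances is the conservative path:

-- Must end up with the same value on every instance; a restart is
-- assumed to be required for the new value to take effect:
SQL> alter system set "_lm_share_lock_restype"='LNQAQBQCQDQEQFQGQHQIQJQKQLQMQOQPQQQRQSQTQUQVQWQXQYQZ' scope=spfile sid='*';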

When patching an Oracle home, you might run into problems: OPatch might not behave as expected. It is helpful to know where to look for error messages, or where to find additional information to pass on to Oracle Support if you want to log an SR for your problem. OPatch maintains logs for apply, rollback, and lsinventory operations. The OPatch log files are located in $ORACLE_HOME/cfgtoollogs/opatch. Each time you run OPatch, a new log file is created, tagged with the operation’s timestamp. OPatch also maintains an index of processed commands and log files in the opatch_history.txt file, which lives in the same $ORACLE_HOME/cfgtoollogs/opatch directory. So if you change to that directory, you’ll see that every run of OPatch creates a date-stamped log file, and at the bottom you’ll find opatch_history.txt. If you look at that file, you’ll see a record of each time you ran the opatch apply, opatch rollback, or opatch lsinventory command. The DBMS_QOPATCH package provides a PL/SQL and SQL interface to view the installed database patches.

The package returns, in real time, the patch and patch metadata information available through the "opatch lsinventory -xml" command. It is basically a way to see which patches are applied to your database home, but from a PL/SQL or SQL interface, returning your patch information as XML. The DBMS_QOPATCH package allows users to query which patches are installed from SQL*Plus, to write wrapper programs that create reports and run validation checks across multiple environments, and to check the patches installed on cluster nodes from a single location. If you log into SQL*Plus as the SYS user and run SELECT DBMS_QOPATCH.GET_OPATCH_LSINVENTORY FROM DUAL, you’ll see a large amount of XML, which you can run through an XML parser to make sense of.
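For example, the raw XML can be made readable by transforming it with the stylesheet the package itself provides; this is a common usage pattern, shown here as a sketch:

-- Query the inventory as XML, then render it with the package's own stylesheet:
SQL> set long 200000 pagesize 0
SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory,
            dbms_qopatch.get_opatch_xslt) from dual;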

Today, 18th January 2022, the database patch bundles were released.
All the details are on MOS in Doc ID 19202201.9 and Doc ID 21202201.9; the bundles are recommended for installation on production systems.

There have been a few issues related to the Grid Inter-Process Communication (GIPC) daemon. Since it enables redundant interconnect usage, it can produce many network interconnect messages. Previously, I carried out a cyclic cleanup of the gipcd-related trace/logs. If your Clusterware is healthy and has no issues, you can purge these huge trace files. If required, keep the recent history and purge the rest. In addition, see Doc ID 25776294.8.

Please download and install the one-off patch for your database version and OS for bugs not yet included in an RU. If you do not find a patch for your specific version and OS on MOS, please open a Service Request with details of the patch needed. In addition, please include a list of the patches already applied (opatch lsinventory -detail) and any other patches you intend to apply.

Some of the optimizer bug fixes (not all) are controlled by the "_FIX_CONTROL" setting and are DISABLED by default in 19c, whether installed through a one-off backport or through RUs. For one-off patches, refer to the corresponding patch readme carefully and enable the fix control to activate the fix.

Example:

SQL> alter system set "_FIX_CONTROL"='29302565:1';
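To verify that a fix control is active, you can query v$system_fix_control; the bug number below is the one from the example above:

-- VALUE = 1 means the fix is enabled:
SQL> select bugno, value, description from v$system_fix_control where bugno = 29302565;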

After applying DB RU 19.12.0.0.210720 or DB RUR 19.11.1.0.210720, you may notice that the Block Change Tracking (BCT) file is created with a size that is not in line with the database size.
Oracle recommends that you apply interim one-off Patch 33185773 to correct this problem in the RU/RURs indicated above.
Note that the fix for this issue has been included in the October 2021 quarterly RU/RURs.
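To check the size the BCT file was actually created with, the standard v$block_change_tracking view can be queried, for example:

-- FILENAME and BYTES show the change tracking file and its current size:
SQL> select status, filename, bytes from v$block_change_tracking;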

DBMS_LOB.APPEND RAISES ORA-22275 WHEN CLOB RETURNED BY XMLSERIALIZE() GETS APPENDED TO ANOTHER CLOB
REDISCOVERY INFORMATION:

Error signaling Function: koklnflushint
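Based on the bug title, a reproduction would look roughly like the following PL/SQL sketch; the values and the temporary-LOB handling are assumptions, not the exact test case from the bug report:

-- Hypothetical shape of the failing pattern: append a CLOB produced by
-- XMLSERIALIZE() to another CLOB with DBMS_LOB.APPEND.
declare
  l_target clob;
  l_piece  clob;
begin
  dbms_lob.createtemporary(l_target, true);
  select xmlserialize(document xmltype('<a>1</a>') as clob)
    into l_piece
    from dual;
  dbms_lob.append(l_target, l_piece);  -- raised ORA-22275 in affected versions
  dbms_lob.freetemporary(l_target);
end;
/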

The fix for 32008819 is first included in 19.11.0.0.DBRU:210420 (APR 2021) DB Release Update (DB RU).