
If you require a certain number of backups to be retained, you can set the retention policy based on the redundancy option. This option requires that a specified number of backups be cataloged before any backup is considered obsolete. The default retention policy has a redundancy of 1, which means that only one backup of each file must exist at any given time; a backup is deemed obsolete when a more recent backup of the same file exists. For example, you can use the following command to configure a retention policy with a redundancy of 2:

CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
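
To verify the new setting and see which backups RMAN now considers obsolete, you can run the following at the RMAN prompt. REPORT OBSOLETE only lists the obsolete backups; DELETE OBSOLETE actually removes them:

SHOW RETENTION POLICY;
REPORT OBSOLETE;
DELETE OBSOLETE;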

When patching an Oracle home, you might run into problems: OPatch might not behave as expected. It is helpful to know where to look for error messages and where to find additional information to pass on to Oracle Support if you want to log an SR for your problem. OPatch maintains logs for the apply, rollback, and lsinventory operations in the $ORACLE_HOME/cfgtoollogs/opatch directory. Each time you run OPatch, a new log file is created there, tagged with the timestamp of the operation. OPatch also maintains an index of processed commands and log files in the opatch_history.txt file, which lives in the same directory. If you look at that file, you will see a record of each time you ran the opatch apply, opatch rollback, or opatch lsinventory command.
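
For example, you can browse the log directory and the command history directly from the shell:

cd $ORACLE_HOME/cfgtoollogs/opatch
ls -ltr                  # one timestamped log file per OPatch run
cat opatch_history.txt   # index of apply, rollback, and lsinventory invocations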

The DBMS_QOPATCH package provides a PL/SQL or SQL interface to view the installed database patches. It returns, in real time, the patch and patch metadata information available from the “opatch lsinventory -xml” command. In other words, it is a way to see which patches are applied to your database home from a PL/SQL or SQL interface, with the patch information returned in XML format. DBMS_QOPATCH allows users to query the installed patches from SQL*Plus, write wrapper programs to create reports and run validation checks across multiple environments, and check the patches installed on cluster nodes from a single location. If you log in to SQL*Plus as the SYS user and select DBMS_QOPATCH.GET_OPATCH_LSINVENTORY from dual, you will see a large amount of XML, which you can feed to an XML parser to make sense of it.
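
A minimal example from SQL*Plus; SET LONG is needed so the XML output is not truncated, and the XMLTRANSFORM variant formats the output through the XSLT stylesheet shipped with the package:

SET LONG 1000000
SET PAGESIZE 0
SELECT DBMS_QOPATCH.GET_OPATCH_LSINVENTORY FROM DUAL;
-- Or, formatted through the package's own stylesheet:
SELECT XMLTRANSFORM(DBMS_QOPATCH.GET_OPATCH_LSINVENTORY,
                    DBMS_QOPATCH.GET_OPATCH_XSLT) FROM DUAL;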

Suppose you query the V$INSTANCE_RECOVERY view and consistently receive an OPTIMAL_LOGFILE_SIZE value that is greater than the size of your smallest online redo log file. The OPTIMAL_LOGFILE_SIZE column of V$INSTANCE_RECOVERY can be used to determine an appropriate size for all of the online redo log files in your database. If its value is greater than the size of your smallest online redo log file, you should resize all of the online redo log files to at least this value. In addition, the FAST_START_MTTR_TARGET initialization parameter simplifies the configuration of recovery time from instance or system failure. After resizing your online redo log files, you may be able to adjust the value of this parameter for better performance by rerunning the MTTR advisor. However, rerunning the MTTR advisor by itself is not the best option in this situation for improving instance recovery performance; resizing the redo log files is.
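
A quick way to compare the advised size with your current log file sizes from SQL*Plus (OPTIMAL_LOGFILE_SIZE is reported in megabytes):

SELECT optimal_logfile_size FROM v$instance_recovery;    -- advised size, in MB
SELECT group#, bytes/1024/1024 AS size_mb FROM v$log;    -- current log sizes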

 

There have been a few issues related to the Grid Inter-Process Communication (GIPC) daemon. Since it enables redundant interconnect usage, it can produce a large number of network interconnect messages. Previously, I carried out a cyclic cleanup of the ‘gipcd’-related traces/logs. If your Clusterware is healthy and has no issues, you can purge these huge trace files; if required, you can keep the recent history and purge the rest. In addition, see Doc ID 25776294.8.
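
A minimal cleanup sketch, assuming a 12c-or-later Grid Infrastructure where the Clusterware trace files live under the ADR location; verify the path on your system and review the files before deleting anything:

cd $ORACLE_BASE/diag/crs/$(hostname -s)/crs/trace
ls -lh gipcd*                                    # review the gipcd trace files first
find . -name 'gipcd*.trc' -mtime +30 -delete     # keep the last 30 days, purge the rest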

You can import database objects that are enabled for the IM column store using the TRANSFORM=INMEMORY:y option of the ‘impdp’ command. With this option, Data Pump keeps the IM column store clause for all objects that have one. When the TRANSFORM=INMEMORY:n option is specified, Data Pump drops the IM column store clause from all objects that have one.
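
For example, a hypothetical import that preserves the IM column store clauses might look like this (the user, directory, and dump file names are placeholders):

impdp system directory=DATA_PUMP_DIR dumpfile=expdat.dmp transform=inmemory:y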

You can also use the TRANSFORM=INMEMORY_CLAUSE:string option to override the IM column store clause for a database object in the dump file during import. For example, you can use this option to change the IM column store compression for a database object being imported.

Example:

transform=inmemory_clause:\"INMEMORY MEMCOMPRESS FOR DML PRIORITY CRITICAL\"

Note that the \ characters are necessary to “escape” the double quotes so they are not stripped by the OS.

 

Reference:

Oracle Database In-Memory Option (DBIM) Basics and Interaction with Data Warehousing Features (Doc ID 1903683.1)

For all offerings using Oracle Database 19c or later, if you are not licensed for Oracle Multitenant, then you may have up to 3 user-created PDBs in a given container database at any time. For all offerings using Oracle Database 12.1 through 18c, if you are not licensed for Oracle Multitenant, then the container database architecture is available in single-tenant mode, that is, with one user-created PDB, one user-created application root, and one user-created proxy PDB.

EE: Extra cost option; if you are licensed for Oracle Multitenant, then you can create up to 252 PDBs.

ODA and Exa: Extra cost option; if you are licensed for Oracle Multitenant, then you can create up to 4096 PDBs.

ExaCS/CC, DBCS EE-HP, and DBCS EE-EP: Included option; you can create up to 4096 PDBs.
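
If you are on 19c or later without a Multitenant license, one way to make sure you stay within the three-PDB allowance is the MAX_PDBS initialization parameter, which caps the number of user-created PDBs in the CDB. For example, from SQL*Plus connected to the CDB root as a privileged user:

SHOW PARAMETER max_pdbs
ALTER SYSTEM SET max_pdbs = 3;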