Buffer busy waits occur when multiple processes compete for access to the same blocks in the buffer cache of an Oracle database. To address this issue, you need to identify which blocks are causing the contention and understand why. The most effective solution is to eliminate the root cause of the contention.
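One way to identify the blocks involved (for the buffer busy waits event, P1 is the absolute file number and P2 is the block number) is to map the sessions currently waiting to the owning segment. Treat this as a diagnostic sketch; scanning DBA_EXTENTS can be slow on large databases.

SELECT S.SID, E.OWNER, E.SEGMENT_NAME, E.SEGMENT_TYPE
FROM V$SESSION S, DBA_EXTENTS E
WHERE S.EVENT = 'buffer busy waits'
AND E.FILE_ID = S.P1
AND S.P2 BETWEEN E.BLOCK_ID AND E.BLOCK_ID + E.BLOCKS - 1;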

In many cases, buffer busy waits for data blocks occur because multiple processes repeatedly access the same blocks. For example, if several users are scanning the same index, the first session quickly processes the blocks that are already in the buffer cache. However, when a block needs to be read from disk, the other sessions scanning the same index catch up and also require that block. As a result, they must wait on the buffer because another session is already reading the block from disk.

To mitigate buffer busy waits, analyze and optimize the access patterns to these blocks. This may involve tuning queries, adjusting indexing strategies, or reconsidering how concurrent operations use the database. By resolving the contention and optimizing these operations, buffer busy waits can be reduced, leading to improved performance and shorter waits for buffers in the cache.
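As a starting point for this analysis, the following query lists the segments that have accumulated the most buffer busy waits since instance startup (the row limit is illustrative, and the FETCH FIRST clause requires 12c or later):

SELECT OWNER, OBJECT_NAME, OBJECT_TYPE, VALUE
FROM V$SEGMENT_STATISTICS
WHERE STATISTIC_NAME = 'buffer busy waits'
ORDER BY VALUE DESC
FETCH FIRST 10 ROWS ONLY;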

If Performance Objectives are not being met, Quality of Service Management makes a recommendation. Each recommendation focuses on improving the highest-ranked Performance Class that is not meeting its Performance Objective. Recommendations may include changing consumer group mappings, which reprioritizes work within existing resource boundaries. For example, changing consumer group mappings may involve promoting a specific workload so it gets a larger share of resources, or demoting a competing workload to make additional resources available to the target Performance Class. Another recommendation is to move servers between server pools and reprioritize resources between them to meet workload demands; effectively, taking a node out of one pool and adding it to another pool gives more resources to the Performance Class running in that pool. A third recommendation is to move CPUs between databases within a server pool, reprioritizing CPU resources within existing server pool boundaries. This is called instance caging: the CPU_COUNT parameter is set to limit the number of CPUs an instance can use on a node.
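Mechanically, instance caging amounts to something like the following sketch (the CPU value and plan name are illustrative; any enabled Resource Manager plan will do, and when QoS Management implements such a recommendation it adjusts CPU_COUNT for you):

SQL> ALTER SYSTEM SET CPU_COUNT = 4 SCOPE=BOTH;
SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DEFAULT_PLAN' SCOPE=BOTH;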

A Quality of Service Management recommendation to improve the performance of a particular Performance Class adds more of the bottlenecked resource (such as CPU time) for that Performance Class, making the bottlenecked resource available more quickly to work requests in that Performance Class. Adding more resources to a Performance Class that is not performing well means taking resources away from another Performance Class. The Performance Class from which the resources are removed should be less business-critical than the one being helped, so that overall the reallocation of resources is beneficial to the business. When generating recommendations, Quality of Service Management evaluates the impact on the performance of the system as a whole. For example, if the improvement for one Performance Class is rather tiny but the adverse effect on another Performance Class is significant, Quality of Service Management might report that the performance gain is too small and not make the recommendation. If there is more than one way to resolve the bottleneck, Quality of Service Management advises the best overall recommendation, taking into account factors such as the calculated impact on all the Performance Classes and the predicted disruption and settling time associated with the action. Using Oracle Enterprise Manager, you can view the current and the alternative recommendations. Performance data is sent to Oracle Enterprise Manager for display on the Quality of Service Management Dashboard and Performance History pages. By default, Oracle Database QoS Management does not automatically implement recommendations; instead, it makes recommendations to improve performance, which an administrator must then implement by clicking the Implement button. Starting with version 12.1.0.2, Quality of Service Management allows you to specify authorized automatic actions that it can implement without the intervention of an administrator.

You query the V$INSTANCE_RECOVERY view and consistently receive an OPTIMAL_LOGFILE_SIZE value that is greater than the size of your smallest online redo log file. The OPTIMAL_LOGFILE_SIZE column of the V$INSTANCE_RECOVERY view can be used to determine the appropriate size for all of the online redo log files in your database. If the value of the OPTIMAL_LOGFILE_SIZE column is greater than the size of your smallest online redo log file, you should change the size of all online redo log files to be at least this value. In addition, the FAST_START_MTTR_TARGET initialization parameter simplifies the configuration of recovery time from instance or system failure. After adjusting the size of your online redo log files, you may be able to adjust the value of this initialization parameter for better performance: you can rerun the MTTR Advisor after resizing the online redo log files to achieve more optimal results. However, running the MTTR Advisor by itself is not the best option in this situation for improving instance recovery performance.
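For example, the following query shows the advised redo log size (OPTIMAL_LOGFILE_SIZE is reported in megabytes) alongside the current MTTR figures:

SELECT OPTIMAL_LOGFILE_SIZE, TARGET_MTTR, ESTIMATED_MTTR
FROM V$INSTANCE_RECOVERY;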

 

You use the DBMS_RESOURCE_MANAGER package to create a CDB resource plan and define the directives for the plan. First, connect to the root container of your CDB as the SYS user. Then, create a pending area using the CREATE_PENDING_AREA procedure. After the pending area has been created, use the CREATE_CDB_PLAN procedure to create the CDB resource plan. Next, create the CDB resource plan directives for the PDBs using the CREATE_CDB_PLAN_DIRECTIVE procedure; each directive specifies how resources are allocated to a specific PDB. Finally, validate the pending area and then submit it, using the VALIDATE_PENDING_AREA and SUBMIT_PENDING_AREA procedures, respectively.
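A minimal sketch of these steps, run as SYS in the CDB root (the plan name, PDB name, and directive values are illustrative only):

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'my_cdb_plan',
    comment => 'CDB plan with per-PDB directives');
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'my_cdb_plan',
    pluggable_database    => 'pdb1',
    shares                => 3,
    utilization_limit     => 100,
    parallel_server_limit => 100);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/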

 

Oracle Groundbreakers EMEA 2021
Michigan Oracle Users Summit 2021 –

The paper below describes an automated system that generates, selects, verifies, and maintains materialized views in the Oracle RDBMS; it presents a novel technique, called the extended covering subexpression algorithm, for the automated generation of materialized views. An extensive set of experiments is described that demonstrates the feasibility and efficiency of this approach. This system has been fully implemented and will be deployed on the Oracle Autonomous Database on the Cloud.

For more information see https://dl.acm.org/doi/abs/10.14778/3415478.3415533

Please download and install the one-off patch for your database version and OS for bugs not yet included in an RU. If you do not find a patch for your specific version and OS on MOS, please open a Service Request with details of the patch needed. In addition, please include a list of patches already applied (opatch lsinventory -detail) and any other patches you intend to apply.

Some of the Optimizer bug fixes (not all) are controlled by a "_FIX_CONTROL" setting and are DISABLED by default in 19c when installed either through a one-off backport or through RUs. For one-off patches, please refer to the corresponding patch readme carefully and enable the fix control to activate the fix.

Example:

SQL> alter system set "_fix_control"='29302565:1';

V$TRANSACTION lists the active transactions in the system.

(i) The following columns together identify a transaction; that is, their combination gives a unique transaction ID for that database.

XIDUSN – Undo segment number
XIDSLOT – Slot number
XIDSQN – Sequence number

(ii) The following columns show the number of undo blocks and undo records used per transaction.
USED_UBLK – Number of undo blocks used
USED_UREC – Number of undo records used

In the case of a transaction rollback, these columns indicate approximately how many undo blocks and undo records still need to be rolled back.

The number of undo records and undo blocks (USED_UREC and USED_UBLK) decreases while the transaction rolls back. When they reach 0, the transaction disappears from v$transaction.

The following query can be used to monitor the transaction rollback.
SELECT A.SID, A.USERNAME, B.XIDUSN, B.USED_UREC, B.USED_UBLK
FROM V$SESSION A, V$TRANSACTION B
WHERE A.SADDR=B.SES_ADDR;

(iii) The STATUS column shows the status of the transaction.

ACTIVE – Indicates that the transaction is active.

Before performing a normal or transactional shutdown, you can check this view for any ACTIVE transactions.
SELECT XIDUSN, XIDSLOT, XIDSQN, SES_ADDR, STATUS FROM V$TRANSACTION;

 

You can import database objects that are enabled for the IM column store using the TRANSFORM=INMEMORY:y option of the ‘impdp’ command. With this option, Data Pump keeps the IM column store clause for all objects that have one. When the TRANSFORM=INMEMORY:n option is specified, Data Pump drops the IM column store clause from all objects that have one.

You can also use the TRANSFORM=INMEMORY_CLAUSE:string option to override the IM column store clause for a database object in the dump file during import. For example, you can use this option to change the IM column store compression for a database object being imported.

Example:

transform=inmemory_clause:\"INMEMORY MEMCOMPRESS FOR DML PRIORITY CRITICAL\"

Note the \ characters are necessary to “escape” the double quotes so they are not stripped by the OS.

 

Reference:

Oracle Database In-Memory Option (DBIM) Basics and Interaction with Data Warehousing Features (Doc ID 1903683.1)

The Oracle CBO performs transitivity as one of the first steps in optimization. Transitivity means generating additional predicates based on existing predicates. The main advantage is that the extra predicates can enable additional access methods (for example, an index on the other column). Transitivity is not done for join predicates.

Assume A, B, and C are columns. If A=5 and A=B, you can conclude that B=5; note that A=5 is a predicate with a constant. This can be applied to queries as shown in the example below.
However, in terms of joins, if A=B and B=C, it is not concluded that A=C; in other words, the Oracle server does not generate transitive closures for join predicates.
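For example, with illustrative tables T1 and T2, given the predicates T1.A = 5 and T1.A = T2.B, the optimizer can transitively add T2.B = 5, which may open up an additional access path (such as an index on T2.B):

SELECT *
FROM T1, T2
WHERE T1.A = 5
AND T1.A = T2.B;

The statement is then optimized as if the WHERE clause were T1.A = 5 AND T1.A = T2.B AND T2.B = 5.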
For more details about transitivity, see Doc ID 68979.1