Speed up database code with result caching

One of the most interesting new features of Oracle 11g, for me, is the new function result caching mechanism. Until now, making sure that a PL/SQL function was executed only as many times as strictly necessary was a black art. The new caching system makes it easy – here is how it works.

Enabling the cache

To turn on caching for a function, add RESULT_CACHE immediately before IS or AS in the function declaration. An optional RELIES_ON clause can specify a dependency on one or more database tables or views; any change to a listed object then invalidates the cached results. So, for example, we can write a function that computes some on-the-fly statistics depending on a configuration parameter: if the parameter table changes, the function's results are automatically purged from the cache. This mechanism is intended to replace the home-grown caching systems developed over the years with triggers and PL/SQL package variables.
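As a minimal sketch of the syntax (the PARAMS table and its NAME/VALUE columns are hypothetical, invented for illustration), a function that depends on a configuration table could be declared like this:

```sql
-- Hypothetical example: PARAMS is an imagined configuration table
-- with NAME and VALUE columns. RESULT_CACHE goes just before IS/AS;
-- RELIES_ON lists the objects whose changes should invalidate the cache.
create or replace function GET_THRESHOLD(pNAME varchar2) return number
	RESULT_CACHE relies_on (PARAMS)
as
	lValue number;
begin
	select value into lValue from PARAMS where name = pNAME;
	return lValue;
end;
/
```

After the first call for a given parameter name, repeated calls are served from the cache until someone commits a change to PARAMS.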

A new supplied package, DBMS_RESULT_CACHE, can be used to get statistics and manipulate the cache directly. The two most interesting subprograms in that package are MEMORY_REPORT and FLUSH: FLUSH clears the cache, and MEMORY_REPORT prints a memory usage report to server output.
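For instance, a quick way to inspect the cache from SQL*Plus (assuming the necessary privileges) might look like this; hit and miss counters are also exposed through the V$RESULT_CACHE_STATISTICS view:

```sql
-- Clear the result cache, then print the memory report to server output
set serveroutput on
exec dbms_result_cache.flush;
exec dbms_result_cache.memory_report;

-- Cache counters are also exposed through a dynamic performance view
select name, value from v$result_cache_statistics;
```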

A practical example

So let’s try it out. We’ll compute some statistics over a table and run the query a couple of times, without and with caching. I typically run tests like these on sys.all_objects, but result caching is disabled on SYS-owned structures. For the test, we’ll first copy ALL_OBJECTS into a table in a user schema. You’ll probably need DBA privileges to run this script because of the cache flushing, but result caching itself is available to ordinary users.


spool c:\log.txt 

drop table BIGTABLE;

create table BIGTABLE as 
select * from all_objects;

SET SERVEROUTPUT ON

exec dbms_result_cache.flush;

alter system flush shared_pool
/

exec dbms_result_cache.memory_report;

create or replace function COUNT_OBJECTS(pOBJECT_TYPE varchar2) return number as
	lCount number;
begin
	select count(*) into lCount from BIGTABLE where object_type = pOBJECT_TYPE;
	return lCount;
end;
/

create or replace function COUNT_OBJECTS2(pOBJECT_TYPE varchar2) return number
	RESULT_CACHE relies_on (BIGTABLE)
as
	lCount number;
begin
	select count(*) into lCount from BIGTABLE where object_type = pOBJECT_TYPE;
	return lCount;
end;
/

set timing on;

set autotrace traceonly;

select object_type, COUNT_OBJECTS(object_type) from BIGTABLE group by object_type;

select object_type, COUNT_OBJECTS(object_type) from BIGTABLE group by object_type;

select object_type, COUNT_OBJECTS2(object_type) from BIGTABLE group by object_type;

exec dbms_result_cache.memory_report;

select object_type, COUNT_OBJECTS2(object_type) from BIGTABLE group by object_type;

exec dbms_result_cache.memory_report;

spool off;

The functions COUNT_OBJECTS and COUNT_OBJECTS2 are identical, apart from the result caching. When the script runs, the difference shows up in the autotrace statistics and the memory report. The first execution, without caching, produces this:


36 rows selected.

Elapsed: 00:00:00.42

Execution Plan
----------------------------------------------------------                      
Plan hash value: 2406321947                                                     
                                                                                
------------------------------------------------------------------------------- 
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     | 
------------------------------------------------------------------------------- 
|   0 | SELECT STATEMENT   |          | 63870 |   686K|   284   (2)| 00:00:04 | 
|   1 |  HASH GROUP BY     |          | 63870 |   686K|   284   (2)| 00:00:04 | 
|   2 |   TABLE ACCESS FULL| BIGTABLE | 63870 |   686K|   281   (1)| 00:00:04 | 
------------------------------------------------------------------------------- 
                                                                                
Note                                                                            
-----                                                                           
   - dynamic sampling used for this statement                                   


Statistics
----------------------------------------------------------                      
        248  recursive calls                                                    
          0  db block gets                                                      
      37423  consistent gets                                                    
       1006  physical reads                                                     
          0  redo size                                                          
       1179  bytes sent via SQL*Net to client                                   
        356  bytes received via SQL*Net from client                             
          4  SQL*Net roundtrips to/from client                                  
          2  sorts (memory)                                                     
          0  sorts (disk)                                                       
         36  rows processed                                                     

The second run, calling the same uncached function again, produces this:

36 rows selected.

Elapsed: 00:00:00.37

Execution Plan
----------------------------------------------------------                      
Plan hash value: 2406321947                                                     
                                                                                
------------------------------------------------------------------------------- 
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     | 
------------------------------------------------------------------------------- 
|   0 | SELECT STATEMENT   |          | 63870 |   686K|   284   (2)| 00:00:04 | 
|   1 |  HASH GROUP BY     |          | 63870 |   686K|   284   (2)| 00:00:04 | 
|   2 |   TABLE ACCESS FULL| BIGTABLE | 63870 |   686K|   281   (1)| 00:00:04 | 
------------------------------------------------------------------------------- 
                                                                                
Note                                                                            
-----                                                                           
   - dynamic sampling used for this statement                                   


Statistics
----------------------------------------------------------                      
         38  recursive calls                                                    
          0  db block gets                                                      
      37222  consistent gets                                                    
          0  physical reads                                                     
          0  redo size                                                          
       1179  bytes sent via SQL*Net to client                                   
        356  bytes received via SQL*Net from client                             
          4  SQL*Net roundtrips to/from client                                  
          0  sorts (memory)                                                     
          0  sorts (disk)                                                       
         36  rows processed                                                     

The number of physical reads and recursive calls is much lower, because the table blocks read from disk during the first run are still in the buffer cache. Execution time drops from 420 to 370 ms. Now, the third execution, with result caching turned on:


36 rows selected.

Elapsed: 00:00:00.37

Execution Plan
----------------------------------------------------------                      
Plan hash value: 2406321947                                                     
                                                                                
------------------------------------------------------------------------------- 
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     | 
------------------------------------------------------------------------------- 
|   0 | SELECT STATEMENT   |          | 63870 |   686K|   284   (2)| 00:00:04 | 
|   1 |  HASH GROUP BY     |          | 63870 |   686K|   284   (2)| 00:00:04 | 
|   2 |   TABLE ACCESS FULL| BIGTABLE | 63870 |   686K|   281   (1)| 00:00:04 | 
------------------------------------------------------------------------------- 
                                                                                
Note                                                                            
-----                                                                           
   - dynamic sampling used for this statement                                   


Statistics
----------------------------------------------------------                      
         58  recursive calls                                                    
          0  db block gets                                                      
      37310  consistent gets                                                    
          0  physical reads                                                     
          0  redo size                                                          
       1180  bytes sent via SQL*Net to client                                   
        356  bytes received via SQL*Net from client                             
          4  SQL*Net roundtrips to/from client                                  
          0  sorts (memory)                                                     
          0  sorts (disk)                                                       
         36  rows processed                                                     

The number of recursive calls and physical reads is still lower than in the first run, because the table blocks are already in the buffer cache. Execution took about 370 ms, more or less the same as the second run. The consistent gets show that we are still scanning the whole table: they are essentially the same as in the first run.

The memory_report procedure after this call shows that some data is in the cache:

R e s u l t   C a c h e   M e m o r y   R e p o r t                             
[Parameters]                                                                    
Block Size          = 1K bytes                                                  
Maximum Cache Size  = 4064K bytes (4064 blocks)                                 
Maximum Result Size = 203K bytes (203 blocks)                                   
[Memory]                                                                        
Total Memory = 202184 bytes [0.057% of the Shared Pool]                         
... Fixed Memory = 5296 bytes [0.002% of the Shared Pool]                       
... Dynamic Memory = 196888 bytes [0.056% of the Shared Pool]                   
....... Overhead = 131352 bytes                                                 
....... Cache Memory = 64K bytes (64 blocks)                                    
........... Unused Memory = 26 blocks                                           
........... Used Memory = 38 blocks                                             
............... Dependencies = 2 blocks (2 count)                               
............... Results = 36 blocks                                             
................... PLSQL   = 36 blocks (36 count)                              
PL/SQL procedure successfully completed.

Now the last execution – repeating the function with caching turned on:

36 rows selected.

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------                      
Plan hash value: 2406321947                                                     
                                                                                
------------------------------------------------------------------------------- 
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     | 
------------------------------------------------------------------------------- 
|   0 | SELECT STATEMENT   |          | 63870 |   686K|   284   (2)| 00:00:04 | 
|   1 |  HASH GROUP BY     |          | 63870 |   686K|   284   (2)| 00:00:04 | 
|   2 |   TABLE ACCESS FULL| BIGTABLE | 63870 |   686K|   281   (1)| 00:00:04 | 
------------------------------------------------------------------------------- 
                                                                                
Note                                                                            
-----                                                                           
   - dynamic sampling used for this statement                                   


Statistics
----------------------------------------------------------                      
          0  recursive calls                                                    
          0  db block gets                                                      
       1006  consistent gets                                                    
          0  physical reads                                                     
          0  redo size                                                          
       1180  bytes sent via SQL*Net to client                                   
        356  bytes received via SQL*Net from client                             
          4  SQL*Net roundtrips to/from client                                  
          0  sorts (memory)                                                     
          0  sorts (disk)                                                       
         36  rows processed                                                     

So between the third and the fourth call, the number of consistent gets dropped dramatically. There were no recursive calls and no function executions: all values were read straight from the cache. Notice how the elapsed time dropped from 370 ms to 10 ms.

The cache lives in the system global area (SGA), so cached values are shared across database sessions. To conclude, adding RESULT_CACHE is an easy way to optimise on-the-fly statistics and parametrisation.
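To see the RELIES_ON invalidation in action, a quick follow-up sketch (not part of the timed run above) would be to modify BIGTABLE and call the cached function again; the first call after the commit should re-execute the query:

```sql
-- Any committed DML on BIGTABLE invalidates the cached results
update BIGTABLE set object_type = object_type where rownum = 1;
commit;

-- This call misses the (now invalid) cache, re-runs the query,
-- and repopulates the cache for subsequent calls
select COUNT_OBJECTS2('TABLE') from dual;
```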

I'm Gojko Adzic, author of Impact Mapping and Specification by Example.