Oracle Database puppet module (https://github.com/biemond/biemond-oradb)

Depends on:

  • puppetlabs/concat >= 1.0.0
  • puppetlabs/stdlib >= 3.2.0

Should work on Docker, on Solaris, and on all Linux versions such as RedHat, CentOS, Ubuntu, Debian, SUSE SLES, or OracleLinux

Example of an open-source Puppet 3.4.3 puppet master configuration in a Vagrant box:

  • oradb (Oracle Database 11.2.0.1) with GoldenGate 12.1.2

Should also work with Puppet 2.7 & 3.0

Oracle Database Features

  • Oracle Grid 11.2.0.4, 12.1.0.1 Linux / Solaris installation
  • Oracle Database 12.1.0.1,12.1.0.2 Linux / Solaris installation
  • Oracle Database 11.2.0.1,11.2.0.3,11.2.0.4 Linux / Solaris installation
  • Oracle Database instance 11.2 & 12.1 with a pluggable database, or provide your own database template
  • Oracle Database Client 12.1.0.1,12.1.0.2,11.2.0.4,11.2.0.1 Linux / Solaris installation
  • Oracle Database Net configuration
  • Oracle Database Listener
  • Tnsnames entry
  • Oracle ASM
  • Oracle RAC
  • OPatch upgrade
  • Apply OPatch also for Clusterware
  • Create database instances
  • Stop/Start database instances with db_control puppet resource type

Enterprise Manager

  • Enterprise Manager Server 12c Cloud Control (12.1.0.4) installation / configuration
  • Agent installation via AgentPull.sh & AgentDeploy.sh

GoldenGate

  • GoldenGate 12.1.2, 11.2.1

Repository Creation Utility (RCU)

  • Installs an RCU repository for Oracle SOA Suite / WebCenter (11.1.1.6.0 and 11.1.1.7.0) / Oracle Identity Management (11.1.2.1)

Oracle RAC

In combination with the ora_rac module of Bert Hajee (https://forge.puppetlabs.com/hajee/ora_rac)

Oracle Database resource types

  • db_control: start, stop, or restart a database instance; also used by the dbactions.pp manifest
  • db_opatch: used by the opatch.pp manifest
  • db_rcu: used by the rcu.pp manifest
  • db_listener: start, stop, or restart the Oracle listener (supports refreshonly)

In combination with the oracle module of Bert Hajee (http://forge.puppetlabs.com/hajee/oracle) you can also:

  • create a tablespace
  • create a user with the required grants and quotas
  • create one or more roles
  • create one or more services
  • change a database init parameter (MEMORY or SPFILE)


Oracle Big files and alternate download location

Some manifests, such as oradb::installdb, opatch, or rcu, support an alternative mount point for the big Oracle setup/install files. When not provided, the files folder of the oradb puppet module is used; otherwise you can set $puppetDownloadMntPoint to one of:

  • “/mnt”
  • “/vagrant”
  • “puppet:///modules/oradb/” (default)
  • “puppet:///database/”

When the files are also locally accessible, you can set $remoteFile => false; the files will then not be copied to the download folder, only extracted or installed.
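For example, a minimal sketch (the mount path and file name here are illustrative) that installs from a local mount without copying the files first:

```puppet
# assumes the installation zip is already staged under /mnt
oradb::installdb{ '12.1.0.2_Linux-x86-64':
  version                => '12.1.0.2',
  file                   => 'V46095-01',
  databaseType           => 'SE',
  oracleBase             => '/oracle',
  oracleHome             => '/oracle/product/12.1/db',
  user                   => 'oracle',
  group                  => 'dba',
  downloadDir            => '/data/install',
  remoteFile             => false,  # files are locally accessible, only extract/install
  puppetDownloadMntPoint => '/mnt', # alternative mount point for the big install files
}
```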

templates.pp

The databaseType value must be one of the following:

  • EE = Enterprise Edition
  • SE = Standard Edition
  • SEONE = Standard Edition One
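For example, a fragment (the other parameters are as in the "Database install" chapter below) selecting the Enterprise Edition:

```puppet
oradb::installdb{ '12.1.0.2_EE':
  version      => '12.1.0.2',
  file         => 'V46095-01',
  databaseType => 'EE',  # EE = Enterprise Edition; alternatives: 'SE', 'SEONE'
  # ... remaining parameters as in the "Database install" examples
}
```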

Installation, Disk or memory issues

# hiera
hosts:
  'emdb.example.com':
    ip:                "10.10.10.15"
    host_aliases:      'emdb'
  'localhost':
    ip:                "127.0.0.1"
    host_aliases:      'localhost.localdomain,localhost4,localhost4.localdomain4'

$host_instances = hiera('hosts', {})
create_resources('host',$host_instances)

# disable the firewall
service { 'iptables':
  ensure    => stopped,
  enable    => false,
  hasstatus => true,
}

# set the swap, using the forge puppet module petems-swap_file
class { 'swap_file':
  swapfile     => '/var/swap.1',
  swapfilesize => '8192000000'
}

# set the tmpfs
mount { '/dev/shm':
  ensure      => present,
  atboot      => true,
  device      => 'tmpfs',
  fstype      => 'tmpfs',
  options     => 'size=3500m',
}

See the chapter "Linux kernel, ulimits and required packages" below for more important information.

Linux kernel, ulimits and required packages

Install the following module to set the database kernel parameters: puppet module install fiddyspence-sysctl

Install the following module to set the database user limits: puppet module install erwbgy-limits

   $all_groups = ['oinstall','dba' ,'oper']

   group { $all_groups :
     ensure      => present,
   }

   user { 'oracle' :
     ensure      => present,
     uid         => 500,
     gid         => 'oinstall',
     groups      => ['oinstall','dba','oper'],
     shell       => '/bin/bash',
     password    => '$1$DSJ51vh6$4XzzwyIOk6Bi/54kglGk3.',
     home        => "/home/oracle",
     comment     => "This user oracle was created by Puppet",
     require     => Group[$all_groups],
     managehome  => true,
   }

   sysctl { 'kernel.msgmnb':                 ensure => 'present', permanent => 'yes', value => '65536',}
   sysctl { 'kernel.msgmax':                 ensure => 'present', permanent => 'yes', value => '65536',}
   sysctl { 'kernel.shmmax':                 ensure => 'present', permanent => 'yes', value => '2588483584',}
   sysctl { 'kernel.shmall':                 ensure => 'present', permanent => 'yes', value => '2097152',}
   sysctl { 'fs.file-max':                   ensure => 'present', permanent => 'yes', value => '6815744',}
   sysctl { 'net.ipv4.tcp_keepalive_time':   ensure => 'present', permanent => 'yes', value => '1800',}
   sysctl { 'net.ipv4.tcp_keepalive_intvl':  ensure => 'present', permanent => 'yes', value => '30',}
   sysctl { 'net.ipv4.tcp_keepalive_probes': ensure => 'present', permanent => 'yes', value => '5',}
   sysctl { 'net.ipv4.tcp_fin_timeout':      ensure => 'present', permanent => 'yes', value => '30',}
   sysctl { 'kernel.shmmni':                 ensure => 'present', permanent => 'yes', value => '4096', }
   sysctl { 'fs.aio-max-nr':                 ensure => 'present', permanent => 'yes', value => '1048576',}
   sysctl { 'kernel.sem':                    ensure => 'present', permanent => 'yes', value => '250 32000 100 128',}
   sysctl { 'net.ipv4.ip_local_port_range':  ensure => 'present', permanent => 'yes', value => '9000 65500',}
   sysctl { 'net.core.rmem_default':         ensure => 'present', permanent => 'yes', value => '262144',}
   sysctl { 'net.core.rmem_max':             ensure => 'present', permanent => 'yes', value => '4194304', }
   sysctl { 'net.core.wmem_default':         ensure => 'present', permanent => 'yes', value => '262144',}
   sysctl { 'net.core.wmem_max':             ensure => 'present', permanent => 'yes', value => '1048576',}

   class { 'limits':
     config => {
                '*'       => { 'nofile'  => { soft => '2048'   , hard => '8192',   },},
                'oracle'  => { 'nofile'  => { soft => '65536'  , hard => '65536',  },
                                'nproc'  => { soft => '2048'   , hard => '16384',  },
                                'stack'  => { soft => '10240'  ,},},
                },
     use_hiera => false,
   }

   $install = [ 'binutils.x86_64', 'compat-libstdc++-33.x86_64', 'glibc.x86_64','ksh.x86_64','libaio.x86_64',
                'libgcc.x86_64', 'libstdc++.x86_64', 'make.x86_64','compat-libcap1.x86_64', 'gcc.x86_64',
                'gcc-c++.x86_64','glibc-devel.x86_64','libaio-devel.x86_64','libstdc++-devel.x86_64',
                'sysstat.x86_64','unixODBC-devel','glibc.i686','libXext.x86_64','libXtst.x86_64']

   package { $install:
     ensure  => present,
   }

Database install

$puppetDownloadMntPoint = "puppet:///modules/oradb/"

oradb::installdb{ '12.1.0.2_Linux-x86-64':
  version                => '12.1.0.2',
  file                   => 'V46095-01',
  databaseType           => 'SE',
  oracleBase             => '/oracle',
  oracleHome             => '/oracle/product/12.1/db',
  bashProfile            => true,
  user                   => 'oracle',
  group                  => 'dba',
  group_install          => 'oinstall',
  group_oper             => 'oper',
  downloadDir            => '/data/install',
  zipExtract             => true,
  puppetDownloadMntPoint => $puppetDownloadMntPoint,
}

or with zipExtract => false (does not download or extract; the software is already present in /install/linuxamd64_12c_database)

oradb::installdb{ '12.1.0.1_Linux-x86-64':
  version                => '12.1.0.1',
  file                   => 'linuxamd64_12c_database',
  databaseType           => 'SE',
  oracleBase             => '/oracle',
  oracleHome             => '/oracle/product/12.1/db',
  bashProfile            => true,
  user                   => 'oracle',
  group                  => 'dba',
  group_install          => 'oinstall',
  group_oper             => 'oper',
  downloadDir            => '/install',
  zipExtract             => false,
}

or

oradb::installdb{ '112040_Linux-x86-64':
  version                => '11.2.0.4',
  file                   => 'p13390677_112040_Linux-x86-64',
  databaseType           => 'SE',
  oracleBase             => '/oracle',
  oracleHome             => '/oracle/product/11.2/db',
  eeOptionsSelection     => true,
  eeOptionalComponents   => 'oracle.rdbms.partitioning:11.2.0.4.0,oracle.oraolap:11.2.0.4.0,oracle.rdbms.dm:11.2.0.4.0,oracle.rdbms.dv:11.2.0.4.0,oracle.rdbms.lbac:11.2.0.4.0,oracle.rdbms.rat:11.2.0.4.0',
  user                   => 'oracle',
  group                  => 'dba',
  group_install          => 'oinstall',
  group_oper             => 'oper',
  downloadDir            => '/install',
  zipExtract             => true,
  puppetDownloadMntPoint => $puppetDownloadMntPoint,
}

or

oradb::installdb{ '112030_Linux-x86-64':
  version                => '11.2.0.3',
  file                   => 'p10404530_112030_Linux-x86-64',
  databaseType           => 'SE',
  oracleBase             => '/oracle',
  oracleHome             => '/oracle/product/11.2/db',
  user                   => 'oracle',
  group                  => 'dba',
  group_install          => 'oinstall',
  group_oper             => 'oper',
  downloadDir            => '/install',
  zipExtract             => true,
  puppetDownloadMntPoint => $puppetDownloadMntPoint,
}

or

oradb::installdb{ '112010_Linux-x86-64':
  version       => '11.2.0.1',
  file          => 'linux.x64_11gR2_database',
  databaseType  => 'SE',
  oracleBase    => '/oracle',
  oracleHome    => '/oracle/product/11.2/db',
  user          => 'oracle',
  group         => 'dba',
  group_install => 'oinstall',
  group_oper    => 'oper',
  downloadDir   => '/install',
  zipExtract    => true,
 }

Patching

For opatchupgrade you need to provide the Oracle support csiNumber and supportId, and the node needs to be online. Alternatively, leave them empty (undef); this requires the expect RPM to emulate OCM.

# use this on a Grid or Database home
oradb::opatchupgrade{'112000_opatch_upgrade':
  oracleHome             => '/oracle/product/11.2/db',
  patchFile              => 'p6880880_112000_Linux-x86-64.zip',
  #  csiNumber              => '11111',
  #  supportId              => 'biemond@gmail.com',
  csiNumber              => undef,
  supportId              => undef,
  opversion              => '11.2.0.3.6',
  user                   => 'oracle',
  group                  => 'dba',
  downloadDir            => '/install',
  puppetDownloadMntPoint => $puppetDownloadMntPoint,
  require                =>  Oradb::Installdb['112030_Linux-x86-64'],
}

Opatch

# october 2014 11.2.0.4.4 patch
oradb::opatch{'19121551_db_patch':
  ensure                 => 'present',
  oracleProductHome      => hiera('oracle_home_dir'),
  patchId                => '19121551',
  patchFile              => 'p19121551_112040_Linux-x86-64.zip',
  user                   => hiera('oracle_os_user'),
  group                  => 'oinstall',
  downloadDir            => hiera('oracle_download_dir'),
  ocmrf                  => true,
  require                => Oradb::Opatchupgrade['112000_opatch_upgrade_db'],
  puppetDownloadMntPoint => hiera('oracle_source'),
}

or for clusterware, aka opatch auto

oradb::opatch{'18706472_grid_patch':
  ensure                 => 'present',
  oracleProductHome      => hiera('grid_home_dir'),
  patchId                => '18706472',
  patchFile              => 'p18706472_112040_Linux-x86-64.zip',
  clusterWare            => true,
  bundleSubPatchId       => '18522515', # sub patchid of bundle patch ( else I can't detect it if it is already applied)
  user                   => hiera('grid_os_user'),
  group                  => 'oinstall',
  downloadDir            => hiera('oracle_download_dir'),
  ocmrf                  => true,
  require                => Oradb::Opatchupgrade['112000_opatch_upgrade'],
  puppetDownloadMntPoint => hiera('oracle_source'),
}

# this 19791420 patch contains 2 patches (in different sub folders), one bundle and a normal one.
# we want to apply the bundle and need to provide the right value for bundleSubFolder
oradb::opatch{'19791420_grid_patch':
  ensure                 => 'present',
  oracleProductHome      => hiera('grid_home_dir'),
  patchId                => '19791420',
  patchFile              => 'p19791420_112040_Linux-x86-64.zip',
  clusterWare            => true,
  bundleSubPatchId       => '19121552', # sub patchid of bundle patch ( else I can't detect it if it is already applied)
  bundleSubFolder        => '19380115', # optional subfolder inside the patch zip
  user                   => hiera('grid_os_user'),
  group                  => 'oinstall',
  downloadDir            => hiera('oracle_download_dir'),
  ocmrf                  => true,
  require                => Oradb::Opatchupgrade['112000_opatch_upgrade_asm'],
  puppetDownloadMntPoint => hiera('oracle_source'),
}

# the same patch applied with opatch auto to an oracle database home, this time we need to use the 19121551 as bundleSubPatchId
# this is the october 2014  11.2.0.4.4 patch
oradb::opatch{'19791420_grid_patch':
  ensure                 => 'present',
  oracleProductHome      => hiera('oracle_home_dir'),
  patchId                => '19791420',
  patchFile              => 'p19791420_112040_Linux-x86-64.zip',
  clusterWare            => true,
  bundleSubPatchId       => '19121551', # sub patchid of bundle patch ( else I can't detect it if it is already applied)
  bundleSubFolder        => '19380115', # optional subfolder inside the patch zip
  user                   => hiera('grid_os_user'),
  group                  => 'oinstall',
  downloadDir            => hiera('oracle_download_dir'),
  ocmrf                  => true,
  require                => Oradb::Opatchupgrade['112000_opatch_upgrade_asm'],
  puppetDownloadMntPoint => hiera('oracle_source'),
}

# same patch 19791420 but then for the oracle db home, this patch requires the bundle patch of 19791420 or
# 19121551 october 2014  11.2.0.4.4 patch
oradb::opatch{'19791420_db_patch':
  ensure                 => 'present',
  oracleProductHome      => hiera('oracle_home_dir'),
  patchId                => '19791420',
  patchFile              => 'p19791420_112040_Linux-x86-64.zip',
  clusterWare            => false,
  bundleSubPatchId       => '19282021', # sub patchid of bundle patch ( else I can't detect it)
  bundleSubFolder        => '19282021', # optional subfolder inside the patch zip
  user                   => hiera('oracle_os_user'),
  group                  => 'oinstall',
  downloadDir            => hiera('oracle_download_dir'),
  ocmrf                  => true,
  require                => Oradb::Opatch['19121551_db_patch'],
  puppetDownloadMntPoint => hiera('oracle_source'),
}

Oracle net

oradb::net{ 'config net8':
  oracleHome   => '/oracle/product/11.2/db',
  version      => '11.2', # or '12.1'
  user         => 'oracle',
  group        => 'dba',
  downloadDir  => '/install',
  dbPort       => '1521', #optional
  require      => Oradb::Opatch['14727310_db_patch'],
}

Listener

db_listener{ 'startlistener':
  ensure          => 'running',  # running|start|abort|stop
  oracle_base_dir => '/oracle',
  oracle_home_dir => '/oracle/product/11.2/db',
  os_user         => 'oracle',
  listener_name   => 'listener' # which is the default and optional
}

# subscribe to changes
db_listener{ 'startlistener':
  ensure          => 'running',  # running|start|abort|stop
  oracle_base_dir => '/oracle',
  oracle_home_dir => '/oracle/product/11.2/db',
  os_user         => 'oracle',
  listener_name   => 'listener', # which is the default and optional
  refreshonly     => true,
  subscribe       => XXXXX,
}

# the old way which also calls db_listener type
oradb::listener{'start listener':
  action       => 'start',  # running|start|abort|stop
  oracleBase   => '/oracle',
  oracleHome   => '/oracle/product/11.2/db',
  user         => 'oracle',
  group        => 'dba',
  listenername => 'listener' # which is the default and optional
}

Database instance

oradb::database{ 'testDb_Create':
  oracleBase              => '/oracle',
  oracleHome              => '/oracle/product/11.2/db',
  version                 => '11.2',
  user                    => 'oracle',
  group                   => 'dba',
  downloadDir             => '/install',
  action                  => 'create',
  dbName                  => 'test',
  dbDomain                => 'oracle.com',
  dbPort                  => '1521',
  sysPassword             => 'Welcome01',
  systemPassword          => 'Welcome01',
  dataFileDestination     => "/oracle/oradata",
  recoveryAreaDestination => "/oracle/flash_recovery_area",
  characterSet            => "AL32UTF8",
  nationalCharacterSet    => "UTF8",
  initParams              => {'open_cursors'        => '1000',
                              'processes'           => '600',
                              'job_queue_processes' => '4' },
  sampleSchema            => 'TRUE',
  memoryPercentage        => "40",
  memoryTotal             => "800",
  databaseType            => "MULTIPURPOSE",
  emConfiguration         => "NONE",
  require                 => Oradb::Listener['start listener'],
}

You can also use a comma-separated string for initParams:

  initParams              => "open_cursors=1000,processes=600,job_queue_processes=4",

or based on your own template

The template must have the extension .dbt.erb, for example dbtemplate_12.1.dbt.erb. Use the puppetDownloadMntPoint parameter for the template location, or add your template to the templates directory of the oradb module.

with a template of the oradb module

oradb::database{ 'testDb_Create':
  oracleBase              => '/oracle',
  oracleHome              => '/oracle/product/12.1/db',
  version                 => '12.1',
  user                    => 'oracle',
  group                   => 'dba',
  template                => 'dbtemplate_12.1', # or dbtemplate_11gR2_asm, this will use dbtemplate_12.1.dbt.erb example template
  downloadDir             => '/install',
  action                  => 'create',
  dbName                  => 'test',
  dbDomain                => 'oracle.com',
  dbPort                  => '1521',
  sysPassword             => 'Welcome01',
  systemPassword          => 'Welcome01',
  dataFileDestination     => "/oracle/oradata",
  recoveryAreaDestination => "/oracle/flash_recovery_area",
  characterSet            => "AL32UTF8",
  nationalCharacterSet    => "UTF8",
  memoryPercentage        => "40",
  memoryTotal             => "800",
  require                 => Oradb::Listener['start listener'],
}

or your own template on your own location

  template                => 'my_dbtemplate_11gR2_asm',
  puppetDownloadMntPoint  => '/vagrant', # 'oradb' etc

12c container and pluggable databases

oradb::database{ 'oraDb':
  oracleBase              => '/oracle',
  oracleHome              => '/oracle/product/12.1/db',
  version                 => '12.1',
  user                    => 'oracle',
  group                   => 'dba',
  downloadDir             => '/install',
  action                  => 'create',
  dbName                  => 'orcl',
  dbDomain                => 'example.com',
  sysPassword             => 'Welcome01',
  systemPassword          => 'Welcome01',
  characterSet            => 'AL32UTF8',
  nationalCharacterSet    => 'UTF8',
  sampleSchema            => 'FALSE',
  memoryPercentage        => '40',
  memoryTotal             => '800',
  databaseType            => 'MULTIPURPOSE',
  emConfiguration         => 'NONE',
  dataFileDestination     => '/oracle/oradata',
  recoveryAreaDestination => '/oracle/flash_recovery_area',
  initParams              => {'open_cursors'        => '1000',
                              'processes'           => '600',
                              'job_queue_processes' => '4' },
  containerDatabase       => true, # creates a container database
}

oradb::database_pluggable{'pdb1':
  ensure                   => 'present',
  version                  => '12.1',
  oracle_home_dir          => '/oracle/product/12.1/db',
  user                     => 'oracle',
  group                    => 'dba',
  source_db                => 'orcl',
  pdb_name                 => 'pdb1',
  pdb_admin_username       => 'pdb_adm',
  pdb_admin_password       => 'Welcome01',
  pdb_datafile_destination => "/oracle/oradata/orcl/pdb1",
  create_user_tablespace   => true,
  log_output               => true,
}

# remove the pluggable database
oradb::database_pluggable{'pdb1':
  ensure                   => 'absent',
  version                  => '12.1',
  oracle_home_dir          => '/oracle/product/12.1/db',
  user                     => 'oracle',
  group                    => 'dba',
  source_db                => 'orcl',
  pdb_name                 => 'pdb1',
  pdb_datafile_destination => "/oracle/oradata/orcl/pdb1",
  log_output               => true,
}

or delete a database

oradb::database{ 'testDb_Delete':
  oracleBase              => '/oracle',
  oracleHome              => '/oracle/product/11.2/db',
  user                    => 'oracle',
  group                   => 'dba',
  downloadDir             => '/install',
  action                  => 'delete',
  dbName                  => 'test',
  sysPassword             => 'Welcome01',
  require                 => Oradb::Dbactions['start testDb'],
}

Database instance actions

db_control{'emrepos start':
  ensure                  => 'running', #running|start|abort|stop
  instance_name           => 'test',
  oracle_product_home_dir => '/oracle/product/11.2/db',
  os_user                 => 'oracle',
}

db_control{'emrepos stop':
  ensure                  => 'stop', #running|start|abort|stop
  instance_name           => 'test',
  oracle_product_home_dir => '/oracle/product/11.2/db',
  os_user                 => 'oracle',
}

# the old way
oradb::dbactions{ 'stop testDb':
  oracleHome              => '/oracle/product/11.2/db',
  user                    => 'oracle',
  group                   => 'dba',
  action                  => 'stop',
  dbName                  => 'test',
  require                 => Oradb::Database['testDb'],
}

oradb::dbactions{ 'start testDb':
  oracleHome              => '/oracle/product/11.2/db',
  user                    => 'oracle',
  group                   => 'dba',
  action                  => 'start',
  dbName                  => 'test',
  require                 => Oradb::Dbactions['stop testDb'],
}

# subscribe to changes
db_control{'emrepos restart':
  ensure                  => 'running', #running|start|abort|stop
  instance_name           => 'test',
  oracle_product_home_dir => '/oracle/product/11.2/db',
  os_user                 => 'oracle',
  refreshonly             => true,
  subscribe               => Init_param['emrepos/memory_target'],
}

oradb::autostartdatabase{ 'autostart oracle':
  oracleHome              => '/oracle/product/12.1/db',
  user                    => 'oracle',
  dbName                  => 'test',
  require                 => Oradb::Dbactions['start testDb'],
}

Tnsnames.ora

oradb::tnsnames{'orcl':
  oracleHome         => '/oracle/product/11.2/db',
  user               => 'oracle',
  group              => 'dba',
  server             => { myserver => { host => 'soadb.example.nl', port => '1521', protocol => 'TCP' }},
  connectServiceName => 'soarepos.example.nl',
  require            => Oradb::Dbactions['start oraDb'],
}

oradb::tnsnames{'test':
  oracleHome         => '/oracle/product/11.2/db',
  user               => 'oracle',
  group              => 'dba',
  server             => { myserver => { host => 'soadb.example.nl', port => '1525', protocol => 'TCP' }, myserver2 => { host => 'soadb2.example.nl', port => '1526', protocol => 'TCP' }},
  connectServiceName => 'soarepos.example.nl',
  connectServer      => 'DEDICATED',
  require            => Oradb::Dbactions['start oraDb'],
}

Grid install with ASM

  $all_groups = ['oinstall','dba' ,'oper','asmdba','asmadmin','asmoper']

  group { $all_groups :
    ensure      => present,
  }

  user { 'oracle' :
    ensure      => present,
    uid         => 500,
    gid         => 'oinstall',
    groups      => ['oinstall','dba','oper','asmdba'],
    shell       => '/bin/bash',
    password    => '$1$DSJ51vh6$4XzzwyIOk6Bi/54kglGk3.',
    home        => "/home/oracle",
    comment     => "This user oracle was created by Puppet",
    require     => Group[$all_groups],
    managehome  => true,
  }

  user { 'grid' :
    ensure      => present,
    uid         => 501,
    gid         => 'oinstall',
    groups      => ['oinstall','dba','asmadmin','asmdba','asmoper'],
    shell       => '/bin/bash',
    password    => '$1$DSJ51vh6$4XzzwyIOk6Bi/54kglGk3.',
    home        => "/home/grid",
    comment     => "This user grid was created by Puppet",
    require     => Group[$all_groups],
    managehome  => true,
  }

  ####### NFS example

  file { '/home/nfs_server_data':
    ensure  => directory,
    recurse => false,
    replace => false,
    mode    => '0775',
    owner   => 'grid',
    group   => 'asmadmin',
    require =>  User['grid'],
  }

  class { 'nfs::server':
    package => latest,
    service => running,
    enable  => true,
  }

  nfs::export { '/home/nfs_server_data':
    options => [ 'rw', 'sync', 'no_wdelay','insecure_locks','no_root_squash' ],
    clients => [ "*" ],
    require => [File['/home/nfs_server_data'],Class['nfs::server'],],
  }

  file { '/nfs_client':
    ensure  => directory,
    recurse => false,
    replace => false,
    mode    => '0775',
    owner   => 'grid',
    group   => 'asmadmin',
    require =>  User['grid'],
  }

  mounts { 'Mount point for NFS data':
    ensure  => present,
    source  => 'soadbasm:/home/nfs_server_data',
    dest    => '/nfs_client',
    type    => 'nfs',
    opts    => 'rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0',
    require => [File['/nfs_client'],Nfs::Export['/home/nfs_server_data'],]
  }

  exec { "/bin/dd if=/dev/zero of=/nfs_client/asm_sda_nfs_b1 bs=1M count=7520":
    user      => 'grid',
    group     => 'asmadmin',
    logoutput => true,
    unless    => "/usr/bin/test -f /nfs_client/asm_sda_nfs_b1",
    require   => Mounts['Mount point for NFS data'],
  }
  exec { "/bin/dd if=/dev/zero of=/nfs_client/asm_sda_nfs_b2 bs=1M count=7520":
    user      => 'grid',
    group     => 'asmadmin',
    logoutput => true,
    unless    => "/usr/bin/test -f /nfs_client/asm_sda_nfs_b2",
    require   => [Mounts['Mount point for NFS data'],
                  Exec["/bin/dd if=/dev/zero of=/nfs_client/asm_sda_nfs_b1 bs=1M count=7520"]],
  }

  exec { "/bin/dd if=/dev/zero of=/nfs_client/asm_sda_nfs_b3 bs=1M count=7520":
    user      => 'grid',
    group     => 'asmadmin',
    logoutput => true,
    unless    => "/usr/bin/test -f /nfs_client/asm_sda_nfs_b3",
    require   => [Mounts['Mount point for NFS data'],
                  Exec["/bin/dd if=/dev/zero of=/nfs_client/asm_sda_nfs_b1 bs=1M count=7520"],
                  Exec["/bin/dd if=/dev/zero of=/nfs_client/asm_sda_nfs_b2 bs=1M count=7520"],],
  }

  exec { "/bin/dd if=/dev/zero of=/nfs_client/asm_sda_nfs_b4 bs=1M count=7520":
    user      => 'grid',
    group     => 'asmadmin',
    logoutput => true,
    unless    => "/usr/bin/test -f /nfs_client/asm_sda_nfs_b4",
    require   => [Mounts['Mount point for NFS data'],
                  Exec["/bin/dd if=/dev/zero of=/nfs_client/asm_sda_nfs_b1 bs=1M count=7520"],
                  Exec["/bin/dd if=/dev/zero of=/nfs_client/asm_sda_nfs_b2 bs=1M count=7520"],
                  Exec["/bin/dd if=/dev/zero of=/nfs_client/asm_sda_nfs_b3 bs=1M count=7520"],],
  }

  $nfs_files = ['/nfs_client/asm_sda_nfs_b1','/nfs_client/asm_sda_nfs_b2','/nfs_client/asm_sda_nfs_b3','/nfs_client/asm_sda_nfs_b4']

  file { $nfs_files:
    ensure  => present,
    owner   => 'grid',
    group   => 'asmadmin',
    mode    => '0664',
    require => Exec["/bin/dd if=/dev/zero of=/nfs_client/asm_sda_nfs_b4 bs=1M count=7520"],
  }
  ###### end of NFS example


  oradb::installasm{ 'db_linux-x64':
    version                => hiera('db_version'),
    file                   => hiera('asm_file'),
    gridType               => 'HA_CONFIG',
    gridBase               => hiera('grid_base_dir'),
    gridHome               => hiera('grid_home_dir'),
    oraInventoryDir        => hiera('oraInventory_dir'),
    userBaseDir            => '/home',
    user                   => hiera('grid_os_user'),
    group                  => 'asmdba',
    group_install          => 'oinstall',
    group_oper             => 'asmoper',
    group_asm              => 'asmadmin',
    sys_asm_password       => 'Welcome01',
    asm_monitor_password   => 'Welcome01',
    asm_diskgroup          => 'DATA',
    disk_discovery_string  => "/nfs_client/asm*",
    disks                  => "/nfs_client/asm_sda_nfs_b1,/nfs_client/asm_sda_nfs_b2",
    # disk_discovery_string  => "ORCL:*",
    # disks                  => "ORCL:DISK1,ORCL:DISK2",
    disk_redundancy        => "EXTERNAL",
    downloadDir            => hiera('oracle_download_dir'),
    remoteFile             => false,
    puppetDownloadMntPoint => hiera('oracle_source'),
  }

  oradb::opatchupgrade{'112000_opatch_upgrade_asm':
    oracleHome             => hiera('grid_home_dir'),
    patchFile              => 'p6880880_112000_Linux-x86-64.zip',
    csiNumber              => undef,
    supportId              => undef,
    opversion              => '11.2.0.3.6',
    user                   => hiera('grid_os_user'),
    group                  => 'oinstall',
    downloadDir            => hiera('oracle_download_dir'),
    puppetDownloadMntPoint => hiera('oracle_source'),
    require                => Oradb::Installasm['db_linux-x64'],
  }

  oradb::opatch{'19791420_grid_patch':
    ensure                 => 'present',
    oracleProductHome      => hiera('grid_home_dir'),
    patchId                => '19791420',
    patchFile              => 'p19791420_112040_Linux-x86-64.zip',
    clusterWare            => true,
    bundleSubPatchId       => '19121552', # sub patchid of bundle patch ( else I can't detect it)
    bundleSubFolder        => '19380115', # optional subfolder inside the patch zip
    user                   => hiera('grid_os_user'),
    group                  => 'oinstall',
    downloadDir            => hiera('oracle_download_dir'),
    ocmrf                  => true,
    require                => Oradb::Opatchupgrade['112000_opatch_upgrade_asm'],
    puppetDownloadMntPoint => hiera('oracle_source'),
  }

  oradb::installdb{ 'db_linux-x64':
    version                => hiera('db_version'),
    file                   => hiera('db_file'),
    databaseType           => 'EE',
    oraInventoryDir        => hiera('oraInventory_dir'),
    oracleBase             => hiera('oracle_base_dir'),
    oracleHome             => hiera('oracle_home_dir'),
    userBaseDir            => '/home',
    user                   => hiera('oracle_os_user'),
    group                  => 'dba',
    group_install          => 'oinstall',
    group_oper             => 'oper',
    downloadDir            => hiera('oracle_download_dir'),
    remoteFile             => false,
    puppetDownloadMntPoint => hiera('oracle_source'),
    # require                => Oradb::Opatch['18706472_grid_patch'],
    require                => Oradb::Opatch['19791420_grid_patch'],
  }

  oradb::opatchupgrade{'112000_opatch_upgrade_db':
    oracleHome             => hiera('oracle_home_dir'),
    patchFile              => 'p6880880_112000_Linux-x86-64.zip',
    csiNumber              => undef,
    supportId              => undef,
    opversion              => '11.2.0.3.6',
    user                   => hiera('oracle_os_user'),
    group                  => hiera('oracle_os_group'),
    downloadDir            => hiera('oracle_download_dir'),
    puppetDownloadMntPoint => hiera('oracle_source'),
    require                => Oradb::Installdb['db_linux-x64'],
  }

  oradb::opatch{'19791420_db_patch':
    ensure                 => 'present',
    oracleProductHome      => hiera('oracle_home_dir'),
    patchId                => '19791420',
    patchFile              => 'p19791420_112040_Linux-x86-64.zip',
    clusterWare            => true,
    bundleSubPatchId       => '19121551', # sub patch id of the bundle patch (otherwise it cannot be detected)
    bundleSubFolder        => '19380115', # optional subfolder inside the patch zip
    user                   => hiera('oracle_os_user'),
    group                  => 'oinstall',
    downloadDir            => hiera('oracle_download_dir'),
    ocmrf                  => true,
    require                => Oradb::Opatchupgrade['112000_opatch_upgrade_db'],
    puppetDownloadMntPoint => hiera('oracle_source'),
  }

  oradb::opatch{'19791420_db_patch_2':
    ensure                 => 'present',
    oracleProductHome      => hiera('oracle_home_dir'),
    patchId                => '19791420',
    patchFile              => 'p19791420_112040_Linux-x86-64.zip',
    clusterWare            => false,
    bundleSubPatchId       => '19282021', # sub patch id of the bundle patch (otherwise it cannot be detected)
    bundleSubFolder        => '19282021', # optional subfolder inside the patch zip
    user                   => hiera('oracle_os_user'),
    group                  => 'oinstall',
    downloadDir            => hiera('oracle_download_dir'),
    ocmrf                  => true,
    require                => Oradb::Opatch['19791420_db_patch'],
    puppetDownloadMntPoint => hiera('oracle_source'),
  }

  # with the help of the oracle and easy-type module of Bert Hajee
  ora_asm_diskgroup{ 'RECO@+ASM':
    ensure          => 'present',
    au_size         => '1',
    compat_asm      => '11.2.0.0.0',
    compat_rdbms    => '10.1.0.0.0',
    diskgroup_state => 'MOUNTED',
    disks           => {'RECO_0000' => {'diskname' => 'RECO_0000', 'path' => '/nfs_client/asm_sda_nfs_b3'},
                        'RECO_0001' => {'diskname' => 'RECO_0001', 'path' => '/nfs_client/asm_sda_nfs_b4'}},
    redundancy_type => 'EXTERNAL',
    require         => Oradb::Opatch['19791420_db_patch_2'],
  }

  # based on a template
  oradb::database{ 'oraDb':
    oracleBase              => hiera('oracle_base_dir'),
    oracleHome              => hiera('oracle_home_dir'),
    version                 => hiera('dbinstance_version'),
    user                    => hiera('oracle_os_user'),
    group                   => hiera('oracle_os_group'),
    downloadDir             => hiera('oracle_download_dir'),
    action                  => 'create',
    dbName                  => hiera('oracle_database_name'),
    dbDomain                => hiera('oracle_database_domain_name'),
    sysPassword             => hiera('oracle_database_sys_password'),
    systemPassword          => hiera('oracle_database_system_password'),
    template                => 'dbtemplate_11gR2_asm',
    characterSet            => 'AL32UTF8',
    nationalCharacterSet    => 'UTF8',
    sampleSchema            => 'FALSE',
    memoryPercentage        => '40',
    memoryTotal             => '800',
    databaseType            => 'MULTIPURPOSE',
    emConfiguration         => 'NONE',
    storageType             => 'ASM',
    asmSnmpPassword         => 'Welcome01',
    asmDiskgroup            => 'DATA',
    recoveryDiskgroup       => 'RECO',
    recoveryAreaDestination => 'RECO',
    require                 => [Oradb::Opatch['19791420_db_patch_2'],
                                Ora_asm_diskgroup['RECO@+ASM'],],
  }

  # or not based on a template
  oradb::database{ 'oraDb':
    oracleBase              => hiera('oracle_base_dir'),
    oracleHome              => hiera('oracle_home_dir'),
    version                 => hiera('dbinstance_version'),
    user                    => hiera('oracle_os_user'),
    group                   => hiera('oracle_os_group'),
    downloadDir             => hiera('oracle_download_dir'),
    action                  => 'create',
    dbName                  => hiera('oracle_database_name'),
    dbDomain                => hiera('oracle_database_domain_name'),
    sysPassword             => hiera('oracle_database_sys_password'),
    systemPassword          => hiera('oracle_database_system_password'),
    characterSet            => 'AL32UTF8',
    nationalCharacterSet    => 'UTF8',
    sampleSchema            => 'FALSE',
    memoryPercentage        => '40',
    memoryTotal             => '800',
    databaseType            => 'MULTIPURPOSE',
    emConfiguration         => 'NONE',
    storageType             => 'ASM',
    asmSnmpPassword         => 'Welcome01',
    asmDiskgroup            => 'DATA',
    recoveryAreaDestination => 'DATA',
    require                 => Oradb::Opatch['19791420_db_patch_2'],
  }

Oracle Database Client

oradb::client{ '12.1.0.1_Linux-x86-64':
  version                => '12.1.0.1',
  file                   => 'linuxamd64_12c_client.zip',
  oracleBase             => '/oracle',
  oracleHome             => '/oracle/product/12.1/client',
  user                   => 'oracle',
  group                  => 'dba',
  group_install          => 'oinstall',
  downloadDir            => '/install',
  remoteFile             => true,
  puppetDownloadMntPoint => "puppet:///modules/oradb/",
  logoutput              => true,
}

or

oradb::client{ '11.2.0.1_Linux-x86-64':
  version                => '11.2.0.1',
  file                   => 'linux.x64_11gR2_client.zip',
  oracleBase             => '/oracle',
  oracleHome             => '/oracle/product/11.2/client',
  user                   => 'oracle',
  group                  => 'dba',
  group_install          => 'oinstall',
  downloadDir            => '/install',
  remoteFile             => false,
  puppetDownloadMntPoint => "/software",
  logoutput              => true,
}

Enterprise Manager

oradb::installem{ 'em12104':
  version                     => '12.1.0.4',
  file                        => 'em12104_linux64',
  oracle_base_dir             => '/oracle',
  oracle_home_dir             => '/oracle/product/12.1/em',
  agent_base_dir              => '/oracle/product/12.1/agent',
  software_library_dir        => '/oracle/product/12.1/swlib',
  weblogic_user               => 'weblogic',
  weblogic_password           => 'Welcome01',
  database_hostname           => 'emdb.example.com',
  database_listener_port      => 1521,
  database_service_sid_name   => 'emrepos.example.com',
  database_sys_password       => 'Welcome01',
  sysman_password             => 'Welcome01',
  agent_registration_password => 'Welcome01',
  deployment_size             => 'SMALL',
  user                        => 'oracle',
  group                       => 'oinstall',
  download_dir                => '/install',
  zip_extract                 => true,
  puppet_download_mnt_point   => '/software',
  remote_file                 => false,
  log_output                  => true,
}

oradb::installem_agent{ 'em12104_agent':
  version                     => '12.1.0.4',
  source                      => 'https://10.10.10.25:7802/em/install/getAgentImage',
  install_type                => 'agentPull',
  install_platform            => 'Linux x86-64',
  oracle_base_dir             => '/oracle',
  agent_base_dir              => '/oracle/product/12.1/agent',
  agent_instance_home_dir     => '/oracle/product/12.1/agent/agent_inst',
  sysman_user                 => 'sysman',
  sysman_password             => 'Welcome01',
  agent_registration_password => 'Welcome01',
  agent_port                  => 1830,
  oms_host                    => '10.10.10.25',
  oms_port                    => 7802,
  em_upload_port              => 4903,
  user                        => 'oracle',
  group                       => 'dba',
  download_dir                => '/var/tmp/install',
  log_output                  => true,
}

oradb::installem_agent{ 'em12104_agent2':
  version                     => '12.1.0.4',
  source                      => '/var/tmp/install/agent.zip',
  install_type                => 'agentDeploy',
  oracle_base_dir             => '/oracle',
  agent_base_dir              => '/oracle/product/12.1/agent2',
  agent_instance_home_dir     => '/oracle/product/12.1/agent2/agent_inst',
  agent_registration_password => 'Welcome01',
  agent_port                  => 1832,
  oms_host                    => '10.10.10.25',
  em_upload_port              => 4903,
  user                        => 'oracle',
  group                       => 'dba',
  download_dir                => '/var/tmp/install',
  log_output                  => true,
}

Database configuration

In combination with the oracle puppet module from hajee you can create or change a database init parameter, tablespace, role or an Oracle user

ora_init_param{'SPFILE/processes@soarepos':
  ensure => 'present',
  value  => '1000',
}

ora_init_param{'SPFILE/job_queue_processes@soarepos':
  ensure  => present,
  value   => '4',
}

db_control{'soarepos restart':
  ensure                  => 'running', #running|start|abort|stop
  instance_name           => hiera('oracle_database_name'),
  oracle_product_home_dir => hiera('oracle_home_dir'),
  os_user                 => hiera('oracle_os_user'),
  refreshonly             => true,
  subscribe               => [Ora_init_param['SPFILE/processes@soarepos'],
                              Ora_init_param['SPFILE/job_queue_processes@soarepos'],],
}

ora_tablespace {'JMS_TS@soarepos':
  ensure                    => present,
  datafile                  => 'jms_ts.dbf',
  size                      => '100M',
  logging                   => yes,
  autoextend                => on,
  next                      => '100M',
  max_size                  => '1G',
  extent_management         => local,
  segment_space_management  => auto,
}

ora_role {'APPS@soarepos':
  ensure    => present,
}

ora_user{'JMS@soarepos':
  ensure                    => present,
  temporary_tablespace      => temp,
  default_tablespace        => 'JMS_TS',
  password                  => 'jms',
  require                   => [Ora_tablespace['JMS_TS@soarepos'],
                                Ora_role['APPS@soarepos']],
  grants                    => ['SELECT ANY TABLE', 'CONNECT', 'CREATE TABLE', 'CREATE TRIGGER','APPS'],
  quotas                    => {
                                  "JMS_TS"  => 'unlimited'
                                },
}

Oracle GoldenGate 12.1.2 and 11.2.1

  $groups = ['oinstall','dba']

  group { $groups :
    ensure      => present,
    before      => User['ggate'],
  }

  user { 'ggate' :
    ensure      => present,
    gid         => 'dba',
    groups      => $groups,
    shell       => '/bin/bash',
    password    => '$1$DSJ51vh6$4XzzwyIOk6Bi/54kglGk3.',
    home        => "/home/ggate",
    comment     => "This user ggate was created by Puppet",
    managehome  => true,
  }

  oradb::goldengate{ 'ggate12.1.2':
    version                 => '12.1.2',
    file                    => '121200_fbo_ggs_Linux_x64_shiphome.zip',
    databaseType            => 'Oracle',
    databaseVersion         => 'ORA11g',
    databaseHome            => '/oracle/product/12.1/db',
    oracleBase              => '/oracle',
    goldengateHome          => "/oracle/product/12.1/ggate",
    managerPort             => 16000,
    user                    => 'ggate',
    group                   => 'dba',
    group_install           => 'oinstall',
    downloadDir             => '/install',
    puppetDownloadMntPoint  => hiera('oracle_source'),
    require                 => User['ggate'],
  }

  file { "/oracle/product/11.2.1" :
    ensure        => directory,
    recurse       => false,
    replace       => false,
    mode          => '0775',
    owner         => 'ggate',
    group         => 'dba',
    require       => Oradb::Goldengate['ggate12.1.2'],
  }

  oradb::goldengate{ 'ggate11.2.1':
    version                 => '11.2.1',
    file                    => 'ogg112101_fbo_ggs_Linux_x64_ora11g_64bit.zip',
    tarFile                 => 'fbo_ggs_Linux_x64_ora11g_64bit.tar',
    goldengateHome          => "/oracle/product/11.2.1/ggate",
    user                    => 'ggate',
    group                   => 'dba',
    downloadDir             => '/install',
    puppetDownloadMntPoint  => hiera('oracle_source'),
    require                 => File["/oracle/product/11.2.1"],
  }

  oradb::goldengate{ 'ggate11.2.1_java':
    version                 => '11.2.1',
    file                    => 'V38714-01.zip',
    tarFile                 => 'ggs_Adapters_Linux_x64.tar',
    goldengateHome          => "/oracle/product/11.2.1/ggate_java",
    user                    => 'ggate',
    group                   => 'dba',
    group_install           => 'oinstall',
    downloadDir             => '/install',
    puppetDownloadMntPoint  => hiera('oracle_source'),
    require                 => File["/oracle/product/11.2.1"],
  }

Oracle SOA Suite Repository Creation Utility (RCU)

Supported values for the product parameter:

  • soasuite
  • webcenter
  • all

RCU examples

SOA Suite repository

oradb::rcu{'DEV_PS6':
  rcuFile          => 'ofm_rcu_linux_11.1.1.7.0_32_disk1_1of1.zip',
  product          => 'soasuite',
  version          => '11.1.1.7',
  oracleHome       => '/oracle/product/11.2/db',
  user             => 'oracle',
  group            => 'dba',
  downloadDir      => '/install',
  action           => 'create',
  dbServer         => 'dbagent1.alfa.local:1521',
  dbService        => 'test.oracle.com',
  sysPassword      => 'Welcome01',
  schemaPrefix     => 'DEV',
  reposPassword    => 'Welcome02',
}

WebCenter repository with a fixed temp tablespace

oradb::rcu{'DEV2_PS6':
  rcuFile          => 'ofm_rcu_linux_11.1.1.7.0_32_disk1_1of1.zip',
  product          => 'webcenter',
  version          => '11.1.1.7',
  oracleHome       => '/oracle/product/11.2/db',
  user             => 'oracle',
  group            => 'dba',
  downloadDir      => '/install',
  action           => 'create',
  dbServer         => 'dbagent1.alfa.local:1521',
  dbService        => 'test.oracle.com',
  sysPassword      => 'Welcome01',
  schemaPrefix     => 'DEV',
  tempTablespace   => 'TEMP',
  reposPassword    => 'Welcome02',
}

delete a repository

oradb::rcu{'Delete_DEV3_PS5':
  rcuFile          => 'ofm_rcu_linux_11.1.1.6.0_disk1_1of1.zip',
  product          => 'soasuite',
  version          => '11.1.1.6',
  oracleHome       => '/oracle/product/11.2/db',
  user             => 'oracle',
  group            => 'dba',
  downloadDir      => '/install',
  action           => 'delete',
  dbServer         => 'dbagent1.alfa.local:1521',
  dbService        => 'test.oracle.com',
  sysPassword      => 'Welcome01',
  schemaPrefix     => 'DEV3',
  reposPassword    => 'Welcome02',
}

OIM / OAM repository (OIM requires an Oracle Enterprise Edition database)

oradb::rcu{'DEV_1112':
  rcuFile                => 'V37476-01.zip',
  product                => 'oim',
  version                => '11.1.2.1',
  oracleHome             => '/oracle/product/11.2/db',
  user                   => 'oracle',
  group                  => 'dba',
  downloadDir            => '/data/install',
  action                 => 'create',
  dbServer               => 'oimdb.alfa.local:1521',
  dbService              => 'oim.oracle.com',
  sysPassword            => hiera('database_test_sys_password'),
  schemaPrefix           => 'DEV',
  reposPassword          => hiera('database_test_rcu_dev_password'),
  puppetDownloadMntPoint => $puppetDownloadMntPoint,
  logoutput              => true,
  require                => Oradb::Dbactions['start oimDb'],
 }

Solaris 10 kernel, ulimits and required packages

exec { "create /cdrom/unnamed_cdrom":
  command => "/usr/bin/mkdir -p /cdrom/unnamed_cdrom",
  creates => "/cdrom/unnamed_cdrom",
}

mount { "/cdrom/unnamed_cdrom":
  device   => "/dev/dsk/c0t1d0s2",
  fstype   => "hsfs",
  ensure   => "mounted",
  options  => "ro",
  atboot   => true,
  remounts => false,
  require  => Exec["create /cdrom/unnamed_cdrom"],
}

$install = [
             'SUNWarc','SUNWbtool','SUNWcsl',
             'SUNWdtrc','SUNWeu8os','SUNWhea',
             'SUNWi1cs', 'SUNWi15cs',
             'SUNWlibC','SUNWlibm','SUNWlibms',
             'SUNWsprot','SUNWpool','SUNWpoolr',
             'SUNWtoo','SUNWxwfnt'
            ]

package { $install:
  ensure    => present,
  adminfile => "/vagrant/pkgadd_response",
  source    => "/cdrom/unnamed_cdrom/Solaris_10/Product/",
  require   => [Exec["create /cdrom/unnamed_cdrom"],
                Mount["/cdrom/unnamed_cdrom"]
               ],
}
package { 'SUNWi1of':
  ensure    => present,
  adminfile => "/vagrant/pkgadd_response",
  source    => "/cdrom/unnamed_cdrom/Solaris_10/Product/",
  require   => Package[$install],
}


# pkginfo -i SUNWarc SUNWbtool SUNWhea SUNWlibC SUNWlibm SUNWlibms SUNWsprot SUNWtoo SUNWi1of SUNWi1cs SUNWi15cs SUNWxwfnt SUNWcsl SUNWdtrc
# pkgadd -d /cdrom/unnamed_cdrom/Solaris_10/Product/ -r response -a response SUNWarc SUNWbtool SUNWhea SUNWlibC SUNWlibm SUNWlibms SUNWsprot SUNWtoo SUNWi1of SUNWi1cs SUNWi15cs SUNWxwfnt SUNWcsl SUNWdtrc


$all_groups = ['oinstall','dba' ,'oper']

group { $all_groups :
  ensure      => present,
}

user { 'oracle' :
  ensure      => present,
  uid         => 500,
  gid         => 'oinstall',
  groups      => ['oinstall','dba','oper'],
  shell       => '/bin/bash',
  password    => '$1$DSJ51vh6$4XzzwyIOk6Bi/54kglGk3.',
  home        => "/home/oracle",
  comment     => "This user oracle was created by Puppet",
  require     => Group[$all_groups],
  managehome  => true,
}

$execPath     = "/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:"

exec { "projadd max-shm-memory":
  command => "projadd -p 102  -c 'ORADB' -U oracle -G dba  -K 'project.max-shm-memory=(privileged,4G,deny)' ORADB",
  require => [ User["oracle"],
               Package['SUNWi1of'],
               Package[$install],
             ],
  unless  => "projects -l | grep -c ORADB",
  path    => $execPath,
}

exec { "projmod max-sem-ids":
  command     => "projmod -s -K 'project.max-sem-ids=(privileged,100,deny)' ORADB",
  subscribe   => Exec["projadd max-shm-memory"],
  require     => Exec["projadd max-shm-memory"],
  refreshonly => true,
  path        => $execPath,
}

exec { "projmod max-shm-ids":
  command     => "projmod -s -K 'project.max-shm-ids=(privileged,100,deny)' ORADB",
  require     => Exec["projmod max-sem-ids"],
  subscribe   => Exec["projmod max-sem-ids"],
  refreshonly => true,
  path        => $execPath,
}

exec { "projmod max-sem-nsems":
  command     => "projmod -s -K 'process.max-sem-nsems=(privileged,256,deny)' ORADB",
  require     => Exec["projmod max-shm-ids"],
  subscribe   => Exec["projmod max-shm-ids"],
  refreshonly => true,
  path        => $execPath,
}

exec { "projmod max-file-descriptor":
  command     => "projmod -s -K 'process.max-file-descriptor=(basic,65536,deny)' ORADB",
  require     => Exec["projmod max-sem-nsems"],
  subscribe   => Exec["projmod max-sem-nsems"],
  refreshonly => true,
  path        => $execPath,
}

exec { "projmod max-stack-size":
  command     => "projmod -s -K 'process.max-stack-size=(privileged,32MB,deny)' ORADB",
  require     => Exec["projmod max-file-descriptor"],
  subscribe   => Exec["projmod max-file-descriptor"],
  refreshonly => true,
  path        => $execPath,
}

exec { "usermod oracle":
  command     => "usermod -K project=ORADB oracle",
  require     => Exec["projmod max-stack-size"],
  subscribe   => Exec["projmod max-stack-size"],
  refreshonly => true,
  path        => $execPath,
}

exec { "ndd 1":
  command => "ndd -set /dev/tcp tcp_smallest_anon_port 9000",
  require => Exec["usermod oracle"],
  path    => $execPath,
}
exec { "ndd 2":
  command => "ndd -set /dev/tcp tcp_largest_anon_port 65500",
  require => Exec["ndd 1"],
  path    => $execPath,
}

exec { "ndd 3":
  command => "ndd -set /dev/udp udp_smallest_anon_port 9000",
  require => Exec["ndd 2"],
  path    => $execPath,
}

exec { "ndd 4":
  command => "ndd -set /dev/udp udp_largest_anon_port 65500",
  require => Exec["ndd 3"],
  path    => $execPath,
}

exec { "ulimit -S":
  command => "ulimit -S -n 4096",
  require => Exec["ndd 4"],
  path    => $execPath,
}

exec { "ulimit -H":
  command => "ulimit -H -n 65536",
  require => Exec["ulimit -S"],
  path    => $execPath,
}

Solaris 11 kernel, ulimits and required packages

package { ['shell/ksh', 'developer/assembler']:
  ensure => present,
}

$install  = "pkg:/group/prerequisite/oracle/oracle-rdbms-server-12-1-preinstall"

package { $install:
  ensure  => present,
}

$groups = ['oinstall','dba' ,'oper' ]

group { $groups :
  ensure      => present,
}

user { 'oracle' :
  ensure      => present,
  uid         => 500,
  gid         => 'dba',
  groups      => $groups,
  shell       => '/bin/bash',
  password    => '$1$DSJ51vh6$4XzzwyIOk6Bi/54kglGk3.',
  home        => "/export/home/oracle",
  comment     => "This user oracle was created by Puppet",
  require     => Group[$groups],
  managehome  => true,
}

$execPath     = "/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:"

exec { "projadd group.dba":
  command => "projadd -U oracle -G dba -p 104 group.dba",
  require => User["oracle"],
  unless  => "projects -l | grep -c group.dba",
  path    => $execPath,
}

exec { "usermod oracle":
  command => "usermod -K project=group.dba oracle",
  require => [User["oracle"],Exec["projadd group.dba"],],
  path    => $execPath,
}

exec { "projmod max-shm-memory":
  command => "projmod -sK 'project.max-shm-memory=(privileged,4G,deny)' group.dba",
  require => [User["oracle"],Exec["projadd group.dba"],],
  path    => $execPath,
}

exec { "projmod max-sem-ids":
  command     => "projmod -sK 'project.max-sem-ids=(privileged,100,deny)' group.dba",
  require     => Exec["projadd group.dba"],
  path        => $execPath,
}

exec { "projmod max-shm-ids":
  command     => "projmod -s -K 'project.max-shm-ids=(privileged,100,deny)' group.dba",
  require     => Exec["projadd group.dba"],
  path        => $execPath,
}

exec { "projmod max-sem-nsems":
  command     => "projmod -sK 'process.max-sem-nsems=(privileged,256,deny)' group.dba",
  require     => Exec["projadd group.dba"],
  path        => $execPath,
}

exec { "projmod max-file-descriptor":
  command     => "projmod -sK 'process.max-file-descriptor=(basic,65536,deny)' group.dba",
  require     => Exec["projadd group.dba"],
  path        => $execPath,
}

exec { "projmod max-stack-size":
  command     => "projmod -sK 'process.max-stack-size=(privileged,32MB,deny)' group.dba",
  require     => Exec["projadd group.dba"],
  path        => $execPath,
}

exec { "ipadm smallest_anon_port tcp":
  command     => "ipadm set-prop -p smallest_anon_port=9000 tcp",
  path        => $execPath,
}
exec { "ipadm smallest_anon_port udp":
  command     => "ipadm set-prop -p smallest_anon_port=9000 udp",
  path        => $execPath,
}
exec { "ipadm largest_anon_port tcp":
  command     => "ipadm set-prop -p largest_anon_port=65500 tcp",
  path        => $execPath,
}
exec { "ipadm largest_anon_port udp":
  command     => "ipadm set-prop -p largest_anon_port=65500 udp",
  path        => $execPath,
}

exec { "ulimit -S":
  command => "ulimit -S -n 4096",
  path    => $execPath,
}

exec { "ulimit -H":
  command => "ulimit -H -n 65536",
  path    => $execPath,
}

Query Result Cache in Oracle 11g

Caching has been a feature of Oracle for as long as most of us can remember. Over the many years and versions, Oracle has continually expanded its caching mechanisms. We are all familiar with the buffer cache, keep cache, library cache, shared pool, PGA/UGA and so on. In addition to the many data and cursor caches included in its architecture, Oracle has also enhanced caching support for common programming techniques; for example, scalar subquery caching, global temporary tables and associative arrays. In 11g, Oracle has extended this further by adding result caching to both its server and client architecture.

There are three new result caching features in 11g:

  • query result cache;
  • PL/SQL function result cache; and
  • client OCI result cache.

This article will describe and demonstrate the query result cache only. The PL/SQL function result cache feature shares much of the same architecture, but will be the subject of a future article.

an overview

As its name suggests, the query result cache is used to store the results of SQL queries for re-use in subsequent executions. By caching the results of queries, Oracle can avoid having to repeat the potentially time-consuming and intensive operations that generated the resultset in the first place (for example, sorting/aggregation, physical I/O, joins etc). The cache results themselves are available across the instance (i.e. for use by sessions other than the one that first executed the query) and are maintained by Oracle in a dedicated area of memory. Unlike our homegrown solutions using associative arrays or global temporary tables, the query result cache is completely transparent to our applications. It is also maintained for consistency automatically, unlike our own caching programs.

We will examine the features of the query result cache in more detail throughout this article.

database configuration

We will begin by looking at some of the database configuration required to use the query result cache. The initialisation parameters are as follows.

SQL> SELECT name, value, isdefault
  2  FROM   v$parameter
  3  WHERE  name LIKE 'result_cache%';

NAME                             VALUE      ISDEFAULT
-------------------------------- ---------- ---------
result_cache_mode                MANUAL     TRUE
result_cache_max_size            1081344    TRUE
result_cache_max_result          5          TRUE
result_cache_remote_expiration   0          TRUE

4 rows selected.

A brief explanation of each of these parameters is as follows.

  • result_cache_mode: the result cache can be enabled in three ways: via hint, alter session or alter system. Default is MANUAL which means that we need to explicitly request caching via the RESULT_CACHE hint;
  • result_cache_max_size: this is the size of the result cache in bytes. The cache is allocated directly from the shared pool but is maintained separately (for example, flushing the shared pool will not flush the result cache);
  • result_cache_max_result: this specifies the highest percentage of the cache that is able to be used by a single resultset (default 5%); and
  • result_cache_remote_expiration: this specifies the number of minutes for which a resultset based on a remote object can remain valid. The default is 0, which means that resultsets dependent on remote objects will not be cached.
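The three enabling mechanisms described in the first bullet can be sketched as follows (the query is illustrative only; note that FORCE caches every eligible query instance-wide, so use it with care):

```sql
-- 1. Per statement, via the hint (works in the default MANUAL mode):
SELECT /*+ RESULT_CACHE */ prod_id, COUNT(*)
FROM   sales
GROUP  BY prod_id;

-- 2. For the current session only:
ALTER SESSION SET result_cache_mode = FORCE;

-- 3. Instance-wide (as a DBA); SCOPE = BOTH also persists it in the spfile:
ALTER SYSTEM SET result_cache_mode = FORCE SCOPE = BOTH;
```

In FORCE mode an individual query can still opt out with the NO_RESULT_CACHE hint.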

The cache size is dynamic and can be changed either permanently or until the instance is restarted. We will roughly double the size of the cache for this article and verify that we have a larger result cache as follows (note this was run as SYSDBA).

SQL> ALTER SYSTEM SET result_cache_max_size = 2M SCOPE = MEMORY;

System altered.

SQL> SELECT name, value
  2  FROM   v$parameter
  3  WHERE  name = 'result_cache_max_size';

NAME                     VALUE
------------------------ ----------
result_cache_max_size    2097152

1 row selected.

The setup for the result cache is simple and should be a one-time DBA operation. We will now see some examples of caching and using results below.
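As a quick sanity check after setup, the DBMS_RESULT_CACHE package reports the state and memory usage of the cache; a minimal sketch, run as a suitably privileged user:

```sql
-- Confirm the cache is enabled and print a memory usage report.
SET SERVEROUTPUT ON
SELECT dbms_result_cache.status FROM dual;
EXEC dbms_result_cache.memory_report
```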

caching results manually

As we saw earlier, the default caching mode for this instance is MANUAL. This means that query resultsets will not be cached unless we instruct Oracle to do so by using the RESULT_CACHE hint. In our first example below, we will manually cache the results of a simple aggregate query. Note that the examples in this article are all based on the SH sample schema. First, we verify our cache mode as follows.

SQL> SELECT value
  2  FROM   v$parameter
  3  WHERE  name = 'result_cache_mode';

VALUE
----------------
MANUAL

1 row selected.

We will now run a query and cache its results. We will run this through Autotrace because we are interested in both the workload statistics and the execution plan (Autotrace will also conveniently suppress the query output).

SQL> set autotrace traceonly
SQL> set timing on

SQL> SELECT /*+ RESULT_CACHE */
  2         p.prod_name
  3  ,      SUM(s.amount_sold)   AS total_revenue
  4  ,      SUM(s.quantity_sold) AS total_sales
  5  FROM   sales s
  6  ,      products p
  7  WHERE  s.prod_id = p.prod_id
  8  GROUP BY
  9         p.prod_name;

71 rows selected.

Elapsed: 00:00:05.00

Using the RESULT_CACHE hint, we have instructed Oracle to cache the results of this aggregate query. We can see that it returned 71 rows and took 5 seconds to execute. We will see the amount of work that Oracle did to generate these results further below, but first we will see the execution plan (note that this is a theoretical explain plan and not the real execution plan, but is a good approximation in this system).

Execution Plan
----------------------------------------------------------
Plan hash value: 504757596

------------------------------------------------------------------------ … -----------------
| Id  | Operation                 | Name                       | Rows  | … | Pstart| Pstop |
------------------------------------------------------------------------ … -----------------
|   0 | SELECT STATEMENT          |                            |    71 | … |       |       |
|   1 |  RESULT CACHE             | 091zc7mvn8ums36mbd2gqac4h0 |       | … |       |       |
|   2 |   HASH GROUP BY           |                            |    71 | … |       |       |
|*  3 |    HASH JOIN              |                            |    72 | … |       |       |
|   4 |     VIEW                  | VW_GBC_5                   |    72 | … |       |       |
|   5 |      HASH GROUP BY        |                            |    72 | … |       |       |
|   6 |       PARTITION RANGE ALL |                            |   918K| … |     1 |    28 |
|   7 |        TABLE ACCESS FULL  | SALES                      |   918K| … |     1 |    28 |
|   8 |     TABLE ACCESS FULL     | PRODUCTS                   |    72 | … |       |       |
------------------------------------------------------------------------ … -----------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("ITEM_1"="P"."PROD_ID")

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=3; dependencies=(SH.SALES, SH.PRODUCTS); parameters=(nls);
       name="SELECT /*+ RESULT_CACHE */
       p.prod_name
,     SUM(s.amount_sold)   AS total_revenue
,     SUM(s.quantity_sold) AS total_"

Note the result cache sections of the execution plan. It contains some new information, which we can summarise as follows:

  • first, we can see a new operation, “RESULT CACHE” at operation ID=1. This is the last step in this particular example and it is telling us that Oracle will cache the results of the preceding operations;
  • second, we see a system-generated name beside the RESULT CACHE operation. This is used internally as a key for looking up and matching SQL statements to their cached results;
  • third, we see a new section in the plan report on the result cache metadata for this query. This section includes information such as the objects that the results are dependent on (i.e. to maintain cache coherency) and the leading part of the SQL text that generated the results.
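The system-generated cache key can also be used to look the result up directly; a minimal sketch against the V$RESULT_CACHE_OBJECTS view, using the cache id from the plan above:

```sql
-- Show the cached result (and its dependency rows) for this cache id.
SELECT type, status, name
FROM   v$result_cache_objects
WHERE  cache_id = '091zc7mvn8ums36mbd2gqac4h0';
```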

Finally, the Autotrace report displays the work that Oracle performed to generate these results.

Statistics

———————————————————-

14871 recursive calls

0 db block gets

4890 consistent gets

1745 physical reads

0 redo size

3526 bytes sent via SQL*Net to client

416 bytes received via SQL*Net from client

2 SQL*Net roundtrips to/from client

136 sorts (memory)

0 sorts (disk)

71 rows processed

We can see a range of I/O and CPU activity in these figures, as expected. We will now test the new query result cache by running the same query a second time and comparing the Autotrace report, as follows.

SQL> SELECT /*+ RESULT_CACHE */

2         p.prod_name

3 ,     SUM(s.amount_sold)   AS total_revenue

4 ,     SUM(s.quantity_sold) AS total_sales

5 FROM   sales s

6 ,      products p

7 WHERE s.prod_id = p.prod_id

8 GROUP BY

9         p.prod_name;

 

71 rows selected.

 

Elapsed: 00:00:00.01

 

Execution Plan

———————————————————-

Plan hash value: 504757596

 

———————————————————————– … —————–

| Id | Operation               | Name                       | Rows | … | Pstart| Pstop |

———————————————————————– … —————–

|   0 | SELECT STATEMENT         |                           |   71 | … |       |       |

|   1 | RESULT CACHE           | 091zc7mvn8ums36mbd2gqac4h0 |       | … |       |       |

|   2 |   HASH GROUP BY         |                           |   71 | … |       |       |

|* 3 |   HASH JOIN             |                           |   72 | … |       |       |

|   4 |     VIEW                 | VW_GBC_5                   |   72 | … |       |       |

|   5 |     HASH GROUP BY       |                          |   72 | … |       |       |

|   6 |       PARTITION RANGE ALL|                           |   918K| … |     1 |   28 |

|   7 |       TABLE ACCESS FULL | SALES                     |   918K| … |     1 |   28 |

|   8 |     TABLE ACCESS FULL   | PRODUCTS                   |   72 | … |       |       |

———————————————————————– … —————–

 

Predicate Information (identified by operation id):

—————————————————

 

3 - access("ITEM_1"="P"."PROD_ID")

 

Result Cache Information (identified by operation id):

——————————————————

 

1 - column-count=3; dependencies=(SH.SALES, SH.PRODUCTS); parameters=(nls); name="SELECT /*+ RESULT_CACHE */

p.prod_name

,     SUM(s.amount_sold)   AS total_revenue

,     SUM(s.quantity_sold) AS total_"

 

 

Statistics

———————————————————-

         0 recursive calls

         0 db block gets

         0 consistent gets

         0 physical reads

         0 redo size

3526 bytes sent via SQL*Net to client

416 bytes received via SQL*Net from client

2 SQL*Net roundtrips to/from client

         0 sorts (memory)

         0 sorts (disk)

71 rows processed

Starting with the statistics report, we can see that this time Oracle has done very little work. In fact it has performed none of the I/O, sorting or recursive SQL that was required to answer our query the first time. Oracle has recognised that the query can be satisfied from the result cache and simply returned the pre-computed answer to us instead, in approximately 0.01 seconds.

Interestingly, the execution plan remains the same (this is to be expected because the SQL is not optimised a second time) but is now slightly misleading. None of the plan operations actually take place once we have a reusable resultset, but the presence of the RESULT CACHE operation should alert us to the fact that we might already have a cached set of results. In fact, we can use the information supplied in this plan to verify the existence of a cached resultset for ourselves, which we will examine later in this article.

We have now seen a simple example of query result caching. Minimising the amount of work that Oracle has to do to answer our query will reduce the time it takes. It also follows that the more work Oracle can avoid, the better the gains from caching.

automatic result caching

The alternative to the MANUAL result_cache_mode is FORCE. This can be set at session or system level and, in this mode, Oracle will attempt to cache and reuse query results wherever it can, unless we disable it with the NO_RESULT_CACHE hint. We will see an example of this mode below. We will set the mode to FORCE at session level, then repeat our previous SQL example minus the RESULT_CACHE hint. First we set the result_cache_mode as follows.

SQL> ALTER SESSION SET result_cache_mode = FORCE;

 

Session altered.

With Autotrace set, we will now run the same aggregate statement from our manual example above. Note that the only modification to this SQL statement is to remove the RESULT_CACHE hint.

SQL> set autotrace traceonly

 

SQL> set timing on

 

SQL> SELECT p.prod_name

2 ,     SUM(s.amount_sold)   AS total_revenue

3 ,     SUM(s.quantity_sold) AS total_sales

4 FROM   sales s

5 ,     products p

6 WHERE s.prod_id = p.prod_id

7 GROUP BY

8         p.prod_name;

 

71 rows selected.

 

Elapsed: 00:00:00.01

 

Execution Plan

———————————————————-

Plan hash value: 504757596

 

———————————————————————– … —————–

| Id | Operation               | Name                       | Rows | … | Pstart| Pstop |

———————————————————————– … —————–

|   0 | SELECT STATEMENT         |                           |   71 | … |       |       |

|   1 | RESULT CACHE           | 091zc7mvn8ums36mbd2gqac4h0 |       | … |       |      |

|   2 |   HASH GROUP BY         |                           |   71 | … |       |       |

|* 3 |   HASH JOIN             |                           |   72 | … |       |       |

|   4 |     VIEW                 | VW_GBC_5                  |   72 | … |       |       |

|   5 |     HASH GROUP BY       |                           |   72 | … |       |       |

|   6 |       PARTITION RANGE ALL|                           |   918K| … |     1 |   28 |

|   7 |       TABLE ACCESS FULL | SALES                     |   918K| … |     1 |   28 |

|   8 |     TABLE ACCESS FULL   | PRODUCTS                   |   72 | … |       |       |

———————————————————————– … —————–

 

Predicate Information (identified by operation id):

—————————————————

 

3 - access("ITEM_1"="P"."PROD_ID")

 

Result Cache Information (identified by operation id):

——————————————————

 

1 - column-count=3; dependencies=(SH.SALES, SH.PRODUCTS); parameters=(nls); name="SELECT p.prod_name

,     SUM(s.amount_sold)   AS total_revenue

,     SUM(s.quantity_sold) AS total_sales

FROM   sales s

,     "

This is really interesting! We can see that in FORCE mode without a RESULT_CACHE hint, Oracle has used the same plan as the “equivalent” statement in MANUAL mode with the RESULT_CACHE hint (though in reality they are different cursors). Note that the result cache hashes are the same, although the name parameter is different. Given this, we would expect Oracle to use our previously cached results, which we can verify from the Statistics report, as follows.

Statistics

———————————————————-

         1 recursive calls

       0 db block gets

         0 consistent gets

         0 physical reads

         0 redo size

3526 bytes sent via SQL*Net to client

416 bytes received via SQL*Net from client

2 SQL*Net roundtrips to/from client

        0 sorts (memory)

         0 sorts (disk)

71 rows processed

We can see quite clearly that Oracle has returned the results that it cached from the manual-mode, hinted SQL. It recognises their equivalence despite the difference in the SQL text.

Continuing in FORCE result_cache_mode, we will now execute a new aggregate statement, as follows.

SQL> SELECT p.prod_name

2 ,     MAX(s.quantity_sold) AS max_sales

3 FROM   sales s

4 ,     products p

5 WHERE s.prod_id = p.prod_id

6 GROUP BY

7         p.prod_name;

 

71 rows selected.

 

Elapsed: 00:00:00.59

 

Execution Plan

———————————————————-

Plan hash value: 504757596

 

———————————————————————– … —————–

| Id | Operation               | Name                       | Rows | … | Pstart| Pstop |

———————————————————————– … —————–

|   0 | SELECT STATEMENT         |                          |   71 | … |       |       |

|   1 | RESULT CACHE           | 12scakxrxks3p73w5nxr69wn3j |       | … |       |       |

|   2 |   HASH GROUP BY         |                           |   71 | … |       |       |

|* 3 |   HASH JOIN            |                           |   72 | … |       |       |

|   4 |     VIEW                 | VW_GBC_5                   |   72 | … |       |       |

|   5 |     HASH GROUP BY       |                           |   72 | … |       |       |

|  6 |       PARTITION RANGE ALL|                           |   918K| … |     1 |   28 |

|   7 |       TABLE ACCESS FULL | SALES                     |   918K| … |     1 |   28 |

|   8 |     TABLE ACCESS FULL   | PRODUCTS                   |   72 | … |       |       |

———————————————————————– … —————–

 

Predicate Information (identified by operation id):

—————————————————

 

3 - access("ITEM_1"="P"."PROD_ID")

 

Result Cache Information (identified by operation id):

——————————————————

 

1 - column-count=2; dependencies=(SH.SALES, SH.PRODUCTS); parameters=(nls); name="SELECT p.prod_name

,     MAX(s.quantity_sold) AS max_sales

FROM   sales s

,     products p

WHERE s.prod_id = p.prod_id

GROUP "

 

 

Statistics

———————————————————-

1 recursive calls

0 db block gets

1731 consistent gets

0 physical reads

0 redo size

2687 bytes sent via SQL*Net to client

416 bytes received via SQL*Net from client

2 SQL*Net roundtrips to/from client

0 sorts (memory)

0 sorts (disk)

71 rows processed

In FORCE mode, Oracle has cached the results of our query without us requesting it. We can see this in the execution plan. We will run the query a second time, as follows.

SQL> SELECT p.prod_name

2 ,     MAX(s.quantity_sold) AS max_sales

3 FROM   sales s

4 ,     products p

5 WHERE s.prod_id = p.prod_id

6 GROUP BY

7         p.prod_name;

 

71 rows selected.

 

Elapsed: 00:00:00.00

 

<< …plan removed… >>

 

Statistics

———————————————————-

         0 recursive calls

         0 db block gets

         0 consistent gets

         0 physical reads

         0 redo size

2687 bytes sent via SQL*Net to client

416 bytes received via SQL*Net from client

2 SQL*Net roundtrips to/from client

         0 sorts (memory)

         0 sorts (disk)

71 rows processed

The wall-clock timing and Statistics report shows clearly that Oracle has retrieved the cached results.

dynamic result cache views

So far we have seen the effects of caching with the two modes of the query result cache. We will now look a little deeper into what happens with the query cache and what information Oracle exposes about it. We can search the data dictionary for the result cache dynamic views, as follows.

SQL> SELECT view_name

2 FROM   dba_views

3 WHERE view_name LIKE 'V_$RESULT_CACHE%';

 

VIEW_NAME

——————————

V_$RESULT_CACHE_DEPENDENCY

V_$RESULT_CACHE_MEMORY

V_$RESULT_CACHE_OBJECTS

V_$RESULT_CACHE_STATISTICS

 

4 rows selected.

Oracle provides four dynamic views. We will have a brief look at these below (refer to the online documentation for more details: a link is provided at the end of this article). We will start with V$RESULT_CACHE_OBJECTS, which exposes the most information about our cached query results.

SQL> DESC v$result_cache_objects;

 

Name                     Null? Type

———————— —— ——————

1     ID                             NUMBER

2     TYPE                          VARCHAR2(10)

3     STATUS                         VARCHAR2(9)

4     BUCKET_NO                       NUMBER

5     HASH                           NUMBER

6     NAME                           VARCHAR2(128)

7     NAMESPACE                      VARCHAR2(5)

8     CREATION_TIMESTAMP             DATE

9     CREATOR_UID                     NUMBER

10     DEPEND_COUNT                   NUMBER

11     BLOCK_COUNT                     NUMBER

12     SCN                            NUMBER

13     COLUMN_COUNT                   NUMBER

14     PIN_COUNT                       NUMBER

15     SCAN_COUNT                     NUMBER

16     ROW_COUNT                       NUMBER

17     ROW_SIZE_MAX                    NUMBER

18     ROW_SIZE_MIN                   NUMBER

19     ROW_SIZE_AVG                   NUMBER

20     BUILD_TIME                     NUMBER

21     LRU_NUMBER                     NUMBER

22     OBJECT_NO                      NUMBER

23     INVALIDATIONS                   NUMBER

24     SPACE_OVERHEAD                 NUMBER

25     SPACE_UNUSED                   NUMBER

26     CACHE_ID                       VARCHAR2(93)

27     CACHE_KEY                      VARCHAR2(93)

As we can see, there is a great deal of information potentially available. We will query a few important columns below.

SQL> SELECT name

2 ,     type

3 ,     cache_id

4 ,     row_count

5 FROM   v$result_cache_objects

6 ORDER BY

7         creation_timestamp;

 

NAME                           TYPE       CACHE_ID                   ROW_COUNT

—————————— ———- ————————– ———-

SH.PRODUCTS                   Dependency SH.PRODUCTS                         0

SH.SALES                       Dependency SH.SALES                           0

SELECT /*+ RESULT_CACHE */     Result     091zc7mvn8ums36mbd2gqac4h0         71

SELECT p.prod_name             Result     12scakxrxks3p73w5nxr69wn3j         71

SELECT DECODE('A','A','1','2' Result     0y8dgk314f9f8bz05qsrrny8u8         1

 

5 rows selected.

We can see two types of information in this view: dependencies and results. We will discuss dependencies later, but the results’ names clearly align with the queries we have run so far (the SUM and MAX aggregate sales queries). The last query in the output is executed by SQL*Plus itself. Remember from earlier that we executed two SQL statements (equivalent except for the RESULT_CACHE hint) and note the CACHE_ID values. There is only one entry for the two statements because they share a result set and hash to the same CACHE_ID.

We can also look at the result cache statistics for a high-level overview of how it is being used, as follows.

SQL> SELECT *

2 FROM   v$result_cache_statistics;

 

ID NAME                                       VALUE

———- ——————————— —————

1 Block Size (Bytes)                           1024

2 Block Count Maximum                         2048

3 Block Count Current                           32

4 Result Size Maximum (Blocks)                 102

5 Create Count Success                           3

6 Create Count Failure                            0

7 Find Count                                     4

8 Invalidation Count                             0

9 Delete Count Invalid                           0

10 Delete Count Valid                              0

 

10 rows selected.

We can see some general information on cache entries and hits, which are minimal so far because we have run our examples against a clean cache. We can also see some statistics about how our cache is organised (i.e. up to 2,048 blocks of 1,024 bytes each, which is equivalent to the 2M result_cache_max_size value that we set earlier). Currently, we are using 32K of the cache (32 blocks) and no single result set can consume more than 102 blocks (or 102K of memory).
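These figures fit together arithmetically, which we can verify with a quick check. Note that the final assertion assumes the default result_cache_max_result of 5% on this system, which is consistent with the 102-block maximum reported above.

```python
# Figures taken from the V$RESULT_CACHE_STATISTICS output above.
block_size = 1024          # Block Size (Bytes)
block_count_max = 2048     # Block Count Maximum
block_count_current = 32   # Block Count Current
result_size_max = 102      # Result Size Maximum (Blocks)

# Total cache memory: 2,048 blocks x 1,024 bytes = 2M (result_cache_max_size).
total_kb = block_size * block_count_max // 1024
assert total_kb == 2048

print(block_size * block_count_current)      # bytes currently in use (32K)
print(result_size_max * block_size // 1024)  # largest single result in KB

# 102 blocks is 5% of 2,048 rounded down, matching the assumed 5% default
# for result_cache_max_result.
assert int(block_count_max * 0.05) == 102
```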

result cache dependencies

Each query result is dependent on one or more tables (i.e. the source tables for the query). We can get information on which objects a query is dependent on in a number of places. The V$RESULT_CACHE_DEPENDENCY view summarises the dependencies for each entry in the result cache. We saw the dependencies parameter in the Result Cache report from DBMS_XPLAN.DISPLAY, which listed the tables involved in our sample aggregate queries. We also saw entries in the V$RESULT_CACHE_OBJECTS view data with a type of “Dependency”. We can put these together to summarise the dependencies as follows.

SQL> SELECT ro.id

2 ,     ro.name

3 ,     wm_concat(do.object_name) AS object_names

4 FROM   v$result_cache_objects   ro

5           LEFT OUTER JOIN

6        v$result_cache_dependency rd

7           ON (ro.id = rd.result_id)

8           LEFT OUTER JOIN

9         dba_objects               do

10           ON (rd.object_no = do.object_id)

11 WHERE ro.type = 'Result'

12 GROUP BY

13         ro.id

14 ,     ro.name;

 

ID NAME                                               OBJECT_NAMES

———- ————————————————– —————-

2 SELECT /*+ RESULT_CACHE */                         SALES,PRODUCTS

p.prod_name

,     SUM(s.amount_sold)   AS total_revenue

,     SUM(s.quantity_sold) AS total_

 

6 SELECT DECODE('A','A','1','2') FROM DUAL

7 SELECT p.prod_name                                 SALES,PRODUCTS

,     MAX(s.quantity_sold) AS max_sales

FROM   sales s

,     products p

WHERE s.prod_id = p.prod_id

GROUP

 

 

3 rows selected.

Dependencies are necessary to protect the integrity of the query results in the cache. If the data in any of the dependent tables is modified, Oracle will invalidate the result cache entry and will not use it until it is refreshed by a repeat of the original SQL. This behaviour cannot be circumvented, even if we are prepared to tolerate inconsistent results.
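The invalidation rule itself is simple, and can be sketched as follows. This is an illustration only (the truncated cache ids and statuses mirror the V$RESULT_CACHE_OBJECTS output in this article): any DML against a dependent table flips every result that lists it to Invalid, with no inspection of what actually changed.

```python
# Illustrative cache entries: two aggregate results depending on the
# SH.SALES and SH.PRODUCTS tables, plus the DUAL query with no dependencies.
cache = {
    "091zc7...": {"dependencies": {"SH.SALES", "SH.PRODUCTS"},
                  "status": "Published"},
    "12scak...": {"dependencies": {"SH.SALES", "SH.PRODUCTS"},
                  "status": "Published"},
    "0y8dgk...": {"dependencies": set(), "status": "Published"},
}

def invalidate(table):
    # Mark every result that depends on the modified table as Invalid.
    # Note: no attempt is made to inspect the nature of the change.
    for entry in cache.values():
        if table in entry["dependencies"]:
            entry["status"] = "Invalid"

invalidate("SH.PRODUCTS")   # even a "no-change" UPDATE triggers this
print(sorted(e["status"] for e in cache.values()))
# ['Invalid', 'Invalid', 'Published']
```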

We can demonstrate result cache invalidation very easily. We will perform a “no-change” update to a single row of PRODUCTS and commit the transaction, as follows.

SQL> UPDATE products

2 SET   prod_name = prod_name

3 WHERE ROWNUM = 1;

 

1 row updated.

 

SQL> COMMIT;

 

Commit complete.

We will now repeat one of our cached aggregation queries and measure the workload using Autotrace.

SQL> set autotrace traceonly statistics

 

SQL> SELECT p.prod_name

2 ,     MAX(s.quantity_sold) AS max_sales

3 FROM   sales s

4 ,     products p

5 WHERE s.prod_id = p.prod_id

6 GROUP BY

7         p.prod_name;

 

71 rows selected.

 

Statistics

———————————————————-

0 recursive calls

0 db block gets

1731 consistent gets

0 physical reads

0 redo size

2687 bytes sent via SQL*Net to client

416 bytes received via SQL*Net from client

2 SQL*Net roundtrips to/from client

0 sorts (memory)

0 sorts (disk)

71 rows processed

Oracle will not attempt to understand the nature of the modification to the dependent objects. Even with a no-change update, the cached result entry was invalidated and the subsequent repeat of the source SQL caused the data to be generated again. The V$RESULT_CACHE_OBJECTS view provides some statistics on this, as follows.

SQL> SELECT id

2 ,     name

3 ,     type

4 ,     invalidations

5 ,     status

6 FROM   v$result_cache_objects

7 ORDER BY

8         id;

 

ID NAME                           TYPE       INVALIDATIONS STATUS

—- ——————————- ———- ————- ———

1 SH.PRODUCTS                     Dependency            1 Published

0 SH.SALES                       Dependency             0 Published

6 SELECT DECODE('A','A','1','2') Result                 0 Published

2 SELECT /*+ RESULT_CACHE */     Result                 0 Invalid

7 SELECT p.prod_name             Result                 0 Invalid

10 SELECT p.prod_name             Result                 0 Published

 

6 rows selected.

We can see that the invalidation occurred at two levels. First, the INVALIDATIONS column details the number of times that modifications to an underlying table have caused an invalidation. Second, the STATUS column shows us which results have been invalidated by the same action. When we updated the PRODUCTS table, we invalidated the results from our previous queries (IDs 2 and 7). We then repeated one of the original queries, for which Oracle created a new set of results in the cache (ID 10).

cache find count

If we are caching query results, we might be interested to know how often they are used. The V$RESULT_CACHE_STATISTICS view provides a “Find Count” statistic, but this is cache-wide so we can’t limit it to a particular query. In the following example, we will capture the current Find Count and then run a SQL statement in a PL/SQL loop 100 times.

SQL> SELECT value

2 FROM   v$result_cache_statistics

3 WHERE name = 'Find Count';

 

VALUE

—————

6

 

1 row selected.

 

SQL> DECLARE

2     n PLS_INTEGER;

3 BEGIN

4     FOR i IN 1 .. 100 LOOP

5       SELECT /*+ RESULT_CACHE */ COUNT(*) INTO n FROM channels;

6     END LOOP;

7 END;

8 /

 

PL/SQL procedure successfully completed.

We will now measure the Find Count again, as follows.

SQL> SELECT value

2 FROM   v$result_cache_statistics

3 WHERE name = 'Find Count';

 

VALUE

—————

105

 

1 row selected.

This increased by 99, which is to be expected. We executed our SQL statement 100 times. The first execution cached the results and the 99 remaining executions used them. Needless to say, this was a single-user test system. We can confirm that we added the SQL results to the cache as follows.

SQL> SELECT name

2 ,     type

3 ,     row_count

4 FROM   v$result_cache_objects

5 ORDER BY

6         creation_timestamp;

 

NAME                                              TYPE       ROW_COUNT

————————————————– ———- ———-

SELECT /*+ RESULT_CACHE */                         Result             71

SH.PRODUCTS                                       Dependency        0

SH.SALES                                           Dependency         0

SELECT p.prod_name                                 Result             71

SELECT DECODE('A','A','1','2') FROM DUAL           Result             1

SELECT p.prod_name                                Result             71

SH.CHANNELS                                       Dependency         0

SELECT /*+ RESULT_CACHE */ COUNT(*) FROM CHANNELS Result             1

 

8 rows selected.

Hence result caching behaves no differently for SQL embedded in PL/SQL programs.

parameterised caching

In our previous example, we executed a single SQL statement 100 times and saw 1 cache entry. A far more common scenario is to have single-row lookups based on a primary key derived from another cursor (this is not particularly efficient, but is still extremely common). The query result cache handles this scenario by recognising the different bind variables and caching each resultset independently. The bind variables act as parameters to the result cache lookup and are listed in the Result Cache report from DBMS_XPLAN. If a bind variable is repeated, the cached results will be used.
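The effect of bind variables as cache parameters is easy to model. The sketch below is illustrative (the product data is made up): the cache key is the SQL text plus the bind value, so four distinct binds produce four entries, and repeat executions with the same bind hit the cache.

```python
# Made-up lookup table standing in for SH.PRODUCTS.
products = {40: "Product 40", 41: "Product 41",
            42: "Product 42", 43: "Product 43"}

cache = {}
executions = 0   # count of "real" executions that had to touch the table

def lookup(prod_id):
    global executions
    # The bind value is part of the cache key, alongside the SQL text.
    key = ("SELECT PROD_NAME FROM PRODUCTS WHERE PROD_ID = :B1", prod_id)
    if key not in cache:
        executions += 1          # only the first call per bind value does work
        cache[key] = products[prod_id]
    return cache[key]

for _ in range(10):              # 10 rounds over 4 products, as in the
    for pid in (40, 41, 42, 43): # PL/SQL example below
        lookup(pid)

print(len(cache), executions)    # 4 4: one entry and one execution per bind
```

Forty calls resolve to four cache entries and four real executions, mirroring the V$RESULT_CACHE_OBJECTS output that follows.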

We will demonstrate this behaviour below. We will set the result_cache_mode to FORCE for convenience. We will choose 4 products and look up each one 10 times. The lookup will use bind variables.

SQL> ALTER SESSION SET result_cache_mode = FORCE;

 

Session altered.

 

SQL> DECLARE

2

3     TYPE id_ntt IS TABLE OF products.prod_id%TYPE;

4     nt_ids id_ntt := id_ntt(40,41,42,43);

5

6    v_name products.prod_name%TYPE;

7

8 BEGIN

9     FOR i IN 1 .. 10 LOOP

10       FOR ii IN 1 .. nt_ids.COUNT LOOP

11

12           SELECT prod_name INTO v_name

13           FROM   products

14           WHERE prod_id = nt_ids(ii);

15

16       END LOOP;

17     END LOOP;

18 END;

19 /

 

PL/SQL procedure successfully completed.

According to what we now know about the result cache mechanism, we ran 4 different SQL statements above (the same SQL statement with 4 different inputs). We will query V$RESULT_CACHE_OBJECTS to verify this, as follows.

SQL> SELECT name

2 ,     type

3 ,     row_count

4 FROM   v$result_cache_objects

5 ORDER BY

6         creation_timestamp;

 

NAME                                                   TYPE       ROW_COUNT

——————————————————- ———- ———-

SELECT /*+ RESULT_CACHE */                             Result             71

<< …snip… >>

SH.CHANNELS                                             Dependency         0

SELECT /*+ RESULT_CACHE */ COUNT(*) FROM CHANNELS       Result             1

SELECT PROD_NAME FROM PRODUCTS WHERE PROD_ID = :B1     Result             1

SELECT PROD_NAME FROM PRODUCTS WHERE PROD_ID = :B1     Result             1

SELECT PROD_NAME FROM PRODUCTS WHERE PROD_ID = :B1     Result             1

SELECT PROD_NAME FROM PRODUCTS WHERE PROD_ID = :B1     Result             1

 

12 rows selected.

We can see that the results for the same SQL text were added to the cache 4 times, as expected. The bind variable inputs are additional parameters to the cache lookup. Each resultset was added on the first execution of each cursor and the cache was “hit” 9 times for each cursor. Similar logic is commonly used by developers in associative array caching; a colleague of mine calls this “on-demand caching” (i.e. rather than cache entire lookup tables, only cache a lookup record when it is actually requested).

Remember that the result_cache_max_result parameter specifies that the largest cached resultset possible is n% of the total cache memory. While this protects us from filling the cache with the results of a single SQL statement, it doesn’t stop us from filling the cache with parameterised cursors like those we saw above. In the following example, we will look up every customer in the CUSTOMERS table twice. Again, we will be in FORCE result_cache_mode for convenience. Note that there are 55,500 records in the SH.CUSTOMERS demo table.

SQL> ALTER SESSION SET result_cache_mode = FORCE;

 

Session altered.

 

SQL> DECLARE

2     v_first_name customers.cust_first_name%TYPE;

3 BEGIN

4     FOR i IN 1 .. 2 LOOP

5       FOR r IN (SELECT cust_id FROM customers) LOOP

6           SELECT cust_first_name INTO v_first_name

7           FROM   customers

8          WHERE cust_id = r.cust_id;

9       END LOOP;

10     END LOOP;

11 END;

12 /

 

PL/SQL procedure successfully completed.

We will examine the cache entries below. Based on what we know about result cache behaviour, we can expect a large number of single-row resultsets, so we will try to aggregate these. We will query the minimum and maximum names in V$RESULT_CACHE_OBJECTS, together with a count of the entries, as follows.

SQL> SELECT MIN(name) AS min_name

2 ,     MAX(name) AS max_name

3 ,     COUNT(*) AS cache_entries

4 FROM   v$result_cache_objects

5 WHERE type = 'Result';

 

MIN_NAME                       MAX_NAME                       CACHE_ENTRIES

—————————— —————————— ————-

SELECT CUST_FIRST_NAME FROM CU SELECT CUST_FIRST_NAME FROM CU         2035

STOMERS WHERE CUST_ID = :B1   STOMERS WHERE CUST_ID = :B1

 

1 row selected.

We added 2,035 customer lookups (out of a possible 55,500) to the cache. In fact, we completely flushed our previous results from the cache. We should therefore be aware of the potential for single lookups, particularly in PL/SQL programs, to “hog” the cache. If we query V$RESULT_CACHE_STATISTICS, we will see that the “Create Count Success” statistic should be quite high.

SQL> SELECT *

2 FROM   v$result_cache_statistics;

 

ID NAME                                     VALUE

———- ——————————- —————

1 Block Size (Bytes)                         1024

2 Block Count Maximum                       2048

3 Block Count Current                       2048

4 Result Size Maximum (Blocks)               102

5 Create Count Success                     111011

6 Create Count Failure                        0

7 Find Count                                 141

8 Invalidation Count                           2

9 Delete Count Invalid                         4

10 Delete Count Valid                       108972

 

10 rows selected.

We have added over 111,000 resultsets to the cache, mostly as a result of the previous example. The loop through 55,500 customers would have continually replaced the existing cache entries (we only had room in the cache for approximately 4% of the total resultsets being processed in the PL/SQL).
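We can reproduce the shape of these numbers with a simple simulation. This assumes a strict least-recently-used replacement policy, which is a simplification of Oracle's actual behaviour: with a bounded cache of roughly 2,035 entries and 55,500 distinct keys scanned sequentially twice, every entry is evicted before its second visit, so creates approach the 111,000 seen above and finds are essentially zero.

```python
from collections import OrderedDict

CAPACITY, KEYS, PASSES = 2035, 55500, 2   # figures from the example above
cache = OrderedDict()
creates = finds = 0

for _ in range(PASSES):
    for key in range(KEYS):               # sequential scan of all customers
        if key in cache:
            finds += 1
            cache.move_to_end(key)        # refresh recency on a hit
        else:
            creates += 1
            cache[key] = True
            if len(cache) > CAPACITY:
                cache.popitem(last=False) # evict the least recently used entry

print(creates, finds)   # 111000 0 under strict LRU with a sequential scan
```

The real statistics differ slightly (Create Count Success of 111,011 and a Find Count of 141) because they include the earlier examples in this article, but the dominant effect is the same: the working set vastly exceeds the cache, so the loop continually replaces its own entries.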

flashback query results

The query result cache supports flashback queries, with which most readers will be familiar. An SCN or timestamp is supplied to a flashback query using the AS OF extension to the table(s) in the FROM clause. This supplied point-in-time is treated by Oracle as a parameter to the query result cache.

To demonstrate this, we will run a simple flashback query twice. We will use Autotrace to demonstrate the result cache behaviour. We will begin by setting the result_cache_mode to FORCE for convenience.

SQL> ALTER SESSION SET result_cache_mode = FORCE;

 

Session altered.

We will set up a bind variable for our timestamp and execute a simple flashback query, as follows.

SQL> exec :ts := TO_CHAR(TRUNC(SYSDATE,'HH'),'YYYYMMDDHH24MISS');

 

PL/SQL procedure successfully completed.

 

SQL> set autotrace traceonly

 

SQL> SELECT MIN(prod_id)

2 FROM   products AS OF TIMESTAMP TO_TIMESTAMP(:ts,'YYYYMMDDHH24MISS');

 

1 row selected.

 

 

Execution Plan

———————————————————-

Plan hash value: 1489483397

 

————————————————————————— … —

| Id | Operation                   | Name                       | Rows | … |

————————————————————————— … —

|   0 | SELECT STATEMENT           |                            |     1 | … |

|   1 | RESULT CACHE               | 4vff36vw5vmn32gftq4a5qfpxh |       | … |

|   2 |   SORT AGGREGATE           |                           |     1 | … |

|   3 |   INDEX FULL SCAN (MIN/MAX)| PRODUCTS_PK               |   72 | … |

————————————————————————— … —

 

Result Cache Information (identified by operation id):

——————————————————

 

1 - column-count=1; attributes=(single-row); parameters=(:TS); name="SELECT MIN(prod_id)

FROM   products AS OF TIMESTAMP TO_TIMESTAMP(:ts,'YYYYMMDDHH24MISS')"

 

 

Statistics

———————————————————-

3 recursive calls

0 db block gets

73 consistent gets

0 physical reads

0 redo size

422 bytes sent via SQL*Net to client

416 bytes received via SQL*Net from client

2 SQL*Net roundtrips to/from client

0 sorts (memory)

0 sorts (disk)

1 rows processed

The Result Cache Information report provided by DBMS_XPLAN includes the parameters that Oracle used for this query; in this case the TS bind variable. The execution plan confirms that Oracle will cache the results of this flashback query. We will run the same query a second time to see if the results are re-used, as follows.

SQL> SELECT MIN(prod_id)

2 FROM   products AS OF TIMESTAMP TO_TIMESTAMP(:ts,'YYYYMMDDHH24MISS');

 

1 row selected.

 

<< …plan removed… >>

 

Statistics

———————————————————-

1 recursive calls

         0 db block gets

         0 consistent gets

         0 physical reads

         0 redo size

422 bytes sent via SQL*Net to client

416 bytes received via SQL*Net from client

2 SQL*Net roundtrips to/from client

         0 sorts (memory)

         0 sorts (disk)

1 rows processed

 

Given the same inputs (i.e. the same bind variable), Oracle was able to retrieve the flashback query results from the result cache, as confirmed by the Autotrace Statistics report.

performance considerations

Recall our examples so far. First, we ran some aggregate “report-style” queries to demonstrate the caching mechanism. These are probably not the types of queries we would expect to cache and re-use (they are primarily in the style of run-once reports). Second, we ran some lookup examples from PL/SQL, both with and without primary key predicates. These are far more likely to be candidates for caching in our applications: single-row lookups are common in PL/SQL programs. For this reason, we will concentrate on lookups to test the performance of the query result cache.
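Before turning to the database test, the intuition can be shown with a rough, illustrative analogue in Python (all names and data here are invented): repeat a small set of lookups many times, recomputing on every call versus returning a memoised value.

```python
import timeit

# Made-up lookup data standing in for a products table.
data = {i: "name-%d" % i for i in range(1000)}

def uncached(prod_id):
    # Stand-in for the per-execution work of an uncached lookup:
    # scan the data on every call rather than reuse a prior answer.
    return next(v for k, v in data.items() if k == prod_id)

memo = {}
def cached(prod_id):
    if prod_id not in memo:          # only the first call per key pays the cost
        memo[prod_id] = uncached(prod_id)
    return memo[prod_id]

ids = (40, 41, 42, 43, 44, 45, 46, 47)   # eight keys, echoing the test below
t_uncached = timeit.timeit(lambda: [uncached(i) for i in ids], number=500)
t_cached = timeit.timeit(lambda: [cached(i) for i in ids], number=500)
print(t_cached < t_uncached)             # reuse should win once entries exist
```

This is only a sketch of the principle; the Runstats comparison that follows measures the real thing, where the savings show up as latching and logical I/O rather than raw CPU.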

In the following example, we will use a version of Tom Kyte’s Runstats program to compare the performance of 1 million cached and uncached single-row lookups. First, we will switch to MANUAL result_cache_mode, as follows.

SQL> ALTER SESSION SET result_cache_mode = MANUAL;

 

Session altered.

We will select 8 primary key values from the SH.PRODUCTS table and perform 125,000 lookups for each (totalling 1 million executions). We will begin with the uncached version, as follows.

SQL> exec runstats_pkg.rs_start;

 

PL/SQL procedure successfully completed.

 

SQL> DECLARE

2

3   TYPE id_ntt IS TABLE OF products.prod_id%TYPE;

4     nt_ids id_ntt := id_ntt(40,41,42,43,44,45,46,47);

5

6     v_name products.prod_name%TYPE;

7

8 BEGIN

9     FOR i IN 1 .. 125000 LOOP

10       FOR ii IN 1 .. nt_ids.COUNT LOOP

11

12           SELECT prod_name INTO v_name

13           FROM   products

14           WHERE prod_id = nt_ids(ii);

15

16       END LOOP;

17     END LOOP;

18 END;

19 /

 

PL/SQL procedure successfully completed.

We will now repeat the example but cache the eight different cursors, as follows.

SQL> exec runstats_pkg.rs_middle;

 

PL/SQL procedure successfully completed.

 

SQL> DECLARE

2

3     TYPE id_ntt IS TABLE OF products.prod_id%TYPE;

4     nt_ids id_ntt := id_ntt(40,41,42,43,44,45,46,47);

5

6     v_name products.prod_name%TYPE;

7

8 BEGIN

9     FOR i IN 1 .. 125000 LOOP

10       FOR ii IN 1 .. nt_ids.COUNT LOOP

11

12           SELECT /*+ RESULT_CACHE */ prod_name INTO v_name

13           FROM   products

14           WHERE prod_id = nt_ids(ii);

15

16       END LOOP;

17     END LOOP;

18 END;

19 /

 

PL/SQL procedure successfully completed.

We will now report the major differences between the two approaches, as follows.

SQL> exec runstats_pkg.rs_stop(1000);

Run1 ran in 4184 hsecs

Run2 ran in 4059 hsecs

Run1 ran in 103.08% of the time

 

 

Name                                  Run1        Run2        Diff
STAT..session pga memory          -327,680      65,536     393,216
STAT..calls to kcmgrs            1,000,000           8    -999,992
STAT..index fetch by key         1,000,002          10    -999,992
STAT..rows fetched via callbac   1,000,002          10    -999,992
STAT..table fetch by rowid       1,000,002          10    -999,992
LATCH.shared pool simulator             97   1,000,090     999,993
LATCH.cache buffers chains       2,000,007          23  -1,999,984
STAT..consistent gets            2,000,006          22  -1,999,984
STAT..consistent gets - examin   2,000,006          22  -1,999,984
STAT..consistent gets from cac   2,000,006          22  -1,999,984
STAT..session logical reads      2,000,006          22  -1,999,984
LATCH.Result Cache: Latch                0   2,000,017   2,000,017
STAT..buffer is not pinned cou   3,000,004          28  -2,999,976

 

 

Run1 latches total versus run2 -- difference and pct

Run1         Run2         Diff        Pct
3,003,103    4,003,090    999,987     75.02%

 

PL/SQL procedure successfully completed.

We can see that there is little performance difference between the cached and uncached lookups. In fact, the result cache uses more latches (note the two million hits on the new Result Cache latch), although it is marginally quicker overall. These results are probably to be expected when we consider the work to be done in a single primary key lookup. First, the index and table data will be in the buffer cache and second, the amount of I/O saved by a lookup in the query result cache is minimal.

The best performance gains will be achieved when the result cache is preventing a large amount of database work from being repeated (such as in our aggregate queries earlier). Unfortunately, these types of queries are going to be rare in the result cache as the base tables will probably be frequently updated.

encapsulating cached results

Caching lookup tables (in KEEP caches, user-defined arrays etc) is nothing new in Oracle. The query result cache provides another mechanism for doing this. In the following example, we will encapsulate the PRODUCTS table in a view. We will include the RESULT_CACHE hint and then load the contents of PRODUCTS into the query result cache, as follows.

SQL> CREATE VIEW v_products

2 AS

3     SELECT /*+ RESULT_CACHE */ *

4     FROM   products;

 

View created.

 

SQL> SELECT * FROM v_products;

 

<< …snip… >>

 

72 rows selected.

We can use the cached results of PRODUCTS as follows. Note that we are gathering plan statistics to investigate whether this technique leads to an I/O-free join to PRODUCTS.

SQL> SELECT /*+ GATHER_PLAN_STATISTICS */

2         p.prod_name

3 ,     AVG(s.quantity_sold) AS avg_sales

4 FROM   sales s

5 ,     (SELECT * FROM v_products) p

6 WHERE s.prod_id = p.prod_id

7 GROUP BY

8         p.prod_name;

 

<< …snip… >>

 

71 rows selected.

We will query the DBMS_XPLAN.DISPLAY_CURSOR function to view the rowsource plan for our example query, as follows.

SQL> SELECT *

2 FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'RUNSTATS_LAST'));

PLAN_TABLE_OUTPUT

—————————————————————————————————————–

SQL_ID 33wh37bxzhf26, child number 0

————————————-

SELECT /*+ GATHER_PLAN_STATISTICS */       p.prod_name ,

AVG(s.quantity_sold) AS avg_sales FROM   sales s ,     (SELECT * FROM

v_products) p WHERE s.prod_id = p.prod_id GROUP BY       p.prod_name

 

Plan hash value: 1128805752

 

----------------------------------------------------------------------------------------- ... -----------
| Id  | Operation                | Name                       | Starts | E-Rows | A-Rows | ... | Buffers |
----------------------------------------------------------------------------------------- ... -----------
|   1 |  HASH GROUP BY           |                            |      1 |     71 |     71 | ... |    1727 |
|*  2 |   HASH JOIN              |                            |      1 |     72 |     72 | ... |    1727 |
|   3 |    VIEW                  | VW_GBC_5                   |      1 |     72 |     72 | ... |    1727 |
|   4 |     HASH GROUP BY        |                            |      1 |     72 |     72 | ... |    1727 |
|   5 |      PARTITION RANGE ALL |                            |      1 |   918K |   918K | ... |    1727 |
|   6 |       TABLE ACCESS FULL  | SALES                      |     28 |   918K |   918K | ... |    1727 |
|   7 |    VIEW                  | V_PRODUCTS                 |      1 |     72 |     72 | ... |       0 |
|   8 |     RESULT CACHE         | dm772rv5jrfhpf7019qp8srvzu |      1 |        |     72 | ... |       0 |
|   9 |      TABLE ACCESS FULL   | PRODUCTS                   |      0 |     72 |      0 | ... |       0 |
----------------------------------------------------------------------------------------- ... -----------

 

Predicate Information (identified by operation id):

—————————————————

 

2 - access("ITEM_1"="V_PRODUCTS"."PROD_ID")

 

Result Cache Information (identified by operation id):

——————————————————

 

8 –

 

 

32 rows selected.

For some reason, we are missing the information from the Result Cache section, but we can see clearly on operation 9 that we incurred no I/O from the PRODUCTS table. We will see if the same is true of a join to the V_PRODUCTS view itself below.

SQL> SELECT /*+ GATHER_PLAN_STATISTICS */

2         p.prod_name

3 ,     AVG(s.quantity_sold) AS avg_sales

4 FROM   sales s

5 ,     v_products p

6 WHERE s.prod_id = p.prod_id

7 GROUP BY

8         p.prod_name;

 

<< …snip… >>

 

71 rows selected.

 

SQL> SELECT *

2 FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'RUNSTATS_LAST'));

PLAN_TABLE_OUTPUT

—————————————————————————————————————–

SQL_ID 78nw9vr6vwg7n, child number 0

————————————-

SELECT /*+ GATHER_PLAN_STATISTICS */       p.prod_name ,

AVG(s.quantity_sold) AS avg_sales FROM   sales s ,     v_products p

WHERE s.prod_id = p.prod_id GROUP BY       p.prod_name

 

Plan hash value: 1128805752

 

----------------------------------------------------------------------------------------- ... -----------
| Id  | Operation                | Name                       | Starts | E-Rows | A-Rows | ... | Buffers |
----------------------------------------------------------------------------------------- ... -----------
|   1 |  HASH GROUP BY           |                            |      1 |     71 |     71 | ... |    1731 |
|*  2 |   HASH JOIN              |                            |      1 |     72 |     72 | ... |    1731 |
|   3 |    VIEW                  | VW_GBC_5                   |      1 |     72 |     72 | ... |    1727 |
|   4 |     HASH GROUP BY        |                            |      1 |     72 |     72 | ... |    1727 |
|   5 |      PARTITION RANGE ALL |                            |      1 |   918K |   918K | ... |    1727 |
|   6 |       TABLE ACCESS FULL  | SALES                      |     28 |   918K |   918K | ... |    1727 |
|   7 |    VIEW                  | V_PRODUCTS                 |      1 |     72 |     72 | ... |       4 |
|   8 |     RESULT CACHE         | 1b3305bb7rs5sdmdydr6u8sb49 |      1 |        |     72 | ... |       4 |
|   9 |      TABLE ACCESS FULL   | PRODUCTS                   |      1 |     72 |     72 | ... |       4 |
----------------------------------------------------------------------------------------- ... -----------

 

Predicate Information (identified by operation id):

—————————————————

 

2 - access("ITEM_1"="P"."PROD_ID")

 

Result Cache Information (identified by operation id):

——————————————————

 

8 –

 

 

32 rows selected.

This time, we have incurred some I/O against the PRODUCTS table. Therefore, if we wish to cache a frequently-used table, we will need to repeat whatever query cached the data in the first place.

dbms_result_cache

Finally, we will look at a new built-in package, DBMS_RESULT_CACHE. This package is primarily for DBAs and has a number of APIs to manage and report on the result cache. We will look at one API in particular; the FLUSH function and procedure. Oracle supplies this API in both formats and as its name suggests, it enables us to clear the result cache and its statistics. We can optionally retain either the free memory in the cache or the cache statistics, but the default is to flush everything, as we will see below (note this is executed as SYSDBA).

SQL> BEGIN

2     DBMS_RESULT_CACHE.FLUSH;

3 END;

4 /

 

PL/SQL procedure successfully completed.

 

SQL> SELECT * FROM v$result_cache_objects;

 

no rows selected

 

SQL> SELECT * FROM v$result_cache_statistics;

 

        ID NAME                                               VALUE
---------- -------------------------------------------------- ---------------
         1 Block Size (Bytes)                                  1024
         2 Block Count Maximum                                 2048
         3 Block Count Current                                 0
         4 Result Size Maximum (Blocks)                        102
         5 Create Count Success                                0
         6 Create Count Failure                                0
         7 Find Count                                          0
         8 Invalidation Count                                  0
         9 Delete Count Invalid                                0
        10 Delete Count Valid                                  0

 

10 rows selected.
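The flush semantics just demonstrated can be sketched as a small cache class whose flush() optionally retains the accumulated statistics. This is a toy loosely modelled on DBMS_RESULT_CACHE.FLUSH; the class and parameter names are invented, not Oracle's API.

```python
# Toy cache illustrating FLUSH semantics: clear everything by default,
# or clear the stored results while retaining the statistics.
class ToyResultCache:
    def __init__(self):
        self.store = {}
        self.stats = {"Create Count Success": 0, "Find Count": 0}

    def get(self, key, compute):
        if key in self.store:
            self.stats["Find Count"] += 1        # a cache hit
            return self.store[key]
        self.store[key] = compute()              # a cache miss: create entry
        self.stats["Create Count Success"] += 1
        return self.store[key]

    def flush(self, retain_stats=False):
        """Empty the cache; by default the statistics are cleared too."""
        self.store.clear()
        if not retain_stats:
            for name in self.stats:
                self.stats[name] = 0

cache = ToyResultCache()
cache.get("q1", lambda: 42)
cache.get("q1", lambda: 42)
cache.flush()          # default: results and statistics both go, as with FLUSH
```

Calling flush(retain_stats=True) instead would leave the counters intact while still emptying the cache, mirroring the optional-retention behaviour described above.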

Slightly improve performance using index_asc or index_desc hints

You can use the index_asc and index_desc hints to instruct the optimizer to explicitly choose an index scan for the specified table. With the index_asc hint Oracle scans the index entries in ascending order; with index_desc it scans them in descending order.

I’ll show in a simple example how to slightly improve performance using one of these hints.

For my example I will create “big_table” using Tom Kyte’s script as usual.


 

Creating big_table:

create table big_table
as
select rownum id, a.*
from all_objects a
where 1=0
/
alter table big_table nologging;
declare
l_cnt number;
l_rows number := 1000000;
begin
insert /*+ append*/
into big_table
select rownum, a.*
from all_objects a
where rownum <= 1000000;
l_cnt := sql%rowcount;
commit;
while (l_cnt < l_rows)
loop
insert /*+APPEND */ into big_table
select rownum+l_cnt,
OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID, DATA_OBJECT_ID,OBJECT_TYPE,
CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS,TEMPORARY,GENERATED,SECONDARY,NAMESPACE,EDITION_NAME
from big_table
where rownum <= l_rows - l_cnt;
l_cnt := l_cnt + sql%rowcount;
commit;
end loop;
end;
/

I will create an index on the object_id column and gather statistics.

create index i_objectid on big_table(object_id);
begin
dbms_stats.gather_table_stats
( ownname    => user,
tabname    => 'BIG_TABLE',
method_opt => 'for all indexed columns',
cascade    => TRUE );
end;
/

It is important to mention that Oracle scans index entries in ascending order by default, so I’ll concentrate my example on the index_desc hint. Suppose I have a query that scans the table for a specified range and returns the result in descending order.

select  * from big_table t
where object_id between 30000 and 60000
order by object_id desc;

First I will use the index_asc hint to gather the data. I’ve flushed the buffer cache for testing purposes.

msutic@TEST11> set timing on
msutic@TEST11> set autot traceonly
msutic@TEST11>
msutic@TEST11>
msutic@TEST11> select /*+ index_asc(t i_objectid) */ * from big_table t
where object_id between 30000 and 60000
order by object_id desc
4  /
429740 rows selected.
Elapsed: 00:01:35.59
Execution Plan
----------------------------------------------------------
Plan hash value: 4044193925
---------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name       | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |            |   431K|    43M|       |   441K  (1)| 00:26:34 |
|   1 |  SORT ORDER BY               |            |   431K|    43M|    60M|   441K  (1)| 00:26:34 |
|   2 |   TABLE ACCESS BY INDEX ROWID| BIG_TABLE  |   431K|    43M|       |   432K  (1)| 00:26:01 |
|*  3 |    INDEX RANGE SCAN          | I_OBJECTID |   431K|       |       |   972   (2)| 00:00:04 |
---------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("OBJECT_ID">=30000 AND "OBJECT_ID"<=60000)
Statistics
----------------------------------------------------------
763  recursive calls
11  db block gets
430823  consistent gets
19018  physical reads
0  redo size
5740226  bytes sent via SQL*Net to client
9826  bytes received via SQL*Net from client
861  SQL*Net roundtrips to/from client
10  sorts (memory)
1  sorts (disk)
429740  rows processed

It lasted 1 min and 35.59 secs. Notice “SORT ORDER BY”: as the result set was not already sorted, Oracle used an explicit sort to order the output in descending order, and that took some time. But what will happen when I use the index_desc hint to get the result set sorted in descending order?

msutic@TEST11> select /*+ index_desc(t i_objectid) */ * from big_table t
where object_id between 30000 and 60000
order by object_id desc
4  /
429740 rows selected.
Elapsed: 00:01:12.21
Execution Plan
----------------------------------------------------------
Plan hash value: 2071862395
-------------------------------------------------------------------------------------------
| Id  | Operation                    | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |            |   431K|    43M|   432K  (1)| 00:26:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID | BIG_TABLE  |   431K|    43M|   432K  (1)| 00:26:01 |
|*  2 |   INDEX RANGE SCAN DESCENDING| I_OBJECTID |   431K|       |   972   (2)| 00:00:04 |
-------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("OBJECT_ID">=30000 AND "OBJECT_ID"<=60000)
Statistics
----------------------------------------------------------
707  recursive calls
0  db block gets
431681  consistent gets
11957  physical reads
0  redo size
5740226  bytes sent via SQL*Net to client
9826  bytes received via SQL*Net from client
861  SQL*Net roundtrips to/from client
10  sorts (memory)
0  sorts (disk)
429740  rows processed

This execution lasted 1 min and 12.21 secs, which is slightly faster than the execution before. As you can see, in this example there is no explicit SORT ORDER BY step because the result set was already sorted: the query took advantage of the descending index scan to return the rows in descending order. This was just a simple example to illustrate how to gain a performance benefit using index hints. In many cases these hints are of no use, but I put together this demo case to try out a simple scenario.
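The reason the descending plan needs no SORT ORDER BY step can be pictured with a sorted in-memory "index" and Python's bisect module: the range is located once and then simply walked backwards. This is a sketch of the idea with invented object_id values, not of Oracle's internals.

```python
# A sorted list stands in for the index on object_id.
import bisect

object_ids = sorted([10, 25000, 30000, 31000, 45000, 59999, 60000, 75000])

lo = bisect.bisect_left(object_ids, 30000)    # first key >= 30000
hi = bisect.bisect_right(object_ids, 60000)   # just past the last key <= 60000

ascending  = object_ids[lo:hi]        # like INDEX RANGE SCAN
descending = object_ids[lo:hi][::-1]  # like INDEX RANGE SCAN DESCENDING:
                                      # reversal of an already-ordered range,
                                      # no comparison sort needed
```

Reversing an already-ordered slice is linear work with no sort, which is the same saving the descending index scan delivers over the SORT ORDER BY plan.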

 

Understanding Indexes Concept -II-

Indexes play a crucial role in the performance tuning of a database. It is very important to know how an index works, i.e. how indexes fetch the data from tables. There is a very good post by rleishman on the working of indexes. Let’s have a look.
What is an Index ? 

An index is a schema object that contains an entry for each value that appears in the indexed column(s) of the table or cluster, and provides direct, fast access to rows. Just as the index in a manual helps us locate information faster than if there were no index, an Oracle Database index provides a faster access path to table data.

 

Blocks 

First we need to understand a block. A block – or page for Microsoft boffins – is the smallest unit of disk that Oracle will read or write. All data in Oracle – tables, indexes, clusters – is stored in blocks. The block size is configurable for any given database but is usually one of 4Kb, 8Kb, 16Kb, or 32Kb. Rows in a table are usually much smaller than this, so many rows will generally fit into a single block. So we never read “just one row”; we will always read the entire block and ignore the rows we don’t need. Minimising this wastage is one of the fundamentals of Oracle Performance Tuning.
Oracle uses two different index architectures: b-Tree indexes and bitmap indexes. Cluster indexes, bitmap join indexes, function-based indexes, reverse key indexes and text indexes are all just variations on the two main types. b-Tree is the “normal” index .

The “-Tree” in b-Tree 

A b-Tree index is a data structure in the form of a tree – no surprises there – but it is a tree of database blocks, not rows. Imagine the leaf blocks of the index as the pages of a phone book .  Each page in the book (leaf block in the index) contains many entries, which consist of a name (indexed column value) and an address (ROWID) that tells us the physical location of the telephone (row in the table).

The names on each page are sorted, and the pages – when sorted correctly – contain a complete sorted list of every name and address

A sorted list in a phone book is fine for humans, because we have mastered “the flick”: the ability to fan through the book looking for the page that will contain our target without reading the entire page. When we flick through the phone book, we are just reading the first name on each page, which is usually in a larger font in the page header. Oracle cannot read a single name (row) and ignore the rest of the page (block); it needs to read the entire block.

 

If we had no thumbs, we may find it convenient to create a separate ordered list containing the first name on each page of the phone book along with the page number. This is how the branch-blocks of an index work; a reduced list that contains the first row of each block plus the address of that block. In a large phone book, this reduced list containing one entry per page will still cover many pages, so the process is repeated, creating the next level up in the index, and so on until we are left with a single page: the root of the tree.

 

For example : 

To find the name Gallileo in this b-Tree phone book, we:

=> Read page 1. This tells us that page 6 starts with Fermat and that page 7 starts with Hawking.

=> Read page 6. This tells us that page 350 starts with Fyshe and that page 351 starts with Garibaldi.

=> Read page 350, which is a leaf block; we find Gallileo’s address and phone number.

=> That’s it; 3 blocks to find a specific row in a million row table. In reality, index blocks often fit 100 or more rows, so b-Trees are typically quite shallow. I have never seen an index with more than 5 levels. Curious? Try this:
SQL> select index_name,  blevel+1  from  user_indexes  order  by  2 ;

user_indexes.blevel is the number of branch levels. Always add 1 to include the leaf level; this tells us the number of blocks a unique index scan must read to reach the leaf-block. If we’re really, really, insatiably curious; try this in SQL*Plus:
SQL> accept index_name prompt "Index Name: "
SQL> alter session set tracefile_identifier = '&index_name';
SQL> column object_id new_value object_id
SQL> select object_id from user_objects where object_type = 'INDEX' and object_name = upper('&index_name');
SQL> alter session set events 'immediate trace name treedump level &object_id';
SQL> alter session set tracefile_identifier = '';
SQL> show parameter user_dump_dest
Give the name of an index on a smallish table (because this will create a BIG file). Now, on the Oracle server, go to the directory shown by the final SHOW PARAMETER user_dump_dest command and find the trace file – the file name will contain the index name. Here is a sample:
—- begin tree dump
branch: 0x68066c8 109078216 (0: nrow: 325, level: 1)
leaf: 0x68066c9 109078217 (-1: nrow: 694 rrow: 694)
leaf: 0x68066ca 109078218 (0: nrow: 693 rrow: 693)
leaf: 0x68066cb 109078219 (1: nrow: 693 rrow: 693)
leaf: 0x68066cc 109078220 (2: nrow: 693 rrow: 693)
leaf: 0x68066cd 109078221 (3: nrow: 693 rrow: 693)


leaf: 0x68069cf 109078991 (320: nrow: 763 rrow: 763)
leaf: 0x68069d0 109078992 (321: nrow: 761 rrow: 761)
leaf: 0x68069d1 109078993 (322: nrow: 798 rrow: 798)
leaf: 0x68069d2 109078994 (323: nrow: 807 rrow: 807)
—– end tree dump
This index has only a root branch with 325 leaf nodes. Each leaf node contains a variable number of index entries, up to 807! A deeper index would be more interesting, but it would take a while to dump.
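The phone-book walk described above can be sketched as a two-level structure: a branch level holding the first name on each leaf page, and leaf pages holding the sorted entries. This is a hypothetical miniature with made-up names, just to show the descent from branch to leaf.

```python
# Two-level "index": branch keys point at leaf pages of sorted names.
import bisect

leaves = [
    ["Fermat", "Fibonacci", "Fourier"],
    ["Fyshe", "Gallileo", "Galois"],
    ["Garibaldi", "Gauss", "Hardy"],
]
branch = [page[0] for page in leaves]             # first entry of each leaf page

def index_lookup(name):
    """Descend the branch level to one leaf, then search within that block."""
    slot = bisect.bisect_right(branch, name) - 1  # leaf whose first name <= name
    leaf = leaves[max(slot, 0)]
    return name if name in leaf else None

index_lookup("Gallileo")   # two "block" visits: one branch, one leaf
```

However many names we add, a lookup only ever reads one block per level, which is why a unique scan touches so few blocks.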

“B”  is  for…

Contrary to popular belief, b is not for binary; it’s balanced.

As we insert new rows into the table, new rows are inserted into index leaf blocks. When a leaf block is full, another insert will cause the block to be split into two blocks, which means an entry for the new block must be added to the parent branch-block. If the branch-block is also full, it too is split. The process propagates back up the tree until a parent of the split block has space for one more entry, or the root is reached. A new root is created if the root node splits. Staggeringly, this process ensures that every branch will be the same length.
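A back-of-the-envelope way to see why b-Trees stay so shallow: the number of levels grows only with the logarithm of the row count. The estimate below ignores block fill factors and key sizes, so it is only illustrative.

```python
# Rough estimate of index depth: levels ~ log(rows) base (entries per block).
import math

def estimated_levels(n_rows, entries_per_block):
    """Approximate branch levels + leaf level needed to index n_rows."""
    return max(1, math.ceil(math.log(n_rows) / math.log(entries_per_block)))

estimated_levels(1_000_000, 500)      # a million rows: still only 3 levels
```

Multiplying the row count by a thousand adds roughly one more level, which matches the observation that indexes deeper than 5 levels are rare.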
How are Indexes used ?
Indexes have three main uses:

  • To quickly find specific rows by avoiding a Full Table Scan

 

We’ve already seen above how a Unique Scan works. Using the phone book metaphor, it’s not hard to understand how a Range Scan works in much the same way to find all people named “Gallileo”, or all of the names alphabetically between “Smith” and “Smythe”. Range Scans can occur when we use >, <, LIKE, or BETWEEN in a WHERE clause. A range scan will find the first row in the range using the same technique as the Unique Scan, but will then keep reading the index up to the end of the range. It is OK if the range covers many blocks.

  • To avoid a table access altogether

 

If all we wanted to do when looking up Gallileo in the phone book was to find his address or phone number, the job would be done. However if we wanted to know his date of birth, we’d have to phone and ask. This takes time. If it was something that we needed all the time, like an email address, we could save time by adding it to the phone book.

Oracle does the same thing. If the information is in the index, then it doesn’t bother to read the table. It is a reasonably common technique to add columns to an index, not because they will be used as part of the index scan, but because they save a table access. In fact, Oracle may even perform a Fast Full Scan of an index that it cannot use in a Range or Unique scan just to avoid a table access.
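The "add the email address to the phone book" idea can be sketched as an index entry that carries an extra column, so the lookup never has to visit the table at all. The data and names below are invented for illustration.

```python
# The "table": rows addressed by rowid, holding all columns.
table = {
    "rowid_1": {"name": "Gallileo", "dob": "1564-02-15", "email": "g@pisa.it"},
}

# Index on (name) that also stores email as an included column:
covering_index = {"Gallileo": ("rowid_1", "g@pisa.it")}

def email_for(name):
    rowid, email = covering_index[name]
    return email          # answered from the index; 'table' is never read
```

Asking for the date of birth would still force a trip to the table via the rowid; only the columns carried in the index entry avoid the table access.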

  • To avoid a sort

 

This one is not so well known, largely because it is so poorly documented (and in many cases, unpredictably implemented by the Optimizer as well). Oracle performs a sort for many reasons: ORDER BY, GROUP BY, DISTINCT, set operations (eg. UNION), sort-merge joins, uncorrelated IN-subqueries, and analytic functions. If a sort operation requires rows in the same order as the index, then Oracle may read the table rows via the index, and a separate sort operation is not necessary since the rows are returned in sorted order.

 

Despite all of the instances listed above where a sort is performed, I have only seen three cases where a sort is actually avoided.
1. GROUP BY : 


SQL> select src_sys, sum(actl_expns_amt), count(*)
     from   ef_actl_expns
     where  src_sys = 'CDW'
     and    actl_expns_amt > 0
     group by src_sys;

------------------------------------------------------------------
| Id  | Operation                            | Name              |
------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                   |
|   1 |  SORT GROUP BY NOSORT                |                   |
|*  2 |   TABLE ACCESS BY GLOBAL INDEX ROWID | EF_ACTL_EXPNS     |
|*  3 |    INDEX RANGE SCAN                  | EF_AEXP_PK        |
------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("ACTL_EXPNS_AMT">0)
   3 - access("SRC_SYS"='CDW')

Note the NOSORT qualifier in Step 1.

2. ORDER BY : 


SQL> select *
     from   ef_actl_expns
     where  src_sys = 'CDW'
     and    actl_expns_amt > 0
     order by src_sys;

------------------------------------------------------------------
| Id  | Operation                           | Name               |
------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                    |
|*  1 |  TABLE ACCESS BY GLOBAL INDEX ROWID | EF_ACTL_EXPNS      |
|*  2 |   INDEX RANGE SCAN                  | EF_AEXP_PK         |
------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("ACTL_EXPNS_AMT">0)
   2 - access("SRC_SYS"='CDW')

Note that there is no SORT operation, despite the ORDER BY clause. Compare this to the following:
SQL> select *
     from   ef_actl_expns
     where  src_sys = 'CDW'
     and    actl_expns_amt > 0
     order by actl_expns_amt;

------------------------------------------------------------------
| Id  | Operation                            | Name              |
------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                   |
|   1 |  SORT ORDER BY                       |                   |
|*  2 |   TABLE ACCESS BY GLOBAL INDEX ROWID | EF_ACTL_EXPNS     |
|*  3 |    INDEX RANGE SCAN                  | EF_AEXP_PK        |
------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("ACTL_EXPNS_AMT">0)
   3 - access("SRC_SYS"='CDW')

3. DISTINCT : 


SQL> select distinct src_sys
     from   ef_actl_expns
     where  src_sys = 'CDW'
     and    actl_expns_amt > 0;

------------------------------------------------------------------
| Id  | Operation                            | Name              |
------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                   |
|   1 |  SORT UNIQUE NOSORT                  |                   |
|*  2 |   TABLE ACCESS BY GLOBAL INDEX ROWID | EF_ACTL_EXPNS     |
|*  3 |    INDEX RANGE SCAN                  | EF_AEXP_PK        |
------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("ACTL_EXPNS_AMT">0)
   3 - access("SRC_SYS"='CDW')

Again, note the NOSORT qualifier.

This is an extraordinary tuning technique in OLTP systems like SQL*Forms that return one page of detail at a time to the screen. A SQL with a DISTINCT, GROUP BY, or ORDER BY that uses an index to sort can return just the first page of matching rows without having to fetch the entire result set for a sort. This can be the difference between sub-second response time and several minutes or hours.
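The first-page effect can be demonstrated with a generator standing in for an ordered index scan: only enough rows for one page are ever read, regardless of the size of the full result set. A sketch (the page size of 20 is arbitrary):

```python
# Counting how many "rows" an ordered scan actually touches for one page.
from itertools import islice

reads = 0

def rows_in_index_order(keys):
    """Generator standing in for a table access via an ordered index scan."""
    global reads
    for key in keys:
        reads += 1
        yield key

index_keys = list(range(1_000_000))          # already in sorted order
first_page = list(islice(rows_in_index_order(index_keys), 20))
```

A sort-based plan would have to consume and sort all one million keys before returning the first row; the ordered scan stops after 20.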
Full table Scans are not bad : 

Up to now, we’ve seen how indexes can be good. It’s not always the case; sometimes indexes are no help at all, or worse: they make a query slower.

 

A b-Tree index will be no help at all in a reduced scan unless the WHERE clause compares indexed columns using =, >, <, LIKE, IN, or BETWEEN operators. A b-Tree index cannot be used to scan for any NOT-style operators, eg. !=, NOT IN, NOT LIKE. There are lots of conditions, caveats, and complexities regarding joins, sub-queries, OR predicates, functions (inc. arithmetic and concatenation), and casting that are outside the scope of this article. Consult a good SQL tuning manual.

 

Much more interesting – and important – are the cases where an index makes a SQL slower. These are particularly common in batch systems that process large quantities of data.

 

To explain the problem, we need a new metaphor. Imagine a large deciduous tree in our front yard. It’s Autumn, and it’s our job to pick up all of the leaves on the lawn. Clearly, the fastest way to do this (without a rake, or a leaf-vac…) would be to get down on hands and knees with a bag and work our way back and forth over the lawn, stuffing leaves in the bag as we go. This is a Full Table Scan, selecting rows in no particular order, except that they are nearest to hand. This metaphor works on a couple of levels: we would grab leaves in handfuls, not one by one. A Full Table Scan does the same thing: when a block is read from disk, Oracle caches the next few blocks with the expectation that it will be asked for them very soon. Type this in SQL*Plus:

 

SQL> show parameter  db_file_multiblock_read_count

 

Just to shake things up a bit (and to feed an undiagnosed obsessive compulsive disorder), we decide to pick up the leaves in order of size. In support of this endeavour, we take a digital photograph of the lawn, write an image analysis program to identify and measure every leaf, then load the results into a Virtual Reality headset that will highlight the smallest leaf left on the lawn. Ingenious, yes; but this is clearly going to take a lot longer than a full table scan because we cover much more distance walking from leaf to leaf.

 

So obviously Full Table Scan is the faster way to pick up every leaf. But just as obvious is that the index (virtual reality headset) is the faster way to pick up just the smallest leaf, or even the 100 smallest leaves. As the number rises, we approach a break-even point; a number beyond which it is faster to just full table scan. This number varies depending on the table, the index, the database settings, the hardware, and the load on the server; generally it is somewhere between 1% and 10% of the table.
The main reasons for this are :

  • As implied above, reading a table in indexed order means more movement for the disk head.
  • Oracle cannot read single rows. To read a row via an index, the entire block must be read with all but one row discarded. So an index scan of 100 rows would read 100 blocks, but a FTS might read 100 rows in a single block.
  • The db_file_multiblock_read_count setting described earlier means FTS requires fewer visits to the physical disk.
  • Even if none of these things was true, accessing the entire index and the entire table is still more IO than just accessing the table.

 

So what’s the lesson here? Know our data! If we need 50% of the rows in the table to resolve our query, an index scan just won’t help. Not only should we not bother creating or investigating the existence of an index, we should check to make sure Oracle is not already using one. There are a number of ways to influence index usage; once again, consult a tuning manual. The exception to this rule (there’s always one) is when all of the columns referenced in the SQL are contained in the index. If Oracle does not have to access the table then there is no break-even point; it is generally quicker to scan the index even for 100% of the rows.
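The break-even point can be sketched with a toy cost model: an index scan pays roughly one block read per row fetched, while a full table scan reads every block but several at a time. This is an illustration of the trade-off only, not the optimizer's real costing; all numbers are invented.

```python
# Toy model: at what fraction of rows does a full scan become cheaper?
def break_even_fraction(table_blocks, table_rows, multiblock_read_count):
    """Fraction of rows above which the full table scan is the cheaper plan."""
    fts_cost = table_blocks / multiblock_read_count   # multiblock reads
    # Index access costs ~1 block read per row, so the costs cross where
    # rows_fetched == fts_cost:
    return fts_cost / table_rows

# 1M rows in 100,000 blocks, reading 16 blocks per IO:
break_even_fraction(100_000, 1_000_000, 16)   # well under 1% of the table
```

Even this crude model lands in the "somewhere between 1% and 10%" ballpark quoted above, and shows why a larger multiblock read count pushes the break-even point even lower.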

 

Summary : 

Indexes are not a dark art; they work in an entirely predictable and even intuitive way. Understanding how they work moves performance tuning from the realm of guesswork to that of science; so embrace the technology and read the manual.

 

Reference:
http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_5010.htm
http://www.orafaq.com/node/1403

 

Interpreting a Raw SQL Trace File

SQL_TRACE is the main method for collecting SQL execution information in Oracle. It records a wide range of information and statistics that can be used to tune SQL operations, and the resulting trace file contains a great deal of detail. Each cursor that is opened after tracing has been enabled is recorded in the trace file.

Entries in the raw trace file are keyed by cursor number, e.g. PARSING IN CURSOR #3. EXECutes, FETCHes and WAITs are recorded against a cursor, and the information applies to the most recently parsed statement within that cursor. First, let’s have a look at “Wait Events”.

WAIT #6: nam='db file sequential read' ela= 8458 file#=110 block#=63682 blocks=1 obj#=221 tim=506028963546

WAIT = an event that we waited for.
nam = what was being waited for; the wait events here are the same as those seen in the view V$SESSION_WAIT.
ela = elapsed time for the operation (microseconds).
p1 = P1 for the given wait event.
p2 = P2 for the given wait event.
p3 = P3 for the given wait event.

Note that the P1/P2/P3 parameters are printed under event-specific names; for 'db file sequential read' they appear as file#, block# and blocks.

 

Example No. 1: WAIT #6: nam='db file sequential read' ela= 8458 file#=110 block#=63682 blocks=1 obj#=221 tim=506028963546

The above line can be translated as: a wait completed under cursor #6 for “db file sequential read”. We waited 8458 microseconds, i.e. approx. 8.5 milliseconds, for a read of file 110, starting at block 63682, for 1 Oracle block of object number 221. The timestamp was 506028963546.

 

Example No. 2: WAIT #1: nam='library cache: mutex X' ela= 814 idn=3606132107 value=3302829850624 where=4 obj#=-1 tim=995364327604

The above line can be translated as: a wait completed under cursor #1 for “library cache: mutex X”. We waited 814 microseconds, i.e. approx. 0.8 milliseconds, to get an eXclusive library cache mutex with identifier 3606132107, value 3302829850624, at location 4. It was not associated with any particular object (obj#=-1). The timestamp was 995364327604.
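Wait lines like the two examples above can be decoded mechanically. Below is a minimal Python sketch; the field layout is assumed from the examples quoted here, parameter names such as file# vary per event, and multi-word parameter names like 'driver id' are not handled.

```python
import re

# Sketch: pull the cursor number, event name, elapsed microseconds and the
# name=value parameters out of a raw WAIT line. Accepts either straight or
# curly quotes, since copied trace extracts vary.
WAIT_RE = re.compile(
    r"WAIT #(?P<cursor>\d+): nam=[’']?(?P<nam>[^’']+)[’']? ela= ?(?P<ela>\d+) (?P<rest>.*)")

def parse_wait(line):
    m = WAIT_RE.match(line.strip())
    if not m:
        return None
    params = re.findall(r"(\S+?)=(-?\d+)", m.group("rest"))
    return {"cursor": int(m.group("cursor")),
            "nam": m.group("nam"),
            "ela_us": int(m.group("ela")),
            **{k: int(v) for k, v in params}}

w = parse_wait("WAIT #6: nam='db file sequential read' ela= 8458 "
               "file#=110 block#=63682 blocks=1 obj#=221 tim=506028963546")
print(w["nam"], w["ela_us"], w["file#"], w["blocks"])
```

Summing ela_us per nam over a whole trace file is the usual first step when looking for the dominant wait event.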

 

The trace file also shows the processing of SQL statements. Oracle processes a SQL statement as follows:
Stage 1: Create a Cursor
Stage 2: Parse the Statement
Stage 3: Describe Results
Stage 4: Define Output
Stage 5: Bind Any Variables
Stage 6: Execute the Statement
Stage 7: Parallelize the Statement
Stage 8: Fetch Rows of a Query Result
Stage 9: Close the Cursor
Now let’s move on to another important entry: PARSING IN CURSOR #n.

PARSING IN CURSOR #n :

Cursor : in order for Oracle to process a SQL statement, it needs to create an area of memory known as the context area; this holds the information needed to process the statement, including the number of rows processed by the statement and a pointer to the parsed representation of the statement. (Parsing a SQL statement is the process whereby the statement is transferred to the server and evaluated as being valid.)

 

A cursor is a handle, or pointer, to the context area. Through the cursor, a PL/SQL program can control the context area and what happens to it as the statement is processed. Two important features of cursors are:

1.)  Cursors allow you to fetch and process rows returned by a SELECT statement, one row at a time.

2.)  A cursor is named so that it can be referenced.

 

Parsing : parsing is the first step in the processing of any database statement. The PARSE record is accompanied by the cursor number. Let’s have a look at the “PARSING IN CURSOR” section of a particular trace file.

 

PARSING IN CURSOR #2 len=92 dep=0 uid=0 oct=3 lid=0 tim=277930332201 hv=1039576264 ad='15d51e60' sqlid='dsz47ssyzdb68'

select p.PID,p.SPID,s.SID from v$process p,v$session s where s.paddr = p.addr and s.sid = 12

END OF STMT

PARSE #2:c=31250,e=19173,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=836746634,tim=27930332198

EXEC #2:c=0,e=86,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=836746634,tim=77930335666

WAIT #2: nam='SQL*Net message to client' ela= 10 driver id=1413697536 #bytes=1 p3=0 obj#=116 tim=77930335778

FETCH #2:c=0,e=805,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,plh=836746634,tim=77930336684

WAIT #2: nam='SQL*Net message from client' ela= 363 driver id=1413697536 #bytes=1 p3=0 obj#=116 tim=77930337227

FETCH #2:c=0,e=31,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=836746634,tim=77930337421

STAT #2 id=1 cnt=1 pid=0 pos=1 obj=0 op='NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us cost=0 size=152 card=1)'

STAT #2 id=2 cnt=27 pid=1 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=156 us cost=0 size=96 card=1)'

STAT #2 id=3 cnt=1 pid=2 pos=1 obj=0 op='NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us cost=0 size=39 card=1)'

STAT #2 id=4 cnt=1 pid=3 pos=1 obj=0 op='FIXED TABLE FIXED INDEX X$KSLWT (ind:1) (cr=0 pr=0 pw=0 time=0 us cost=0 size=26 card=1)'

STAT #2 id=5 cnt=1 pid=3 pos=2 obj=0 op='FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=0 us cost=0 size=13 card=1)'

STAT #2 id=6 cnt=27 pid=2 pos=2 obj=0 op='BUFFER SORT (cr=0 pr=0 pw=0 time=78 us cost=0 size=57 card=1)'

STAT #2 id=7 cnt=27 pid=6 pos=1 obj=0 op='FIXED TABLE FULL X$KSUPR (cr=0 pr=0 pw=0 time=130 us cost=0 size=57 card=1)'

STAT #2 id=8 cnt=1 pid=1 pos=2 obj=0 op='FIXED TABLE FIXED INDEX X$KSUSE (ind:1) (cr=0 pr=0 pw=0 time=0 us cost=0 size=56 card=1)'

WAIT #2: nam='SQL*Net message to client' ela= 7 driver id=1413697536 #bytes=1 p3=0 obj#=116 tim=77930338248

*** 2012-05-19 15:07:22.843

WAIT #2: nam='SQL*Net message from client' ela= 38291082 driver id=1413697536 #bytes=1 p3=0 obj#=116 tim=77968629417

CLOSE #2:c=0,e=30,dep=0,type=0,tim=77968629737

len = the number of characters in the SQL statement.
dep = the recursive depth at which the SQL statement was executed. dep=0 indicates it was executed directly by the client application; dep=1 indicates it was executed by a trigger, the Oracle optimizer, or a space management call; dep=2 indicates it was called from SQL running at depth 1 (e.g. recursive SQL issued from a trigger), and so on.
uid = schema user id under which the SQL was parsed.
oct = Oracle command type.
lid = privilege user id.
tim = timestamp.
hv = hash value of the statement.
ad = SQLTEXT address.

PARSE #3:c=15625,e=177782,p=2,cr=3,cu=0,mis=1,r=0,dep=0,og=1,plh=272002086,tim=276565143470

c = CPU time (reported in microseconds, but at centisecond granularity, on 9i and above).
e = elapsed time (centiseconds prior to 9i, microseconds thereafter).
p = number of physical reads.
cr = number of buffers retrieved for consistent (CR) reads.
cu = number of buffers retrieved in current mode.
mis = cursor missed in the cache (a hard parse was required).
r = number of rows processed.
dep = recursive call depth (0 = user SQL, >0 = recursive).
og = optimizer goal: 1=ALL_ROWS, 2=FIRST_ROWS, 3=RULE, 4=CHOOSE.
plh = plan hash value of the execution plan used.

From the above PARSE line it is clear that parsing the statement took approximately 0.178 seconds (e=177782 microseconds) and performed 2 physical reads.
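The same c=,e=,p=,… statistics appear on EXEC, FETCH and CLOSE lines, so one small parser covers them all. A minimal sketch, with the record format assumed from the lines quoted above:

```python
import re

# Sketch: decode the comma-separated statistics on PARSE/EXEC/FETCH/CLOSE
# lines. Field meanings follow the key above; all values are integers.
CALL_RE = re.compile(r"(PARSE|EXEC|FETCH|CLOSE) ?#(\d+):(.*)")

def parse_call(line):
    m = CALL_RE.match(line.strip())
    if not m:
        return None
    op, cursor, rest = m.group(1), int(m.group(2)), m.group(3)
    stats = {k: int(v) for k, v in re.findall(r"(\w+)=(-?\d+)", rest)}
    return op, cursor, stats

op, cur, s = parse_call(
    "PARSE #3:c=15625,e=177782,p=2,cr=3,cu=0,mis=1,r=0,dep=0,og=1,"
    "plh=272002086,tim=276565143470")
print(op, cur, s["e"] / 1_000_000, s["p"])  # elapsed seconds, physical reads
```

Aggregating these dictionaries per cursor gives the same parse/execute/fetch totals that TKPROF reports.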

Bind Variables : if the SQL statement references bind variables, a BINDS section follows the cursor, with a block of text for each bind variable. For each bind variable a number of attributes are listed; the ones we are interested in here are:

mxl = the maximum length, i.e. the maximum number of bytes occupied by the variable. E.g. dty=2 and mxl=22 denotes a NUMBER bind (stored in up to 22 bytes).
scl = the scale (for NUMBER columns).
pre = the precision (for NUMBER columns).
value = the value of the bind variable.
dty = the datatype. Typical values are:
1       VARCHAR2 or NVARCHAR2
2       NUMBER
8       LONG
11     ROWID
12     DATE
23     RAW
24     LONG RAW
96     CHAR
112   CLOB or NCLOB
113   BLOB
114   BFILE

EXEC : execute a pre-parsed statement. At this point Oracle has all the necessary information and resources, so the statement is executed. For example:

EXEC #2:c=0,e=225,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=3684871272,tim=282618239403

FETCH : fetch rows from a cursor. For example:
FETCH #4:c=0,e=8864,p=1,cr=26,cu=0,mis=0,r=1,dep=0,og=1,plh=3564694750,tim=282618267037

STAT : these lines report execution statistics for each step of the plan for the numbered cursor, giving us the ‘run time’ execution plan. For example:
STAT #1 id=1 cnt=7 pid=0 pos=1 obj=0 op='SORT ORDER BY (cr=0 pr=0 pw=0 time=0 us cost=2 size=2128 card=1)'

id = line of the explain plan to which the row count applies (starts at line 1); this is effectively the row source id for each row source in the execution tree.
cnt = number of rows for this row source.
pid = parent id of this row source.
pos = position in the explain plan.
obj = object id of the row source (if this is a base object).
op='…' = the row source access operation.
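Because each STAT line carries its own id and its parent's pid, the run-time plan tree can be rebuilt by indenting each operation under its parent. A minimal sketch (STAT line layout assumed from the examples in this article):

```python
import re

# Sketch: rebuild the run-time plan tree from STAT lines using id/pid.
STAT_RE = re.compile(
    r"STAT #\d+ id=(\d+) cnt=(\d+) pid=(\d+) pos=\d+ obj=\d+ op=[’'](.+)[’']")

def plan_tree(lines):
    depth = {0: -1}          # virtual root: pid=0 means "top of the plan"
    out = []
    for line in lines:
        m = STAT_RE.match(line.strip())
        if not m:
            continue
        rid, cnt, pid, op = (int(m.group(1)), m.group(2),
                             int(m.group(3)), m.group(4))
        depth[rid] = depth[pid] + 1   # child sits one level under its parent
        out.append(f"{'  ' * depth[rid]}{op}  (rows={cnt})")
    return out

tree = plan_tree([
    "STAT #2 id=1 cnt=1 pid=0 pos=1 obj=0 op='NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us)'",
    "STAT #2 id=2 cnt=27 pid=1 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=156 us)'",
    "STAT #2 id=3 cnt=1 pid=2 pos=1 obj=0 op='NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us)'",
])
print("\n".join(tree))
```

The indented output matches the shape TKPROF prints for the same cursor, with cnt showing the actual (not estimated) row counts.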

XCTEND : a transaction end marker. For example: XCTEND rlbk=0, rd_only=1, tim=282636050491
rlbk = 1 if a rollback was performed, 0 if no rollback (commit).
rd_only = 1 if the transaction was read only, 0 if changes occurred.

CLOSE : the cursor is closed. For example: CLOSE #4:c=0,e=32,dep=0,type=0,tim=282636050688
c = CPU time (microseconds, at centisecond granularity, on 9i and above).
e = elapsed time (centiseconds prior to 9i, microseconds thereafter).
dep = recursive depth of the cursor.
type = type of close operation.

Note : timestamps (tim=) can be used to determine the time between any two operations.
Reference : Metalink [ID 39817.1]