How to Simulate an Exalogic Machine for Training Purposes

December 21, 2014

In the new "Cloud Computing" era, Oracle is leading its private cloud offering with a complete hardware and software platform for enterprise applications: Oracle Exalogic. However, even within Oracle it can be quite difficult to gain access to an Exalogic machine, and getting to know the system just by reading the documentation is a daunting task. So this post will walk you through the steps of setting up a simulated Exalogic machine, in a virtual environment, that you can use for training purposes.

After completing these steps, you will have an environment where you will be able to make the same storage, network, operating system and software configurations as on the actual Exalogic machine. Of course, this will not be suitable for production, nor will any benchmarks have any relevance. It’s just something that you can use to get yourself familiarized with the machine. If you are new to the Exalogic machine, I suggest going over the Oracle Exalogic White Paper before continuing with the steps.

So, in order to build the system, you will need:

1. Oracle Virtual Box. Download it from here and install on your system if not already installed.

2. Sun Storage Simulator. This is a pre-built virtual machine simulating the actual Sun ZFS Storage Appliance that comes with the Exalogic Machine. It can be downloaded from here.

3. The Oracle Enterprise Linux image that is applied on every compute node in the Exalogic machine. You can get it from edelivery.oracle.com by searching for "Oracle Fusion Middleware" – "Linux x86-64", then clicking on "Oracle Exalogic Elastic Cloud Software 11g Media Pack" and downloading the two archives with "Base Image for Exalogic Linux x86-64" in the title. You can also download the Solaris images from the same location.

4. The network configuration utility for Exalogic. From the same link as in step 3, download "Oracle Exalogic 2.0.0.0.0 Configuration Utility for Exalogic Linux x86-64 and Exalogic Solaris x86-64 (64-bit)".

5. Optionally, you can also download the WebLogic Server and Coherence software from the same eDelivery link.

Once you have downloaded the above tools, you can start building the “machine”, block-by-block:

Step 1: Installing Virtual Box

Run VirtualBox installation program and follow the instructions in the installation wizard.

Step 2: Import the Sun Storage Simulator

From the Oracle VM VirtualBox Manager, click "File -> Import Appliance…" and go through the Import Appliance wizard, choosing the "Sun ZFS Storage 7000.ovf" file from the downloaded Storage Simulator. After the import, the appliance shows up in the VirtualBox Manager list.

Select the newly imported VM and click “Start”. After booting, configure the basics:

Host Name: Any name you want.
DNS Domain: “localdomain”
Default Router: The same as the IP address, but put 1 as the final octet.
DNS Server: The same as the IP address, but put 1 as the final octet.
Password: Whatever you want.
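The router/DNS convention above (same subnet as the appliance IP, with 1 as the final octet) is the usual VirtualBox host-only networking default. As a quick sanity check, the derivation can be sketched in Python (just an illustration, not part of the appliance setup):

```python
def gateway_for(ip: str) -> str:
    # Derive the default router / DNS address used above:
    # same network as the appliance IP, with 1 as the final octet.
    octets = ip.split(".")
    octets[-1] = "1"
    return ".".join(octets)

# 192.168.56.x is the usual VirtualBox host-only network range
print(gateway_for("192.168.56.101"))  # → 192.168.56.1
```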

You can now access the appliance interface with a browser at https://<appliance-ip>:215, usually https://192.168.56.101:215, and log in with "root" and the password provided above. After accepting a few default settings, you will see an overview screen of the appliance, just as you would on an actual Exalogic machine.

Step 3: Creating a virtual machine for the first Compute Node

Usually a quarter-rack Exalogic machine has 8 compute nodes, but for demonstration purposes it is enough to build only 2 compute nodes, which will be used to cluster the middleware components. To build a simulated compute node, we will first create a virtual machine and then apply the Base Image for Exalogic Linux x86-64 on it, just like installing any Oracle Enterprise Linux.

To create the compute node VM, go to the Oracle VM VirtualBox Manager and click "New". Follow the instructions in the wizard, choosing a Linux – Oracle (64 bit) operating system, whatever name you want for the virtual machine (for example CN01), and the memory you would like to allocate to this machine. This will hardly match the 96 GB of RAM on a real Exalogic compute node, but it will serve demonstration purposes just fine.

Step 4: Applying the Exalogic Linux x86_64 image on the simulated Compute Node

Make sure that both downloaded archives are in the same folder and then execute the runMe.sh script to merge the archives into a single file. You should obtain an "el_x2-2_baseimage_linux_2.0.0.0.0_64.iso" image file.

If on Windows, you can merge the two files by running:

copy /B el_x2-2_baseimage_linux_2.0.0.0.0_64.iso.part0+el_x2-2_baseimage_linux_2.0.0.0.0_64.iso.part1 el_x2-2_baseimage_linux_2.0.0.0.0_64.iso

el_x2-2_baseimage_linux_2.0.0.0.0_64.iso.part0
el_x2-2_baseimage_linux_2.0.0.0.0_64.iso.part1
1 file(s) copied.
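Both runMe.sh and the Windows copy /B command do the same thing: binary concatenation of the two parts. If you prefer, a few lines of Python do it portably (a sketch; the filenames are the ones from the eDelivery download):

```python
import shutil

def merge_parts(parts, output):
    # Concatenate the split download parts into a single ISO,
    # equivalent to runMe.sh on Linux or `copy /B` on Windows.
    with open(output, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)

# merge_parts(
#     ["el_x2-2_baseimage_linux_2.0.0.0.0_64.iso.part0",
#      "el_x2-2_baseimage_linux_2.0.0.0.0_64.iso.part1"],
#     "el_x2-2_baseimage_linux_2.0.0.0.0_64.iso")
```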

Go back to the Oracle VM VirtualBox Manager and select the newly created CN01 VBox, then click on Settings. Go to “Storage” and click on “Add CD/DVD Device”. In the popup, select “Choose Disk” and navigate to the base image iso file above. You should then see the image in the IDE Controller list:

Next, navigate to “System” in the left menu and make sure that the CD/DVD-ROM is selected in the Boot Order list and is the first bootable media in the list.

Next, start up the virtual machine and follow the Linux OS installation process. If instead of the installation screen you get a "Your CPU does not support long mode. Use a 32bit distribution" message, you need to enable VT-x support in your host machine's BIOS.

Once you have installed the operating system on the first compute node, use the Oracle VM VirtualBox Manager clone facility to create a similar virtual machine (CN02).

Step 5: Use the Exalogic "one-command" configuration utility to configure the network interfaces and IP addresses of the storage appliance and compute nodes

After unzipping the "Oracle Exalogic 2.0.0.0.0 Configuration Utility for Exalogic" archive, you will get a series of scripts, a spreadsheet called "el_configurator.ods" and, most importantly, a readme.txt file. Follow the instructions in the readme to use the spreadsheet. If you are not sure how to fill in the IP addresses, take a look at the default network settings in the Exalogic Enterprise Deployment Guide. Basically, you will have to assign IP addresses to your storage and compute nodes on three interfaces:

NET0 – Management interface / ILOM

BOND0 – simulating the private InfiniBand fabric, for traffic between the compute nodes and the storage heads (ib0 and ib1)

BOND1 – Ethernet over InfiniBand (EoIB), for Ethernet traffic, on eth interfaces.

Once you have a correctly filled spreadsheet, run the scripts on a master compute node (usually the first, but it can be any) to configure the network "within" your simulated Exalogic machine.
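To make the three-interface layout concrete, here is a sketch of the kind of per-node address plan the el_configurator spreadsheet captures. The subnets below are illustrative assumptions, not Exalogic defaults — take the real ranges from the Enterprise Deployment Guide:

```python
def address_plan(nodes, net0="10.0.10.", bond0="192.168.10.", bond1="10.0.20."):
    # One address per interface per node; the host part is just
    # the node's position in the list for this illustration.
    plan = {}
    for i, node in enumerate(nodes, start=1):
        plan[node] = {
            "NET0":  net0 + str(i),   # management interface / ILOM
            "BOND0": bond0 + str(i),  # private InfiniBand (ib0/ib1)
            "BOND1": bond1 + str(i),  # Ethernet over InfiniBand (EoIB)
        }
    return plan

plan = address_plan(["cn01", "cn02", "storage"])
print(plan["cn01"]["BOND0"])  # → 192.168.10.1
```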

All done. All that is left to do now is to consult the Enterprise Deployment Guide on how to configure the storage project and shares, install the WebLogic Server, configure a domain, and so on. Of course, the Exalogic-specific optimizations will not apply in this training WebLogic domain.

A good idea is to use the Oracle VM VirtualBox Manager to export this setup as a backup by going to File -> Export Appliance… and choosing the three VMs you've just created. You will then be able to port or restore the entire simulated Exalogic machine easily.

Producing RSS from PL/SQL

November 24, 2014

First things first: the idea and the bulk of the code for this post are not mine, they are Sean Dillon's. It's a very cool idea that he came up with, and it still works flawlessly.

I needed to incorporate RSS into a project I'm working on, so I grabbed Sean's code. The problem is, it's based on the AskTom table structure, which means it won't run on your database without immediately rewriting the query. Additionally, the code relies on a pretty lengthy query to generate the XML. I'll admit, when I first looked at it, I thought, "Wow, this is going to be more complex than I thought." After looking at it a little while longer, I realized it was actually very simple. Sean also included support for several versions of RSS, improving the functionality but, again, adding to the complexity.

So, I created an example table and simplified the code as much as possible to make it easier for everyone to understand. The table, “PLSQL_PACKAGES”, stores information about some of the built-in PL/SQL packages I use on a regular basis. The links in this table point back to the online Oracle Documentation.

This block of code is just the DDL for the sample table and the insert statements to populate it:

create table plsql_packages(
    id          varchar2(32),
    title       varchar2(255),
    description varchar2(4000),
    link        varchar2(1000),
    updated_by  varchar2(100),
    updated_on  date)
/
create or replace trigger  biu_plsql_packages before insert or
update on plsql_packages
for each row
begin
    if inserting then
        :new.id := sys_guid();
    end if;
        :new.updated_by := nvl(v('APP_USER'),user);
        :new.updated_on := sysdate;
end;
/
insert into plsql_packages(title,description,link)
     values ('DBMS_CRYPTO','DBMS_CRYPTO provides an interface
to encrypt and decrypt stored data, and can be used in
conjunction with PL/SQL programs running network
communications. It provides support for several
industry-standard encryption and hashing algorithms,
including the Advanced Encryption Standard (AES)
encryption algorithm. AES has been approved by the National
Institute of Standards and Technology (NIST) to replace
the Data Encryption Standard (DES).','http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/d_crypto.htm')
/
insert into plsql_packages(title,description,link)
     values ('DBMS_EPG','The DBMS_EPG package implements
the embedded PL/SQL gateway that enables a web browser to
invoke a PL/SQL stored procedure through an HTTP listener.','http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/d_epg.htm#sthref3481')
/
insert into plsql_packages(title,description,link)
     values ('OWA_UTIL','The OWA_UTIL package contains
utility subprograms for performing operations such as
getting the value of CGI environment variables, printing
the data that is returned to the client, and printing the
results of queries in an HTML table.','http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/w_util.htm')
/
insert into plsql_packages(title,description,link)
     values ('UTL_MAIL','The UTL_MAIL package is a utility
for managing email which includes commonly used email
features, such as attachments, CC, BCC, and return receipt.','http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/u_mail.htm#i1001258')
/

This is the important block of code, as it creates the RSS procedure. Don't be intimidated by it, though: the only parts you need to customize to make it work against your own table are the four "customizable parameters" at the top and the query against your own table near the bottom.

create or replace procedure rss
is
    -- customizable parameters
    l_title         varchar2(255) := 'Oracle PL/SQL Packages';
    l_link          varchar2(255) := 'http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/toc.htm';
    l_description   varchar2(255) := 'This is a feed of changes to PL/SQL Package Documentation';
    l_language      varchar2(255) := 'en-us';
    -- end customizable parameters
    l_version       varchar2(10)  := '2.0';
    l_clob          clob;
    l_idx           pls_integer := 1;
    l_len           pls_integer := 255;
    l_defrows       pls_integer := 10;
    l_maxrows       pls_integer := 30;
    l_desclen       pls_integer := 250;
begin
    for i in (
      select xmlelement( "rss",
               -- Begin XML Header Block
               xmlattributes( l_version as "version"),
                 xmlelement( "channel",
                   xmlforest( l_title as "title",
                              l_link as "link",
                              l_description as "description",
                              l_language as "language"),
                 -- End XML Header Block
                 -- Begin List of Individual Articles or
                 -- Items
                 xmlagg(
                     xmlelement( "item",
                       xmlelement("title", x.title),
                       xmlelement("link", x.link),
                       xmlelement("description",
                       x.description),
                       xmlelement("pubDate",
                       to_char(x.updated_on,'Dy, DD Mon RRRR hh24:mi:ss')),
                       xmlelement("guid", XMLATTRIBUTES
                       ('false' as "isPermaLink"),x.id||
                       to_char(x.updated_on,'JHH24MISS'))
                     )
                   )
                   -- End List of Individual Articles or Items
                 )
             ) as result
        from ( -- Actual database query that populates
               -- the list of items
               select id,title,link,description,
                      updated_on
                 from plsql_packages
                where rownum < (l_maxrows+1)) x)
    loop
        l_clob := xmltype.extract(i.result,'/').getclobval;
        exit;
    end loop; --i
    --- OUTPUT RESULTS
    owa_util.mime_header('application/xml', false);
    owa_util.http_header_close;
    for i in 1..ceil(dbms_lob.getlength(l_clob)/l_len) loop
        htp.prn(substr(l_clob,l_idx,l_len));
        l_idx := l_idx + l_len;
    end loop; --i
end rss;
/
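The loop at the end of the procedure pushes the CLOB to the browser in 255-byte slices, since htp.prn takes a bounded varchar2. The slicing logic itself is trivial; in Python terms (purely an illustration of the loop, not part of the procedure):

```python
def chunks(clob: str, size: int = 255):
    # Mirror of the htp.prn loop: emit fixed-size slices so the
    # whole document is written out no matter how long it is.
    for idx in range(0, len(clob), size):
        yield clob[idx:idx + size]

doc = "x" * 1000
print(sum(len(c) for c in chunks(doc)))  # → 1000
```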

Note the "guid" element. This is an optional element (documented here) that an aggregator can use to uniquely identify the item. I'm concatenating the ID column from the table with the Julian date followed by hours, minutes, and seconds – to_char(updated_on,'JHH24MISS'). This means that when you update a row, the date will change, causing your aggregator to see a new guid and display a new item for the changed row. The easiest way to test this procedure is using the "OWA Output" tab in SQL Developer.
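For the curious, the guid scheme is easy to reproduce outside the database. The sketch below mimics to_char(updated_on,'JHH24MISS') in Python; the 1721425 offset is the standard conversion from a proleptic-Gregorian ordinal to a Julian day number (Oracle's 'J' for 2000-01-01 is 2451545):

```python
from datetime import datetime

def item_guid(row_id: str, updated_on: datetime) -> str:
    # Julian day number: days since January 1, 4712 BC,
    # matching Oracle's to_char(date, 'J').
    jdn = updated_on.toordinal() + 1721425
    return f"{row_id}{jdn}{updated_on:%H%M%S}"

# Any change to updated_on yields a new guid for the same row id
print(item_guid("ABC123", datetime(2000, 1, 1, 13, 5, 9)))  # → ABC1232451545130509
```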

If you’re running XE or 11g and you want to call this procedure directly through the APEX DAD, you’ll need to edit the FLOWS_XXXXXX.wwv_flow_epg_include_mod_local function and comment-out the first line as well as your procedure to the IN list.

 

 

Categories: 11g, 12c, Database, Oracle, sql, subprograms Tags: , ,

Oracle Fusion Distributed Order Orchestration on Oracle SuperCluster T5-8

November 4, 2014

Oracle Fusion Distributed Order Orchestration is a next-generation Oracle Fusion Supply Chain Management (SCM) application designed to provide centralized order processing, centralized monitoring and exception management, and faithful execution against predictable order execution policies. The system improves order orchestration across diverse order-capture and fulfillment environments. Oracle Fusion Distributed Order Orchestration offers centrally managed orchestration policies, and fulfillment monitoring. Together these capabilities facilitate increased profitability and customer satisfaction while dramatically reducing fulfillment costs and order errors.

Scaling reliable order execution in the face of unpredictable demand can be critical. Failure of order execution systems or lackluster order processing performance can result in lost revenues and very real damage to brand. As a result, organizations need infrastructure that can handle today’s workloads while effortlessly scaling to meet future demands as needed, without complex and time-consuming reconfiguration processes. Infrastructure must also support high availability, offering resiliency and the ability to weather and recover gracefully from failures.

Deploying Oracle Fusion Distributed Order Orchestration on Oracle SuperCluster T5-8 represents a compelling solution that allows order creation rates to scale predictably as required. To evaluate the performance of this solution, Oracle engineers tested a half-rack Oracle SuperCluster T5-8 system, scaling to multiple instances of key components to deliver greater levels of throughput. Clustering individual components within Oracle SuperCluster provides for availability requirements, and helps ensure continuous operation and failover. As tested, the system demonstrated predictable near-linear scalability by scaling out the number of Oracle Fusion Distributed Order Orchestration and Oracle SOA Suite instances. The scalability of the Oracle SuperCluster platform meant that there was plenty of headroom for even greater order capacity.

Oracle SuperCluster T5-8

Oracle SuperCluster T5-8 is a multipurpose engineered system that has been designed, tested, and integrated to run mission-critical enterprise applications and rapidly deploy cloud services while delivering extreme efficiency, cost savings, and performance. It is well suited for multitier enterprise applications with web, database, and application components. This versatility along with powerful, included no-overhead virtualization capabilities makes it an ideal platform on which to consolidate large numbers of applications, databases, and middleware workloads such as those found in SCM. Figure 1 illustrates Oracle SuperCluster T5-8 as configured for testing Oracle Fusion Distributed Order Orchestration.

Oracle SuperCluster T5-8 is an engineered system designed to host the entire Oracle software solution stack, and it includes the following components:

  • Oracle’s SPARC T5-8 server. The SPARC T5-8 server offers a large memory capacity and a highly integrated design that supports virtualization and consolidation of mission-critical applications. The half-rack Oracle SuperCluster configuration tested for Oracle Fusion Distributed Order Orchestration featured two four-processor SPARC T5-8 servers, each configured with a terabyte of system memory.
  • Oracle Exadata Storage Servers. Oracle Exadata storage technology is provided to enhance the performance of Oracle Database. This platform is ideal for accelerating the performance of Java middleware and applications, general-purpose applications, and Oracle Database 11g Release 2.
  • Oracle ZFS Storage Appliance. An integral Oracle ZFS Storage Appliance uses flash-enabled Hybrid Storage Pools to improve application response times. Its performance scalability for file-based I/O and its ease of management make it a good fit for managing shared application data files within Oracle SuperCluster.
  • Oracle’s Sun Datacenter InfiniBand Switch 36. This InfiniBand switch provides a high-throughput, low-latency, and scalable fabric that is suitable for the consolidation of interprocess communication, network, and storage. InfiniBand delivers up to 63 percent higher transactions per second (TPS) for Oracle Real Application Clusters (Oracle RAC) than when run over gigabit Ethernet (GbE) networks. There are three InfiniBand switches in Oracle SuperCluster, offering private connectivity within the system.
  • Integrated no-cost virtualization. Oracle VM Server for SPARC (previously called Sun Logical Domains or LDoms) enhances security, increases utilization, and improves reliability when combined with Oracle Solaris Zones.

Figure 1 illustrates a half-rack Oracle SuperCluster T5-8 configured for testing as described in the sections that follow.


Figure 1. Oracle Fusion Applications deployed on a half-rack Oracle SuperCluster T5-8.

The half-rack Oracle SuperCluster T5-8 provides an effective platform for scaling Oracle Fusion Distributed Order Orchestration. A number of key technical capabilities allow scalability with good performance and low latency.

  • Oracle’s SPARC T5 processors provide significant computational headroom, allowing virtualized server instances to be added as needs grow to service additional orders.
  • Oracle’s no-cost virtualization solutions, including both Oracle Solaris Zones and Oracle VM Server for SPARC, mean that system resources can be easily subdivided, virtualized, and isolated, allowing considerable consolidation with predictable performance.
  • Oracle Exadata Storage Servers provided as a part of Oracle SuperCluster provide Exadata Smart Flash Cache, intelligently caching database objects in flash memory, replacing slow mechanical I/O operations to disk with very rapid flash memory operations.
  • All of the components of Oracle SuperCluster are connected by a high-speed, low-latency InfiniBand network, reducing latency for Oracle Fusion Distributed Order Orchestration.

SPARC T5-8 Servers

Within the half-rack Oracle SuperCluster T5-8, Oracle VM Server for SPARC is used to divide each of the two SPARC T5-8 servers into an application domain and a database domain. These domains are in turn connected to Oracle Exadata Storage Servers over a high-performance, low-latency InfiniBand network that is internal to the system.

Application Domains

Within the application domains, Oracle Solaris Zones are used to further partition the resources of the SPARC T5-8 servers.

  • Oracle Fusion Applications runs within a zone on the application domain on both SPARC T5-8 servers. This environment is ultimately used to house SCM and the Oracle Fusion Distributed Order Orchestration instances that are used to scale the solution.
  • Oracle HTTP Server runs in a separate zone on one of the servers (node 1), providing web-based access to the system.
  • Oracle Identity Management runs in a separate zone on the second server (node 2), allowing organizations to effectively manage the end-to-end lifecycle of user identities across all enterprise resources.

Database Domains

Oracle RAC 11g Release 2, a clustered version of Oracle Database, runs in the database domains. Oracle RAC uses a shared-cache, clustered database architecture that overcomes the limitations of traditional shared-nothing and shared-disk architectures to provide database performance, scalability, and reliability—with no changes to existing Oracle Database applications.

Oracle Exadata Storage Servers

The Oracle Exadata Storage Servers provided in Oracle SuperCluster T5-8 are configured as the following, separate disk groups to provide database acceleration for the Oracle RAC database instances:

  • An Oracle Fusion Applications disk group
  • An Oracle Internet Directory disk group
  • An Oracle Identity Management disk group

Oracle ZFS Storage Appliance

Oracle ZFS Storage Appliance is used in Oracle SuperCluster for non-database storage. The appliance is a hybrid storage system based on a unique cache-centric architecture featuring massive DRAM plus Flash, and is powered by a multithreaded Symmetric Multiprocessing (SMP) operating system. As a result, 70 to 90 percent of I/O operations are typically served from DRAM on the appliance, helping to consolidate data-intensive workloads. In this test deployment, the Directory and Application tiers made use of storage on the Oracle ZFS Storage Appliance.

Testing and Results

To evaluate performance, the half-rack Oracle SuperCluster T5-8 was exercised with a predefined simulated workload using LoadRunner. Performance was measured in terms of order lines created per hour. Additional scale-out hosts were allocated as the workload increased, with a number of metrics collected to monitor and evaluate system performance.

Workload Description

A typical order lifecycle is depicted in Figure 2. A sales order consists of multiple lines—usually averaging five to six line items per order. An orchestration process consisting of multiple fulfillment steps is assigned to one or more order lines.


Figure 2. A typical order lifecycle for Oracle Fusion Distributed Order Orchestration.

In testing conducted by Oracle, this customer representative scenario had a service payload size of 59 KB with two line items per order. The line items are grouped in a ship set. In this test scenario, orders are received and decomposed; the orchestration engine then processes the line items through a two-step process. The process starts with scheduling both line items as a group (ship set). After successful scheduling, a credit check is executed. Credit checking uses Oracle Fusion Distributed Order Orchestration’s Template Task Layer (TTL) feature. Finally, the order lines progress to closure.

Extensible by end users, the Oracle Fusion Distributed Order Orchestration application harnesses both the Processing Constraints and Oracle Business Rules frameworks. To help with the business process, commonly applicable constraints and Oracle Business Rules are seeded out of the box. Depending on the functional needs, the applicable rules and constraints are executed during various stages of the order lifecycle. This ensures that downstream services—such as order promising and credit checking—get functionally pertinent inputs. In this representative scenario, 17 seeded Oracle Business Rules and 12 seeded processing constraints were executed during order processing. Attribute cross-referencing was also carried out on 33 functionally important attributes belonging to the order header and lines.

The Oracle Distributed Order Orchestration infrastructure supports multiple concurrent users simultaneously submitting orders. Oracle engineers simulated this real-life situation by emulating hundreds of concurrent virtual users, with each virtual user submitting orders in a configurable frequency and a pattern.

Performance Results

Testing was performed by horizontally scaling out the number of virtual hosts (scale-out hosts) dedicated to processing SCM (Oracle Fusion Distributed Order Orchestration/Oracle SOA Suite). Configurations with one, two, three, and four virtual hosts were tested. As shown in Figure 3, near linear scalability was realized in terms of order creations per minute, with four scale-out nodes producing just under 10,000 order creations per minute at the peak.


Figure 3. Adding to the number of Oracle Fusion Distributed Order Orchestration nodes provides near-linear scalability in terms of the number of order creations per minute.

Table 1 provides the details for the tests, listing the average number of order lines created per hour.

Table 1. Scale-out hosts, users, cores, and order lines created per hour realized during Oracle’s testing.

Number of Virtual Users | Processor Cores (Scale-Out Hosts) | Processor Cores (Oracle RAC Nodes) | Number of Scale-Out Hosts | Order Lines Created per Hour
 50 | 14 cores | 32 cores | 1 |  85.3 K
100 | 28 cores | 32 cores | 2 | 164 K
150 | 42 cores | 32 cores | 3 | 268 K
200 | 56 cores | 32 cores | 4 | 480 K

Resource Utilization on the Application Domain and Oracle RAC Nodes

Even though the Oracle SuperCluster T5-8 produced near-linear scalability, system resource utilization remained moderate on the scale-out nodes running in the application domain, with considerable headroom remaining for additional scalability. Figure 4 shows the CPU utilization percentage on each of the four scale-out virtual hosts running Oracle Fusion Distributed Order Orchestration and Oracle SOA Suite. The workload is distributed very consistently among the four nodes. At no time did the CPU utilization exceed 50 percent.


Figure 4. As nodes are added and order creations scale, the CPU utilization percentage remains modest, indicating substantial headroom.

System resource usage also remained moderate on the two Oracle RAC nodes. Figure 5 shows the CPU utilization percentage on the Oracle RAC nodes during the four-node testing. Again, this chart shows that resource utilization is distributed consistently, and steady-state operation still retains considerable headroom.


Figure 5. Even with four nodes running Oracle Fusion Distributed Order Orchestration, CPU utilization percentage remained modest for the Oracle RAC nodes.

Deploying Oracle Fusion Distributed Order Orchestration on Oracle SuperCluster

Oracle Fusion Distributed Order Orchestration is built on an open and standards-based service-oriented architecture for flexible integration and lower total cost of ownership (TCO). The following software releases were utilized in Oracle testing:

  • Oracle Solaris 11 (11.1 SRU7.5)
  • Java Development Kit (JDK) 1.6 u71
  • Oracle Identity Management Server
  • Oracle RAC 11g Release 2 (11.2.0.3)
  • Oracle Fusion Applications Release 7 P22 (including Oracle HTTP Server, Oracle SOA Server, Fusion Applications Distributed Order Orchestration Server, Oracle Fusion Global Order Promising Server, Fusion Applications Advanced Planning Server, SCM Common Server)
  • Oracle Fusion Applications Distributed Order Orchestration patches (18076574, 18157443, 18272928, 18891548)

Figure 6 provides additional details on how the half-rack Oracle SuperCluster T5-8 was deployed for the testing of Oracle Fusion Supply Chain Management.

  • Database domains were configured with 256 GB of RAM and one 16-core SPARC T5 processor.
  • Application domains were configured with 768 GB of RAM and three 16-core SPARC T5 processors, for a total of 48 cores.

High availability requirements were taken into consideration for the deployment architecture, and the virtualization technology allowed considerable flexibility for managing and distributing resources such as processor, memory, and so on.

The deployment activity was divided into three important phases, which are described in the sections that follow:

  • Deployment of Oracle RAC and the Oracle Exadata Storage Servers
  • Deployment of the Oracle Fusion Identity Management infrastructure utilizing an Oracle WebLogic Server domain
  • Deployment of Oracle Fusion Applications utilizing a second Oracle WebLogic Server domain for SCM


Figure 6. Oracle Fusion Distributed Order Orchestration deployed on Oracle SuperCluster T5-8.

Deployment of Oracle RAC and Oracle Exadata Storage Servers

The database domains were configured to contain the following Oracle RAC databases, which are shown in Figure 6:

  • FUSIONDB1 and FUSIONDB2 contained the Oracle Fusion Applications transactional databases.
  • OIDDB1 and OIDDB2 contained the identity and policy store for Oracle Internet Directory.
  • IDMDB1 and IDMDB2 were provided for Oracle Identity Manager, Oracle Access Manager, Oracle SOA Suite, and Oracle Metadata Services in Oracle Fusion Middleware.

The Oracle Identity Management and Oracle Fusion Applications schemas were loaded into the respective Oracle RAC database instances using the Repository Creation Utility provided with Oracle Fusion Middleware. The database instances used Oracle Automatic Storage Management for the storage of data.

For each database instance, two disk groups were created in the Oracle Exadata Storage Servers:

  • DATA disk groups were used to store the data files.
  • REDO disk groups were used for the redo files.

There were four Oracle Exadata Storage Servers in the configuration with 12 disks each, yielding 48 disks in total. With normal redundancy on the storage servers, creating a 300 GB grid disk group across 18 of those disks implies that each disk drive contributes approximately 17 GB of storage (300 GB / 18 ≈ 17 GB).

Deployment of the Oracle Fusion Identity Management Infrastructure

The Oracle Fusion Identity Management deployment was done by choosing the Oracle Fusion Middleware Enterprise Deployment (EDG) topology, as shown in Figure 7. The deployment was based on the EDG topology recommendations and used the following components:

  • It used six Oracle Solaris Zones to deploy the Oracle Internet Directory, Oracle Identity Manager, and Oracle Access Manager components and the web tier instances of Oracle HTTP Server. (The logical layout of these components is shown in Figure 6 by the OID1, OID2, IDM1, IDM2, OHS1, and OHS2 items.)
  • Oracle Traffic Director (the OTD item in Figure 6) was used to meet the load balancer requirement for the EDG topology. It was deployed in a dedicated Oracle Solaris Zone. A hardware-based load balancer could be used instead.
  • The Oracle Internet Directory instances were active/active deployments.
  • The Oracle Access Manager, Oracle Identity Manager, and Oracle SOA Suite servers, as well as the Oracle Fusion Distributed Order Orchestration and Oracle SOA Suite servers (the DOO/SOA1–DOO/SOA4 items in Figure 6), were active/active deployments.
  • The Oracle WebLogic administration server was active/passive.
  • The Directory Tier and the Application Tier used the same shared storage from the Oracle ZFS Storage Appliance.

Please see “Introduction to the Enterprise Deployment Reference Topologies” for more details about the Enterprise Deployment topology.

Figure 7. Selecting the EDG topology and specifying appropriate host names for the respective components.

The recommended deployment topology for Oracle Identity Manager for Oracle Fusion Applications is based on the following important considerations:

  • High availability and scalability are provided for identity and access services.
  • Every component is configured as a cluster.
  • Every component is associated with its own machine name.
  • The machine name can be assigned to the same server or a different server depending on the available resources.
  • Virtual host names and IP addresses are used for relocation of the services.
  • Oracle WebLogic servers are configured to listen on virtual IP addresses.

The configuration consisted of three main tiers, which were virtualized by using Oracle Solaris Zones.

  • The Directory Tier
  • The Application Tier
  • The Web Tier

Directory Tier

The Directory Tier is the deployment tier where all the LDAP services reside. This tier includes products such as Oracle Internet Directory and Oracle Virtual Directory. The Directory Tier is managed by directory administrators providing enterprise LDAP service support. The Directory Tier is closely tied with the Data Tier. Oracle Internet Directory relies on Oracle Database as its back end.

The following configuration was used for the deployment evaluated by Oracle:

  • The identity and policy store information was kept in the same database.
  • Separate Oracle RAC databases were used as the data store for Oracle Internet Directory as well as for identity and access management.
  • Oracle Directory Server was used exclusively.
  • The two Oracle Directory Server instances were located in two Oracle Solaris Zones: LDAPHOST1 (etc27-z11) and LDAPHOST2 (etc27-z12).

Application Tier

The Application Tier is the tier where Java EE applications are deployed. Products such as Oracle Identity Manager, Oracle Identity Federation, Oracle Directory Services Manager, and Oracle Enterprise Manager Fusion Middleware Control are the key Java EE components that can be deployed in this tier. Applications in this tier benefit particularly from the high availability support of Oracle WebLogic Server, and were configured as follows:

  • IDMHOST1 (etc27-z9) and IDMHOST2 (etc27-z10) have Oracle Identity Manager and Oracle SOA installed. Oracle Identity Manager is a user provisioning application. Oracle SOA deployed in this topology was exclusively used for providing workflow functionality for Oracle Identity Manager.
  • Oracle Enterprise Manager Fusion Middleware Control is integrated with Oracle Access Manager using the Oracle Platform Security Services agent.
  • The Oracle WebLogic Server console, Oracle Enterprise Manager Fusion Middleware Control, and Oracle Access Management console were always bound to the listen address of the administration server.
  • The Oracle WebLogic administration server was a singleton service. It ran on only one node at a time. In the event of failure, it was restarted on a surviving node.
  • The WLS_ODS1 managed server on IDMHOST1 (etc27-z9) and WLS_ODS2 managed server on IDMHOST2 (etc27-z10) were in a cluster and the Oracle Directory Services Manager applications were targeted to the cluster.
  • The WLS_OAM1 Managed Server on IDMHOST1 (etc27-z9) and WLS_OAM2 Managed Server on IDMHOST2 (etc27-z10) were in a cluster and the Oracle Access Manager applications were targeted to the cluster.
  • Oracle Directory Services Manager was bound to the listen addresses of the WLS_ODS1 and WLS_ODS2 Managed Servers. By default, the listen address for these managed servers is set to IDMHOST1 (etc27-z9) and IDMHOST2 (etc27-z10), respectively.
  • The WLS_OIM1 Managed Server on IDMHOST1 (etc27-z9) and WLS_OIM2 Managed Server on IDMHOST2 (etc27-z10) were in a cluster and the Oracle Identity Manager applications were targeted to the cluster.
  • The WLS_SOA1 Managed Server on IDMHOST1 (etc27-z9) and WLS_SOA2 Managed Server on IDMHOST2 (etc27-z10) were in a cluster and the Oracle SOA applications were targeted to the cluster.
  • The WLS_OIF1 Managed Server on IDMHOST1 (etc27-z9) and WLS_OIF2 Managed Server on IDMHOST2 (etc27-z10) were in a cluster and the Oracle Identity Federation applications were targeted to the cluster.

Web Tier

The Oracle HTTP Servers were deployed in the Web Tier. The Web Tier is required to support enterprise-level single sign-on using products such as Oracle Application Server Single Sign-On and Oracle Access Manager.
In the Web Tier, the following were configured:

  • WEBHOST1 (etc27-z7) and WEBHOST2 (etc27-z8) had Oracle HTTP Server, WebGate (an Oracle Access Manager component), and the mod_wl_ohs plug-in module installed. The mod_wl_ohs plug-in module enabled requests to be proxied from Oracle HTTP Server to an Oracle WebLogic Server running in the Application Tier.
  • Oracle HTTP Server 11g and WebGate for Oracle Access Manager used the Oracle Access Protocol (OAP) to communicate with Oracle Access Manager running on IDMHOST1 (etc27-z9) and IDMHOST2 (etc27-z10) in the Oracle Identity Manager demilitarized zone (DMZ). Oracle HTTP Server 11g and WebGate for Oracle Access Manager were used to perform operations such as user authentication.

Deployment of Oracle Fusion Applications

This section provides an overview of the Oracle Fusion Applications deployment. Please refer to the Oracle Fusion Applications Installation Guide for more in-depth information.

The topology chosen during the creation of the provisioning profile for the Oracle Fusion Applications deployment was “One host per application and middleware component,” as shown in Figure 8. This topology gives the highest flexibility in terms of deployment of Oracle Fusion Applications.

Figure 8. Oracle Fusion Applications offers a choice of deployment topologies.

This configuration choice creates a topology where the common domain host is separated from the SCM domain hosts. The domain is further split into primary and secondary hosts. The primary host contains the admin server of the SCM domain. The secondary host contains all of the managed servers such as SCM common, Oracle Product Information Management, Oracle Cost Management, Oracle Fusion Distributed Order Orchestration, Logistics, Oracle Advanced Planning, Oracle SOA servers, and so on. The Global Order Processing (GOP) server is hosted on the secondary node associated with the Oracle Advanced Planning and Scheduling server.

Common elements of the solution are defined as follows:

  • Primordial host. The Primordial host is the location of the Common domain (specifically the Administration Server of the Common domain). Only one primordial host exists in each environment.
  • Primary host. The Primary host is the location where the administration server for a domain runs. Only one primary host exists in a domain.
  • Secondary host. The secondary host is the location where the managed servers for any application reside when they are not on the same host as the administration server of the same domain. The term secondary host is meaningful when a domain spans more than one physical server. The server or servers that do not have the administration server are referred to as secondary hosts.

Some key highlights of the deployment include the following:

  • The secondary scale-out hosts were added after completing the initial provisioning. The scale-out hosts contained both an Oracle SOA managed server and an Oracle Fusion Distributed Order Orchestration managed server.
  • Each of these hosts was in an Oracle Solaris Zone, as shown in Figure 6.
  • A hard partition was used to create a resource pool with a set number of processors, and the secondary zones and the associated scale-out hosts were added to the resource pools.
  • The Oracle SOA Suite server scale-out needed specific changes on the Java Message Service server side. Each Oracle SOA server in a cluster uses a separate file on the shared folder. Please see “Additional Configuration Procedures for Scaling Out Oracle SOA Suite Server” for detailed instructions for scaling out the Oracle SOA Suite server.
  • The zones communicated over the InfiniBand network. The JDBC connection to the Oracle RAC database was over the InfiniBand Listener.

Figure 9 illustrates the configuration of the primordial host (etc27-z18).

Figure 9. Configuring the primordial host.

Figure 10 illustrates the configuration of the primary host (etc27-z20) and the secondary host (etz27-z21).

Figure 10. Configuring the primary and secondary hosts.

Figure 11 illustrates the configuration of the SCM Web Tier instance of Oracle HTTP Server (etc27-z19), which was hosted on an Oracle Solaris Zone.

Figure 11. Configuring the SCM instance of Oracle HTTP Server.

Tuning Recommendations

While full tuning and table layout is beyond the scope of this document, the sections that follow provide a high-level overview of tuning practices employed during Oracle testing of Oracle Fusion Distributed Order Orchestration on Oracle SuperCluster T5-8.

Oracle HTTP Server Tuning

A number of Oracle HTTP Server tunings were made in the <IfModule mpm_worker_module> section, as shown below.

# worker MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
# Specify "ServerLimit nnn" before MaxClients if MaxClients/ThreadsPerChild > 16.
# Specify "ThreadLimit nnn" before MaxClients if ThreadsPerChild > 64.
<IfModule mpm_worker_module>
ThreadLimit        64
ServerLimit        512
StartServers         20
MaxClients        20000
MinSpareThreads    3200
MaxSpareThreads     4800
ThreadsPerChild    48
MaxRequestsPerChild  0
AcceptMutex fcntl
LockFile "/var/tmp/http_lock"
</IfModule>
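
The worker MPM values above must be internally consistent: httpd requires MaxClients not to exceed ServerLimit × ThreadsPerChild, and ThreadsPerChild not to exceed ThreadLimit. A quick shell sanity check of the values used (copied from the block above):

```shell
# Sanity-check the worker MPM arithmetic from the tuning above:
# MaxClients may not exceed ServerLimit * ThreadsPerChild, and
# ThreadsPerChild may not exceed ThreadLimit.
ThreadLimit=64
ServerLimit=512
ThreadsPerChild=48
MaxClients=20000

ceiling=$((ServerLimit * ThreadsPerChild))
if [ "$MaxClients" -le "$ceiling" ] && [ "$ThreadsPerChild" -le "$ThreadLimit" ]; then
    echo "OK: MaxClients=$MaxClients fits under ServerLimit*ThreadsPerChild=$ceiling"
else
    echo "ERROR: limits are inconsistent; httpd will cap or reject these values"
fi
```

With these values the ceiling is 24,576 threads, comfortably above the 20,000 MaxClients target.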

Oracle Fusion Applications Tuning

Various application elements in SCM were tuned to optimize performance in the tested configuration, as described in the sections that follow.

Modifying Memory Parameters of SCM Domain Managed Servers

Every domain within the SCM application has its own set of properties, which are specified in the fusionapps_start_params.properties file located in the configuration directory of the domain. Please refer to this blog post for more details on fusionapps_start_params.properties.

For each of the following items in the fusionapps_start_params.properties file, change the name and the value pair. If the name is not present, it should be added to the file.

  • #Fusion Default Sysprops
    fusion.default.default.sysprops=-Dapplication.top=${WL_HOME}/../applications/scm/deploy 
    -Djbo.ampool.minavailablesize=1 -Djbo.doconnectionpooling=true 
    -Djbo.load.components.lazily=true -Djbo.max.cursors=5 -Djbo.recyclethreshold=75 
    -Djbo.txn.disconnect_level=1 -Djps.auth.debug=false -Doracle.fusion.appsMode=true 
    -Doracle.notification.filewatching.interval=60000 -Dweblogic.SocketReaders=3 
    -Dweblogic.security.providers.authentication.LDAPDelegatePoolSize=20 -Djps.authz=ACC 
    -Djps.combiner.optimize.lazyeval=true -Djps.combiner.optimize=true 
    -Djps.policystore.hybrid.mode=false -Djps.subject.cache.key=5 
    -Djps.subject.cache.ttl=600000 
    -Ddiagfwk.diagnostic.test.location=${WL_HOME}/../applications/jlib/diagnostic,
    ${ATGPF_ORACLE_HOME}/archives/applications/diagnostics -Doracle.multitenant.enabled=false 
    -Doracle.jdbc.createDescriptorUseCurrentSchemaForSchemaName=true        
    -Dapplication.config.location.ocm=${FA_INSTANCE_HOME}/ocm  
    -Dweblogic.security.SSL.trustedCAKeyStore=/u01/oracle/instance/keystores/fusion_trust.jks  
    -Dweblogic.mdb.message.MinimizeAQSessions=true  
    -Dweblogic.ejb.container.MDBDestinationPollIntervalMillis=6000 
    -Dweblogic.http.client.defaultReadTimeout=300000 
    -Dweblogic.http.client.defaultConnectTimeout=300000 
    -DHTTPClient.socket.readTimeout=300000 -DHTTPClient.socket.connectionTimeout=300000 
    -Dwebcenter.owsm.gpa.enabled=true -Dprovisioning.start.params.processed=true 
    -DXDO_FONT_DIR=${WL_HOME}/../applications/../bi/common/fonts 
    -Dweblogic.LoginTimeoutMillis=50000
    
  • #SCM Domain Admin Server
    fusion.AdminServer.SunOS-sparc.memoryargs=-XX:PermSize=1g -XX:MaxPermSize=1g 
    -XX:+UseParallelGC  -XX:+HeapDumpOnOutOfMemoryError  
    -XX:HeapDumpPath=${FA_INSTANCE_HOME}/debug -XX:+ParallelGCVerbose 
    -XX:ReservedCodeCacheSize=128m -XX:+UseParallelOldGC  -XX:ParallelGCThreads=2
    
  • #Advanced Planning Server
    fusion.AdvancedPlanningCluster.SunOS-sparc.memoryargs=-XX:PermSize=256m 
    -XX:MaxPermSize=512m -XX:+UseParallelGC  -XX:+HeapDumpOnOutOfMemoryError  
    -XX:HeapDumpPath=${FA_INSTANCE_HOME}/debug -XX:+ParallelGCVerbose 
    -XX:ReservedCodeCacheSize=128m -XX:+UseParallelOldGC  -XX:ParallelGCThreads=4  
    -XX:+UseCompressedOops -XX:StringTableSize=500009
    
  • #Cost Management Server
    fusion.CostManagementCluster.SunOS-sparc.memoryargs=-XX:PermSize=256m 
    -XX:MaxPermSize=512m -XX:+UseParallelGC  -XX:+HeapDumpOnOutOfMemoryError  
    -XX:HeapDumpPath=${FA_INSTANCE_HOME}/debug -XX:+ParallelGCVerbose 
    -XX:ReservedCodeCacheSize=128m -XX:+UseParallelOldGC  -XX:ParallelGCThreads=4 
    -XX:StringTableSize=500009
    
  • #Order Orchestration Server
    fusion.OrderOrchestrationCluster.SunOS-sparc.memoryargs=-XX:PermSize=756m 
    -XX:MaxPermSize=756m -XX:+UseParallelGC  -XX:+HeapDumpOnOutOfMemoryError  
    -XX:HeapDumpPath=${FA_INSTANCE_HOME}/debug -XX:+ParallelGCVerbose 
    -XX:ReservedCodeCacheSize=128m -XX:+UseParallelOldGC  -XX:+UseCompressedOops 
    -XX:ParallelGCThreads=20 -XX:LargePageSizeInBytes=2g -XX:StringTableSize=500009 
    -Xnoclassgc -XX:-UseAdaptiveSizePolicy
    
  • #SCM Common Server
    fusion.SCMCommonCluster.SunOS-sparc.memoryargs=-XX:PermSize=256m 
    -XX:MaxPermSize=756m -XX:+UseParallelGC   -XX:+HeapDumpOnOutOfMemoryError 
    -XX:HeapDumpPath=${FA_INSTANCE_HOME}/debug -XX:+ParallelGCVerbose 
    -XX:ReservedCodeCacheSize=128m -XX:+UseParallelOldGC -XX:ParallelGCThreads=4 
    -XX:StringTableSize=500009
    
  • #SCM SOA Server
    fusion.SCM_SOACluster.SunOS-sparc.memoryargs=-XX:PermSize=756m 
    -XX:MaxPermSize=756m -XX:+UseParallelGC  -XX:+HeapDumpOnOutOfMemoryError  
    -XX:HeapDumpPath=${FA_INSTANCE_HOME}/debug -XX:+ParallelGCVerbose 
    -XX:ReservedCodeCacheSize=128m -XX:+UseParallelOldGC -XX:+UseCompressedOops 
    -XX:ParallelGCThreads=20 -XX:LargePageSizeInBytes=2g -XX:StringTableSize=500009   
    -Xnoclassgc -XX:-UseAdaptiveSizePolicy
    
  • #Added Sysprops for AdvancedPlanningCluster
    fusion.AdvancedPlanningCluster.SunOS-sparc.sysprops=
    -Dweblogic.ejb.container.MDBDestinationPollIntervalMillis=6000 
    -Dweblogic.mdb.message.MinimizeAQSessions=true
    
  • #Added Sysprops for OrderOrchestrationCluster
    fusion.OrderOrchestrationCluster.SunOS-sparc.sysprops=
    -Dweblogic.ejb.container.MDBDestinationPollIntervalMillis=6000 
    -Dweblogic.mdb.message.MinimizeAQSessions=true    
    
  • #Memory Changes for each Servers
    fusion.SCMDomain.AdminServer.default.minmaxmemory.main=-Xms12g -Xmx12g
    fusion.SCMDomain.AdvancedPlanningCluster.default.minmaxmemory.main=-Xms4g -Xmx4g
    fusion.SCMDomain.CostManagementCluster.default.minmaxmemory.main=-Xms512m -Xmx2048m
    fusion.SCMDomain.LogisticsCluster.default.minmaxmemory.main=-Xms512m -Xmx2048m
    fusion.SCMDomain.OrderOrchestrationCluster.default.minmaxmemory.main=-Xms16g -Xmx16g -Xmn6g
    fusion.SCMDomain.SCM_SOACluster.default.minmaxmemory.main=-Xms12g -Xmx12g -Xmn4608m
    
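As a rough capacity check, the -Xmx values in the minmaxmemory lines above can be summed to estimate the worst-case configured heap of the SCM domain. This is a sketch only: it assumes one JVM per entry and ignores permgen and native memory overhead.

```shell
# Rough worst-case heap demand for the SCM domain: sum the -Xmx values
# from the minmaxmemory lines above. Assumes one JVM per entry and
# ignores permgen/native overhead, so treat it as a lower bound.
total_gb=$(awk -F'-Xmx' '/-Xmx/ {
    split($2, parts, " ")
    v = parts[1]
    if (v ~ /g$/)      gb = substr(v, 1, length(v) - 1)
    else if (v ~ /m$/) gb = substr(v, 1, length(v) - 1) / 1024
    sum += gb
}
END { printf "%.0f", sum }' <<'EOF'
fusion.SCMDomain.AdminServer.default.minmaxmemory.main=-Xms12g -Xmx12g
fusion.SCMDomain.AdvancedPlanningCluster.default.minmaxmemory.main=-Xms4g -Xmx4g
fusion.SCMDomain.CostManagementCluster.default.minmaxmemory.main=-Xms512m -Xmx2048m
fusion.SCMDomain.LogisticsCluster.default.minmaxmemory.main=-Xms512m -Xmx2048m
fusion.SCMDomain.OrderOrchestrationCluster.default.minmaxmemory.main=-Xms16g -Xmx16g -Xmn6g
fusion.SCMDomain.SCM_SOACluster.default.minmaxmemory.main=-Xms12g -Xmx12g -Xmn4608m
EOF
)
echo "combined -Xmx across SCM servers: ~${total_gb} GB"
```

With the values above this reports roughly 48 GB of configured heap before permgen and native memory are counted, which is useful when sizing the zones that host these servers.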

JDBC Tuning

The generic way to set the datasource parameters for JDBC (excluding the database pool) is to log in to the admin server console and execute the following steps:

  1. In the Domain Structure tree, expand Services, and then select Data Sources.
  2. On the Summary of Data Sources page, select the data source.
  3. In the Change Center of the Administration Console, click Lock and Edit.
  4. On the “Settings for <DataSourceName>” page, go to Configuration -> Connection Pool.
  5. Set the Initial Capacity, Maximum Capacity and Minimum Capacity.
  6. Expand the “advanced” properties view and set the Shrink Frequency.
  7. Click “Save” to save settings.
  8. Click “Activate Changes” in the Change Center.

The connection pool within a JDBC data source contains a group of JDBC connections that applications reserve, use, and then return to the pool. The connection pool and the connections within it are created when the connection pool is registered, usually when starting up Oracle WebLogic Server, or when deploying the data source to a new target. The config.xml file was altered as follows in the testing performed by Oracle.

DB pool (DataSourceName-rac1, DataSourceName-rac2):
SOALocalTxDataSource: Initial Capacity=500, Max Capacity=3500, Min Capacity=500
JRFWSAsyncDSAQ: Initial Capacity=50, Max Capacity=300, Min Capacity=50
ApplicationServiceDB: Initial Capacity=150, Max Capacity=400, Min Capacity=150
SOADataSource: Initial Capacity=500, Max Capacity=3500, Min Capacity=500
mds-ApplicationMDSDB: Initial Capacity=50, Max Capacity=300, Min Capacity=50
ApplicationDB: Initial Capacity=150, Max Capacity=2000, Min Capacity=150
EDNSource: Initial Capacity=50, Max Capacity=100, Min Capacity=50
EDNDataSource: Initial Capacity=50, Max Capacity=100, Min Capacity=50

Shrink Frequency was set to 600 seconds for the following data sources:
ApplicationDB
ApplicationServiceDB
JRFWSAsyncDSAQ
mds-ESS_MDS_DS
mds-soa
SOADataSource
SOALocalTxDataSource
mds-ApplicationMDSDB
mds-CustomPortalDS
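
For reference, these capacity settings map onto the standard WebLogic JDBC descriptor elements. The fragment below is a hypothetical sketch of one tuned pool (in a real domain each data source lives in its own config/jdbc/*-jdbc.xml file referenced from config.xml; the name and values here mirror the SOADataSource row above):

```xml
<!-- Hypothetical sketch; element names follow the WebLogic JDBC module
     schema, values taken from the SOADataSource settings above. -->
<jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source">
  <name>SOADataSource-rac1</name>
  <jdbc-connection-pool-params>
    <initial-capacity>500</initial-capacity>
    <max-capacity>3500</max-capacity>
    <min-capacity>500</min-capacity>
    <shrink-frequency-seconds>600</shrink-frequency-seconds>
  </jdbc-connection-pool-params>
</jdbc-data-source>
```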

Message Driven Bean (MDB) Tuning

For MDB pool settings, the maximum number of beans in the free pool was increased as shown in Figure 12. The following steps were performed:

  1. Log in to the admin console.
  2. Select Lock and Edit, and then Deployments (in the left side navigator).
  3. Select the application and find the MDBs with names ending in AsyncRequestProcessorMDB and AsyncResponseProcessorMDB.
  4. For each, click the MDB and select the Configuration tab.
  5. Change the Max beans value in the free pool to 15.
  6. Repeat this process for both request and response.
  7. Click Save.
  8. Repeat these steps for both the Oracle Fusion Distributed Order Orchestration and Oracle Fusion Applications servers.

Figure 12. Configuring  MDB pool settings.

Figure 12. Configuring MDB pool settings.

Oracle SOA Suite Tuning

Within Oracle SOA Suite, the number of threads was tuned for performance. The following steps were performed from the Oracle Enterprise Manager Console to set the Oracle Business Process Execution Language (BPEL) Process Manager properties:

  1. Select Farm_SCMDomain, SOA, then SOA-infra, and then SOA Administration.
  2. Select BPEL Properties.
  3. The System threads were set to 5, BPEL Engine Threads were set to 400, and BPEL Invoke Threads were set to 300.

For Mediator threads, the following was performed from the Oracle Enterprise Manager Console:

  1. Select Farm_SCMDomain, SOA, then SOA-infra, and then SOA Administration.
  2. Select Mediator Properties.
  3. The number of Parallel Worker Threads was set to 20.

To set threads for the Oracle Event Delivery Network (EDN) of the Oracle SOA Suite, the following was performed from the Oracle Enterprise Manager Console:

  1. Select Farm_SCMDomain, SOA, then SOA-infra, and then SOA Administration.
  2. Select System Mbean browser, and then Application defined Mbean.
  3. Next select oracle.as.soainfra.config, and then Server: soa_server-x.
  4. Finally select EDNConfig, and then EDN.
  5. The number of EDN threads was set to 40.

Oracle Global Order Promising Server Tuning

To enhance Oracle Global Order Promising Server, edit the gopServerConfig.xml file (instance/gop_1/config/GOP/GlobalOrderPromisingServer1/gopServerConfig.xml) and add the following line:

<num-msg-autosave>1000000</num-msg-autosave>

In the same file, also change the log level from FINEST to incident-error logging:

<logLevel>INCIDENT_ERROR</logLevel>
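
Both gopServerConfig.xml edits can be scripted. The sketch below applies them with GNU sed against a scratch copy; the minimal XML skeleton here is illustrative only, not the real file's contents.

```shell
# Hedged sketch: apply both gopServerConfig.xml edits with GNU sed against
# a scratch copy. The real file is at the path given above; this XML
# skeleton is illustrative only.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
<gopServerConfig>
  <logLevel>FINEST</logLevel>
</gopServerConfig>
EOF

# Lower logging to incident errors only, then add the autosave threshold
# right after the logLevel element.
sed -i \
    -e 's|<logLevel>.*</logLevel>|<logLevel>INCIDENT_ERROR</logLevel>|' \
    -e '/<logLevel>/a\<num-msg-autosave>1000000</num-msg-autosave>' \
    "$cfg"

grep -E 'logLevel|num-msg-autosave' "$cfg"
rm -f "$cfg"
```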

Java.security Tuning

Switch the order of the security providers in the java.security file found in the ${java.home}/lib/security directory, as follows:

security.provider.1=sun.security.provider.Sun
security.provider.2=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/sunpkcs11-solaris.cfg
security.provider.3=sun.security.rsa.SunRsaSign

Network Tunings

For both the scale-out zones and the SCM Oracle HTTP Server zones, issue the following commands using the ndd utility:

# ndd -set /dev/tcp tcp_conn_req_max_q 96000
# ndd -set /dev/tcp tcp_conn_req_max_q0 64000
# ndd -set /dev/tcp tcp_xmit_hiwat 524288
# ndd -set /dev/tcp tcp_recv_hiwat 524288
# ndd -set /dev/tcp tcp_naglim_def 1
# ndd -set /dev/tcp tcp_smallest_anon_port 10000
# ndd -set /dev/tcp tcp_time_wait_interval 6000
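
Note that ndd settings are lost at reboot. One common approach (a sketch; the /etc/rc.local hook is an assumption, so adapt it to your startup mechanism, e.g. an SMF service) is to generate a startup script from the parameter list:

```shell
# ndd settings do not survive a reboot. Sketch: generate a startup script
# from the parameter list, to be invoked from /etc/rc.local or similar
# (the hook location is an assumption, not prescribed by Oracle).
script=$(mktemp)
{
    echo '#!/bin/sh'
    while read -r param value; do
        echo "ndd -set /dev/tcp $param $value"
    done <<'EOF'
tcp_conn_req_max_q 96000
tcp_conn_req_max_q0 64000
tcp_xmit_hiwat 524288
tcp_recv_hiwat 524288
tcp_naglim_def 1
tcp_smallest_anon_port 10000
tcp_time_wait_interval 6000
EOF
} > "$script"
chmod +x "$script"
cat "$script"
rm -f "$script"
```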

Oracle RAC Tuning

Oracle Fusion Applications for SCM requires some level of database tuning to perform optimally. For testing performed by Oracle, database tuning was done in the following areas:

  • Modifying init parameters through the init.ora file (shown below)
  • Partitioning of specific tables for Oracle Fusion Distributed Order Orchestration and Oracle SOA Suite
  • Re-creating the large object (LOB) segments using SecureFiles storage for BLOB columns
  • Collecting optimizer statistics on certain tables
  • Re-creating the SOA-INFRA and Oracle Fusion Distributed Order Orchestration specific indexes with global hash partitions

A number of key changes were made to the init.ora file, as follows:

_gc_defer_time=0
_gc_policy_time=0
_gc_undo_affinity=FALSE
filesystemio_options=SETALL

open_cursors=500
sga_target=64G 
pga_aggregate_target=8G 
shared_pool_size=4G 

nls_sort=BINARY
open_cursors=500
session_cached_cursors=500
plsql_code_type=NATIVE
processes=15000
db_securefile=ALWAYS

Redo log file size=4 files of 20GB each

Linux Kernel Upgrade on Exadata (manual way)

October 24, 2014 1 comment

A kernel upgrade can be applied node by node on Exadata, so there will be no service interruption. Kernel upgrades are required when you need new functionality or when you hit bugs in the current kernel version. I had to upgrade the kernel of a box, and it was a good experience. The following procedure is based on a kernel upgrade on Oracle Linux 5.8 with the Unbreakable Enterprise Kernel (2.6.32) on an Exadata compute node.

PRE-UPGRADE

==> If you have EM12c, the targets on the host will be unavailable during the upgrade. Put them in a blackout state so that no false alarms are generated from them.

==> Run the upgrade steps in an X-Windows session such as VNC. This prevents disconnection issues from SSH clients.

==> Disable all NFS mounts on the system. Check the locations /etc/rc.local and /etc/fstab.

==> Check whether any ASM operations are running on the system and wait for them to finish. Make sure no rebalance job is running on the ASM side; check v$asm_operation.

==> Back up the GRUB startup file /boot/grub/grub.conf. You might need it for a rollback.

==> Shut down CRS and disable CRS auto start. Also shut down any databases or listeners that are not registered with CRS.
[root@host1 ~]# /u01/app/11.2.0.3/grid/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@host1 ~]# /u01/app/11.2.0.3/grid/bin/crsctl stop crs -f

==> Make sure crs is not running
[root@host1 ~]# ps -ef | grep d.bin
root 66664 60395 0 09:55 pts/1 00:00:00 grep d.bin

==> Reboot the system and make sure it is able to restart before making any kernel changes  :)

==> Check the ilom problem page and make sure there is no problem on the server. If there are any like memory problems etc. fix them.

==> Record the current kernel
[root@host1 ~]# uname -r
2.6.32-400.11.1.el5uek

==> Check the server version and make sure the next kernel is designed for the server.
[root@host1 ~]# dmidecode -s system-product-name
SUN FIRE X4170 M3

==> Make sure enough space is available
[root@host1 ~]# df -h

==> Shut down any databases or listeners that are not registered with CRS, and check CRS one last time.
[root@host1 ~]# ps -ef | grep d.bin
root 66664 60395 0 09:55 pts/1 00:00:00 grep d.bin

UPGRADE
==> Upgrade the kernel

[root@host1 ~]# rpm -ivh kernel-uek-firmware-2.6.32-400.34.1.el5uek.noarch.rpm kernel-uek-2.6.32-400.34.1.el5uek.x86_64.rpm ofa-2.6.32-400.34.1.el5uek-1.5.1-4.0.58.1.x86_64.rpm
Preparing… ########################################### [100%]
1:kernel-uek-firmware ########################################### [ 33%]
2:kernel-uek ########################################### [ 67%]
3:ofa-2.6.32-400.34.1.el5########################################### [100%]

==> Reboot the system
[root@host1 ~]# reboot

POST-UPGRADE
==> Check ilom for any errors. Check /var/log/messages for any errors.

==> Check the new kernel version
[root@host1 ~]# uname -r
2.6.32-400.34.1.el5uek

==> Start the crs and enable crs auto start
[root@host1 ~]# /u01/app/11.2.0.3/grid/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
[root@host1 ~]# /u01/app/11.2.0.3/grid/bin/crsctl start crs

==> Check if crs is starting

[root@host1 ~]# ps -ef | grep d.bin
root 11852 1 4 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/ohasd.bin reboot
oracle 12013 1 0 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/oraagent.bin
oracle 12025 1 0 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/mdnsd.bin
oracle 12109 1 1 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/gpnpd.bin
root 12119 1 0 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/orarootagent.bin
oracle 12122 1 1 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/gipcd.bin
root 12137 1 1 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/osysmond.bin
root 12150 1 0 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/cssdmonitor
root 12167 1 0 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/cssdagent
oracle 12169 1 1 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/diskmon.bin -d -f
oracle 12187 1 2 10:22 ? 00:00:00 /u01/app/11.2.0.3/grid/bin/ocssd.bin
root 12389 10620 0 10:23 pts/0 00:00:00 grep d.bin
[root@host1 ~]#

==> Re-enable any NFS mounts on the system and mount them.

==> In EM12c, end the blackout period for the targets.

Now you can move on to the other servers in the cluster.

Change Exadata flashcache mode from WriteThrough to WriteBack

October 24, 2014 Leave a comment

We can enable WriteBack mode for the flash caches without shutting down ASM or any instance on the Exadata. This is done in a rolling fashion, one cell at a time. You have to make sure you finish one cell before starting the operation on the next cell.

I will follow the document below to change the flashcache mode on an Exadata Database Machine X3-2 Eighth Rack system. That document also explains why WriteBack mode is needed, along with other useful information.

Exadata Write-Back Flash Cache – FAQ (Doc ID 1500257.1)

4. How to determine if you have write back flash cache enabled?

[root@testdbadm01 ~]# dcli -g ~/cell_group -l root cellcli -e "list cell attributes flashcachemode"
testceladm01: WriteThrough
testceladm02: WriteThrough
testceladm03: WriteThrough

5. How can we enable the write back flash cache?

Before proceeding any further, make sure all the griddisks are online and there are no problems on the cells.

[root@testdbadm01 ~]# dcli -g cell_group -l root cellcli -e list griddisk attributes asmdeactivationoutcome, asmmodestatus
testceladm01: Yes        ONLINE
testceladm01: Yes        ONLINE
testceladm01: Yes        ONLINE
testceladm01: Yes        ONLINE
testceladm01: Yes        ONLINE
testceladm01: Yes        ONLINE
testceladm01: Yes        ONLINE
testceladm01: Yes        ONLINE
testceladm01: Yes        ONLINE
testceladm01: Yes        ONLINE
testceladm01: Yes        ONLINE
testceladm01: Yes        ONLINE
testceladm01: Yes        ONLINE
testceladm01: Yes        ONLINE
testceladm01: Yes        ONLINE
testceladm01: Yes        ONLINE
testceladm02: Yes        ONLINE
testceladm02: Yes        ONLINE
testceladm02: Yes        ONLINE
testceladm02: Yes        ONLINE
testceladm02: Yes        ONLINE
testceladm02: Yes        ONLINE
testceladm02: Yes        ONLINE
testceladm02: Yes        ONLINE
testceladm02: Yes        ONLINE
testceladm02: Yes        ONLINE
testceladm02: Yes        ONLINE
testceladm02: Yes        ONLINE
testceladm02: Yes        ONLINE
testceladm02: Yes        ONLINE
testceladm02: Yes        ONLINE
testceladm02: Yes        ONLINE
testceladm03: Yes        ONLINE
testceladm03: Yes        ONLINE
testceladm03: Yes        ONLINE
testceladm03: Yes        ONLINE
testceladm03: Yes        ONLINE
testceladm03: Yes        ONLINE
testceladm03: Yes        ONLINE
testceladm03: Yes        ONLINE
testceladm03: Yes        ONLINE
testceladm03: Yes        ONLINE
testceladm03: Yes        ONLINE
testceladm03: Yes        ONLINE
testceladm03: Yes        ONLINE
testceladm03: Yes        ONLINE
testceladm03: Yes        ONLINE
testceladm03: Yes        ONLINE

On the 1/8th rack system the flash cache is reduced to half size. Because of that, the size appears as 744 GB.

[root@testdbadm01 ~]#  dcli -g cell_group -l root cellcli -e list flashcache detail
testceladm01: name:                      testceladm01_FLASHCACHE
testceladm01: cellDisk:                  FD_03_testceladm01,FD_02_testceladm01,FD_00_testceladm01,FD_01_testceladm01,FD_06_testceladm01,FD_05_testceladm01,FD_04_testceladm01,FD_07_testceladm01
testceladm01: creationTime:              2013-07-29T17:55:28+03:00
testceladm01: degradedCelldisks:
testceladm01: effectiveCacheSize:        744.125G
testceladm01: id:                        5adas74d-asdc-4477-382d-30c14052c23d
testceladm01: size:                      744.125G
testceladm01: status:                    normal
testceladm02: name:                      testceladm02_FLASHCACHE
testceladm02: cellDisk:                  FD_03_testceladm02,FD_07_testceladm02,FD_02_testceladm02,FD_00_testceladm02,FD_06_testceladm02,FD_01_testceladm02,FD_04_testceladm02,FD_05_testceladm02
testceladm02: creationTime:              2013-07-29T17:55:40+03:00
testceladm02: degradedCelldisks:
testceladm02: effectiveCacheSize:        744.125G
testceladm02: id:                        80c140a3-0c14-40ba-8364-10c1460d802f
testceladm02: size:                      744.125G
testceladm02: status:                    normal
testceladm03: name:                      testceladm03_FLASHCACHE
testceladm03: cellDisk:                  FD_06_testceladm03,FD_03_testceladm03,FD_00_testceladm03,FD_04_testceladm03,FD_05_testceladm03,FD_02_testceladm03,FD_01_testceladm03,FD_07_testceladm03
testceladm03: creationTime:              2013-07-29T17:55:25+03:00
testceladm03: degradedCelldisks:
testceladm03: effectiveCacheSize:        744.125G
testceladm03: id:                        6950c148-992b-4347-a1e1-e60c1406bda6
testceladm03: size:                      744.125G
testceladm03: status:                    normal

Repeat the following procedure for all the cells; we have 3 cells on our 1/8th rack Exadata. Operate on only one cell at a time, and before proceeding to the next cell, make sure the current one is fully operational again. I will follow the steps from the Support note, but I prefer to run the commands from the CellCLI prompt.

Also, I prefer to perform any disk operations during off-peak hours.
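Taken together, steps 1-10 below can be sketched as a single per-cell script. This is only a sketch: the `cellcli` calls are the ones used in the steps below, but the dry-run fallback is my own addition so the flow can be traced on a machine without CellCLI. In practice, run each step interactively and verify its output before continuing.

```shell
#!/bin/sh
# Sketch of the per-cell write-back conversion (steps 1-10 below).
set -e

# Dry-run fallback: if there is no real CellCLI on this machine,
# just echo the commands instead of executing them.
if ! command -v cellcli >/dev/null 2>&1; then
  cellcli() { echo "[dry-run] cellcli $*"; }
fi

cellcli -e "drop flashcache"                                                     # step 1
cellcli -e "list griddisk attributes name,asmmodestatus,asmdeactivationoutcome"  # step 2: expect Yes
cellcli -e "alter griddisk all inactive"                                         # step 3
cellcli -e "alter cell shutdown services cellsrv"                                # step 4
cellcli -e "alter cell flashCacheMode=writeback"                                 # step 5
cellcli -e "alter cell startup services cellsrv"                                 # step 6
cellcli -e "alter griddisk all active"                                           # step 7
cellcli -e "list griddisk attributes name, asmmodestatus"                        # step 8: wait for ONLINE
cellcli -e "create flashcache all"                                               # step 9
cellcli -e "list cell attributes flashCacheMode"                                 # step 10: expect writeback
```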

1. Drop the flash cache on that cell

CellCLI>  drop flashcache
Flash cache testceladm01_FLASHCACHE successfully dropped

2. Check whether ASM will be OK if the grid disks go OFFLINE. The following command should return ‘Yes’ for every grid disk listed:

CellCLI> list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
         DATA_TEST_CD_00_testceladm01    ONLINE  Yes
         DATA_TEST_CD_01_testceladm01    ONLINE  Yes
         DATA_TEST_CD_02_testceladm01    ONLINE  Yes
         DATA_TEST_CD_03_testceladm01    ONLINE  Yes
         DATA_TEST_CD_04_testceladm01    ONLINE  Yes
         DATA_TEST_CD_05_testceladm01    ONLINE  Yes
         DBFS_DG_CD_02_testceladm01      ONLINE  Yes
         DBFS_DG_CD_03_testceladm01      ONLINE  Yes
         DBFS_DG_CD_04_testceladm01      ONLINE  Yes
         DBFS_DG_CD_05_testceladm01      ONLINE  Yes
         RECO_TEST_CD_00_testceladm01    ONLINE  Yes
         RECO_TEST_CD_01_testceladm01    ONLINE  Yes
         RECO_TEST_CD_02_testceladm01    ONLINE  Yes
         RECO_TEST_CD_03_testceladm01    ONLINE  Yes
         RECO_TEST_CD_04_testceladm01    ONLINE  Yes
         RECO_TEST_CD_05_testceladm01    ONLINE  Yes
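With many grid disks, eyeballing this listing is error-prone. A small filter of my own (shown here against sample data in the same three-column format as above) prints any disk whose asmdeactivationoutcome is not ‘Yes’:

```shell
# Sample of the name/asmmodestatus/asmdeactivationoutcome listing above;
# on a real cell, capture it with:
#   cellcli -e "list griddisk attributes name,asmmodestatus,asmdeactivationoutcome"
listing='DATA_TEST_CD_00_testceladm01    ONLINE  Yes
DBFS_DG_CD_02_testceladm01      ONLINE  No'

# Print disks that are NOT safe to take offline (column 3 is not "Yes").
unsafe=$(printf '%s\n' "$listing" | awk '$3 != "Yes" {print $1}')
if [ -n "$unsafe" ]; then
  echo "do NOT offline this cell yet:"
  echo "$unsafe"
fi
```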

3. Inactivate the grid disks on the cell

CellCLI> alter griddisk all inactive
GridDisk DATA_TEST_CD_00_testceladm01 successfully altered
GridDisk DATA_TEST_CD_01_testceladm01 successfully altered
GridDisk DATA_TEST_CD_02_testceladm01 successfully altered
GridDisk DATA_TEST_CD_03_testceladm01 successfully altered
GridDisk DATA_TEST_CD_04_testceladm01 successfully altered
GridDisk DATA_TEST_CD_05_testceladm01 successfully altered
GridDisk DBFS_DG_CD_02_testceladm01 successfully altered
GridDisk DBFS_DG_CD_03_testceladm01 successfully altered
GridDisk DBFS_DG_CD_04_testceladm01 successfully altered
GridDisk DBFS_DG_CD_05_testceladm01 successfully altered
GridDisk RECO_TEST_CD_00_testceladm01 successfully altered
GridDisk RECO_TEST_CD_01_testceladm01 successfully altered
GridDisk RECO_TEST_CD_02_testceladm01 successfully altered
GridDisk RECO_TEST_CD_03_testceladm01 successfully altered
GridDisk RECO_TEST_CD_04_testceladm01 successfully altered
GridDisk RECO_TEST_CD_05_testceladm01 successfully altered

4. Shut down cellsrv service

CellCLI> alter cell shutdown services cellsrv 

Stopping CELLSRV services... 
The SHUTDOWN of CELLSRV services was successful.

5. Set the cell flashcache mode to writeback

CellCLI> alter cell flashCacheMode=writeback
Cell testceladm01 successfully altered

6. Restart the cellsrv service

CellCLI> alter cell startup services cellsrv

Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.

7. Reactivate the griddisks on the cell

CellCLI> alter griddisk all active
GridDisk DATA_TEST_CD_00_testceladm01 successfully altered
GridDisk DATA_TEST_CD_01_testceladm01 successfully altered
GridDisk DATA_TEST_CD_02_testceladm01 successfully altered
GridDisk DATA_TEST_CD_03_testceladm01 successfully altered
GridDisk DATA_TEST_CD_04_testceladm01 successfully altered
GridDisk DATA_TEST_CD_05_testceladm01 successfully altered
GridDisk DBFS_DG_CD_02_testceladm01 successfully altered
GridDisk DBFS_DG_CD_03_testceladm01 successfully altered
GridDisk DBFS_DG_CD_04_testceladm01 successfully altered
GridDisk DBFS_DG_CD_05_testceladm01 successfully altered
GridDisk RECO_TEST_CD_00_testceladm01 successfully altered
GridDisk RECO_TEST_CD_01_testceladm01 successfully altered
GridDisk RECO_TEST_CD_02_testceladm01 successfully altered
GridDisk RECO_TEST_CD_03_testceladm01 successfully altered
GridDisk RECO_TEST_CD_04_testceladm01 successfully altered
GridDisk RECO_TEST_CD_05_testceladm01 successfully altered

8. Verify that all grid disks have been successfully brought online using the following command:

(At this point the DATA_TEST disk group has started synchronization.)

CellCLI> list griddisk attributes name, asmmodestatus
DATA_TEST_CD_00_testceladm01 SYNCING
DATA_TEST_CD_01_testceladm01 SYNCING
DATA_TEST_CD_02_testceladm01 SYNCING
DATA_TEST_CD_03_testceladm01 SYNCING
DATA_TEST_CD_04_testceladm01 SYNCING
DATA_TEST_CD_05_testceladm01 SYNCING
DBFS_DG_CD_02_testceladm01 OFFLINE
DBFS_DG_CD_03_testceladm01 OFFLINE
DBFS_DG_CD_04_testceladm01 OFFLINE
DBFS_DG_CD_05_testceladm01 OFFLINE
RECO_TEST_CD_00_testceladm01 OFFLINE
RECO_TEST_CD_01_testceladm01 OFFLINE
RECO_TEST_CD_02_testceladm01 OFFLINE
RECO_TEST_CD_03_testceladm01 OFFLINE
RECO_TEST_CD_04_testceladm01 OFFLINE
RECO_TEST_CD_05_testceladm01 OFFLINE
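Resilvering can take a while. Rather than rereading the whole listing each time, you can count how many disks are not yet ONLINE; this is a small helper of my own, run here against sample data in the two-column format above:

```shell
# Sample of the name/asmmodestatus listing above; on a real cell capture it with:
#   cellcli -e "list griddisk attributes name, asmmodestatus"
listing='DATA_TEST_CD_00_testceladm01 SYNCING
DBFS_DG_CD_02_testceladm01 OFFLINE
RECO_TEST_CD_00_testceladm01 ONLINE'

# Count rows where asmmodestatus (column 2) is anything other than ONLINE.
pending=$(printf '%s\n' "$listing" | awk '$2 != "ONLINE"' | wc -l)
echo "grid disks not yet ONLINE: $pending"
```

When the count reaches zero, the cell is fully back in service and you can move on.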

9. Recreate the flash cache

CellCLI> create flashcache all
Flash cache testceladm01_FLASHCACHE successfully created

10. Check the status of the cell to confirm that it’s now in WriteBack mode:

CellCLI> list cell attributes flashCacheMode
writeback

11. Repeat the same steps on the next cell. However, before taking another storage server offline, run the following and make sure ‘asmdeactivationoutcome’ displays Yes for every disk: (At this point the RECO_TEST disk group has started synchronization.)

CellCLI> list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
DATA_TEST_CD_00_testceladm01 ONLINE Yes
DATA_TEST_CD_01_testceladm01 ONLINE Yes
DATA_TEST_CD_02_testceladm01 ONLINE Yes
DATA_TEST_CD_03_testceladm01 ONLINE Yes
DATA_TEST_CD_04_testceladm01 ONLINE Yes
DATA_TEST_CD_05_testceladm01 ONLINE Yes
DBFS_DG_CD_02_testceladm01 ONLINE Yes
DBFS_DG_CD_03_testceladm01 ONLINE Yes
DBFS_DG_CD_04_testceladm01 ONLINE Yes
DBFS_DG_CD_05_testceladm01 ONLINE Yes
RECO_TEST_CD_00_testceladm01 SYNCING Yes
RECO_TEST_CD_01_testceladm01 SYNCING Yes
RECO_TEST_CD_02_testceladm01 SYNCING Yes
RECO_TEST_CD_03_testceladm01 SYNCING Yes
RECO_TEST_CD_04_testceladm01 SYNCING Yes
RECO_TEST_CD_05_testceladm01 SYNCING Yes

All disk groups are now synchronized, so we can proceed to the next cell.

CellCLI> list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
DATA_TEST_CD_00_testceladm01 ONLINE Yes
DATA_TEST_CD_01_testceladm01 ONLINE Yes
DATA_TEST_CD_02_testceladm01 ONLINE Yes
DATA_TEST_CD_03_testceladm01 ONLINE Yes
DATA_TEST_CD_04_testceladm01 ONLINE Yes
DATA_TEST_CD_05_testceladm01 ONLINE Yes
DBFS_DG_CD_02_testceladm01 ONLINE Yes
DBFS_DG_CD_03_testceladm01 ONLINE Yes
DBFS_DG_CD_04_testceladm01 ONLINE Yes
DBFS_DG_CD_05_testceladm01 ONLINE Yes
RECO_TEST_CD_00_testceladm01 ONLINE Yes
RECO_TEST_CD_01_testceladm01 ONLINE Yes
RECO_TEST_CD_02_testceladm01 ONLINE Yes
RECO_TEST_CD_03_testceladm01 ONLINE Yes
RECO_TEST_CD_04_testceladm01 ONLINE Yes
RECO_TEST_CD_05_testceladm01 ONLINE Yes

FINALLY
After changing the flash cache mode on all cells, confirm that every cell now reports write-back mode.

[root@testdbadm01 ~]# dcli -g ~/cell_group -l root cellcli -e "list cell attributes flashcachemode"
testceladm01: writeback
testceladm02: writeback
testceladm03: writeback
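To make this final check scriptable, you can flag any cell that does not yet report writeback. This is again a helper of my own, shown against sample dcli output in the same format as above:

```shell
# Sample of the dcli output above; on the db node capture it with:
#   dcli -g ~/cell_group -l root cellcli -e "list cell attributes flashcachemode"
modes='testceladm01: writeback
testceladm02: writethrough
testceladm03: writeback'

# List cells whose mode (column 2) is not "writeback".
still_wt=$(printf '%s\n' "$modes" | awk '$2 != "writeback" {print $1}')
echo "cells not yet in write-back mode: $still_wt"
```

An empty result means the conversion is complete on all cells.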