
Oracle Fusion Distributed Order Orchestration on Oracle SuperCluster T5-8

Oracle Fusion Distributed Order Orchestration is a next-generation Oracle Fusion Supply Chain Management (SCM) application designed to provide centralized order processing, centralized monitoring and exception management, and faithful execution against predictable order execution policies. The system improves order orchestration across diverse order-capture and fulfillment environments. Oracle Fusion Distributed Order Orchestration offers centrally managed orchestration policies and fulfillment monitoring. Together these capabilities facilitate increased profitability and customer satisfaction while dramatically reducing fulfillment costs and order errors.

Scaling reliable order execution in the face of unpredictable demand can be critical. Failure of order execution systems or lackluster order processing performance can result in lost revenues and very real damage to brand. As a result, organizations need infrastructure that can handle today’s workloads while effortlessly scaling to meet future demands as needed, without complex and time-consuming reconfiguration processes. Infrastructure must also support high availability, offering resiliency and the ability to weather and recover gracefully from failures.

Deploying Oracle Fusion Distributed Order Orchestration on Oracle SuperCluster T5-8 represents a compelling solution that allows order creation rates to scale predictably as required. To evaluate the performance of this solution, Oracle engineers tested a half-rack Oracle SuperCluster T5-8 system, scaling to multiple instances of key components to deliver greater levels of throughput. Clustering individual components within Oracle SuperCluster addresses availability requirements and helps ensure continuous operation and failover. As tested, the system demonstrated predictable near-linear scalability by scaling out the number of Oracle Fusion Distributed Order Orchestration and Oracle SOA Suite instances. The scalability of the Oracle SuperCluster platform meant that there was plenty of headroom for even greater order capacity.

Oracle SuperCluster T5-8

Oracle SuperCluster T5-8 is a multipurpose engineered system that has been designed, tested, and integrated to run mission-critical enterprise applications and rapidly deploy cloud services while delivering extreme efficiency, cost savings, and performance. It is well suited for multitier enterprise applications with web, database, and application components. This versatility, along with powerful built-in, no-overhead virtualization capabilities, makes it an ideal platform on which to consolidate large numbers of applications, databases, and middleware workloads such as those found in SCM.

Oracle SuperCluster T5-8 is an engineered system designed to host the entire Oracle software solution stack, and it includes the following components:

  • Oracle’s SPARC T5-8 server. The SPARC T5-8 server offers a large memory capacity and a highly integrated design that supports virtualization and consolidation of mission-critical applications. The half-rack Oracle SuperCluster configuration tested for Oracle Fusion Distributed Order Orchestration featured two four-processor SPARC T5-8 servers, each configured with a terabyte of system memory.
  • Oracle Exadata Storage Servers. Oracle Exadata storage technology is provided to enhance the performance of Oracle Database. This platform is ideal for accelerating the performance of Java middleware and applications, general-purpose applications, and Oracle Database 11g Release 2.
  • Oracle ZFS Storage Appliance. An integral Oracle ZFS Storage Appliance uses flash-enabled Hybrid Storage Pools to improve application response times. Its performance scalability for file-based I/O and its ease of management make it a good fit for managing shared application data files within Oracle SuperCluster.
  • Oracle’s Sun Datacenter InfiniBand Switch 36. This InfiniBand switch provides a high-throughput, low-latency, and scalable fabric that is suitable for the consolidation of interprocess communication, network, and storage. InfiniBand delivers up to 63 percent higher transactions per second (TPS) for Oracle Real Application Clusters (Oracle RAC) than when run over gigabit Ethernet (GbE) networks. There are three InfiniBand switches in Oracle SuperCluster, offering private connectivity within the system.
  • Integrated no-cost virtualization. Oracle VM Server for SPARC (previously called Sun Logical Domains or LDoms) enhances security, increases utilization, and improves reliability when combined with Oracle Solaris Zones.

Figure 1 illustrates a half-rack Oracle SuperCluster T5-8 configured for testing as described in the sections that follow.

Figure 1. Oracle Fusion Applications deployed on a half-rack Oracle SuperCluster T5-8.

The half-rack Oracle SuperCluster T5-8 provides an effective platform for scaling Oracle Fusion Distributed Order Orchestration. A number of key technical capabilities allow scalability with good performance and low latency.

  • Oracle’s SPARC T5 processors provide significant computational headroom, allowing virtualized server instances to be added as needs grow to service additional orders.
  • Oracle’s no-cost virtualization solutions, including both Oracle Solaris Zones and Oracle VM Server for SPARC, mean that system resources can be easily subdivided, virtualized, and isolated, allowing considerable consolidation with predictable performance.
  • Oracle Exadata Storage Servers provided as a part of Oracle SuperCluster provide Exadata Smart Flash Cache, intelligently caching database objects in flash memory, replacing slow mechanical I/O operations to disk with very rapid flash memory operations.
  • All of the components of Oracle SuperCluster are connected by a high-speed, low-latency InfiniBand network, reducing latency for Oracle Fusion Distributed Order Orchestration.

SPARC T5-8 Servers

Within the half-rack Oracle SuperCluster T5-8, Oracle VM Server for SPARC is used to divide each of the two SPARC T5-8 servers into an application domain and a database domain. These domains are in turn connected to Oracle Exadata Storage Servers over a high-performance, low-latency InfiniBand network that is internal to the system.
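
On Oracle SuperCluster, these domains are configured by Oracle's installation tooling rather than built by hand. Purely as an illustration of the mechanism, a logical domain is shaped with Oracle VM Server for SPARC commands along the following lines; the domain name is hypothetical, and the vCPU and memory figures simply mirror the application domain sizing described later in this article.

# ldm add-domain appdom1
# ldm set-vcpu 384 appdom1      # 48 cores x 8 hardware threads per core
# ldm set-memory 768G appdom1
# ldm bind-domain appdom1
# ldm start-domain appdom1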

Application Domains

Within the application domains, Oracle Solaris Zones are used to further partition the resources of the SPARC T5-8 servers; a brief zone-creation sketch follows the list below.

  • Oracle Fusion Applications runs within a zone on the application domain on both SPARC T5-8 servers. This environment is ultimately used to house SCM and the Oracle Fusion Distributed Order Orchestration instances that are used to scale the solution.
  • Oracle HTTP Server runs in a separate zone on one of the servers (node 1), providing web-based access to the system.
  • Oracle Identity Management runs in a separate zone on the second server (node 2), allowing organizations to effectively manage the end-to-end lifecycle of user identities across all enterprise resources.
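
As a minimal sketch, an additional application zone with its own dedicated processor resources could be created as follows. The zone name and core count are hypothetical, and the tested configuration used resource pools (described later) rather than the dedicated-cpu resource shown here.

# zonecfg -z scmzone1
zonecfg:scmzone1> create
zonecfg:scmzone1> set zonepath=/zones/scmzone1
zonecfg:scmzone1> add dedicated-cpu
zonecfg:scmzone1:dedicated-cpu> set ncpus=16
zonecfg:scmzone1:dedicated-cpu> end
zonecfg:scmzone1> commit
zonecfg:scmzone1> exit
# zoneadm -z scmzone1 install
# zoneadm -z scmzone1 boot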

Database Domains

Oracle RAC 11g Release 2, a clustered version of Oracle Database, runs in the database domains. Oracle RAC uses a shared-cache, clustered database architecture that overcomes the limitations of traditional shared-nothing and shared-disk architectures to provide database performance, scalability, and reliability—with no changes to existing Oracle Database applications.

Oracle Exadata Storage Servers

The Oracle Exadata Storage Servers provided in Oracle SuperCluster T5-8 are configured as the following separate disk groups to provide database acceleration for the Oracle RAC database instances:

  • An Oracle Fusion Applications disk group
  • An Oracle Internet Directory disk group
  • An Oracle Identity Management disk group

Oracle ZFS Storage Appliance

Oracle ZFS Storage Appliance is used in Oracle SuperCluster for non-database storage. The appliance is a hybrid storage system based on a unique cache-centric architecture featuring massive DRAM plus Flash, and is powered by a multithreaded Symmetric Multiprocessing (SMP) operating system. As a result, 70 to 90 percent of I/O operations are typically served from DRAM on the appliance, helping to consolidate data-intensive workloads. In this test deployment, the Directory and Application tiers made use of storage on the Oracle ZFS Storage Appliance.

Testing and Results

To evaluate performance, the half-rack Oracle SuperCluster T5-8 was exercised with a predefined simulated workload using LoadRunner. Performance was measured in terms of order lines created per hour. Additional scale-out hosts were allocated as the workload increased, and a number of metrics were collected to monitor and evaluate system performance.

Workload Description

A typical order lifecycle is depicted in Figure 2. A sales order consists of multiple lines—usually averaging five to six line items per order. An orchestration process consisting of multiple fulfillment steps is assigned to one or more order lines.

Figure 2. A typical order lifecycle for Oracle Fusion Distributed Order Orchestration.

In testing conducted by Oracle, this customer-representative scenario had a service payload size of 59 KB with two line items per order. The line items are grouped in a ship set. In this test scenario, orders are received and decomposed; the orchestration engine then processes the line items through a two-step process. The process starts with scheduling both line items as a group (ship set). After successful scheduling, a credit check is executed. Credit checking uses Oracle Fusion Distributed Order Orchestration’s Template Task Layer (TTL) feature. Finally, the order lines progress to closure.

Extensible by end users, the Oracle Fusion Distributed Order Orchestration application harnesses both the Processing Constraints and Oracle Business Rules frameworks. To help with the business process, commonly applicable constraints and Oracle Business Rules are seeded out of the box. Depending on the functional needs, the applicable rules and constraints are executed during various stages of the order lifecycle. This ensures that downstream services—such as order promising and credit checking—get functionally pertinent inputs. In this representative scenario, 17 seeded Oracle Business Rules and 12 seeded processing constraints were executed during order processing. Attribute cross-referencing was also carried out on 33 functionally important attributes belonging to the order header and lines.

The Oracle Fusion Distributed Order Orchestration infrastructure supports many users submitting orders concurrently. Oracle engineers simulated this real-life situation by emulating hundreds of concurrent virtual users, with each virtual user submitting orders at a configurable frequency and pattern.

Performance Results

Testing was performed by horizontally scaling out the number of virtual hosts (scale-out hosts) dedicated to processing SCM (Oracle Fusion Distributed Order Orchestration/Oracle SOA Suite). Configurations with one, two, three, and four virtual hosts were tested. As shown in Figure 3, near-linear scalability was realized in terms of order creations per minute, with four scale-out nodes producing just under 10,000 order creations per minute at the peak.

Figure 3. Adding to the number of Oracle Fusion Distributed Order Orchestration nodes provides near-linear scalability in terms of the number of order creations per minute.

Table 1 provides the details for the tests, listing the number of order lines created per hour.

Table 1. Scale-out hosts, users, cores, and order lines created per hour realized during Oracle’s testing.

Number of Virtual Users | Processor Cores (Scale-Out Hosts) | Processor Cores (Oracle RAC Nodes) | Number of Scale-Out Hosts | Order Lines Created per Hour
50                      | 14 cores                          | 32 cores                           | 1                         | 85.3 K
100                     | 28 cores                          | 32 cores                           | 2                         | 164 K
150                     | 42 cores                          | 32 cores                           | 3                         | 268 K
200                     | 56 cores                          | 32 cores                           | 4                         | 480 K

Resource Utilization on the Application Domain and Oracle RAC Nodes

Even though the Oracle SuperCluster T5-8 produced near-linear scalability, system resource utilization remained moderate on the scale-out nodes running in the application domain, with considerable headroom remaining for additional scalability. Figure 4 shows the CPU utilization percentage on each of the four scale-out virtual hosts running Oracle Fusion Distributed Order Orchestration and Oracle SOA Suite. The workload was distributed consistently across the four nodes, and at no time did CPU utilization exceed 50 percent.

Figure 4. As nodes are added and order creations scale, the CPU utilization percentage remains modest, indicating substantial headroom.

System resource usage also remained moderate on the two Oracle RAC nodes. Figure 5 shows the CPU utilization percentage on the Oracle RAC nodes during the four-node testing. Again, this chart shows that resource utilization was distributed consistently and that steady-state operation retained considerable headroom.

Figure 5. Even with four nodes running Oracle Fusion Distributed Order Orchestration, CPU utilization percentage remained modest for the Oracle RAC nodes.

Deploying Oracle Fusion Distributed Order Orchestration on Oracle SuperCluster

Oracle Fusion Distributed Order Orchestration is built on an open and standards-based service-oriented architecture for flexible integration and lower total cost of ownership (TCO). The following software releases were utilized in Oracle testing:

  • Oracle Solaris 11 (11.1 SRU7.5)
  • Java Development Kit (JDK) 1.6 u71
  • Oracle Identity Management Server
  • Oracle RAC 11g Release 2 (11.2.0.3)
  • Oracle Fusion Applications Release 7 P22 (including Oracle HTTP Server, Oracle SOA Server, Fusion Applications Distributed Order Orchestration Server, Oracle Fusion Global Order Promising Server, Fusion Applications Advanced Planning Server, SCM Common Server)
  • Oracle Fusion Applications Distributed Order Orchestration patches (18076574, 18157443, 18272928, 18891548)

Figure 6 provides additional details on how the half-rack Oracle SuperCluster T5-8 was deployed for the testing of Oracle Fusion Supply Chain Management.

  • Database domains were configured with 256 GB of RAM and one 16-core SPARC T5 processor.
  • Application domains were configured with 768 GB of RAM and three 16-core SPARC T5 processors, for a total of 48 cores.

High availability requirements were taken into consideration for the deployment architecture, and the virtualization technology allowed considerable flexibility for managing and distributing resources such as processors and memory.

The deployment activity was divided into three important phases, which are described in the sections that follow:

  • Deployment of Oracle RAC and the Oracle Exadata Storage Servers
  • Deployment of the Oracle Fusion Identity Management infrastructure utilizing an Oracle WebLogic Server domain
  • Deployment of Oracle Fusion Applications utilizing a second Oracle WebLogic Server domain for SCM

Figure 6. Oracle Fusion Distributed Order Orchestration deployed on Oracle SuperCluster T5-8.

Deployment of Oracle RAC and Oracle Exadata Storage Servers

The database domains were configured to contain the following Oracle RAC databases, which are shown in Figure 6:

  • FUSIONDB1 and FUSIONDB2 contained the Oracle Fusion Applications transactional databases.
  • OIDDB1 and OIDDB2 contained the identity and policy store for Oracle Internet Directory.
  • IDMDB1 and IDMDB2 were provided for Oracle Identity Manager, Oracle Access Manager, Oracle SOA Suite, and Oracle Metadata Services in Oracle Fusion Middleware.

The Oracle Identity Management and Oracle Fusion Applications schemas were loaded into the respective Oracle RAC database instances using the Repository Creation Utility provided with Oracle Fusion Middleware. The database instances used Oracle Automatic Storage Management for the storage of data.
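
For reference, the Repository Creation Utility can also be driven in silent mode (in which case schema passwords are read from standard input). The following sketch shows the general form; the connect string, schema prefix, and component list are placeholders rather than the exact values used in this exercise.

$ ./rcu -silent -createRepository \
    -connectString dbhost01:1521:FUSIONDB \
    -dbUser sys -dbRole sysdba \
    -schemaPrefix FUSION \
    -component MDS -component SOAINFRA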

For each database instance, two disk groups were created in the Oracle Exadata Storage Servers:

  • DATA disk groups were used to store the data files.
  • REDO disk groups were used for the redo files.

There were four Oracle Exadata Storage Servers in the configuration with 12 disks each, yielding 48 disks in total. Assuming normal redundancy on the storage servers leaves the equivalent of 18 disks’ worth of usable space to work with, so creating a 300 GB grid disk implies that each disk drive contributes approximately 17 GB of storage (300 GB / 18 ≈ 17 GB).
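
As an illustration of the mechanics, grid disks are carved from the cell disks on each storage cell with CellCLI, and an ASM disk group is then built over them. The names, size, and attribute values below are placeholders, not the exact commands used in this exercise.

CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=DATA_FA, SIZE=300G

SQL> CREATE DISKGROUP DATA_FA NORMAL REDUNDANCY
       DISK 'o/*/DATA_FA*'
       ATTRIBUTE 'compatible.asm'='11.2.0.0.0',
                 'compatible.rdbms'='11.2.0.0.0',
                 'cell.smart_scan_capable'='TRUE';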

Deployment of the Oracle Fusion Identity Management Infrastructure

The Oracle Fusion Identity Management deployment was performed by choosing the Oracle Fusion Middleware Enterprise Deployment Guide (EDG) topology, as shown in Figure 7. The deployment was based on the EDG topology recommendations and used the following components:

  • It used six Oracle Solaris Zones to deploy the Oracle Internet Directory, Oracle Identity Manager, and Oracle Access Manager components and the web tier instances of Oracle HTTP Server. (The logical layout of these components is shown in Figure 6 by the OID1, OID2, IDM1, IDM2, OHS1, and OHS2 items.)
  • Oracle Traffic Director (the OTD item in Figure 6) was used to meet the load balancer requirement for the EDG topology. It was deployed in a dedicated Oracle Solaris Zone. A hardware-based load balancer could be used instead.
  • The Oracle Internet Directory instances were active/active deployments.
  • The Oracle Access Manager, Oracle Identity Manager, and Oracle SOA Suite servers, as well as the Oracle Fusion Distributed Order Orchestration and Oracle SOA Suite servers (the DOO/SOA1–DOO/SOA4 items in Figure 6), were active/active deployments.
  • The Oracle WebLogic administration server was active/passive.
  • The Directory Tier and the Application Tier used the same shared storage from the Oracle ZFS Storage Appliance.

See “Introduction to the Enterprise Deployment Reference Topologies” for more details about the Enterprise Deployment topology.

Figure 7. Selecting the EDG topology and specifying appropriate host names for the respective components.

The recommended deployment topology for Oracle Identity Manager for Oracle Fusion Applications is based on the following important considerations:

  • High availability and scalability are provided for identity and access services.
  • Every component is configured as a cluster.
  • Every component is associated with its own machine name.
  • The machine name can be assigned to the same server or a different server depending on the available resources.
  • Virtual host names and IP addresses are used for relocation of the services.
  • Oracle WebLogic servers are configured to listen on virtual IP addresses.

The configuration consisted of three main tiers, which were virtualized by using Oracle Solaris Zones.

  • The Directory Tier
  • The Application Tier
  • The Web Tier

Directory Tier

The Directory Tier is the deployment tier where all the LDAP services reside. This tier includes products such as Oracle Internet Directory and Oracle Virtual Directory. The Directory Tier is managed by directory administrators providing enterprise LDAP service support. The Directory Tier is closely tied with the Data Tier. Oracle Internet Directory relies on Oracle Database as its back end.

The following configuration was used for the deployment evaluated by Oracle:

  • The identity and policy store information was kept in the same database.
  • Separate Oracle RAC databases were used as the data stores for Oracle Internet Directory and for identity and access management.
  • Oracle Directory Server was used exclusively.
  • The two Oracle Directory Server instances were located in two Oracle Solaris Zones: LDAPHOST1 (etc27-z11) and LDAPHOST2 (etc27-z12).

Application Tier

The Application Tier is the tier where Java EE applications are deployed. Products such as Oracle Identity Manager, Oracle Identity Federation, Oracle Directory Services Manager, and Oracle Enterprise Manager Fusion Middleware Control are the key Java EE components that can be deployed in this tier. Applications in this tier benefit particularly from the high availability support of Oracle WebLogic Server, and were configured as follows:

  • IDMHOST1 (etc27-z9) and IDMHOST2 (etc27-z10) have Oracle Identity Manager and Oracle SOA installed. Oracle Identity Manager is a user provisioning application. Oracle SOA deployed in this topology was exclusively used for providing workflow functionality for Oracle Identity Manager.
  • Oracle Enterprise Manager Fusion Middleware Control is integrated with Oracle Access Manager using the Oracle Platform Security Services agent.
  • The Oracle WebLogic Server console, Oracle Enterprise Manager Fusion Middleware Control, and Oracle Access Management console were always bound to the listen address of the administration server.
  • The Oracle WebLogic administration server was a singleton service. It ran on only one node at a time. In the event of failure, it was restarted on a surviving node.
  • The WLS_ODS1 managed server on IDMHOST1 (etc27-z9) and WLS_ODS2 managed server on IDMHOST2 (etc27-z10) were in a cluster and the Oracle Directory Services Manager applications were targeted to the cluster.
  • The WLS_OAM1 Managed Server on IDMHOST1 (etc27-z9) and WLS_OAM2 Managed Server on IDMHOST2 (etc27-z10) were in a cluster and the Oracle Access Manager applications were targeted to the cluster.
  • Oracle Directory Services Manager was bound to the listen addresses of the WLS_ODS1 and WLS_ODS2 Managed Servers. By default, the listen address for these managed servers is set to IDMHOST1 (etc27-z9) and IDMHOST2 (etc27-z10), respectively.
  • The WLS_OIM1 Managed Server on IDMHOST1 (etc27-z9) and WLS_OIM2 Managed Server on IDMHOST2 (etc27-z10) were in a cluster and the Oracle Identity Manager applications were targeted to the cluster.
  • The WLS_SOA1 Managed Server on IDMHOST1 (etc27-z9) and WLS_SOA2 Managed Server on IDMHOST2 (etc27-z10) were in a cluster and the Oracle SOA applications were targeted to the cluster.
  • The WLS_OIF1 Managed Server on IDMHOST1 (etc27-z9) and WLS_OIF2 Managed Server on IDMHOST2 (etc27-z10) were in a cluster and the Oracle Identity Federation applications were targeted to the cluster.

Web Tier

The Oracle HTTP Servers were deployed in the Web Tier. The Web Tier is required to support enterprise-level single sign-on using products such as Oracle Application Server Single Sign-On and Oracle Access Manager.
In the Web Tier, the following were configured:

  • WEBHOST1 (etc27-z7) and WEBHOST2 (etc27-z8) had Oracle HTTP Server, WebGate (an Oracle Access Manager component), and the mod_wl_ohs plug-in module installed. The mod_wl_ohs plug-in module enabled requests to be proxied from Oracle HTTP Server to an Oracle WebLogic Server running in the Application Tier; an illustrative routing fragment follows this list.
  • Oracle HTTP Server 11g and WebGate for Oracle Access Manager used the Oracle Access Protocol (OAP) to communicate with Oracle Access Manager running on IDMHOST1 (etc27-z9) and IDMHOST2 (etc27-z10) in the Oracle Identity Manager demilitarized zone (DMZ). Oracle HTTP Server 11g and WebGate for Oracle Access Manager were used to perform operations such as user authentication.
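
A mod_wl_ohs routing fragment takes roughly the following form; the URI, host names, and port are illustrative placeholders for the clustered WLS_OAM managed servers.

<Location /oam>
    SetHandler weblogic-handler
    WebLogicCluster IDMHOST1:14100,IDMHOST2:14100
</Location>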

Deployment of Oracle Fusion Applications

This section provides an overview of the Oracle Fusion Applications deployment. Please refer to the Oracle Fusion Applications Installation Guide for more in-depth information.

The topology chosen during the creation of the provisioning profile for the Oracle Fusion Applications deployment was “One host per application and middleware component,” as shown in Figure 8. This topology gives the highest flexibility in terms of deployment of Oracle Fusion Applications.

Figure 8. Oracle Fusion Applications offers a choice of deployment topologies.

This configuration choice creates a topology where the common domain host is separated from the SCM domain hosts. The domain is further split into primary and secondary hosts. The primary host contains the admin server of the SCM domain. The secondary host contains all of the managed servers, such as SCM Common, Oracle Product Information Management, Oracle Cost Management, Oracle Fusion Distributed Order Orchestration, Logistics, Oracle Advanced Planning, Oracle SOA servers, and so on. The Global Order Promising (GOP) server is hosted on the secondary node associated with the Oracle Advanced Planning and Scheduling server.

Common elements of the solution are defined as follows:

  • Primordial host. The Primordial host is the location of the Common domain (specifically the Administration Server of the Common domain). Only one primordial host exists in each environment.
  • Primary host. The Primary host is the location where the administration server for a domain runs. Only one primary host exists in a domain.
  • Secondary host. The secondary host is the location where the managed servers for any application reside when they are not on the same host as the administration server of the same domain. The term secondary host is meaningful when a domain spans more than one physical server. The server or servers that do not have the administration server are referred to as secondary hosts.

Some key highlights of the deployment include the following:

  • The secondary scale-out hosts were added after completing the initial provisioning. The scale-out hosts contained both an Oracle SOA managed server and an Oracle Fusion Distributed Order Orchestration managed server.
  • Each of these hosts was in an Oracle Solaris Zone, as shown in Figure 6.
  • A hard partition was used to create a resource pool with a set number of processors, and the secondary zones and the associated scale-out hosts were added to the resource pools.
  • The Oracle SOA Suite server scale-out needed specific changes on the Java Message Service server side. Each Oracle SOA server in a cluster uses a separate file on the shared folder. Please see “Additional Configuration Procedures for Scaling Out Oracle SOA Suite Server” for detailed instructions for scaling out the Oracle SOA Suite server.
  • The zones communicated over the InfiniBand network. The JDBC connection to the Oracle RAC database was made through the InfiniBand listener; an illustrative connect string follows this list.
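
Such a connection is described by a connect descriptor of roughly the following shape; the host, port, and service name are placeholders for the listener address on the InfiniBand network.

jdbc:oracle:thin:@(DESCRIPTION=
  (ADDRESS=(PROTOCOL=TCP)(HOST=fusiondb-ib-vip)(PORT=1522))
  (CONNECT_DATA=(SERVICE_NAME=fusiondb)))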

Figure 9 illustrates the configuration of the primordial host (etc27-z18).

Figure 9. Configuring the primordial host.

Figure 10 illustrates the configuration of the primary host (etc27-z20) and the secondary host (etc27-z21).

Figure 10. Configuring the primary and secondary hosts.

Figure 11 illustrates the configuration of the SCM Web Tier instance of Oracle HTTP Server (etc27-z19), which was hosted on an Oracle Solaris Zone.

Figure 11. Configuring the SCM instance of Oracle HTTP Server.

Tuning Recommendations

While full tuning and table layout are beyond the scope of this document, the sections that follow provide a high-level overview of tuning practices employed during Oracle testing of Oracle Fusion Distributed Order Orchestration on Oracle SuperCluster T5-8.

Oracle HTTP Server Tuning

A number of Oracle HTTP Server tuning changes were made in the <IfModule mpm_worker_module> section, as shown below.

# worker MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
# Specify "ServerLimit nnn" before MaxClients if MaxClients/ThreadsPerChild > 16.
# Specify "ThreadLimit nnn" before MaxClients if ThreadsPerChild > 64.
<IfModule mpm_worker_module>
ThreadLimit        64
ServerLimit        512
StartServers         20
MaxClients        20000
MinSpareThreads    3200
MaxSpareThreads     4800
ThreadsPerChild    48
MaxRequestsPerChild  0
AcceptMutex fcntl
LockFile "/var/tmp/http_lock"
</IfModule>

Oracle Fusion Applications Tuning

Various application elements in SCM were tuned to optimize performance in the tested configuration, as described in the sections that follow.

Modifying Memory Parameters of SCM Domain Managed Servers

Every domain within the SCM application has its own set of properties, which are specified in the fusionapps_start_params.properties file located in the configuration directory of the domain. Refer to the Oracle Fusion Applications documentation for more details on fusionapps_start_params.properties.

For each of the following items in the fusionapps_start_params.properties file, change the corresponding name-value pair. If the name is not present, add it to the file.

  • #Fusion Default Sysprops
    fusion.default.default.sysprops=-Dapplication.top=${WL_HOME}/../applications/scm/deploy 
    -Djbo.ampool.minavailablesize=1 -Djbo.doconnectionpooling=true 
    -Djbo.load.components.lazily=true -Djbo.max.cursors=5 -Djbo.recyclethreshold=75 
    -Djbo.txn.disconnect_level=1 -Djps.auth.debug=false -Doracle.fusion.appsMode=true 
    -Doracle.notification.filewatching.interval=60000 -Dweblogic.SocketReaders=3 
    -Dweblogic.security.providers.authentication.LDAPDelegatePoolSize=20 -Djps.authz=ACC 
    -Djps.combiner.optimize.lazyeval=true -Djps.combiner.optimize=true 
    -Djps.policystore.hybrid.mode=false -Djps.subject.cache.key=5 
    -Djps.subject.cache.ttl=600000 
    -Ddiagfwk.diagnostic.test.location=${WL_HOME}/../applications/jlib/diagnostic,
    ${ATGPF_ORACLE_HOME}/archives/applications/diagnostics -Doracle.multitenant.enabled=false 
    -Doracle.jdbc.createDescriptorUseCurrentSchemaForSchemaName=true        
    -Dapplication.config.location.ocm=${FA_INSTANCE_HOME}/ocm  
    -Dweblogic.security.SSL.trustedCAKeyStore=/u01/oracle/instance/keystores/fusion_trust.jks  
    -Dweblogic.mdb.message.MinimizeAQSessions=true  
    -Dweblogic.ejb.container.MDBDestinationPollIntervalMillis=6000 
    -Dweblogic.http.client.defaultReadTimeout=300000 
    -Dweblogic.http.client.defaultConnectTimeout=300000 
    -DHTTPClient.socket.readTimeout=300000 -DHTTPClient.socket.connectionTimeout=300000 
    -Dwebcenter.owsm.gpa.enabled=true -Dprovisioning.start.params.processed=true 
    -DXDO_FONT_DIR=${WL_HOME}/../applications/../bi/common/fonts 
    -Dweblogic.LoginTimeoutMillis=50000
    
  • #SCM Domain Admin Server
    fusion.AdminServer.SunOS-sparc.memoryargs=-XX:PermSize=1g -XX:MaxPermSize=1g 
    -XX:+UseParallelGC  -XX:+HeapDumpOnOutOfMemoryError  
    -XX:HeapDumpPath=${FA_INSTANCE_HOME}/debug -XX:+ParallelGCVerbose 
    -XX:ReservedCodeCacheSize=128m -XX:+UseParallelOldGC  -XX:ParallelGCThreads=2
    
  • #Advanced Planning Server
    fusion.AdvancedPlanningCluster.SunOS-sparc.memoryargs=-XX:PermSize=256m 
    -XX:MaxPermSize=512m -XX:+UseParallelGC  -XX:+HeapDumpOnOutOfMemoryError  
    -XX:HeapDumpPath=${FA_INSTANCE_HOME}/debug -XX:+ParallelGCVerbose 
    -XX:ReservedCodeCacheSize=128m -XX:+UseParallelOldGC  -XX:ParallelGCThreads=4  
    -XX:+UseCompressedOops -XX:StringTableSize=500009
    
  • #Cost Management Server
    fusion.CostManagementCluster.SunOS-sparc.memoryargs=-XX:PermSize=256m 
    -XX:MaxPermSize=512m -XX:+UseParallelGC  -XX:+HeapDumpOnOutOfMemoryError  
    -XX:HeapDumpPath=${FA_INSTANCE_HOME}/debug -XX:+ParallelGCVerbose 
    -XX:ReservedCodeCacheSize=128m -XX:+UseParallelOldGC  -XX:ParallelGCThreads=4 
    -XX:StringTableSize=500009
    
  • #Order Orchestration Server
    fusion.OrderOrchestrationCluster.SunOS-sparc.memoryargs=-XX:PermSize=756m 
    -XX:MaxPermSize=756m -XX:+UseParallelGC  -XX:+HeapDumpOnOutOfMemoryError  
    -XX:HeapDumpPath=${FA_INSTANCE_HOME}/debug -XX:+ParallelGCVerbose 
    -XX:ReservedCodeCacheSize=128m -XX:+UseParallelOldGC  -XX:+UseCompressedOops 
    -XX:ParallelGCThreads=20 -XX:LargePageSizeInBytes=2g -XX:StringTableSize=500009 
    -Xnoclassgc -XX:-UseAdaptiveSizePolicy
    
  • #SCM Common Server
    fusion.SCMCommonCluster.SunOS-sparc.memoryargs=-XX:PermSize=256m 
    -XX:MaxPermSize=756m -XX:+UseParallelGC   -XX:+HeapDumpOnOutOfMemoryError 
    -XX:HeapDumpPath=${FA_INSTANCE_HOME}/debug -XX:+ParallelGCVerbose 
    -XX:ReservedCodeCacheSize=128m -XX:+UseParallelOldGC -XX:ParallelGCThreads=4 
    -XX:StringTableSize=500009
    
  • #SCM SOA Server
    fusion.SCM_SOACluster.SunOS-sparc.memoryargs=-XX:PermSize=756m 
    -XX:MaxPermSize=756m -XX:+UseParallelGC  -XX:+HeapDumpOnOutOfMemoryError  
    -XX:HeapDumpPath=${FA_INSTANCE_HOME}/debug -XX:+ParallelGCVerbose 
    -XX:ReservedCodeCacheSize=128m -XX:+UseParallelOldGC -XX:+UseCompressedOops 
    -XX:ParallelGCThreads=20 -XX:LargePageSizeInBytes=2g -XX:StringTableSize=500009   
    -Xnoclassgc -XX:-UseAdaptiveSizePolicy
    
  • #Added Sysprops for AdvancedPlanningCluster
    fusion.AdvancedPlanningCluster.SunOS-sparc.sysprops=
    -Dweblogic.ejb.container.MDBDestinationPollIntervalMillis=6000 
    -Dweblogic.mdb.message.MinimizeAQSessions=true
    
  • #Added Sysprops for OrderOrchestrationCluster
    fusion.OrderOrchestrationCluster.SunOS-sparc.sysprops=
    -Dweblogic.ejb.container.MDBDestinationPollIntervalMillis=6000 
    -Dweblogic.mdb.message.MinimizeAQSessions=true    
    
  • #Memory Changes for each Server
    fusion.SCMDomain.AdminServer.default.minmaxmemory.main=-Xms12g -Xmx12g
    fusion.SCMDomain.AdvancedPlanningCluster.default.minmaxmemory.main=-Xms4g -Xmx4g
    fusion.SCMDomain.CostManagementCluster.default.minmaxmemory.main=-Xms512m -Xmx2048m
    fusion.SCMDomain.LogisticsCluster.default.minmaxmemory.main=-Xms512m -Xmx2048m
    fusion.SCMDomain.OrderOrchestrationCluster.default.minmaxmemory.main=-Xms16g -Xmx16g -Xmn6g
    fusion.SCMDomain.SCM_SOACluster.default.minmaxmemory.main=-Xms12g -Xmx12g -Xmn4608m
    

JDBC Tuning

The generic way to set the data source parameters for JDBC (excluding the database pool) is to log in to the WebLogic administration console and execute the following steps:

  1. In the Domain Structure tree, expand Services, and then select Data Sources.
  2. On the Summary of Data Sources page, select the data source.
  3. In the Change Center of the Administration Console, click Lock and Edit.
  4. On the “Settings for <DataSourceName>” page, go to Configuration -> Connection Pool.
  5. Set the Initial Capacity, Maximum Capacity, and Minimum Capacity.
  6. Expand the Advanced properties view and set the Shrink Frequency.
  7. Click “Save” to save settings.
  8. Click “Activate Changes” in the Change Center.

The connection pool within a JDBC data source contains a group of JDBC connections that applications reserve, use, and then return to the pool. The connection pool and the connections within it are created when the connection pool is registered, usually when starting up Oracle WebLogic Server, or when deploying the data source to a new target. The config.xml file was altered as follows in the testing performed by Oracle; an illustrative descriptor fragment follows the settings.

Connection pool capacities (applied to both instances of each data source, <DataSourceName>-rac1 and <DataSourceName>-rac2):

SOALocalTxDataSource: Initial Capacity=500, Maximum Capacity=3500, Minimum Capacity=500
JRFWSAsyncDSAQ: Initial Capacity=50, Maximum Capacity=300, Minimum Capacity=50
ApplicationServiceDB: Initial Capacity=150, Maximum Capacity=400, Minimum Capacity=150
SOADataSource: Initial Capacity=500, Maximum Capacity=3500, Minimum Capacity=500
mds-ApplicationMDSDB: Initial Capacity=50, Maximum Capacity=300, Minimum Capacity=50
ApplicationDB: Initial Capacity=150, Maximum Capacity=2000, Minimum Capacity=150
EDNSource: Initial Capacity=50, Maximum Capacity=100, Minimum Capacity=50
EDNDataSource: Initial Capacity=50, Maximum Capacity=100, Minimum Capacity=50


A shrink frequency of 600 seconds was set for the following data sources:
ApplicationDB
ApplicationServiceDB
JRFWSAsyncDSAQ
mds-ESS_MDS_DS
mds-soa
SOADataSource
SOALocalTxDataSource
mds-ApplicationMDSDB
mds-CustomPortalDS
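
In WebLogic Server, these pool parameters are persisted in each data source’s descriptor (the config/jdbc/*-jdbc.xml files referenced from config.xml). Taking SOADataSource as an example, the resulting fragment looks roughly as follows.

<jdbc-connection-pool-params>
  <initial-capacity>500</initial-capacity>
  <max-capacity>3500</max-capacity>
  <min-capacity>500</min-capacity>
  <shrink-frequency-seconds>600</shrink-frequency-seconds>
</jdbc-connection-pool-params>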

Message Driven Bean (MDB) Tuning

For MDB pool settings, the maximum number of beans in the free pool was increased as shown in Figure 12. The following steps were performed (an equivalent descriptor fragment appears after the steps):

  1. Log in to the admin console.
  2. Select Lock and Edit, and then Deployments (in the left side navigator).
  3. Select the application and find the MDBs with names ending in AsyncRequestProcessorMDB and AsyncResponseProcessorMDB.
  4. For each, click the MDB and select the Configuration tab.
  5. Change the Max Beans in Free Pool value to 15.
  6. Repeat this process for both request and response.
  7. Click Save.
  8. Repeat these steps for both the Oracle Fusion Distributed Order Orchestration and Oracle SOA servers.
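
The same setting can equivalently be captured in a bean’s weblogic-ejb-jar.xml descriptor (or a deployment plan). A sketch for one of the MDBs follows; the ejb-name shown is illustrative.

<weblogic-enterprise-bean>
  <ejb-name>AsyncRequestProcessorMDB</ejb-name>
  <message-driven-descriptor>
    <pool>
      <max-beans-in-free-pool>15</max-beans-in-free-pool>
    </pool>
  </message-driven-descriptor>
</weblogic-enterprise-bean>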

Figure 12. Configuring MDB pool settings.

Oracle SOA Suite Tuning

Within Oracle SOA Suite, thread counts were tuned for performance. The following steps were performed from the Oracle Enterprise Manager console to configure Oracle Business Process Execution Language (BPEL) Process Manager:

  1. Select Farm_SCMDomain, SOA, then SOA-infra, and then SOA Administration.
  2. Select BPEL Properties.
  3. The System threads were set to 5, BPEL Engine Threads were set to 400, and BPEL Invoke Threads were set to 300.

For Mediator threads, the following was performed from the Oracle Enterprise Manager Console:

  1. Select Farm_SCMDomain, SOA, then SOA-infra, and then SOA Administration.
  2. Select Mediator Properties.
  3. The number of Parallel Worker Threads was set to 20.

To set threads for the Oracle Event Delivery Network (EDN) of the Oracle SOA Suite, the following was performed from the Oracle Enterprise Manager Console:

  1. Select Farm_SCMDomain, SOA, then SOA-infra, and then SOA Administration.
  2. Select the System MBean Browser, and then Application Defined MBeans.
  3. Next select oracle.as.soainfra.config, and then Server: soa_server-x.
  4. Finally select EDNConfig, and then EDN.
  5. The number of EDN threads was set to 40.

Oracle Global Order Promising Server Tuning

To enhance Oracle Global Order Promising server performance, edit the gopServerConfig.xml file (instance/gop_1/config/GOP/GlobalOrderPromisingServer1/gopServerConfig.xml) and add the following line:

<num-msg-autosave>1000000</num-msg-autosave>

In the same file, also change the logging level from FINEST to INCIDENT_ERROR:

<logLevel>INCIDENT_ERROR</logLevel>

Java.security Tuning

Reorder the security providers in the java.security file, found in the java.home/lib/security directory, as follows:

security.provider.1=sun.security.provider.Sun
security.provider.2=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/sunpkcs11-solaris.cfg
security.provider.3=sun.security.rsa.SunRsaSign

Network Tuning

For both the scale-out zones and the SCM Oracle HTTP Server zones, issue the following commands using the ndd utility:

# ndd -set /dev/tcp tcp_conn_req_max_q 96000
# ndd -set /dev/tcp tcp_conn_req_max_q0 64000
# ndd -set /dev/tcp tcp_xmit_hiwat 524288
# ndd -set /dev/tcp tcp_recv_hiwat 524288
# ndd -set /dev/tcp tcp_naglim_def 1
# ndd -set /dev/tcp tcp_smallest_anon_port 10000
# ndd -set /dev/tcp tcp_time_wait_interval 6000

Oracle RAC Tuning

Oracle Fusion Applications for SCM requires some level of database tuning to perform optimally. For testing performed by Oracle, database tuning was done in the following areas:

  • Modifying init parameters through the init.ora file (shown below)
  • Partitioning of specific tables for Oracle Fusion Distributed Order Orchestration and Oracle SOA Suite
  • Re-creating the large object (LOB) segments using SecureFiles for BLOBs
  • Collecting optimizer statistics on certain tables
  • Re-creating the SOA-INFRA and Oracle Fusion Distributed Order Orchestration-specific indexes with global hash partitions

A number of key changes were made to the init.ora file, as follows:

_gc_defer_time=0
_gc_policy_time=0
_gc_undo_affinity=FALSE
filesystemio_options=SETALL

sga_target=64G 
pga_aggregate_target=8G 
shared_pool_size=4G 

nls_sort=BINARY
open_cursors=500
session_cached_cursors=500
plsql_code_type=NATIVE
processes=15000
db_securefile=ALWAYS


In addition, the online redo logs were configured as four files of 20 GB each.