Archive for the category ‘Oracle SOA Suite’

IT-Security (Part 3): WebLogic Server and Java Security Features

WebLogic Server and Java Security Features [1]

WebLogic Server supports Java SE and Java EE security to protect the resources of the whole system. These resources can be Web applications, Uniform Resource Locators (URLs), Enterprise JavaBeans (EJBs), and Connector components.

Java SE capabilities: Security APIs

Java uses APIs to access security features and functionality. Its architecture contains a large set of application programming interfaces (APIs), tools, and implementations of commonly used security algorithms and protocols. This provides developers with a complete security framework for writing applications and enables them to extend the platform with new security mechanisms.[2]

Java Authentication and Authorization Services (JAAS)

WebLogic Server uses the Java Authentication and Authorization Service (JAAS) classes to consistently and securely authenticate clients. JAAS is part of the Java SE Security APIs: a set of Java packages that enable services to authenticate and enforce access controls upon users. It supports user and/or fat-client authentication for applications, applets, Enterprise JavaBeans (EJB), or servlets.

JAAS uses a Pluggable Authentication Module (PAM) framework, and permits the use of new or updated authentication technologies without requiring modifications to the application. Therefore, only developers of custom Authentication providers and developers of remote fat client applications need to be involved with JAAS directly. Users of thin clients or developers of within-container fat client applications do not require the direct use or knowledge of JAAS.
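To see this PAM-style pluggability in action outside of WebLogic, here is a minimal, self-contained JAAS sketch: a hypothetical LoginModule is registered programmatically as REQUIRED and invoked through a LoginContext. All names here (DemoLoginModule, the "Demo" entry, the principal "demoUser") are made up for illustration; this is not how WebLogic wires its providers internally.

```java
import javax.security.auth.Subject;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;
import java.security.Principal;
import java.util.Collections;
import java.util.Map;

public class JaasDemo {

  // Trivial LoginModule that accepts everyone and adds one principal (demo only).
  public static class DemoLoginModule implements LoginModule {
    private Subject subject;
    private boolean succeeded;

    public void initialize(Subject subject, CallbackHandler handler,
                           Map<String, ?> sharedState, Map<String, ?> options) {
      this.subject = subject;
    }
    public boolean login() { succeeded = true; return true; }
    public boolean commit() {
      if (succeeded) {
        // Populate the subject with a principal, as a real module would
        subject.getPrincipals().add((Principal) () -> "demoUser");
      }
      return succeeded;
    }
    public boolean abort() { return true; }
    public boolean logout() { subject.getPrincipals().clear(); return true; }
  }

  // Registers a programmatic JAAS configuration (one REQUIRED module) and logs in.
  public static String authenticateDemo() {
    Configuration.setConfiguration(new Configuration() {
      @Override
      public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
        return new AppConfigurationEntry[] {
          new AppConfigurationEntry(DemoLoginModule.class.getName(),
              AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
              Collections.<String, Object>emptyMap())
        };
      }
    });
    try {
      LoginContext lc = new LoginContext("Demo", new Subject(), callbacks -> { });
      lc.login();  // runs DemoLoginModule.login() and commit()
      return lc.getSubject().getPrincipals().iterator().next().getName();
    } catch (LoginException e) {
      throw new IllegalStateException(e);
    }
  }

  public static void main(String[] args) {
    System.out.println(authenticateDemo());
  }
}
```

In WebLogic Server the Configuration and LoginModules are supplied by the configured Authentication providers, so application code normally never wires this up itself.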

JAAS LoginModules

All LoginModules are responsible for authenticating users within the security realm (we will discuss security realms later) and for populating a subject with the necessary principals (users/groups). LoginModules contain the necessary methods for the login context, accounts, credentials, their configuration, and different ways of exception handling. When several Authentication providers are configured in a security realm, each provider’s LoginModule stores its principals within the same subject. Let me present that with an example via the WebLogic Server Admin Console: under Home > myDomain > Domain Structure, click on Security Realms, create a new realm “Moh_Realm-0” and then click on “OK”.


Figure 1 create a new Realm

Select the new realm, click on the “Providers” tab, and then click on “New” in order to create a new provider:


Figure 2 open the new Realm

In this use case, we select the type “WebLogic Authentication Provider”, give it a name, e.g. “DefAuthN”, and click “OK”. By default, the WebLogic Authentication provider is configured in the default security realm (myrealm). The WebLogic Authentication provider allows you to edit, list, and manage users, groups, and group membership. User and group information is stored in the embedded LDAP server.[3]


 Figure 3 create a new Authentication Provider

After defining the provider, we have to restart the Admin Server. Now we can check and compare the users of the new realm (Moh_Realm-0) with the default realm (myrealm) of WebLogic. For myrealm, I created a new user named “userDOAG”, and we see the following list there (Home > Summary of Security Realms > myrealm > Users and Groups):


Figure 4 users of myrealm

But I didn’t create the same user for Moh_Realm-0 (Home > DefAuthN > Summary of Security Realms > Moh_Realm-0 > Users and Groups):


Figure 5 users of Moh_Realm-0

This shows that we can use security providers in different combinations and expand our security realm with additional users, groups, and security providers. We will work with that in the next part of this article.

JAAS Control Flags

The JAAS Control Flag attribute determines how the LoginModule for the WebLogic Authentication provider is used in the login sequence. The values for the Control Flag attribute are as follows (Home > Summary of Security Realms > Moh_Realm-0 > Providers > DefAuthN):


Figure 6 Control flags via Admin Console

  • REQUIRED – This LoginModule must succeed. Even if it fails, authentication proceeds down the list of LoginModules for the configured Authentication providers. This setting is the default.
  • REQUISITE – This LoginModule must succeed. If other Authentication providers are configured and this LoginModule succeeds, authentication proceeds down the list of LoginModules. Otherwise, return control to the application.
  • SUFFICIENT – This LoginModule need not succeed. If it does succeed, return control to the application. If it fails and other Authentication providers are configured, authentication proceeds down the LoginModule list.
  • OPTIONAL – The user is allowed to pass or fail the authentication test of these Authentication providers. However, if all Authentication providers configured in a security realm have the JAAS Control Flag set to OPTIONAL, the user must pass the authentication test of one of the configured providers.[4]
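The semantics of these flags are the same as in plain-Java JAAS configuration files, which may make them easier to read; a hypothetical entry with two modules (the class names are invented for illustration) could look like this:

```text
CarRentalApp {
  com.example.auth.TokenLoginModule  SUFFICIENT;
  com.example.auth.LdapLoginModule   REQUIRED;
};
```

If the token module succeeds, control returns to the application immediately; otherwise the LDAP module must succeed for the overall login to pass.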

Now, we can focus on two important JAAS-tasks: authentication and authorization of users…[5]


[4] Oracle Fusion Middleware: Understanding Security for Oracle WebLogic Server 12c Release 1 (12.1.1), E24484-02, January 2012.

Dynamic endpoint binding in Oracle SOA Suite

Why is dynamic endpoint binding needed?

Sometimes a BPEL process instance has to determine at run-time which implementation of a web service interface is to be called. We’ll show you how to achieve that using dynamic endpoint binding.

Let’s imagine the following scenario: we’re running a car rental agency called RYLC (Rent Your Legacy Car) which operates different locations. The process of renting a car is basically identical for all locations except for the determination which cars are currently available. This is depicted in the following diagram:


There are three different implementations of the GetAvailableCars service. But how can we achieve calling them dynamically at run-time using Oracle SOA Suite?

How to dynamically set the service endpoint

There are just a couple of implementation steps we need to perform to enable dynamic endpoint binding:

  • create a new SOA project in JDeveloper
  • add a CarRental BPEL process
  • add an external reference to the GetAvailableCars service within the composite
  • create a DVM file containing the URIs by which the services for the different locations can be accessed
  • set the endpointURI property on the Invoke component calling the GetAvailableCars service (value is taken from the DVM file)
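As a sketch, the DVM file and the property assignment on the Invoke could look roughly like this (the file name, namespace, columns, URIs and the variable name are illustrative assumptions, not taken from the original project):

```xml
<!-- RentalLocations.dvm: maps a location code to a concrete service URI -->
<dvm name="RentalLocations" xmlns="http://soa.oracle.com/dvm">
  <columns>
    <column name="Location"/>
    <column name="EndpointURI"/>
  </columns>
  <rows>
    <row><cell>BERLIN</cell><cell>http://host1:8001/services/GetAvailableCarsBerlin</cell></row>
    <row><cell>MUNICH</cell><cell>http://host2:8001/services/GetAvailableCarsMunich</cell></row>
  </rows>
</dvm>

<!-- BPEL invoke: the endpointURI property is set from a variable
     previously filled via a dvm:lookupValue call -->
<invoke name="InvokeGetAvailableCars" partnerLink="GetAvailableCars"
        operation="getAvailableCars" inputVariable="request" outputVariable="response">
  <bpelx:toProperties>
    <bpelx:toProperty name="endpointURI" variable="endpointUriVar"/>
  </bpelx:toProperties>
</invoke>
```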

Those steps are described in more detail here:

The composite view should now be similar to this:


Decouple composite design from specific endpoints

We need to assign a concrete implementation of the GetAvailableCars service to the composite when deploying it. We could use any of the location-specific services (e.g. the Berlin service), but that is generally not a good idea: as soon as that particular service is unavailable, the composite can no longer be deployed.

Therefore we decouple the CarRental composite from any specific endpoint by adding the GetAvailableCars interface as an exposed service:


The mocked GetAvailableCars process is just there to decouple the caller from a certain location at deployment time. At run-time the mock will not be called but replaced with one of the services configured in the DVM. However, if the dynamic call fails, the mocked GetAvailableCars process returns an appropriate error message.

In the config plan used for deploying the composite we must set the location of the external reference so that the composite seems to be calling itself:
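A config plan fragment achieving this might look like the following sketch (host, port and composite path are placeholders):

```xml
<reference name="GetAvailableCars">
  <binding type="ws">
    <attribute name="location">
      <!-- point the reference at the composite's own exposed service -->
      <replace>http://soahost:8001/soa-infra/services/default/CarRental/GetAvailableCars?WSDL</replace>
    </attribute>
  </binding>
</reference>
```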


Thereby we don’t have any dependencies on location specific services while deploying the composite. At run-time, however, the endpointURI property will override the deployed settings.

For the sake of completeness, we should not forget that the dynamic call described here (using Oracle SOA Suite only) could also be achieved using the Oracle Service Bus (OSB) and its Dynamic Routing component. In scenarios where no service bus is available, however, our approach is certainly worth considering.

IT-Security (Part 2): WebLogic Server and Oracle Platform Security Services (OPSS)

OPSS Architecture

As we discussed in the previous article, OPSS is Oracle’s proposal regarding enterprise security services. It is a framework that provides a comprehensive set of security services. These services are based on Java technologies and follow a consistent approach for designing and applying security policies to Java EE applications and resources. We look at the OPSS architecture from two different perspectives, which are very closely connected to each other. I will review the advantages of OPSS for developers and administrators from the application’s perspective, and present the cooperation of technology components such as LDAP, application server and Oracle Fusion Middleware from the component’s perspective. Thereby, we can determine the main benefits of OPSS, as stated by Oracle:

  • Allows developers to focus on application and domain problems
  • Supports enterprise deployments
  • Supports several LDAP servers and SSO systems
  • Is certified on the Oracle WebLogic Server
  • Pre-integrates with Oracle products and technologies

Application’s point of view

Oracle Platform Security Services (OPSS) is both a security framework exposing security services and APIs, and a platform offering concrete implementation of security services. It includes these elements:

  • Common Security Services (CSS), the internal security framework on which Oracle WebLogic Server is based
  • Oracle Platform Services
  • User and Role APIs
  • Oracle Fusion Middleware Audit Framework

Figure 1 (Application’s perspective) illustrates OPSS‘s architecture from the application point of view. Such an architecture allows OPSS to support different security and identity systems without changing the APIs. OPSS is integrated with Oracle Fusion Middleware‘s management tools to administer and monitor the security policies implemented in the underlying identity management infrastructure. Therefore, OFM technologies such as Oracle SOA, Oracle WebCenter Suite, Oracle Application Development Framework (ADF) and Oracle Web Services Manager (OWSM) can use OPSS capabilities.

OPSS offers abstraction-layer APIs that isolate developers from security and identity management implementation details. In this way, developers can invoke the services provided by OPSS directly from the development environment (e.g. JDeveloper) using wizards, and administrators can configure the OPSS services in WebLogic Server. As you see in the figure, the uppermost layer consists of Oracle WebLogic Server and the components and Java applications running on the server; below this is the API layer consisting of Authentication, Authorization, CSF (Credential Store Framework), and User and Role APIs, followed by the Service Provider Interface (SPI) layer and the service providers for authentication, authorization, and others. The final and bottom layer consists of repositories including LDAP and database servers.

Figure 1 Application’s perspective

 OFM-Component’s point of view

Figure 2 OFM-Component’s perspective shows the various security components as layers. The top layer includes the OPSS security services; the next layer includes the service providers, and the bottom layer includes the OPSS security store with a repository of one of three kinds. OPSS provides auditing capabilities for components too.

The second layer, the Security Services Provider Interface (SSPI), can work with Java EE container security in Java Authorization Contract for Containers (JACC) mode as well as in resource-based (non-JACC) mode, providing resource-based authorization for the environment.

SSPI is a set of APIs for implementing pluggable security providers. A module implementing any of these interfaces can be plugged into SSPI to provide a particular type of security service. Therefore, OPSS has a consistent structure and is able to meet the requirements for integrating Java EE applications in general, and OFM components and Oracle security technologies such as OAM, OID and so on in particular.

Figure 2 OFM-Component’s perspective


IT-Security: WebLogic Server and Oracle Platform Security Services (OPSS)

IT security is popular in a way never known before! I love it!

Whenever I discussed this topic in a WebLogic Server workshop, I normally heard from administrators: “That’s not my thing, forget it!” But lately, everybody wants to know: “How can we secure our data and our information?!” To be honest, you need to know the application server you are using, and if you are not able to use the WebLogic Server security features, then this could become your problem.

WebLogic Server uses a security architecture that provides a unique and secure foundation for applications that are available via the Web. It is designed as a flexible security infrastructure that is able to respond to the security challenges on intranets and the Internet. We are able to use the security capabilities of WebLogic Server as a standalone feature to secure WebLogic Server and/or as part of a corporation-wide security management system.


In order to achieve a satisfactory level of security, we have to design an integrated security policy that takes everything into account: from the lack of resources to the increasing complexity of IT systems. The elementary principles of IT security are confidentiality and/or privacy, availability and integrity. Confidentiality and/or privacy means that information has to be protected against unauthorized disclosure. Availability means that services, IT system functions and information must be available to users when they need them. Integrity means that data must be complete and unaltered. Therefore, we understand a security policy as a policy that covers the protection objectives and broad-spectrum security measures in the sense of the acknowledged requirements of an organization.

Simply put, security is the protection of information that needs to be protected from unauthorized access. IT security is supported by technology, processes, policies and training, so that we can be sure that data stored in a computer or passed between computers is not compromised. Therefore, data encryption is a first step in the direction of IT security. In order to access specific resources, a user (normally) needs to provide his user name and password. Data encryption is the transformation of data into a form that cannot be understood without the decryption key(s).
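As an illustration of that last point (not specific to WebLogic), a short Java sketch using the standard javax.crypto API shows that data encrypted with AES/GCM can only be turned back into readable text with the same key and IV:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;

public class EncryptionDemo {

  // Encrypts the plaintext with a fresh AES-128 key, then decrypts it again.
  public static String roundTrip(String plaintext) {
    try {
      // Generate a random 128-bit AES key
      KeyGenerator kg = KeyGenerator.getInstance("AES");
      kg.init(128);
      SecretKey key = kg.generateKey();

      byte[] iv = new byte[12];            // 96-bit IV, recommended for GCM
      new SecureRandom().nextBytes(iv);

      Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
      cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
      byte[] ciphertext = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

      // Decryption requires the same key and IV; without them the bytes are unintelligible
      cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
      return new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
    } catch (GeneralSecurityException e) {
      throw new IllegalStateException(e);
    }
  }

  public static void main(String[] args) {
    System.out.println(roundTrip("secret data"));
  }
}
```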

Security Challenges

In a world in which we work with distributed IT landscapes, we face different challenges, e.g. network-based attacks and heterogeneity on the application layer, from the user interface down to the application. It is really difficult to maintain a standard security level across all members of a development team. We cannot expect all application developers to be able to solve security challenges such as privacy, identity management, compliance and audit. Another area is the interfaces between the application server and the backend databases.

A simple case is presented in the following diagram: most applications are multi-tiered and distributed over several systems. A client invokes an application or sends a request to a server. This case shows how many systems are involved in a transaction. We have to check all critical points and interfaces: network-based attacks, the user interface, the application server and so on.


On these grounds, we need to use an enterprise security framework that allows application developers to pick and choose from a full set of reusable, standards-based security services enabling security, privacy and audit. Oracle Platform Security Services (OPSS) is a security framework that runs on WebLogic Server and is available as part of WebLogic Server. It combines the security features of BEA’s internal security (WLS + Oracle Entitlement Server (OES)) and of OAS (Java Platform Security (JPS), formerly known as JAZN) to provide application developers, system integrators, security administrators and independent software vendors with a comprehensive security platform framework for Java SE and Java EE applications. In this form, Oracle is able to offer a uniform enterprise security policy and a self-contained, independent framework with identity management and audit services across the enterprise. The heart of the whole system beats in WebLogic Server.

WebLogic Server provides authentication, authorization, and encryption services with which you can guard these resources. These services cannot provide protection, however, from an intruder who gains access by discovering and exploiting a weakness in your deployment environment. Therefore, whether you deploy WebLogic Server on the Internet or on an intranet, it is a good idea to contact an independent security expert to go over your security plan and procedures, audit your installed systems, and recommend improvements.


PS4: Error for Updating OSB Domain after applying the latest Patch Set


A customer performed the following steps from the PS4 documentation regarding “Updating an OSB Domain after applying the latest Patch Set” (see “Updating an Oracle Service Bus Domain After Applying the Latest Patch Set” in the patch set installation guide):

Perform the following steps for each domain to be upgraded:

1. Make sure you have backed up and shut down all domains to be upgraded.

2. Under each Oracle Service Bus 11gR1 domain to be upgraded, open a command window and run the DOMAIN/bin/setDomainEnv.cmd/sh command.

4. In the command window, switch to the directory in which the upgrade scripts reside: OSB_ORACLE_HOME/common/lib/upgrade

5. On the command line, run the appropriate script for your operating system. Linux/Solaris:

java weblogic.WLST ./domainupgrade.py

Initializing WebLogic Scripting Tool (WLST) …

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

AND he gets the following error:

Problem invoking WLST – Traceback (innermost last):

File “/oracle/fmw/Oracle_OSB1/common/lib/upgrade/./domainupgrade.py”, line 368, in ?

File “/oracle/fmw/Oracle_OSB1/common/lib/upgrade/./domainupgrade.py”, line 16, in replaceOSBServerModuleVersion

ValueError: substring not found in string.index


This issue happens if the upgrade script has been run more than once and cannot find the proper substring in the configuration because it has already been changed.

“To resolve this issue, first verify that the upgrade was not already run by examining the time-stamp of the DOMAIN_HOME/config/config.xml of the domain being upgraded. If this file has recently changed, then it is likely that the upgrade ran to completion. You can also look at the config.xml file and verify the version in the section:


If you ran the upgrade more than once, you can solve the issue in two ways:

  1. Do not do anything and continue with the next steps of your upgrade, because the script has already been run.
  2. (My suggestion) Restore from backup and re-run your upgrade, because then you can be sure your upgrade is fully correct and you will not lose time on problem analysis if you find other problems after the upgrade.


The following note will help you: Running The Domainupgrade.Py Script Gives Error: “substring not found in string.index” (Doc ID 1313321.1)

Using Credential Store Framework when communicating with Oracle Human Workflow API

17 December 2013 · 1 comment

For connecting to the Oracle Human Workflow Engine via the provided client API, the username and password of an admin user are needed. These credentials can also be useful during task processing, when actions on a task have to be performed on behalf of a user, for example in case of holidays or illness. But how can the admin user’s credentials be managed in a secure way, independent of the target environment?

A first approach is to use a mechanism where the credentials are provided as context parameters in the web.xml of a facade Web Service that sits in front of the client API to hide complexity and to provide upgrade protection in case of API changes. When deploying this Web Service facade, the parameters are replaced using a deployment plan. This solution works, but has the disadvantage that the username and password of the admin user are contained in the deployment plan as clear text. From a SysOps perspective this mechanism is not appropriate.

So another possibility must be found to manage user credentials in a consistent and secure way. An approach to ensure the secure management of credentials is to use the Oracle Credential Store Framework (CSF), provided by Oracle Platform Security Services (OPSS). Configuring and using CSF is quite simple and done in a few steps:

1. Create Credentials Store in EM (Right click on Weblogic domain > [Domainname] and then choose Security > Credentials from the Dropdown menu)


2. Configure a System Policy to authorize access to the configured Credential Store (right-click on WebLogic Domain > [Domainname] and then choose Security > System Policies from the drop-down menu)


The configuration needed to allow read-only access from an application contains the following information:

  Type: Codebase
  Codebase: file:${domain.home}/servers/${weblogic.Name}/tmp/_WL_user/<APPLICATION_NAME>/-
  Permission Class:
  Resource Name: context=SYSTEM,mapName=WORKLIST-API,keyName=*
  Permission Actions: read

3. Deploy the application

Managing the credentials in the Credential Store may also be done by using WLST functionalities, which would be more maintainable from a SysOps perspective. Details on that could be found here. The system policies may be directly edited in <MW_HOME>/user_projects/domains/<DOMAIN_NAME>/config/fmwconfig/system-jazn-data.xml. But this approach may be error-prone and often not appropriate in clustered production environments, when OPSS configuration is done in a database or LDAP.
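For instance, creating and listing a credential with the OPSS WLST commands could look like this, to be run in wlst.sh after connecting to the Admin Server (the user, password, key and URL are placeholders; the map name matches the system policy shown above):

```text
connect('weblogic', 'welcome1', 't3://adminhost:7001')
createCred(map='WORKLIST-API', key='WF_ADMIN', user='workflowAdmin', password='myPassword', desc='Admin user for the Human Workflow API')
listCred(map='WORKLIST-API', key='WF_ADMIN')
```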

Accessing the Credential Store configured this way from a server application is done using the lines of code below. For developing the shown CSF access, jps-api.jar must be in the classpath of the application. At runtime, the needed dependencies are provided by Oracle WebLogic Server.

package com.opitzconsulting.bpm.connection;

import java.security.AccessController;
import java.security.PrivilegedExceptionAction;

import oracle.security.jps.service.JpsServiceLocator;
import oracle.security.jps.service.credstore.CredentialStore;
import oracle.security.jps.service.credstore.PasswordCredential;

public final class CsfAccessor {

  static PasswordCredential readCredentialsfromCsf(final String pCsfMapName, final String pCsfKey) {

    try {
      // Read access to the credential store requires a privileged block
      return AccessController.doPrivileged(new PrivilegedExceptionAction<PasswordCredential>() {

        public PasswordCredential run() throws Exception {
          final CredentialStore credentialStore = JpsServiceLocator.getServiceLocator().lookup(CredentialStore.class);
          return (PasswordCredential) credentialStore.getCredential(pCsfMapName, pCsfKey);
        }
      });
    } catch (Exception e) {
      throw new RuntimeException(String.format("Error while retrieving information from credential store for Map [%s] and Key [%s]", pCsfMapName, pCsfKey), e);
    }
  }
}
When more applications need to access credentials from the Credential Store, it is recommended to implement the access to CSF centrally and provide the functionality as a shared library within WebLogic Server. Otherwise you have to configure the corresponding System Policies, which authorize the access to CSF, separately for every new application that needs access to CSF. Using the shared library approach, only the shared library itself has to be authorized for accessing the Credential Store. Applications that need to access CSF must only specify the dependency on the shared library in the application’s deployment descriptor file, like weblogic-application.xml.

  <!-- weblogic-application.xml; the library name is an example -->
  <weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-application http://xmlns.oracle.com/weblogic/weblogic-application/1.3/weblogic-application.xsd">
    <library-ref>
      <library-name>csf-accessor-shared-lib</library-name>
    </library-ref>
  </weblogic-application>

In order to encapsulate the access to CSF and to avoid the publication of the PasswordCredential object instance, we decided to further encapsulate the CSF access by a special Connection object, which establishes the connection to the Human Workflow API and can provide a WorkflowContext for the corresponding admin user.

package com.opitzconsulting.bpm.connection;

import java.util.Map;

import oracle.bpel.services.workflow.client.IWorkflowServiceClient;
import oracle.bpel.services.workflow.client.IWorkflowServiceClientConstants.CONNECTION_PROPERTY;
import oracle.bpel.services.workflow.verification.IWorkflowContext;
import oracle.bpm.client.BPMServiceClientFactory;
import oracle.security.jps.service.credstore.PasswordCredential;

public class HumanWorkflowApiConnection {

  private IWorkflowServiceClient workflowServiceClient;

  public HumanWorkflowApiConnection(Map<CONNECTION_PROPERTY, String> pProperties) {
    final BPMServiceClientFactory bpmServiceClientFactory = BPMServiceClientFactory.getInstance(pProperties, null, null);
    workflowServiceClient = bpmServiceClientFactory.getWorkflowServiceClient();
  }

  public IWorkflowServiceClient getWorkflowServiceClient() {
    return workflowServiceClient;
  }

  public IWorkflowContext createWorkflowContextForAdmin(String pCsfMapname, String pCsfKey) {

    final PasswordCredential passwordCredential = CsfAccessor.readCredentialsfromCsf(pCsfMapname, pCsfKey);

    try {
      // Authenticate with the admin credentials read from CSF
      return workflowServiceClient.getTaskQueryService().authenticate(passwordCredential.getName(),
          passwordCredential.getPassword(), "");
    } catch (Exception e) {
      throw new RuntimeException(String.format("Exception while authenticating Admin User [%s]",
          passwordCredential.getName()), e);
    }
  }
}



Monitoring large flow traces with Oracle SOA Suite (Part 1)

12 November 2013 · 1 comment

End-to-end monitoring of BPEL process instances across composite borders is a great feature of Oracle SOA/BPM Suite 11g. It is shown in nearly every pre-sales presentation, and when you are used to SOA Suite 10g or to working with other kinds of distributed, heterogeneous systems, it is a real improvement.

But when you implement large process chains you might realize that the newly won process transparency can raise new challenges. Imagine you have a root process which creates several instances of sub-processes. In such a case without doing any extra work you will get one flow trace for the process and all of its sub-process instances. For large process chains you need to consider the following facts:

- Transparency: Although it shows an end-to-end view of the whole execution tree, trying to find a faulted sub-process might be a real challenge. It doesn’t matter if you start the search from the root process instance or from one of the sub-processes – the flow trace always displays all components of the execution context. When you click on a sub-process and you go back to the flow trace you might have to expand all child nodes again and again.

- DataSetTooLargeException: When your flow trace becomes longer and longer, you will observe that there is a maximum size for the audit trail that can be displayed by Enterprise Manager. Usually it results in a
java.lang.RuntimeException: Requested audit trail size is larger than threshold … chars

For large execution trees, sub-process instances might not be displayed or you might not be able to see things in detail.

- Low Memory: It is not only the visible representation of your instance which struggles. A huge audit trail implicitly means that the memory needed for executing your process instance grows. It can grow to such an extent that your process instance crashes because it runs low on memory.

- Purging: With large flow traces you should always keep an eye on the capacity of your soa-infra database. Why is that? Usually you should have purging routines installed to keep your system healthy – the regular deletion of “old” instances from the dehydration store. In a nutshell, the purging routines eliminate “completed” instances. If one of your sub-process instances goes into a faulted state, all processes in the same execution context are excluded from purging (except when you set the ignore_state attribute to true). This means that although 99 percent of your instances have been executed correctly and could be purged, the whole instance data, which can be huge as we stated earlier, is kept in your dehydration store.

So, how to deal with all these challenges? There are two relatively small changes which we describe in the following two posts:

1) Splitting large flow traces by setting a new execution context

2) Using the CompositeInstanceTitle-Property and composite sensors to simplify instance identification via the Enterprise Manager

Options for removing data in MDS Repository

The two main database schemas in the Oracle SOA Suite database repository are: _SOAINFRA and _MDS. Composite instance and runtime information are stored in the SOAINFRA schema. Commonly used metadata like WSDLs, XSDs, rules, fault policies, etc. as well as composite deployments are stored within the MDS schema.

With every deployment / import of the metadata artifacts, a new document version is created in the MDS. This means that re-importing an updated WSDL file into the MDS does not delete the previous version of the document. Furthermore, we sometimes need to remove unnecessary and unwanted files from the repository. If this is not considered, you might run into problems like the one below:

ORA-01654: unable to extend index DEV_MDS.MDS_PATHS_U2 by 128 in tablespace DEV_MDS
ORA-06512: at "DEV_MDS.MDS_INTERNAL_COMMON", line 865
ORA-06512: at "DEV_MDS.MDS_INTERNAL_COMMON", line 1021
ORA-06512: at "DEV_MDS.MDS_INTERNAL_COMMON", line 1121
ORA-06512: at "DEV_MDS.MDS_INTERNAL_COMMON", line 1216
ORA-06512: at "DEV_MDS.MDS_INTERNAL_COMMON", line 1872
ORA-06512: at line 1

In order to avoid production problems because of a full MDS tablespace you should clean up the schema from time to time. My post “Remove data in Oracle MDS” explains the following options to remove contents from MDS:

  • Remove directories and files from MDS using WLST
  • Remove directories and files from MDS using ANT
  • Remove files from MDS using WLST
  • Remove files from MDS using MBean Browser
  • Purge Metadata Version History using Fusion Middleware Control
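As a sketch of the WLST-based removal, the MDS WLST command deleteMetadata could be used like this (connection details and the document path are placeholders):

```text
connect('weblogic', 'welcome1', 't3://adminhost:7001')
deleteMetadata(application='soa-infra', server='soa_server1', docs='/apps/obsolete/**')
```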

Purging of failed SOA components

In the previous post the preferred settings for heavily used SOA composites were demonstrated. It was shown that, with the correct settings applied, only failed component instances are persisted in the database. This post discusses the purging possibilities for such component instances.

As stated in the earlier post, we are now facing a situation where the composite instance itself is not persisted, only its failed component instances. Since the composite is heavily used, even a short outage of a system used by the composite can lead to a huge amount of failures. Having resolved these faults, the need often arises to clear these faults from the system so that fault monitoring is not disturbed by them.

Instances without faults are not persisted in this case because no information from the composite is needed, so a straightforward approach would be to delete the failed instances with Enterprise Manager. In this case, however, this is not possible because the composite instance itself is not persisted and instance purging in Enterprise Manager is based on the composite instance.

Another possibility is to use the database purging scripts supplied by Oracle. These scripts are automatically present in the SOAINFRA schema once the schema is created with the RCU utility. If purging is done with the help of these scripts, the failed orphaned component instances are deleted with the purging run. However, purging generally has fixed time slots and it might be difficult to bring that slot forward if the failed component instances are to be deleted sooner.

The functions used by the purging scripts, however, are also available on their own. A sample script to delete the failed orphaned components could therefore be constructed by using the functions present in the SOAINFRA schema. The same functions can also be found in the scripts which are automatically supplied with every SOA Suite and JDeveloper installation and can be found in the subdirectory rcu/integration/soainfra/oracle/soa_purge under the installation directory.


DECLARE
  min_creation_date TIMESTAMP;
  max_creation_date TIMESTAMP;
  batch_size INTEGER;
  retention_period TIMESTAMP;
  composite_dn VARCHAR2(125);
  soa_partition_name VARCHAR2(10);
  composite_name VARCHAR2(100);
  composite_revision VARCHAR2(10);
  mediator_deleted BOOLEAN;
  decision_deleted BOOLEAN;
  bpel_deleted BOOLEAN;
  fabric_deleted BOOLEAN;

BEGIN

  min_creation_date := to_timestamp('2013-01-01','YYYY-MM-DD');
  max_creation_date := to_timestamp('2013-03-31','YYYY-MM-DD');
  batch_size := 20000;
  retention_period := max_creation_date;
  soa_partition_name := 'PARTITION';
  composite_name := 'TestProcess';
  composite_revision := '1.0';

  -- Build the composite DN, e.g. PARTITION/TestProcess!1.0
  IF soa_partition_name IS NOT NULL THEN
    composite_dn := soa_partition_name;
    IF composite_name IS NOT NULL THEN
      composite_dn := composite_dn || '/' || composite_name;
      IF composite_revision IS NOT NULL THEN
        composite_dn := composite_dn || '!' || composite_revision;
      END IF;
    END IF;
  END IF;

  bpel_deleted := soa_orabpel.deleteNoCompositeIdInstances(
                                    p_min_creation_date => min_creation_date,
                                    p_max_creation_date => max_creation_date,
                                    p_older_than => retention_period,
                                    p_rownum => batch_size,
                                    soa_partition_name => soa_partition_name,
                                    composite_name => composite_name,
                                    composite_revision => composite_revision);

  mediator_deleted := soa_mediator.deleteNoCompositeIdInstances(
                                    p_min_creation_date => min_creation_date,
                                    p_max_creation_date => max_creation_date,
                                    p_older_than => retention_period,
                                    p_rownum => batch_size,
                                    composite_dn => composite_dn);

  decision_deleted := soa_decision.deleteNoCompositeIdInstances(
                                    p_min_creation_date => min_creation_date,
                                    p_max_creation_date => max_creation_date,
                                    p_older_than => retention_period,
                                    p_rownum => batch_size,
                                    composite_dn => composite_dn);

  fabric_deleted := soa_fabric.deleteNoCompositeIdInstances(
                                    min_created_date => min_creation_date,
                                    max_created_date => max_creation_date,
                                    retention_period => retention_period,
                                    batch_size => batch_size,
                                    composite_dn => composite_dn);

  IF (bpel_deleted OR mediator_deleted OR decision_deleted OR fabric_deleted) THEN
    dbms_output.put_line('Further instances may be present in the system, please rerun the script to delete these');
    dbms_output.put_line('All orphaned instances deleted');


This script can be executed in the SOAINFRA schema on demand – even without starting a full purging run – thereby keeping fault monitoring effective. Keep in mind that while the script is running, performance of the application might degrade slightly because of the increased database load. It is therefore advisable to limit the run time of the script by narrowing the number of processed component instances – either by restricting the run to a specific composite or, if that is not possible, by choosing an appropriate batch size, thereby limiting the number of component instances deleted in one run.
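If this cleanup turns out to be needed regularly rather than only on demand, the block can be saved as a stored procedure and scheduled with the standard DBMS_SCHEDULER package. A sketch, assuming the script body has been wrapped in a procedure named purge_orphaned_components – the procedure name and the nightly schedule are only illustrative:

```sql
BEGIN
  -- Create a job that runs the (hypothetical) cleanup procedure
  -- every night at 02:00, outside the regular purging slot.
  dbms_scheduler.create_job(
    job_name        => 'PURGE_ORPHANED_COMPONENTS_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN purge_orphaned_components; END;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/
```

Scheduling it off-peak keeps the additional database load away from business hours while still clearing resolved faults daily.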

Categories: Oracle SOA Suite

Improve performance and maintenance of heavily used SOA composites

10 March 2013, 2 comments

As stated in a previous post, heavily used transient BPEL processes should be configured with some well-defined parameters in order to avoid storing too much data in the database, thus optimizing performance.

However, even if BPEL processes inside the composite are configured this way, data of other components like mediators and of the composite itself is still stored in the database. How much information is stored can be influenced by the audit level setting. According to the documentation, the audit levels are defined as follows:

  • Off: No composite instance tracking and payload tracking information is collected, so no composite instance entries appear in the console. No logging is performed. Note that disabling logging and the display of instances in Oracle Enterprise Manager Fusion Middleware Control Console can result in a slight performance increase for processing instances. Instances are created, but are not displayed.
  • Development: Enables both composite instance tracking and payload detail tracking. However, this setting may impact performance. This level is useful largely for testing and debugging purposes.
  • Production: Composite instance tracking is collected, but the Oracle Mediator service engine does not collect payload details and the BPEL process service engine does not collect payload details for assign activities (payload details for other BPEL activities are collected). This level is optimal for most normal production operations.

Generally, the SOA Infrastructure audit level is set to Production or Development – depending on the environment – and the composites are set to inherit this audit level. In case of an intensively used composite this means that a lot of data is persisted in the database.

The audit level, however, can be overridden for every composite. Since the data from the transient, short-lived composites is not needed, the setting Off for the relevant composite should be an interesting option to consider. Three questions must be answered:

  1. How much information is persisted from the composite instance?
  2. How much information is persisted from the component instances inside the composite?
  3. What happens in case of an error?

Our tests have shown that when instances complete successfully, no information at all is stored in the database – neither for the composite instance nor for any business rule, BPEL, or mediator instances inside it. In case of an unhandled error, however, the erroneous component instance is persisted in the database together with the details of the error, while the composite instance is not saved. This means that the error is visible in the error handling console and can even be handled if a fault policy is configured, but no information about the corresponding composite instance can be obtained: the composite instance id is shown as Unavailable.


We have seen that setting the composite audit level to Off in case of transient composites causes less data to be stored in the database, thereby increasing performance. The more instances of the composite are created per day, the higher the benefit of this approach.

We have also seen, however, that the composites should be well tested before setting this option because it can be quite cumbersome to track the origin of the error. This setting is therefore ideal for a production environment with heavy load, but suboptimal for a development environment with only a handful of instances but the need for extensive testing. This calls for a different setting in each environment.

The audit level setting of a composite can be modified via Enterprise Manager, but it can also be defined during deployment, the latter being more efficient in this case. In order to do so, the following property needs to be added to the composite.xml:

<property name="auditLevel" many="false">Off</property>

The setting here will be treated as the default setting but like many other properties, this setting can be overridden with a deployment plan containing the following:

<composite name="<COMPOSITE_NAME>">
  <property name="auditLevel">
    <replace>Off</replace>
  </property>
</composite>


By using deployment plans, the problem of needing different audit level settings in different environments can be easily solved: the audit level of an intensively used transient composite can be set to Off in a production environment whereas the Development setting can be used in a development environment.

Categories: Oracle SOA Suite
