IT-Security (Part 6): WebLogic Server and Authorization

Key words: IT-Security, WebLogic Server, WebLogic Security Framework, Authorization, authorization process, Role Mapping, Roles, Adjudication Process, Security Service Provider Interfaces (SSPIs), Users, Groups, Principals and Subjects

We discussed Authentication in Parts 4 and 5[1]; now let us focus on Authorization. Authorization, also known as access control, answers questions such as: “What can you access?”, “Who has access to a WebLogic resource?”, “Is access allowed?” and, in general, “Who can do what?” In order to guarantee the integrity, confidentiality (privacy), and availability of resources, WebLogic Server restricts access to these resources. In other words, the authorization process is responsible for granting access to specific resources based on an authenticated user’s privileges.

Authorization: What can you access?

After a user has been authenticated, the first question the system has to answer is: “What can you access?” WebLogic Server has to determine which resources are available to a particular user; this is decided using the user’s security roles and the security policy assigned to the requested WebLogic resource. A WebLogic resource is generally understood as a structured object used to represent an underlying WebLogic Server entity, which can be protected from unauthorized access using security roles and security policies. WebLogic resource implementations are available for[2]:

  • Administrative resources
  • Application resources
  • Common Object Model (COM) resources
  • Enterprise Information System (EIS) resources
  • Enterprise JavaBean (EJB) resources
  • Java Database Connectivity (JDBC) resources
  • Java Messaging Service (JMS) resources
  • Java Naming and Directory Interface (JNDI) resources
  • Server resources
  • Web application resources
  • Web service resources
  • Work Context resources

The Authorization Process

I am going to explain the whole process in a top-down approach. First of all, what happens during the authorization process? Figure 1 Authorization Process[3] shows how the WebLogic Security Framework communicates with a particular security provider and the Authorization providers, respectively.


Figure 1 Authorization Process

If a user wants to access a protected resource, the Resource Container that handles the type of WebLogic resource being requested receives the request (for example, the EJB container receives the request for an EJB resource). The container forwards the request and its parameters to the WebLogic Security Framework, including information such as the subject of the request and the WebLogic resource being requested. The Role Mapping providers use the request parameters to compute the list of roles to which the subject making the request is entitled and pass this list back to the WebLogic Security Framework. Based on this information, the Authorization providers make their access decisions, e.g. PERMIT or DENY. In addition, WebLogic Server provides auditing capabilities to collect, store, and distribute information about the requests and their outcomes. It can happen that multiple Authorization providers are configured; for such cases an Adjudication provider is available. The WebLogic Security Framework delegates the job of merging any conflicts in the Access Decisions rendered by the Authorization providers to the Adjudication provider. It resolves the conflicts and sends a final decision (TRUE or FALSE) back to the WebLogic Security Framework.[4]

WebLogic Security Framework

I mentioned the WebLogic Security Framework briefly in Parts 1 and 2[5]. Figure 2 WebLogic Security Service Architecture shows a high-level view of the WebLogic Security Framework. The framework consists of interfaces, classes, and exceptions in the weblogic.security.service package. It provides a simplified application programming interface (API) that can be used by security and application developers to define security services. Within that context, the WebLogic Security Framework also acts as an intermediary between the WebLogic containers (Web and EJB), the Resource containers, and the security providers[6].

Figure 2 WebLogic Security Service Architecture

The Security Service Provider Interfaces (SSPIs) can be used by developers and third-party vendors to develop security providers for the WebLogic Server environment[7].

Security Provider

Figure 1 Authorization Process shows the security provider as the next module; it provides security services to applications in order to protect WebLogic resources. A security provider consists of runtime classes and MBeans, which are created from SSPIs and/or MBean types. Security providers are either WebLogic security providers (shipped with WebLogic Server) or custom security providers. You can use the security providers that are provided as part of the WebLogic Server product, purchase custom security providers from third-party security vendors, or develop your own custom security providers.


To complete the authorization process, Role Mapping within the security provider is necessary. Simply put, a role mapper maps a valid token to a WebLogic user. Before we focus on roles, I would like to clarify a few more terms.

Users, Groups, Principals and Subjects

A user is an entity that has been authenticated by our security provider in the previous steps (see: Part 4 and 5 – Authentication Process[8]). A user can be a person, a software entity, or another instance of WebLogic Server. As a result of authentication, a user is assigned an identity, or principal. A principal is an identity assigned to a user or group as a result of authentication; principals are typically stored within subjects, and a subject can contain any number of them. Both users and groups can be used as principals by WebLogic Server.
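To make the relationship between subjects and principals more tangible, here is a small, hedged sketch that prints the principals of the currently authenticated subject. It assumes the code runs on an authenticated WebLogic thread and that WebLogic’s weblogic.security.Security helper class is available; apart from that, only standard JAAS types are used.

import java.security.Principal;
import javax.security.auth.Subject;
import weblogic.security.Security;

public class PrincipalPrinter {

    // Print all principals (users and groups) contained in the current subject.
    public static void printCurrentPrincipals() {
        // Subject associated with the currently executing, authenticated thread.
        Subject subject = Security.getCurrentSubject();
        for (Principal principal : subject.getPrincipals()) {
            // User and group principals both end up in this set.
            System.out.println(principal.getClass().getSimpleName()
                + ": " + principal.getName());
        }
    }
}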

Groups are logically ordered sets of users. Usually, group members have something in common. For example, a company may separate its IT-Department into two groups, Admins and Developers. In this form, it will be possible to define different levels of access to WebLogic resources, depending on their group membership. Managing groups is more efficient than managing large numbers of users individually. For example, an administrator can specify permissions for several users at one time by placing the users in a group, assigning the group to a security role, and then associating the security role with a WebLogic resource via a security policy. All user names and groups must be unique within a security realm[9].

Security Roles

A role is a dynamically computed privilege that is granted to users or groups based on specific conditions. The difference between groups and roles is that a group is a static identity that a server administrator assigns, while membership in a role is dynamically calculated based on data such as user name, group membership, or the time of day. Security roles are granted to individual users or to groups, and multiple roles can be used to create security policies for a WebLogic resource[10].
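From an application developer’s perspective, the roles computed by the Role Mapping providers surface through the standard Java EE APIs. The following hedged sketch shows a servlet checking whether the caller is currently in a role; the role name "Admins" is just an example and would have to match a role defined for your deployment.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ReportServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // The container asks the security framework whether the authenticated
        // caller is currently "in" the given security role.
        if (request.isUserInRole("Admins")) {   // example role name
            response.getWriter().println("Access granted for "
                + request.getUserPrincipal().getName());
        } else {
            // Caller is not in the role: reject the request.
            response.sendError(HttpServletResponse.SC_FORBIDDEN);
        }
    }
}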

Like groups, security roles allow you to restrict access to WebLogic resources for several users at once. However, unlike groups, security roles[11]:

  • Are computed and granted to users or groups dynamically, based on conditions such as user name, group membership, or the time of day.
  • Can be scoped to specific WebLogic resources within a single application in a WebLogic Server domain (unlike groups, which are always scoped to an entire WebLogic Server domain).

Granting a security role to a user or a group confers the defined access privileges to that user or group, as long as the user or group is “in” the security role. Multiple users or groups can be granted a single security role. It can be summarized as follows:

Groups are static and defined at the domain level (coarse granularity), while roles are dynamic and defined at the resource level (fine granularity). Continued…

See also the previous parts of IT-Security and Oracle Fusion Middleware:


[1] See:


[2] Oracle® Fusion Middleware Understanding Security for Oracle WebLogic Server, 11g Release 1 (10.3.6), E13710-06

[3] Oracle® Fusion Middleware Securing Oracle WebLogic Server 11g Release 1 (10.3.6), E13707-06

[4] Oracle® Fusion Middleware Securing Oracle WebLogic Server 11g Release 1 (10.3.6), E13707-06

[5] See:


[6] See:

[7] See:

[8] See:


[9] See:

[10] See:

[11] See:

camunda BPM – Mocking subprocesses with BPMN Model API

A common way to call a reusable subprocess is to use a call activity in the BPMN 2.0 model. With a call activity it is only necessary to add the process key of the subprocess to call, and its version, to the call activity properties; modeling can then continue. Apart from this, it is possible to define process variables that are passed between the main process and the subprocess.

But during unit testing of the main process, all subprocesses referenced by the defined process keys must exist in the process engine repository.

The easiest way to solve this problem is to replace the defined process key with the process key of a mock process that exists in the repository. But it is not advisable to change a process model for testing purposes only: it takes time to undo these changes once the real subprocess is completed, and such changes could be forgotten because everything has already been tested successfully.

Creating a mock process with the same process key as the real subprocess is not convenient either if there are more than a few subprocesses, which is often the reality.

A handy alternative since version 7.1 of camunda BPM is the use of the BPMN Model API.
It makes it possible to create, edit and parse BPMN 2.0 models as pure Java code.
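As a quick, hedged illustration of the parsing side (the creation side is shown in the helper method further below), an existing model file could be read and inspected roughly like this; the file name is only an example.

import java.io.File;

import org.camunda.bpm.model.bpmn.Bpmn;
import org.camunda.bpm.model.bpmn.BpmnModelInstance;
import org.camunda.bpm.model.bpmn.instance.CallActivity;

public class ModelInspector {

    public void listCallActivities() {
        // Read an existing model from the file system (file name is an example).
        BpmnModelInstance modelInstance = Bpmn.readModelFromFile(new File("mainProcess.bpmn"));

        // Print the process keys referenced by all call activities in the model.
        for (CallActivity callActivity : modelInstance.getModelElementsByType(CallActivity.class)) {
            System.out.println(callActivity.getName() + " calls " + callActivity.getCalledElement());
        }
    }
}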

Let’s look at an example

The following process model consists of a main process with two call activities.

Main Process with two Call-Activities

To have a reusable solution, a helper method is created and used by the test.
It creates a model instance by using the BPMN Model API and deploys it in the given process engine repository, as shown below.

 /**
  * Create and deploy a process model with one logger delegate as service task.
  *
  * @param origProcessKey key of the subprocess the call activity references
  * @param mockProcessName process name
  * @param fileName file name without extension
  */
 private void mockSubprocess(String origProcessKey, String mockProcessName,
     String fileName) {
   BpmnModelInstance modelInstance = Bpmn
       .createExecutableProcess(origProcessKey).name(mockProcessName)
       .startEvent().name("Start Point")
       .serviceTask().name("Log Something for Test")
           .camundaClass(MockLoggerDelegate.class.getName())
       .endEvent().name("End Point")
       .done();
   // deploy the generated model under the given file name
   repositoryService().createDeployment()
       .addModelInstance(fileName + ".bpmn", modelInstance).deploy();
 }

The primary goal of this test is to ensure that the main process ends successfully. Therefore a model instance for each call activity is created and deployed in the given repository. The main process itself is deployed via the @Deployment annotation. The following code snippet illustrates the implementation.

 @Test
 @Deployment(resources = "mainProcess.bpmn")
 public void shouldEnd() {

   // mock first sub process (file names are examples)
   this.mockSubprocess("firstSubProcessKey", "Mocked First Sub Process",
       "mockedFirstSubProcess");

   // mock second sub process
   this.mockSubprocess("secondSubProcessKey", "Mocked Second Sub Process",
       "mockedSecondSubProcess");

   // start main process (process key is an example)
   ProcessInstance mainInstance = runtimeService().startProcessInstanceByKey(
       "mainProcess");

   // verify via camunda-bpm-assert that the main process instance has ended
   assertThat(mainInstance).isEnded();
 }

The created model instances look identical – each consists of a start event, a service task that references a delegate, and an end event. The following code snippet shows the simple implementation of the used delegate.

import java.util.logging.Logger;

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

public class MockLoggerDelegate implements JavaDelegate {

  private final Logger LOGGER = Logger.getLogger(MockLoggerDelegate.class.getName());

  @Override
  public void execute(DelegateExecution execution) throws Exception {
    LOGGER.info("\n\n ..." + MockLoggerDelegate.class.getName()
        + " invoked by " + "processDefinitionId="
        + execution.getProcessDefinitionId() + ", activityId="
        + execution.getCurrentActivityId() + ", activityName='"
        + execution.getCurrentActivityName() + "'" + ", processInstanceId="
        + execution.getProcessInstanceId() + ", businessKey="
        + execution.getProcessBusinessKey() + ", executionId="
        + execution.getId() + " \n\n");
  }
}


Of course, it is possible to individualize these mocks depending on your test case. For example, you could create a delegate for each subprocess which sets specific process variables. This example only demonstrates the capability of this solution.
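As a hedged sketch of such an individualized mock, the following delegate sets a process variable that the main process could evaluate after the call activity returns; the variable name and value are purely illustrative.

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

public class MockResultDelegate implements JavaDelegate {

  @Override
  public void execute(DelegateExecution execution) throws Exception {
    // Simulate the outcome the real subprocess would produce
    // ("approvalResult" is an example variable name).
    execution.setVariable("approvalResult", "APPROVED");
  }
}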

Keep in mind that it is not recommended to replace your process models by using the BPMN Model API. But it is very useful for solving small problems in a simple way – just a few lines of Java code. After a subprocess has been completed, it is advisable to test its interaction with the main process, too.

And of course, do not forget to write automated integration tests ;-)

Oracle Application Testing Suite (OATS) – One tool for the whole testing process

One of the most challenging things during the process of testing an application, apart from the test definition itself, is to keep an overview of the entire process, i.e. knowing the status of the current test progress.

Here you want to know (obviously there are more things):

  • which requirements need to be tested
  • which tests belong to them, i.e. the association between requirements and tests
  • the test status, i.e. how many tests have been executed so far, and which of them failed
  • the issues resulting from failed test executions (issue tracking)

For all of this, Oracle provides the Oracle Application Testing Suite.

OATS – TestManager

This is the central tool that allows you to cover all aspects of the whole application testing process, i.e. defining test plans, adding your requirements, handling the whole issue tracking process, and adding tests, including JUnit tests via an existing ANT file and third-party tests via an executable file.

OATS also provides reports for the different categories right out of the box, as well as the ability to export each report either to Microsoft Excel or HTML.

Via an administration tool you can control user access based on roles. And as a goody, the most important roles are already on board, i.e. Planner, QA-Engineer, Developer, Tester, Read-Only Role and Full Access.

OATS – OpenScript

This is the Eclipse-based development environment in which the scripts that can be chosen via the TestManager’s “add Test” functionality are developed.

OpenScript gives you the possibility of developing scripts for so-called Functional Testing, which is simply automated browser/GUI testing, as well as creating scripts for load testing of, for example, Oracle Forms, Oracle ADF applications, etc.

Furthermore, OATS comes with OpenScript add-ons for Mozilla Firefox (at least version 30 is not supported yet) and Microsoft Internet Explorer (IE 11 works fine on my machine), which give you the ability to start recording scripts from the development environment.

OATS – LoadTest

Within this part of OATS you may, on the one hand, define several scenarios under which the script should run, e.g. the number of concurrent users. You also have the possibility to run the tests simulating different browsers, such as Chrome, Firefox, MSIE, Safari, etc., as well as simulating different connection speeds.

On the other hand, this tool gives you the possibility to gather server statistics, based on predefined metric profiles that exist for Oracle WebLogic and, on the database side, for SQL Server and Oracle Database.

Further Information

Download the latest version at:

Further information can be obtained here:



You’ve Got Mail: Inbound Email Processing in WLS/OSB integration scenarios

In an integration project we are currently replacing an existing integration platform with Oracle Service Bus 11g. Different incoming and outgoing message formats and protocols (HTTP, FTP, SMTP, etc.) are used by the external partners of our customer and therefore have to be supported. With OSB this is no problem at all, but polling a MS Exchange server for new e-mails is simply not possible with OSB standard tooling. The culprit is a bug in MS Exchange server, which advertises that it supports plain authentication for login, but actually does not ([1], search for AUTH=PLAIN). So trying to access an Exchange inbox from a proxy service ends up with failures that cannot be worked around.

So we decided to implement a custom Java service that does the polling, because with plain Java the bug can be worked around by setting the corresponding JavaMail session parameters described in [1]. The challenge from an implementation perspective is that in a clustered environment a service is in general active on all cluster nodes, so parallel access and therefore multiple processing of one specific e-mail is possible. The service therefore has to be implemented as a WebLogic Singleton service [2] to avoid this. A Singleton service is physically deployed to the cluster and thus available on all nodes, but it is only active on one specific cluster node. In case of problems on the node where the service is active, it may be activated on another node in the cluster automatically, depending on the failover configuration of the cluster.
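A minimal, hedged sketch of the JavaMail workaround could look like the following; the host name, account, password and the use of IMAP are placeholders for illustration. The relevant part is disabling AUTH=PLAIN via the session property, so that JavaMail falls back to another login mechanism supported by Exchange.

import java.util.Properties;
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Store;

public class ExchangeInboxPoller {

    public Message[] fetchMessages() throws Exception {
        Properties props = new Properties();
        // Work around the Exchange AUTH=PLAIN bug described in [1].
        props.setProperty("mail.imap.auth.plain.disable", "true");

        Session session = Session.getInstance(props);
        Store store = session.getStore("imap");
        // Host, user and password are placeholders.
        store.connect("exchange.example.com", "mailbox-user", "secret");

        Folder inbox = store.getFolder("INBOX");
        inbox.open(Folder.READ_ONLY);
        return inbox.getMessages();
    }
}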

Basically Singleton services may be implemented in two different fashions:

Standalone application

When implementing a Singleton service as a standalone application, it has to be bundled as a JAR file and placed under the <DOMAIN_HOME>/lib folder. Dependent third-party libraries not provided by WebLogic must also be available within this folder, with a reference in the Singleton JAR’s manifest. Afterwards the servers have to be restarted and the Singleton service has to be registered in the cluster using the WebLogic Console.




Part of an enterprise application

When implementing a Singleton service as part of an enterprise application, it has to be packaged inside an EAR-File which has to be deployed to the cluster. The registration of the Singleton to the Cluster is done by adding an entry to weblogic-application.xml.

Deploying a singleton service as part of an enterprise application is the more flexible and less invasive alternative regarding changes in the singleton implementation, because a simple redeployment of the application is sufficient. Using the standalone variant, a server restart is needed in case of changes in the Singleton’s implementation logic. In our concrete scenario we decided to implement the Mail Singleton service as part of an enterprise application.

After deploying the Singleton application to the cluster, it will be activated on one of the cluster nodes and starts polling the specified email account. When stopping the server on which the Singleton service is currently active, it will be deactivated on this node and directly activated on another node. This behaviour can be observed in the server logs because of the corresponding log outputs in the Singleton implementation’s activate() and deactivate() methods.


23:20:04.341 [[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] INFO  MailClientRunner - SingletonService MailClientRunner is initiated...
23:20:05.461 [[ACTIVE] ExecuteThread: '5' for queue: 'weblogic.kernel.Default (self-tuning)'] INFO  MailClientRunner - SingletonService MailClientRunner is activated...
23:20:06.736 [[ACTIVE] ExecuteThread: '5' for queue: 'weblogic.kernel.Default (self-tuning)'] INFO  MailReaderClient - FROM: ["Bernhardt, Sven" <>]
23:20:06.736 [[ACTIVE] ExecuteThread: '5' for queue: 'weblogic.kernel.Default (self-tuning)'] INFO  MailReaderClient - SENT DATE: [Sat Jul 12 23:15:03 CEST 2014]
23:20:06.736 [[ACTIVE] ExecuteThread: '5' for queue: 'weblogic.kernel.Default (self-tuning)'] INFO  MailReaderClient - SUBJECT: [Singleton Service Testmail]
23:20:07.001 [[ACTIVE] ExecuteThread: '5' for queue: 'weblogic.kernel.Default (self-tuning)'] INFO  MailReaderClient - CONTENT: [Hello,

this is a test mail.


23:21:16.131 [[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] INFO  MailClientRunner - SingletonService MailClientRunner has been deactivated...

23:21:22.967 [[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] INFO  MailClientRunner - SingletonService MailClientRunner is activated...
23:21:24.220 [[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] INFO  MailReaderClient - FROM: ["Bernhardt, Sven" <>]
23:21:24.220 [[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] INFO  MailReaderClient - SENT DATE: [Sat Jul 12 23:15:03 CEST 2014]
23:21:24.220 [[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] INFO  MailReaderClient - SUBJECT: [Singleton Service Testmail]
23:21:24.481 [[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] INFO  MailReaderClient - CONTENT: [Hello,
this is a test mail.

Finally, let’s have a short look at the implementation of the Singleton service:
package com.opitzconsulting.mail;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import weblogic.cluster.singleton.SingletonService;

public class MailClientRunner implements SingletonService {

    private static final Logger log = LoggerFactory.getLogger(MailClientRunner.class.getSimpleName());

    private MailReaderClient mailReaderClient;

    public MailClientRunner() {
        log.info("SingletonService MailClientRunner is initiated...");
    }

    @Override
    public void activate() {
        log.info("SingletonService MailClientRunner is activated...");

        // start polling the configured mail account (implementation not shown)
        mailReaderClient = new MailReaderClient();
    }

    @Override
    public void deactivate() {
        log.info("SingletonService MailClientRunner has been deactivated...");
    }
}

The interaction between Oracle Service Bus and the Singleton mail service has been implemented using JMS queues. The mail service reads the mails, converts the content (CSV, XML) from the mail body or from attachments, creates a uniform message format that is independent of protocol as well as format, and enqueues it into the corresponding queues. From there OSB dequeues the messages and does the further processing. The logic from this point on is the same as that used for the other interfaces. With this implementation approach, combining the strengths of JEE and OSB, we created a flexible, maintainable and standards-based way to integrate inbound email processing into our final integration architecture.
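A hedged sketch of the enqueuing step could look like this; the JNDI names of the connection factory and the queue as well as the payload are placeholders and depend on the actual JMS configuration of the cluster.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class MailMessageSender {

    public void enqueue(String uniformPayload) throws Exception {
        InitialContext ctx = new InitialContext();
        // JNDI names are examples and depend on the actual JMS setup.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MailConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/InboundMailQueue");

        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            // The uniform message format created from the mail content.
            TextMessage message = session.createTextMessage(uniformPayload);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}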


ADF 12.1.3 – Exporting table data to CSV

While it was previously only possible to export the contents of a table or a TreeTable directly to the Microsoft Excel format, with version 12.1.3 Oracle extends the <af:exportCollectionActionListener/> tag with the option to export directly in CSV format.

All you have to do is select the CSV option, as shown, when adding the listener, e.g. to a command button.


The Filename property then controls, as usual, whether the export is displayed directly in the browser or whether the download dialog opens.





IT-Security: Part 1 to 5 as PDF file

Key words: IT-Security, Security Challenges, OPSS Architecture, WebLogic Server, JAAS, JAAS LoginModules, Authentication, Basic Authentication, Certificate Authentication, Digest Authentication, perimeter Authentication and Identity Assertion

Until now I have published five parts of a series of articles on IT-Security and Oracle Fusion Middleware:


I am going to continue the IT-Security series of articles, and you can access the complete first five parts as a PDF file here:


Oracle BPM 12c – Quick Start Installation (uncensored)

Getting started in 15 minutes!

One of the challenges with previous releases was that SOA & BPM composites couldn’t be deployed and tested on the JDeveloper-integrated WebLogic server. Therefore a separate installation of SOA/BPM Suite or a virtual image was necessary to start developing. Now, with the new release, Oracle introduced a single-click installer for SOA & BPM Suite. Among other new features (like debugging & testing capabilities, templating, optimized footprint, etc.) this really helps to increase developer productivity.

The video below demonstrates that with Oracle SOA & BPM 12c it just takes 15 minutes to get started – install JDeveloper, start the Weblogic server, develop a simple Hello World, deploy the process and test it from Enterprise Manager.


Do you feel inspired? Just download the software from OTN and try it yourself (SOA-Download; BPM-Download). Have fun!

