
Review of the BPM Integration Days 2014

On Monday, 24 February 2014, this year's BPM Integration Days opened their doors. The visitors experienced two exciting days dealing with current topics from the areas of

  • Integration
  • Business Process Management (BPM)
  • Adaptive Case Management (ACM)
  • Process Mining and Analytics
  • Fast Data / Event Processing
  • Internet of Things (IoT)
  • Cloud

This time, we at OPITZ CONSULTING were involved in 5 sessions and 3 workshops (see http://thecattlecrew.wordpress.com/2014/01/10/bpm-integration-days-2014). The 18 sessions and 6 workshops once again placed great emphasis on underpinning the theoretical and conceptual content with practical examples, giving the participants a better understanding.

Core messages of the conference included:

  • Decisions in enterprises should be supported by IT, but cannot always be made automatically
  • Available information, internal as well as external, should be used to support business decisions as well as possible (context-specific decision making)
  • Users in the enterprise should move back into focus and should be supported optimally by IT in their daily work
  • The creativity of users is to be fostered; users must be understood as knowledge workers ("Empower the knowledge worker")
  • The trend is moving away from full automation of processes towards guided humans and adaptive processes
    • Guided human: all activities of a flow are known, but not the transitions between them
    • Adaptive: the activities of a flow are not known in advance; at runtime it is decided, based on knowledge that can itself change at runtime, e.g. through new external influences, which step is necessary next ("Living Knowledge")
  • Different disciplines, such as BPM and BI, have many touch points and must be considered in their entirety for optimal support of the business

In the panel on the topic "Benefits and pains of BPM in the enterprise – what needs to improve?", the current status quo of BPM as well as the future potential of BPM and BPMN were discussed. The outcome of this panel was:

  • BPM initiatives are increasingly finding their way into enterprises, but BPM is not mainstream yet
  • For a successful implementation of a holistic BPM approach, business departments and IT should work together even more closely
  • BPM will become mainstream in the medium or long term, among other things due to the standardized BPMN notation

All in all, these were once again two very interesting days that provided new food for thought, new perspectives, and current trends and innovations for the daily business. It will be exciting to see what happens in the discussed areas over the course of this year and which new developments and trends emerge. At the latest, we will know more at the next BPM Integration Days in 2015…


Using Credential Store Framework when communicating with Oracle Human Workflow API

17 December 2013

For connecting to the Oracle Human Workflow Engine via the provided client API, the username and password of an admin user are needed. These credentials can also be useful during task processing, when actions on a task have to be performed on behalf of a user, for example in case of holidays or illness. But how can the admin user's credentials be managed in a secure way, independent of the target environment?

A first approach is to provide the credentials as context parameters in the web.xml of a facade web service in front of the client API, which hides complexity and offers upgrade protection in case of API changes. When deploying this web service facade, the parameters are replaced using a deployment plan. This solution works, but has the disadvantage that the username and password of the admin user are contained in the deployment plan as clear text. From a SysOps perspective, this mechanism is not appropriate.

So another possibility must be found to manage user credentials in a consistent and secure way. An approach to ensure the secure management of credentials is to use the Oracle Credential Store Framework (CSF), provided by Oracle Platform Security Services (OPSS). Configuring and using CSF is quite simple and done in a few steps:

1. Create the Credential Store in EM (right-click on Weblogic domain > [Domainname], then choose Security > Credentials from the dropdown menu)

[Screenshot: Create Credential Store in EM]

2. Configure a System Policy to authorize access to the configured Credential Store (right-click on Weblogic domain > [Domainname], then choose Security > System Policies from the dropdown menu)

[Screenshot: Create System Policy in EM]

The configuration needed to allow read-only access from an application contains the following information:

Type: Codebase
Codebase: file:${domain.home}/servers/${weblogic.Name}/tmp/_WL_user/<APPLICATION_NAME>/-
Permission:
  Permission Class: oracle.security.jps.service.credstore.CredentialAccessPermission
  Resource Name: context=SYSTEM,mapName=WORKLIST-API,keyName=*
  Permission Actions: read

3. Deploy the application

Managing the credentials in the Credential Store may also be done using WLST, which is more maintainable from a SysOps perspective. Details on that can be found here. The system policies may be edited directly in <MW_HOME>/user_projects/domains/<DOMAIN_NAME>/config/fmwconfig/system-jazn-data.xml. But this approach may be error-prone, and it is often not appropriate in clustered production environments, where the OPSS configuration is stored in a database or LDAP.
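For orientation, the system policy from step 2 ends up as a grant entry in system-jazn-data.xml that looks roughly like the following sketch (the surrounding jazn-data structure is omitted, the exact layout may vary with the OPSS version, and <APPLICATION_NAME> remains a placeholder):

```xml
<grant>
  <grantee>
    <codesource>
      <url>file:${domain.home}/servers/${weblogic.Name}/tmp/_WL_user/<APPLICATION_NAME>/-</url>
    </codesource>
  </grantee>
  <permissions>
    <permission>
      <class>oracle.security.jps.service.credstore.CredentialAccessPermission</class>
      <name>context=SYSTEM,mapName=WORKLIST-API,keyName=*</name>
      <actions>read</actions>
    </permission>
  </permissions>
</grant>
```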

Accessing the Credential Store configured this way from a server application is done using the lines of code below. For developing the shown CSF access, jps-api.jar must be on the classpath of the application. At runtime, the needed dependencies are provided by Oracle Weblogic Server.

package com.opitzconsulting.bpm.connection;
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

import oracle.security.jps.service.JpsServiceLocator;
import oracle.security.jps.service.credstore.CredentialStore;
import oracle.security.jps.service.credstore.PasswordCredential;

public final class CsfAccessor {

  static PasswordCredential readCredentialsfromCsf(final String pCsfMapName, final String pCsfKey) {

    try {
      // CSF access requires elevated privileges, so the lookup is wrapped in a privileged action
      return AccessController.doPrivileged(new PrivilegedExceptionAction<PasswordCredential>() {

        @Override
        public PasswordCredential run() throws Exception {
          final CredentialStore credentialStore = JpsServiceLocator.getServiceLocator().lookup(CredentialStore.class);
          return (PasswordCredential) credentialStore.getCredential(pCsfMapName, pCsfKey);
        }
      });
    } catch (Exception e) {
      throw new RuntimeException(String.format("Error while retrieving information from credential store for Map [%s] and Key [%s]", pCsfMapName, pCsfKey), e);
    }
  }
}

When more applications need to access credentials from the Credential Store, it is recommended to implement the CSF access centrally and provide the functionality as a shared library within Weblogic Server. Otherwise, the corresponding system policies, which authorize the access to CSF, have to be configured separately for every new application. Using the shared library approach, only the shared library itself has to be authorized for accessing the Credential Store. Applications that need to access CSF only have to declare a dependency on the shared library in the application's deployment descriptor, e.g. weblogic-application.xml.

<wls:weblogic-application
  xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-application"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/javaee_5.xsd http://xmlns.oracle.com/weblogic/weblogic-application http://xmlns.oracle.com/weblogic/weblogic-application/1.3/weblogic-application.xsd">
  <wls:library-ref>
    <wls:library-name>csf-accessor-shared-lib</wls:library-name>
    <wls:implementation-version>1.0.0</wls:implementation-version>
  </wls:library-ref>
</wls:weblogic-application>
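For completeness: for the deployment descriptor above to resolve the library, the shared library's MANIFEST.MF has to declare matching attributes. A sketch, using the names from this example:

```
Extension-Name: csf-accessor-shared-lib
Specification-Version: 1.0.0
Implementation-Version: 1.0.0
```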

In order to encapsulate the access to CSF and to avoid exposing the PasswordCredential object instance, we decided to further wrap the CSF access in a dedicated connection object, which establishes the connection to the Human Workflow API and can provide a WorkflowContext for the corresponding admin user.

package com.opitzconsulting.bpm.connection;

import java.util.Map;

import oracle.bpel.services.workflow.client.IWorkflowServiceClient;
import oracle.bpel.services.workflow.client.IWorkflowServiceClientConstants.CONNECTION_PROPERTY;
import oracle.bpel.services.workflow.verification.IWorkflowContext;
import oracle.bpm.client.BPMServiceClientFactory;
import oracle.security.jps.service.credstore.PasswordCredential;

public class HumanWorkflowApiConnection {

  private IWorkflowServiceClient workflowServiceClient;

  public HumanWorkflowApiConnection(Map<CONNECTION_PROPERTY, String> pProperties) {
    final BPMServiceClientFactory bpmServiceClientFactory = BPMServiceClientFactory.getInstance(pProperties, null, null);
    workflowServiceClient = bpmServiceClientFactory.getWorkflowServiceClient();
  }

  public IWorkflowServiceClient getWorkflowServiceClient() {
    return workflowServiceClient;
  }

  public IWorkflowContext createWorkflowContextForAdmin(String pCsfMapname, String pCsfKey) {

    final PasswordCredential passwordCredential = CsfAccessor.readCredentialsfromCsf(pCsfMapname, pCsfKey);

    try {
      return workflowServiceClient.getTaskQueryService().authenticate(passwordCredential.getName(),
      passwordCredential.getPassword(), "jazn.com");
    } catch (Exception e) {
      throw new RuntimeException(String.format("Exception while authenticating Admin User [%s]", passwordCredential.getName()), e);
    }
  }
}
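To illustrate how the two classes play together, a hypothetical caller could look like the sketch below. The endpoint URL and the CSF map and key names are assumptions for this example, and the available CONNECTION_PROPERTY keys should be checked against the Workflow Services client API documentation for your release:

```java
package com.opitzconsulting.bpm.connection;

import java.util.HashMap;
import java.util.Map;

import oracle.bpel.services.workflow.client.IWorkflowServiceClientConstants.CONNECTION_PROPERTY;
import oracle.bpel.services.workflow.verification.IWorkflowContext;

public class HumanWorkflowApiConnectionExample {

  public static void main(String[] args) throws Exception {
    // Connection properties for the target SOA server (values are examples)
    final Map<CONNECTION_PROPERTY, String> properties = new HashMap<CONNECTION_PROPERTY, String>();
    properties.put(CONNECTION_PROPERTY.SOAP_END_POINT_ROOT, "http://soahost:8001");

    final HumanWorkflowApiConnection connection = new HumanWorkflowApiConnection(properties);

    // Map and key names must match the entries created in the Credential Store;
    // "WORKFLOW-ADMIN" is a hypothetical key name for this sketch
    final IWorkflowContext adminContext =
        connection.createWorkflowContextForAdmin("WORKLIST-API", "WORKFLOW-ADMIN");

    // adminContext can now be used for task queries or actions on behalf of users
    System.out.println(adminContext.getUser());
  }
}
```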

Links:

  1. http://docs.oracle.com/cd/E28280_01/apirefs.1111/e10660/toc.htm
  2. http://docs.oracle.com/cd/E21764_01/core.1111/e10043/csfadmin.htm
  3. http://www.redheap.com/2013/06/secure-credentials-in-adf-application.html

Oracle Open World 2013 – Wrap up

OOW 2013 is over and we're heading home, so it is time to lean back and reflect on the impressions we took away from the conference.

First of all: OOW was great! It was a pleasure to be a part of it. As already mentioned in our last blog article, it was the biggest OOW ever. In parallel to the conference, the America's Cup took place in San Francisco, and Oracle Team USA won. Amazing job by the team, and again congratulations from our side!

Back to the conference. The main topics for us were:

  • Oracle SOA / BPM Suite 12c
  • Adaptive Case management (ACM)
  • Big Data
  • Fast Data
  • Cloud
  • Mobile

Below we go into a little more detail on the key takeaways regarding these points:

Oracle SOA / BPM Suite 12c

During the five days at OOW, first details of the upcoming major release of Oracle SOA Suite 12c and Oracle BPM Suite 12c have been introduced. Some new key features are:

  • Managed File Transfer (MFT) for transferring big files from a source to a target location
  • Enhanced REST support by introducing a new REST binding
  • Introduction of a generic cloud adapter, which can be used to connect to different cloud providers, like Salesforce
  • Enhanced analytics with BAM, which has been totally reengineered (BAM Console now also runs in Firefox!)
  • Introduction of templates (OSB pipelines, component templates, BPEL activities templates)
  • EM as a single monitoring console
  • OSB design-time integration into JDeveloper (Really great!)
  • Enterprise modeling capabilities in BPM Composer

These are only a few of the things coming with 12c. We are really looking forward to the new release, because this seems to be really great stuff. The suite is becoming more and more integrated. From 10g to 11g it was an evolution in terms of developing SOA-based applications. With 12c, Oracle continues on this path – very impressive.

Adaptive Case Management

Another fantastic topic was Adaptive Case Management (ACM). The Oracle PMs did a great job, especially at the demo grounds, showing the upcoming Case Management UI (which will be available in 11g with the next BPM Suite MLR patch), the roadmap, and the differences from traditional business process modeling. They were very busy during the conference, because a lot of partners and customers were interested :-)

Big Data

Big Data is one of the current hype topics. Because of huge data volumes from different internal and external sources, handling this data becomes more and more challenging. Companies need to analyze the data to optimize their business, and the challenge is that the amount of data grows daily. To store and analyze the data efficiently, a scalable and flexible infrastructure is necessary, and it is important that hardware and software are engineered to work together. Therefore, several new features of the Oracle Database 12c, like the new in-memory option, were presented by Larry Ellison himself. On the hardware side, new server machines like the Fujitsu M10 and new processors, such as Oracle's new M6-32, were announced. The performance improvements when combining one of these hardware components with the improved software solutions were really impressive. For more details, please take a look at our previous blog post.

Regarding Big Data, Oracle also introduced their Big Data architecture, which consists of:

  • Oracle Big Data Appliance, which is preconfigured with Hadoop
  • Oracle Exadata, which stores huge amounts of data efficiently to achieve optimal query performance
  • Oracle Exalytics as a fast and scalable business analytics system

Analysis of the stored data can be performed using SQL by streaming the data directly from Hadoop to an Oracle Database 12c. Alternatively, the analysis can be implemented directly in Hadoop using R. In addition, the Oracle BI tools can be used to analyze the data.

Fast Data

Fast Data is a complementary approach to Big Data. A huge amount of mostly unstructured data comes in via different channels at high frequency. The analysis of these data streams is also important for companies, because the incoming data has to be analyzed for business-relevant patterns in real time, so these patterns must be identified efficiently and with high performance. Demos combining in-memory data grid solutions with Oracle Coherence and Oracle Event Processing showed very impressively how efficient real-time data processing can be.
One example of a Fast Data solution shown during OOW was the analysis of Twitter streams regarding customer satisfaction: feeds containing negative words like "bad" or "worse" were filtered, and after a defined threshold was reached within a certain timeframe, a business event was triggered.
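Stripped of any CEP engine specifics, the pattern from the Twitter example boils down to filtering events for keywords, counting matches within a sliding time window, and firing a business event once a threshold is reached. A minimal, self-contained Java sketch of that idea (illustrative code, not Oracle Event Processing API usage; all names and values are invented):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

/** Minimal sliding-window threshold detector, mimicking the Twitter example. */
public class NegativeFeedDetector {

  private static final List<String> NEGATIVE_WORDS = Arrays.asList("bad", "worse");

  private final long windowMillis;
  private final int threshold;
  private final Deque<Long> matchTimestamps = new ArrayDeque<Long>();

  public NegativeFeedDetector(long windowMillis, int threshold) {
    this.windowMillis = windowMillis;
    this.threshold = threshold;
  }

  /** Returns true when the threshold of negative feeds within the window is reached. */
  public boolean onFeed(String text, long nowMillis) {
    // Evict matches that have fallen out of the time window
    while (!matchTimestamps.isEmpty() && nowMillis - matchTimestamps.peekFirst() > windowMillis) {
      matchTimestamps.removeFirst();
    }
    final String lower = text.toLowerCase();
    for (String word : NEGATIVE_WORDS) {
      if (lower.contains(word)) {
        matchTimestamps.addLast(nowMillis);
        break; // count each feed at most once
      }
    }
    return matchTimestamps.size() >= threshold; // trigger the business event
  }

  public static void main(String[] args) {
    final NegativeFeedDetector detector = new NegativeFeedDetector(60000, 3);
    System.out.println(detector.onFeed("service is bad", 1000));   // false
    System.out.println(detector.onFeed("even worse today", 2000)); // false
    System.out.println(detector.onFeed("bad experience", 3000));   // true
  }
}
```

A CEP engine does the same job declaratively and at scale, but the window-plus-threshold mechanics are exactly these.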

Cloud

Another key trend in the IT market is of course Cloud Computing and what it means for companies and their businesses. Oracle announced their Cloud strategy and vision – companies can focus on their real business while all of the applications are available via Cloud. This also includes Oracle Database or Oracle Weblogic, so that companies can also build, deploy and run their own applications within the cloud. Three different approaches have been introduced:

  • Infrastructure as a Service (IaaS)
  • Platform as a Service (PaaS)
  • Software as a Service (SaaS)

Using the IaaS approach, only the infrastructure components are managed in the Cloud. Customers are very flexible regarding memory, storage, or the number of CPUs, because those parameters can be adjusted elastically. The PaaS approach means that, besides the infrastructure, the platforms necessary for running applications (such as databases or application servers) are also provided in the Cloud. Here customers can also decide whether installation and management of these components should be done by Oracle. The SaaS approach is the most complete one: all applications a company uses are managed in the Cloud. Oracle plans to provide all of their applications, like ERP systems or HR applications, as Cloud services.

In conclusion this seems to be a very forward-thinking strategy, which opens up new possibilities for customers to manage their infrastructure and applications in a flexible, scalable and future-oriented manner.

As you can see, our OOW days have been very interesting. We collected a lot of helpful information for our projects. The innovations presented at the conference are great, and being part of this was even greater! We are looking forward to next year's conference!


OSB 11g: Stuck Threads when using inbound database adapter

When using a polling database adapter in an OSB proxy service, one may notice the following behaviour in Weblogic Server:

  • an exception in the server logs about one or more stuck threads, like this:

<BEA-000337> <[STUCK] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)' has been busy for "705" seconds working on the request "weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl@21b9db0", which is more than the configured time (StuckThreadMaxTime) of "600" seconds. Stack trace:
Thread-118 "[STUCK] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)'" <alive, suspended, waiting, priority=1, DAEMON> {
-- Waiting for notification on: oracle.tip.adapter.db.InboundWork@21b8e86[fat lock]
java.lang.Object.wait(Object.java:???)
oracle.tip.adapter.db.InboundWork.run(InboundWork.java:498)
oracle.tip.adapter.db.inbound.InboundWorkWrapper.run(InboundWorkWrapper.java:43)
weblogic.work.ContextWrap.run(ContextWrap.java:39)
weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
weblogic.work.ExecuteThread.execute(ExecuteThread.java:203)
weblogic.work.ExecuteThread.run(ExecuteThread.java:170)
}>

  • and/or the server health state of the OSB managed server changing from "Ok" to "Warning"

Such behaviour alerts administrators, suggesting that something is wrong with the deployed applications or OSB services.

Looking into the Oracle documentation, one finds that this is behaviour by design and can be ignored. To verify that the OSB proxy service's database adapter is the source of the stuck threads, simply disable the proxy service in the OSB console: doing so makes the stuck threads disappear. Still, the behaviour seems strange at first glance. So why does it happen?

When defining an inbound database adapter, Weblogic threads are used to poll for events occurring in the defined database. Because OSB is designed to deliver high performance and throughput, a number of threads, which depends on the numberOfThreads property in the adapter's JCA file, is exclusively reserved for the database adapter to perform the inbound polling. For performance reasons, these reserved threads are never released and never returned to the corresponding thread pool. After the configured thread timeout, which is 600 seconds by default, the stuck thread behaviour occurs.
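For illustration, the mentioned property lives in the adapter's .jca file, roughly as in the following fragment (the descriptor and query names are invented for this sketch, and further activation-spec properties are omitted):

```xml
<activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
  <property name="DescriptorName" value="PollNewOrders.Order"/>
  <property name="QueryName" value="PollNewOrdersSelect"/>
  <property name="NumberOfThreads" value="4"/>
</activation-spec>
```

With a value of 4, four Weblogic threads would be permanently reserved for polling and eventually reported as stuck.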

Although this is the default behaviour, it is really confusing and could lead to serious problems if real threading problems, not caused by the adapter threads, occur in deployed applications or services and are therefore not noticed and handled in time. So what can be done to get rid of the stuck adapter threads?

The Oracle documentation proposes to define a separate Work Manager and to configure it as the proxy service transport's dispatch policy. To do so, the following steps have to be performed:

  • Define a custom global Work Manager using the Weblogic console, with the OSB managed server as deployment target

[Screenshot: Work Manager definition]

  • Configure the newly defined Work Manager to ignore stuck threads

[Screenshot: Work Manager stuck thread configuration]

  • Configure the OSB proxy service transport's dispatch policy to use the newly defined Work Manager

[Screenshot: OSB proxy service transport configuration]

Afterwards, the stuck thread behaviour caused by the OSB proxy service and its configured inbound database adapter should not show up again.

Standard-based integration of human interaction in SOA

26 May 2012

IT infrastructures and systems following the SOA paradigm are built upon standards. In such scenarios, BPMN 2.0 can be used for the automation of business processes, for example. The processes can be executed by a process engine. In such business processes, the integration of human actors is often needed to make complex decisions that cannot be made by a machine. The communication between the human actor and the process engine is done via tasks, which are often managed by separate task engines. Authorized users are able to work on the tasks using a provided task UI, a so-called inbox or tasklist application. As one can see, there are at least two consumers for the services provided by a task engine: the task UIs and the process engines.

Although a specification for the standards-based integration of human interaction in SOA exists, the WS-HT (WS-HumanTask) specification, task engine interfaces are currently vendor-specific and not standards-based, e.g. using RMI or native Java method invocation for communication. This leads to the following problems:

  • Portability issues
  • Interoperability issues
  • Tight coupling

To deal with the problems of vendor-specific interfaces in the area of human interaction, I defined an adapter framework, the Generic Human Interaction Adapter (GHIA), which provides an interface based on the definitions of the WS-HT specification. Features of the GHIA framework are a standards-based interface (SOAP based on WS-HT, and REST) and the ability to transparently communicate with more than one task engine, e.g. for the synchronization of different inboxes. Scenarios where the GHIA framework can be used to great benefit are:

  • Integration scenarios (more than one task engine is used)
  • Decoupling scenarios (e.g. decoupling task UIs from the vendor-specific interfaces)
  • Migration scenarios (when moving from one release to a higher one, to ensure that long-running processes with human interaction can be completed on the older platform)
To validate the framework, I implemented a prototype using a subset of the operations defined by WS-HT, in which the task engines of the Oracle BPM Suite 11g and the Activiti BPM Platform 5.8 are integrated transparently behind one standards-based interface. For this scenario, using the GHIA framework leads to the following architectural layout: [Architecture diagram]

The prototype shows that the expectations I had before starting the implementation of the framework could be met. As a result, a lightweight and easily extendable framework has been created, which will be refined and extended over the next months. Interested readers will be kept updated on the further evolution of the GHIA framework in this blog.
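The core idea of such an adapter framework can be sketched in a few lines of plain Java: a facade implements a WS-HT-style interface and delegates to vendor-specific adapters, merging their results so that consumers see only one interface. The following is a simplified illustration, not the actual GHIA code; all names are invented:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/** Subset of a WS-HT-style task interface, as a vendor-neutral abstraction. */
interface TaskEngineAdapter {
  List<String> getMyTasks(String user);
  void completeTask(String taskId);
}

/** Facade that merges several vendor-specific task engines behind one interface. */
class GenericHumanInteractionFacade implements TaskEngineAdapter {

  private final List<TaskEngineAdapter> adapters;

  GenericHumanInteractionFacade(TaskEngineAdapter... adapters) {
    this.adapters = Arrays.asList(adapters);
  }

  @Override
  public List<String> getMyTasks(String user) {
    // Synchronize different inboxes: collect the tasks from all engines
    final List<String> allTasks = new ArrayList<String>();
    for (TaskEngineAdapter adapter : adapters) {
      allTasks.addAll(adapter.getMyTasks(user));
    }
    return allTasks;
  }

  @Override
  public void completeTask(String taskId) {
    // Simplification: a real implementation would route the call
    // to the engine that owns the task, e.g. via a task-id prefix
    for (TaskEngineAdapter adapter : adapters) {
      adapter.completeTask(taskId);
    }
  }
}

/** Trivial in-memory engine used to demonstrate the facade. */
class InMemoryTaskEngine implements TaskEngineAdapter {
  private final List<String> tasks;

  InMemoryTaskEngine(String... tasks) {
    this.tasks = new ArrayList<String>(Arrays.asList(tasks));
  }

  public List<String> getMyTasks(String user) {
    return new ArrayList<String>(tasks);
  }

  public void completeTask(String taskId) {
    tasks.remove(taskId);
  }
}
```

Task UIs then depend only on TaskEngineAdapter, which is what decouples them from the vendor-specific engine interfaces.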

Review of the SOA, BPM & Integration Days 2012 Spring

The SOA, BPM & Integration Days (29 February – 1 March) once again offered interesting talks and workshops on current topics from the areas of adaptive processes, SOA security, and system integration. Beyond the talks and workshops, there was also plenty of room for an exchange of opinions and experiences between the speakers present and other experts from different fields.

On the first day, there were four talks per topic area. Topics included:

  • Mastering the complexity of adaptive processes
  • Significance and role of Adaptive Case Management (ACM)
  • Challenges in integrating heterogeneous systems
  • Combining Master Data Management (MDM) with integration approaches
  • Presentation of a transparent procedure for identifying SOA services
  • Challenges in the area of SOA security

To kick off the day, Ralf Müller motivated the topic of adaptive processes in a keynote, going into the possibilities of implementing them with the Oracle BPM Suite. In a second keynote, Sebastian Schreiber demonstrated various system attacks in a live hacking session, for example how an SQL injection can be performed by manipulating a barcode.

The day closed with a speaker panel, moderated by Jürgen Kress, discussing the question "Why does one need BPM solutions in Germany at all?". All conference participants were invited to report on their project experiences with BPM. An exciting discussion evolved, in which the complexity inherent in BPM was controversially debated; some panelists identified this complexity as the reason why there are currently only few BPM projects in Germany.

On the second conference day, the topics motivated the day before were treated in more depth in six different workshops. Among others, there was a very good workshop on BPMN by Dr. Volker Stiehl.

My conclusion on the event: very well done! Current problems and challenges from the areas of SOA, BPM & Integration were addressed, and a forum was offered to exchange experiences and insights and to discuss current developments in these areas. I am already looking forward to the next SBI Days!
