Archive for the ‘Uncategorized’ Category

Short recap on OFM Summer Camps 2014

Last week the Oracle Fusion Middleware summer camps took place in Lisbon. More than 100 participants attended the event, learning about the new features and enhancements that arrive with the recently released FMW 12c. In four parallel tracks the highlights of the new major release were presented to the attendees; hands-on labs offered a first impression of the new platform features and of the markedly increased productivity delivered by the enhanced, consolidated tooling.

The four tracks had different focuses regarding the new features of the 12c release of the Oracle Middleware platform:

  • SOA 12c – focusing on application integration, including Oracle Managed File Transfer (MFT), and fast data with Oracle Event Processing (OEP)
  • BPM 12c – focusing on Business Process Management, the new enhanced Business Activity Monitoring (BAM) and Adaptive Case Management (ACM)
  • SOA/BPM 12c (Fast track) – Combined track, covering the most important enhancements and concepts with reference to SOA and BPM 12c
  • Mobile Application Framework (MAF) Hackathon – Development of mobile applications using the newly released MAF (formerly known as ADF mobile)

The main topics addressed by the new OFM 12c release are:

  • Cloud integration
  • Mobile integration
  • Developer’s performance
  • Industrial SOA

Cloud integration

Integrating cloud solutions into grown IT system landscapes is complex. With SOA Suite 12c, Oracle provides a coherent and simple approach for integrating enterprise applications with existing cloud solutions. To this end, new JCA-based cloud adapters, e.g. for integrating with Salesforce, as well as a Cloud SDK are available. Service Bus can be used in this context to take care of transformation and routing, and it forms the backbone of a future-oriented, flexible and scalable cloud application architecture.

Mobile integration

Mobile enablement of enterprise applications is a key requirement and a necessity for application acceptance today. The new JCA REST adapter can be used to easily REST-enable existing applications. In combination with Oracle MAF and Service Bus, Oracle provides a complete mobile suite, enabling seamless development of new mobile innovations.

Developer’s performance

To enhance development performance, the new SOA and BPM Quickinstalls are introduced. Using those, developers can have a complete SOA or BPM environment installed in 15 minutes (see the blog post of my colleague). Furthermore, new debugging possibilities, different templating mechanisms (SOA project templates, custom activity templates, BPEL subprocesses and Service Bus pipeline templates) as well as JDeveloper as the single and only IDE deliver a much improved development experience.

Industrial SOA

Industrializing SOA is a main goal when starting a SOA initiative: transparent monitoring and management as well as a robust, scalable and performant platform are key to successfully implementing SOA-based applications and architectures. These points are addressed by the new OFM 12c release through the following features:

  • Lazy Composite Loading – Composites will be loaded on demand and not at platform startup
  • Modular Profiles – Different profiles are provided, each enabling only the features currently needed (e.g. only BPEL)
  • Improved Error Hospital and Error Handling
  • Optimized Dehydration behaviour
  • Integrated Enterprise Scheduler (ESS)

Further main enhancements that were introduced with SOA and BPM Suite 12c were:

  • Oracle BPM Suite 12c: Definition of Business Architecture, including definition of Key Performance Indicators (KPI) and Key Risk Indicators (KRI) to provide an integral overview from a high-level perspective; ACM enhancements in the direction of predictive analytics
  • Oracle BAM 12c: Completely re-implemented in ADF, allows operational analytics based on the defined KPIs and KRIs
  • Oracle MFT: Managed File Transfer solution for transferring big files from a specified source to a defined target; integration with SOA/BPM Suite 12c can be done by new JCA-based MFT adapters

Looking back, a great and very interesting week lies behind me, providing a bunch of new ideas and impressions of the new Fusion Middleware 12c release. I’m looking forward to using some of this great new stuff soon in real-world projects.

Special thanks to Jürgen Kress for the excellent organization of the event! I’m already looking forward to the next SOA Community event…

Finding differences in two Open-Office-Writer documents

If you write documents and get feedback from different people on different versions, it is a great pain to merge the documents and changes together. Microsoft Word has a compare functionality that works quite well, but the function to compare documents in Open Office Writer has never worked for me the way I expected.

Fortunately OO stores documents as a zip file containing XML files. The main content of the document is in the file content.xml. After changing the extension of the OO Writer document to zip, it is possible to open the file with your favorite zip application and extract the content.xml file. If you do this for both versions, you can compare the two files with your favorite text compare tool and you will see … hmmm yes … thousands of changes. This happens especially if the documents have been edited with different versions of Open Office or Libre Office. Most of the changes are not relevant for your comparison.

So we would like to eliminate the changes we are not interested in, to get an overview of the real changes.

We will do this using Notepad++, the tool I use most for work. Additionally, we need the XML Tools plugin to format the document. Both are free.

We open both versions of content.xml with Notepad++ and first run “Linarize XML” from XML Tools on both files.

In the next step we replace these six regular expressions with an empty string. This is done repeatedly until no further replacement is possible:

1. [a-zA-Z0-9\-]+:[a-zA-Z0-9\-]+="[^"]*"
2. <([a-zA-Z0-9\-]+:)?[a-zA-Z0-9\-]+\s*/>
3. <([a-zA-Z0-9\-]+:)?[a-zA-Z0-9\-]+\s*>\s*</([a-zA-Z0-9\-]+:)?[a-zA-Z0-9\-]+>
4. <text:changed\-region\s*>.*?<\/text:changed\-region>
5. <office:annotation\s*>.*?<\/office:annotation>
6. <text:bookmark-ref\s*>.*?<\/text:bookmark-ref>
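The same fixpoint replacement can also be scripted instead of clicking through Notepad++. Here is a sketch in Java using the six patterns above (class name and the sample XML are invented for illustration):

```java
import java.util.List;
import java.util.regex.Pattern;

/** Applies the six noise patterns repeatedly until the text stops changing. */
public class OdfNoiseStripper {

    private static final List<Pattern> NOISE = List.of(
        Pattern.compile("[a-zA-Z0-9\\-]+:[a-zA-Z0-9\\-]+=\"[^\"]*\""),         // 1: attributes
        Pattern.compile("<([a-zA-Z0-9\\-]+:)?[a-zA-Z0-9\\-]+\\s*/>"),          // 2: empty elements
        Pattern.compile("<([a-zA-Z0-9\\-]+:)?[a-zA-Z0-9\\-]+\\s*>\\s*"
            + "</([a-zA-Z0-9\\-]+:)?[a-zA-Z0-9\\-]+>"),                        // 3: empty element pairs
        Pattern.compile("<text:changed-region\\s*>.*?</text:changed-region>"), // 4: change tracking
        Pattern.compile("<office:annotation\\s*>.*?</office:annotation>"),     // 5: comments
        Pattern.compile("<text:bookmark-ref\\s*>.*?</text:bookmark-ref>")      // 6: bookmark references
    );

    /** Removes noise until a full pass changes nothing (the "repeat until stable" step). */
    public static String strip(String xml) {
        String previous;
        do {
            previous = xml;
            for (Pattern p : NOISE) {
                xml = p.matcher(xml).replaceAll("");
            }
        } while (!xml.equals(previous));
        return xml;
    }
}
```

After stripping, only the actual text content survives, e.g. `<text:p text:style-name="P1">Hello</text:p>` plus an empty span collapses to just the paragraph around "Hello".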

Finally we use the “Pretty print (libxml)” function of XML Tools to get the XML files formatted. Now it is possible to compare the two files with a text comparison tool and you will see the real text changes.

Bernhard Mähr @ OPITZ-CONSULTING

Kategorien: English, Uncategorized

camunda BPM – Mocking subprocesses with BPMN Model API

A common way to call a reusable subprocess is to use a call activity in the BPMN 2.0 model. With a call activity it is only necessary to add the process key and version of the subprocess to the call activity properties, and modeling can continue. In addition, it is possible to define process variables to pass between the main process and the subprocess.

But during unit testing the main process and all subprocesses referenced by the defined process keys must exist in the process engine repository.

The easiest way to solve this problem is to replace the defined process key with the key of a mock process which exists in the repository. But it is not advisable to change a process model for testing purposes only: it takes time to undo these changes when the real subprocess is completed, and such changes could be forgotten because everything is already tested successfully.

Creating a mock process with the same process key as the real subprocess is not convenient if there are more than a few subprocesses, which is often the case.

A handy alternative since version 7.1 of camunda BPM is the use of the BPMN Model API.
It makes it possible to create, edit and parse BPMN 2.0 models as pure Java code.

Let’s look at an example

The following process model consists of a main process with two call activities.

Main Process with two Call Activities


To have a reusable solution, a helper method is created and used by the test.
It creates a model instance using the BPMN Model API and deploys it in the given process engine repository, as shown below.

 /**
  * Create and deploy a process model with one logger delegate as service task.
  *
  * @param origProcessKey key of the subprocess to mock
  * @param mockProcessName process name
  * @param fileName file name without extension
  */
 private void mockSubprocess(String origProcessKey, String mockProcessName,
     String fileName) {
   BpmnModelInstance modelInstance = Bpmn.createExecutableProcess(origProcessKey)
       .name(mockProcessName)
       .startEvent().name("Start Point")
       .serviceTask().name("Log Something for Test")
           .camundaClass(MockLoggerDelegate.class.getName())
       .endEvent().name("End Point")
       .done();
   repositoryService().createDeployment()
       .addModelInstance(fileName + ".bpmn", modelInstance).deploy();
 }

The primary goal of this test is to ensure that the main process ends successfully. Therefore a model instance for each call activity is created and deployed in the given repository. The main process is deployed via the @Deployment annotation. The following code snippet illustrates the implementation.

 @Test
 @Deployment(resources = "mainProcess.bpmn")
 public void shouldEnd() {

   // mock first sub process (file name is an illustrative placeholder)
   this.mockSubprocess("firstSubProcessKey", "Mocked First Sub Process",
       "mockedFirstSubProcess");

   // mock second sub process (file name is an illustrative placeholder)
   this.mockSubprocess("secondSubProcessKey", "Mocked Second Sub Process",
       "mockedSecondSubProcess");

   // start main process (process key is an illustrative placeholder)
   ProcessInstance mainInstance = runtimeService().startProcessInstanceByKey(
       "mainProcessKey");

   // verify that the main process ended
   assertThat(mainInstance).isEnded();
 }

The created model instances look identical – each consists of a start event, a service task which references a delegate, and an end event. The following code snippet shows the simple implementation of the used delegate.

public class MockLoggerDelegate implements JavaDelegate {

  private final Logger LOGGER = Logger.getLogger(MockLoggerDelegate.class
      .getName());

  @Override
  public void execute(DelegateExecution execution) throws Exception {"\n\n ..." + MockLoggerDelegate.class.getName()
        + " invoked by " + "processDefinitionId="
        + execution.getProcessDefinitionId() + ", activityId="
        + execution.getCurrentActivityId() + ", activityName='"
        + execution.getCurrentActivityName() + "'" + ", processInstanceId="
        + execution.getProcessInstanceId() + ", businessKey="
        + execution.getProcessBusinessKey() + ", executionId="
        + execution.getId() + " \n\n");
  }
}


Of course, it’s possible to individualize these mocks depending on your test case. For example, you could create a delegate for each subprocess which sets specific process variables. This example only demonstrates the capability of this solution.

Keep in mind that it is not recommended to replace your process models by using the BPMN Model API. But it is very useful for solving small problems in a simple way – just a few lines of Java code. After completing a subprocess, it is advisable to test the interaction with the main process, too.

And of course, do not forget to write automated integration tests ;-)

IoT prototype – low-level thoughts about camunda BPM and Drools Expert (part 6)

As already mentioned in the second part of this series, building systems of this type consists of several components. In this part we describe the integration of a BPMN 2.0 capable workflow management system and a business rule engine in our IoT prototype.

Why do we need such things

We currently live in times of a knowledge society and fast-changing business requirements. Business flexibility is needed more than ever, and it requires flexible business processes in companies. This can be realized through the cooperation of IT and business in developing automated BPMN 2.0 models.

But this is not appropriate in an environment with many complex business rules which change multiple times in a short period. In such cases, business departments cannot wait for the next planned release of an application, so this scenario is predestined for the use of a business rule engine. It allows business rules to be externalized from an application and developed and managed in a unified way.

Thus, a process engine in combination with a business rule engine makes it possible to have modifiable automated processes and application-independent development of business rules.

Many companies have already recognized the need for a process engine and a business rule engine. Accordingly, we focused on improving our experience in integrating technologies that already exist in many companies with IoT. From the perspective of a company, it can be a key factor to keep investments already made in IT safe and to integrate them with IoT for new market potential.

Based on our focus and expertise, we decided to use camunda BPM as the BPMN 2.0-capable workflow management system and Drools Expert by JBoss as the business rule engine.

Use of camunda BPM

We have embedded the process engine inside our Spring application. In this way we could still use our Spring expertise. The integration went well in a few easy steps, which are described briefly below.

After adding the necessary project dependencies to our Maven project, we extended the web.xml with a DispatcherServlet and configured the embedded camunda ProcessEngine and the ProcessApplication inside it as shown below.

 <!-- bean classes below were lost in this snippet and are reconstructed from the
      standard camunda Spring setup; Hibernate 4 is assumed -->

 <!-- database transactions manager -->
 <tx:annotation-driven />
 <bean id="transactionManager"
   class="org.springframework.orm.hibernate4.HibernateTransactionManager">
   <property name="sessionFactory" ref="sessionFactory"></property>
 </bean>

 <!-- hibernate config -->
 <bean id="sessionFactory"
   class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
   <property name="dataSource" ref="dataSource" />
   <property name="hibernateProperties">
     <props>
       <prop key="">${}</prop>
       <prop key="hibernate.dialect">${hibernate.dialect}</prop>
       <prop key="hibernate.default_schema">${}</prop>
       <prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
       <prop key="hibernate.format_sql">${hibernate.format_sql}</prop>
     </props>
   </property>
   <!-- package reconstructed from the entity names used elsewhere in this series -->
   <property name="packagesToScan" value="com.opitz.iotprototype.entities" />
 </bean>

 <!-- datasource -->
 <bean id="dataSource"
   class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy">
   <property name="targetDataSource">
     <bean class="org.springframework.jdbc.datasource.SimpleDriverDataSource">
       <property name="driverClass" value="${jdbc.driverClassName}" />
       <property name="url" value="${jdbc.url}" />
       <property name="username" value="${jdbc.user}" />
       <property name="password" value="${jdbc.pass}" />
     </bean>
   </property>
 </bean>

 <!-- camunda process engine configuration -->
 <bean id="processEngineConfiguration"
   class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
   <property name="processEngineName" value="default" />
   <property name="dataSource" ref="dataSource" />
   <property name="transactionManager" ref="transactionManager" />
   <property name="databaseSchemaUpdate" value="true" />
   <property name="jobExecutorActivate" value="false" />
 </bean>

 <!-- embedded camunda process engine -->
 <bean id="processEngine"
   class="org.camunda.bpm.engine.spring.ProcessEngineFactoryBean">
   <property name="processEngineConfiguration" ref="processEngineConfiguration" />
 </bean>

 <!-- process application -->
 <bean id="processApplication"
   class="org.camunda.bpm.engine.spring.application.SpringProcessApplication"
   depends-on="processEngine" />

 <!-- camunda process engine services -->
 <bean id="repositoryService" factory-bean="processEngine"
   factory-method="getRepositoryService" />
 <bean id="runtimeService" factory-bean="processEngine"
   factory-method="getRuntimeService" />
 <bean id="taskService" factory-bean="processEngine"
   factory-method="getTaskService" />
 <bean id="historyService" factory-bean="processEngine"
   factory-method="getHistoryService" />
 <bean id="managementService" factory-bean="processEngine"
   factory-method="getManagementService" />

Next we configured a deployable process archive via the descriptor file processes.xml, as shown below.

<?xml version="1.0" encoding="UTF-8" ?>
<!-- root element reconstructed from the standard camunda deployment descriptor -->
<process-application xmlns="">

 <process-archive name="plug-switch-process">
  <property name="isDeleteUponUndeploy">false</property>
  <property name="isScanForProcessDefinitions">true</property>
 </process-archive>

</process-application>


That was all we needed to extend our application with camunda BPM. Afterwards we started with the development of our BPMN 2.0 model. We realized the use cases of reacting to the presence & absence of users. The main steps in both cases are

  • check if user is first/last,
  • switch Kitchen plugs on/off,
  • switch special plugs on/off (injected as business rules),
  • switch user assigned plugs on/off and
  • finally set current user state.

Finally, the BPMN 2.0 model looks as shown below:

PlugSwitchProcess modeled with camunda BPM


After completing the BPMN 2.0 model, we started to enrich it with Java implementations.
Therefore we defined so-called delegates (in some cases with field injection) in the DispatcherServlet context and used CDI to link these delegates with our model. The following code snippet shows a few definitions.

 <!-- camunda delegate services -->
 <bean id="applyFirstUser" class="com.opitz.iotprototype.delegates.ApplyFirstUserDelegate" />
 <bean id="switchKitchen" class="com.opitz.iotprototype.delegates.SwitchKitchenDelegate" />

Next we linked the delegates with the BPMN 2.0 model via the camunda Modeler properties view. The following image illustrates this step.

camunda BPMN 2.0 Modeler properties view


For the case that a building has no kitchen, we modeled a boundary error event as a business error, as shown below.

camunda boundary error event



Use of Drools Expert

We embedded the rule engine inside our Spring application by adding the necessary project dependencies to our Maven project, too. Afterwards we added business rule tasks to the BPMN 2.0 process model for use of the rule engine, as shown below.

camunda BPM Business Rule Task


The implementations are linked as delegates, too. First a KnowledgeBase (production memory) with a corresponding business rule file (*.drl) is created. We defined business rule files for both cases and added these files to our project as resources. The following code snippet shows the rule for reacting to the presence of the user ‘jack’.

package com.opitz.iotprototype

import com.opitz.iotprototype.entities.User;
import java.util.HashSet;

rule "Switch ON special rooms for jack"
    when
        u : User( username == "jack" )
    then
        HashSet<String> specials = new HashSet<String>();
        specials.add( "meetingroom" );
        specials.add( "office123" );
        insert( specials );
        System.out.println( "## drools: special rule to switch ON special rooms for jack ##" );
end

Of course, it’s possible to store these files elsewhere outside the project. The next steps were to create a new session (working memory) from the production memory and add the current data, so-called facts, to the working memory. The business rule engine then applies pattern matching over both memories and returns the matched results in the so-called agenda. Afterwards we filter the resulting plug names out of the session. The following image illustrates the mentioned parts of Drools Expert.

Drools Expert overview



Accumulated Experience

We have noticed that such heterogeneous systems need some new management tasks, e.g. device management, and even more alignment among different areas. Another point is that we embedded camunda BPM inside our Spring application and communicate with the process engine via the Java API. Alternatively, it’s possible to separate camunda BPM and Drools Expert from Spring and use the camunda REST API to communicate with the process engine.

In our prototype we have only a few business rules, yet it still takes some time to parse them, which is not adequate. A rule engine is more efficient for complex rule sets, say more than 25 rules.

Thus, whenever a device triggers an event, it is processed by Oracle CEP and fired via REST to our Spring application. Next it is forwarded to camunda BPM, which uses Drools Expert to determine the special rules for the given user and executes them. As you can see, integration of well-established technologies like Java, Spring and REST with the world of IoT is really possible.

Finally, the interaction between already established technologies and IoT enables new business and technical possibilities for companies and will soon become even more important.


If you would like to check the source code, have a look at our project on GitHub.

Or check out the other parts of this series:

IoT prototype – Retrospective. What did we learn? What did we miss? (part 5)

Okay, so controlling our lights now works automatically based on entering and leaving the building – that’s great! But what did we learn from all this? What do we believe can be transferred to a general concept found in (almost) all possible IoT projects?


What did we learn?

Communication takes more than one way

Meaning, there won’t be just TCP/IP-based traffic. Rather there will be diverse forms of communication, all having different characteristics concerning error correction, message validation, energy consumption, range, bandwidth, protocol support, … Satellite-based communication is just as unnecessary for a UPS parcel being delivered in a metropolitan area as 3G-based communication is useless for a container on a freight ship.

This means that for every form of data we send or receive, or intend to, we need to look at its transport path. If the transport is not ensured by another component, we need to be aware that data may not always reach its destination and that sending lots of data may be rather expensive (in terms of energy or money).

Thing data can be diverse

Thankfully this can be immediately handed off to our Big Data guys. They have enough knowledge about handling heterogeneous forms of data and creating value from them. Sending data to devices, on the other hand, could become difficult. Here generic instruction structures could help, which can then either be interpreted by the device or transformed into a special protocol the device understands. Interfaces & gateways between devices and servers won’t be rare.

Event Processing can help keep complexity to a minimum

Making sure every level of our architecture only sends the necessary data to the next layer is key to making sure nothing gets more data than really necessary. Sending every single temperature reading of every sensor in a managed facility to headquarters is both unnecessary and costly. Having a gateway that processes the events (temperature readings) and sends an event to headquarters only if something abnormal happens is the obviously better solution. This can be directly compared to reporting structures in companies: a manager only reports information to his superiors if he thinks they will find it relevant. Of course the higher levels must be able to modify this reporting behavior if needed – which leads to the next point.
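The gateway idea can be sketched in a few lines of Java (class name and thresholds are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative gateway: forwards only readings outside a normal band. */
public class TemperatureGateway {
    private final double min;
    private final double max;
    private final List<Double> forwardedToHeadquarters = new ArrayList<>();

    public TemperatureGateway(double min, double max) {
        this.min = min;
        this.max = max;
    }

    /** Called for every single sensor reading at the facility. */
    public void onReading(double celsius) {
        // only abnormal values travel up the reporting chain
        if (celsius < min || celsius > max) {
            forwardedToHeadquarters.add(celsius);
        }
    }

    public List<Double> forwardedToHeadquarters() {
        return forwardedToHeadquarters;
    }
}
```

In a real system the list would of course be replaced by an actual message to the next layer, and the band itself would be remotely configurable – that is the "modify the reporting behavior" requirement.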

Device Management is essential

While some companies already see what happens if you don’t manage your employees’ smartphones the way they should be managed, not managing all network-enabled devices within your company in 2020 could be a major risk. One should always be able to report the necessary metadata about the devices, such as location, energy state, configuration, workload, time until expected replacement, …

IoT more than ever before shows the necessity of a project-based organization

If one intends to “start one’s own” or expand an existing system in the IoT domain, more than ever before, things will always be different than last time. This means good project management is essential to the success of a new system. Making sure risks have been reflected upon, development will be successful and solutions will deliver the expected results are goals that are never easy to achieve, so if one doesn’t have the necessary manpower or expertise, one should always ask for help. Of course OC intends to do just that: help its customers achieve their goals.

Known concepts can be applied

Thankfully not everything is new. Star vs. mesh vs. bus communication strategy? These structures are already known, with all their pros and cons considered in different areas. Error correction in new protocols can learn from existing ones. Data handling can make use of Big Data, BI and Data Warehouse concepts. Device management can learn from current experiences in the mobile device environment. And integration of different sensor networks follows the same rules as integration of enterprise information systems in general.

What did we miss?

Transaction based communication

Let’s consider this example: in a smart home, the home knows what food is available and what is not. Now it could offer its inhabitants the service of always making sure dinner is possible. So if the fridge is empty and the inhabitant declares the inability to purchase something before he/she gets home, the home would order food from a known service in the area that the inhabitant enjoys. Now the home orders pizza and the service provider receives the request. Even though on a technical level the home knows the order has been received (TCP-based communication), a functional order approval should be returned as well. Now if this approval is not returned, should the home order somewhere else? How long should it wait before ordering somewhere else? And what if it orders somewhere else and then receives the approval?
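One possible shape for such a functional acknowledgement with a re-order deadline could look like the following sketch (states and timeout are invented; the "approval after re-ordering" question from above deliberately stays open):

```java
/** Illustrative order tracker: a technical receipt is not enough, we wait for a functional approval. */
public class OrderTracker {
    public enum State { SENT, APPROVED, REORDERED }

    private State state = State.SENT;
    private final long deadlineMillis;

    public OrderTracker(long sentAtMillis, long timeoutMillis) {
        this.deadlineMillis = sentAtMillis + timeoutMillis;
    }

    /** Functional approval from the service provider. */
    public void onApproval() {
        if (state == State.SENT) {
            state = State.APPROVED;
        }
        // an approval arriving after a re-order is exactly the unsolved case:
        // it would require a functional cancellation message in the other direction
    }

    /** Periodic check: no approval before the deadline means ordering elsewhere. */
    public State poll(long nowMillis) {
        if (state == State.SENT && nowMillis > deadlineMillis) {
            state = State.REORDERED;
        }
        return state;
    }
}
```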

Such use cases were not covered in our prototype. We had (very) stupid things: they take commands and either don’t send a response at all (433mhz) or do so only on a technical level (Philips Hue with REST API).

But in the case of transactions, there need to be functional messages in both directions.

Working with changing environments

Our prototype was stationary. That doesn’t mean we didn’t try it in different environments, but once set up, it didn’t move. A mobile IoT application would have to work with changing network connectivity and adapt accordingly. Sometimes communication may not be possible at all, so it needs to cache requests for an unknown time period and sync once a connection is possible again. This could happen, e.g., when technicians keep moving in and out of reception because they often work in basements. Oracle, as an example, offers a good solution for such data synchronization issues: Database Mobile Server takes care of syncing data between several mobile devices and a main database.
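The cache-and-sync behavior itself fits in a few lines; here is a minimal sketch (transport and message type are simplified to strings, names are made up):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

/** Illustrative offline buffer: requests queue up until connectivity returns. */
public class OfflineQueue {
    private final Deque<String> pending = new ArrayDeque<>();
    private final Consumer<String> transport;
    private boolean online = false;

    public OfflineQueue(Consumer<String> transport) {
        this.transport = transport;
    }

    public void send(String request) {
        if (online) {
            transport.accept(request);
        } else {
            pending.add(request); // cache for an unknown time period
        }
    }

    public void connectivityChanged(boolean nowOnline) {
        this.online = nowOnline;
        while (online && !pending.isEmpty()) {
            transport.accept(pending.poll()); // sync in original order
        }
    }
}
```

A production version would additionally need persistence (the device may reboot while offline) and conflict handling, which is where a product like Database Mobile Server comes in.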

Being even more generic

We built a prototype, so of course it could have been done better. Our rules for changing lights based on user events are written in a Drools file that is deployed with the entire application. This means all rules must be known at the time of deployment. A generic event→action framework would be nice, where users can tie generic events selected from a pool of possible events (e.g. temperature from online services, rain probability, time of sunrise/sunset, user presence events, …) to actions concerning lights and light groups. A user could then pick & mix events like so: “if I get home and the sun has already set, do X”.
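Such a pick & mix binding could boil down to something like this sketch (events are reduced to plain strings and names are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

/** Illustrative event->action registry a user could fill at runtime. */
public class EventActionRegistry {
    private final List<Predicate<String>> conditions = new ArrayList<>();
    private final List<Runnable> actions = new ArrayList<>();

    /** Ties a condition from the event pool to an action on lights or light groups. */
    public void bind(Predicate<String> condition, Runnable action) {
        conditions.add(condition);
        actions.add(action);
    }

    /** Every incoming event is matched against all user-defined bindings. */
    public void publish(String event) {
        for (int i = 0; i < conditions.size(); i++) {
            if (conditions.get(i).test(event)) {
                actions.get(i).run();
            }
        }
    }
}
```

The point of the sketch is that bindings are data, not code: unlike the Drools file baked into our deployment, users could add or remove them while the system is running.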

Having a diverse set of things to work with

We only had the 433mhz plugs and the smartphones of the users, plus our server & scanning Pi, as things. We did not (yet at least) connect any heating systems, A/Cs, fridges etc. to our system. Of course with every new device type the complexity rises, since those things can often interact with many other things and enable us to do cool new things. E.g. start playing the same music I just heard on my phone / in my car once I get home and am still alone. Opening my top-floor windows once it’s getting cool outside during the summer days and starting the fan so the hot air gets blown out. Or even more advanced things like collaborative automatic music selection. Imagine five people standing in one room. The smart home could access all their music profiles on known portals such as Spotify, iTunes, Google Music, … and find their intersecting set, which would then be the music to be played that everyone likes. Or a group meeting room that knows who enters it and sets the temperature according to the people’s preferences from past settings in their personal offices, so that no one freezes and no one breaks a sweat.


We built this prototype to learn from the experience and see what lies ahead if we play with IoT. This wasn’t a full-blown project, but it still helped us understand the trend better and learn more about it. We set up an architecture that we believe can work as an enterprise IoT structure:

IoT architecture

We believe this structure is what it takes to work with a big-scale network of devices, their data and their possible actions. While many concepts come together here (BPM, [No]SQL, Big Data, BI, device management, service gateways, event processing, …), it is still a realistic project concept that shouldn’t be shied away from. Rather, companies should sit their divisions together in a creative space, explain to everybody the concepts and ideas behind IoT, and then see what their departments could imagine working, both in their own domain as well as across departments. If you want help or guidance with your project idea or problem solving, contact us! We’ll be happy to work with you on your first IoT project!


Other parts of this series:

Kategorien: camunda BPM, English, IoT, Uncategorized

IoT prototype – low-level thoughts about the 433mhz communication (part 4)

This prototype started off as a personal project, so it’s not a big surprise that the 433mhz plugs were chosen as the first tech. They are in every household, and I wanted to retrofit what was already there to become a bit smarter. Being able to group, manage and remotely control these plugs is an idea many have had before, and there are several ways of controlling them out there.

But while these primitive plugs are maybe not the most elegant way of controlling things, they are still just what they are: primitive things that are a great way to learn how things look once you leave the ground of TCP-based, error-corrected communication over the web and step into the real world with all its different environments. No automatic error correction, just plain old radio signals. So if I wanted to keep track of which lights are on and which are off, I needed to make sure the plugs would receive my message. I read a bit about radio signal strengths and the limits set by the government. The 433mhz emitters are cheap, but that’s also why they can be problematic. But there turned out to be a really easy fix:

433 mhz antenna

While the default setup was only able to send its signal a few meters at most before the signal became too weak, adding the middle piece of a coat hanger as an antenna increased the range dramatically. It has something to do with half the length of the wavelength; I stopped reading the forums here because it got way into physics, and it worked for me, so I could turn my mind to the more important task: the software. The point is, the signal is so strong now that in about 200 tests, not once did my lamp fail to receive my signal. I also send the same signal three times within a second when trying to change a plug’s state, so if one transmission gets lost, the other two will probably get through.


Native stack

In order to control the Pi’s GPIO pins, I needed a mediator between my Spring application and the hardware. Luckily somebody had already created a few libraries that manage the most fundamental of problems, sending the codes out via the GPIO pins:

This allowed me to write some Java Native Interface adapters that access the shared C library and use its functionality. I had to rewrite some code, however, because the library did not support all DIP switch configurations I wanted. A 433mhz Elro chip-based plug has 2×5 binary DIP switches, so e.g. 01101 00101 would be an address. But the library only allowed one bit to be 1 in the second group: no 01110 11101 was allowed, just 01101 10000 for example. I wanted to have more addresses at my disposal, so I changed the code a bit, but not too much (see the GitHub changelog from September 2013).
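The address restriction can be made concrete with a tiny helper (the 2×5-switch layout is from the post above; the class and the address-space counting are my own back-of-the-envelope illustration):

```java
/** Illustrative check for the 5-bit DIP switch groups of a 433mhz Elro plug address. */
public class DipAddress {

    /** True if exactly one of the five switches is set, e.g. "10000". */
    public static boolean isOneHot(String group) {
        return group.length() == 5
            && group.chars().filter(c -> c == '1').count() == 1;
    }

    /**
     * Size of the address space: 2^5 house codes times either
     * 5 one-hot unit codes (the original library) or 2^5 arbitrary ones.
     */
    public static int addressSpace(boolean oneHotUnitsOnly) {
        return 32 * (oneHotUnitsOnly ? 5 : 32);
    }
}
```

Under these assumptions, lifting the one-hot restriction grows the usable address space from 160 to 1024 addresses, which is why the library change was worth it.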

So once this worked and the JNI methods were in place and had access to the libraries (which was quite a hassle, believe me), I was able to control any plug in my house from Java. Yay! But there were still some steps ahead. Before I continue, let’s look at what I learned from the native work:

  • C is a language for the really intense ones. I can't imagine being as productive in C as I am in Java. But maybe that's just a matter of practice.
  • Things in the IoT context come in all different shapes and sizes, so integration and adapters are tools one cannot get around. There will have to be clean, well-documented interfaces between the different worlds. I wish I had REST at this level.
  • Cheap devices and cheap hardware open the gates for great solutions. But it's not the 3€ chip in the power outlet that will make the difference; it's the software-based solution and the service built around it.

To keep on reading check the other posts about our prototype:


IoT prototype – low level thoughts on Oracle CEP Embedded (part 3)

In our prototype we used an Oracle CEP Embedded application running on a separate Raspberry Pi to scan the network for arriving/leaving devices and link them to our users. We could have left this logic on our Spring server, running in its service layer, but we wanted to learn more about Oracle CEP, and this seemed like a good way to start. What needed to be done:

  • Notice new devices on the network
  • Link network devices to users already created in the system (on the Spring server / in its DB)
  • Send the server a notification when a user arrives or leaves the premises
  • Tolerate short periods of a user going missing (e.g. a phone rebooting or a short connection loss)
  • Send all found network nodes to the server on a regular basis, so we can link users to existing network devices in our frontend application

Okay, so once we had these requirements settled, we started checking out a few tutorials about CEP. There's an Eclipse plugin that one uses to build the CEP application, and it's all about event flows. There are three central files (plus your own code, that is) that one needs to look at:

Oracle CEP important files

The context.xml file is where all sorts of beans are defined, and it is what the framework generates the Event Processing Network graphic from. This is what the two look like (code & generated design):
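As a rough, hypothetical sketch of such a wiring (bean and class names are invented here; the elements follow the usual Oracle CEP `wlevs` Spring namespace conventions), a context file might contain something like:

```xml
<!-- Hypothetical EPN wiring: adapter -> channel -> event beans.
     Names are illustrative, not copied from the actual project. -->
<wlevs:adapter id="networkScanAdapter" class="com.example.NetworkScanAdapter">
    <wlevs:listener ref="nodeChannel"/>
</wlevs:adapter>

<wlevs:channel id="nodeChannel" event-type="NetworkNodeEvent">
    <wlevs:listener ref="userDeviceProcessingBean"/>
</wlevs:channel>

<wlevs:event-bean id="userDeviceProcessingBean"
                  class="com.example.UserDeviceProcessingBean">
    <wlevs:listener ref="stateCalculatingBean"/>
</wlevs:event-bean>

<wlevs:event-bean id="stateCalculatingBean"
                  class="com.example.StateCalculatingBean"/>
```

The Eclipse plugin reads exactly this kind of bean wiring to draw the EPN diagram.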




What we then do is this:

  1. We scan the network every 15 seconds for all network nodes (phones, tablets, PCs, laptops, …) and stream them along. Each of the nodes is considered an event.
  2. The networkNodeOutputAdapter sends those nodes to the server every few minutes as a bulk list, including when each node was last seen.
  3. The UserNodeBean gets a HashMap from our Spring server that links users to devices (which one has to configure, of course; basically it is “this iPhone XY is mine”).
  4. The UserDeviceProcessingBean then filters all events, passing along only those nodes that actually belong to a user.
  5. The stateCalculatingBean then compares those found nodes with the users and checks whether any user's state has changed. If a node is found that hasn't been seen in a while, the state changes from offline to online. Of course this also applies the other way around, but with a certain delay (2-5 minutes, depending on the preferences).
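The asymmetric delay in step 5 is the interesting part: online immediately, offline only after a grace period. A minimal sketch of that logic, with invented names (the real stateCalculatingBean may look quite different), could be:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

// Sketch of the state calculation in step 5: a user goes online as
// soon as their device is seen, but only goes offline once the device
// has been missing longer than a grace period (2-5 minutes in the
// prototype), so a rebooting phone does not flap the state.
public class UserStateCalculator {

    public enum State { ONLINE, OFFLINE }

    private final Duration offlineDelay;
    private final Map<String, Instant> lastSeen = new HashMap<>();
    private final Map<String, State> states = new HashMap<>();

    public UserStateCalculator(Duration offlineDelay) {
        this.offlineDelay = offlineDelay;
    }

    // Called whenever a scan finds a node belonging to this user.
    public State deviceSeen(String user, Instant now) {
        lastSeen.put(user, now);
        states.put(user, State.ONLINE);
        return State.ONLINE;
    }

    // Called on every scan tick for users whose device was not found.
    public State deviceMissing(String user, Instant now) {
        Instant seen = lastSeen.get(user);
        if (seen == null || now.isAfter(seen.plus(offlineDelay))) {
            states.put(user, State.OFFLINE); // missing longer than the grace period
        }
        return states.getOrDefault(user, State.OFFLINE);
    }
}
```

A user whose phone drops off the network for one scan cycle stays online; only after the configured delay does the calculator report the state change that gets sent to the server.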



Java vs. CQL

CQL (Continuous Query Language) is what Oracle uses to filter events. While this is surely a great tool for people who are used to SQL, it wasn't for me, so I stuck to my Java beans, which do the same job just fine. But I bet for database specialists it is a welcome tool.
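In simplified form, the bean-based filtering might look like this (names are illustrative; in CQL the same filter would be a short select-with-where query over the event stream):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative stand-in for the filtering a bean like
// UserDeviceProcessingBean performs: keep only the scanned network
// nodes (identified here by MAC address) that are mapped to a user.
public class DeviceFilter {

    private final Map<String, String> macToUser; // device MAC -> user name

    public DeviceFilter(Map<String, String> macToUser) {
        this.macToUser = macToUser;
    }

    public List<String> userDevices(List<String> scannedMacs) {
        return scannedMacs.stream()
                .filter(macToUser::containsKey) // drop nodes nobody claimed
                .collect(Collectors.toList());
    }
}
```

Whether one writes this as a stream filter in Java or as a WHERE clause in CQL is largely a matter of taste; the event flow through the EPN stays the same.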

Notable things about OCEP:

Though I am usually a big fan of open source projects and prefer them over proprietary products, I was rather fond of this one. The installation was very easy, and even though I'm normally an IntelliJ user, if I must use something else I prefer Eclipse over JDeveloper, so this was a welcome change. The trial-and-error cycles are also really short: since one doesn't need to redeploy a whole lot, it only takes a few seconds to republish code changes to the CEP server. And once it works in the VM (I used a VM created by Oracle that had everything already set up for SOA development), one can almost directly pull the .jar onto the Pi, start the server there and watch it take off. No difficult configuration necessary. Having an SSH connection to both Pis lets me keep working on one computer and not have to switch between machines.

I also found this really easy to learn, since it doesn't have as many components as, say, today's BPM frameworks. I was able to model my process with the EPN tool but could still keep all my actual logic and magic within my Java code.

If you'd like to check the source code, look for my project on GitHub.

To keep on reading check the other posts about our prototype:

