Archive for the ‘Uncategorized’ Category

Finding differences in two Open-Office-Writer documents

If you write documents and collect feedback from different people on different versions, it is a great pain to merge the documents and changes together. Microsoft Word has a compare functionality that works quite well, but the function to compare documents in OpenOffice Writer has never worked for me the way I expected.

Fortunately, OO stores documents as a zip file containing XML files. The main content of the document is in the file content.xml. After changing the extension of the OO Writer document to zip, you can open the file with your favorite zip application and extract the content.xml file. If you do this for both versions, you can compare the two files with your favorite text comparison tool and you will see … hmmm, yes … thousands of changes. This happens especially if the documents have been edited with different versions of OpenOffice or LibreOffice. Most of these changes are not relevant for your comparison.

So we would like to eliminate the changes we are not interested in, to get an overview of the real changes.

We will do this using Notepad++, the tool I use most at work. Additionally, we need the XML Tools plugin to format the document. Both are free.

We open both versions of content.xml with Notepad++ and first run “Linarize XML” from XML Tools on both files.

In the next step we replace these six regular expressions with an empty string. This is repeated until no further replacement is possible:

  1. [a-zA-Z0-9\-]+:[a-zA-Z0-9\-]+="[^"]*"
  2. <([a-zA-Z0-9\-]+:)?[a-zA-Z0-9\-]+\s*/>
  3. <([a-zA-Z0-9\-]+:)?[a-zA-Z0-9\-]+\s*>\s*</([a-zA-Z0-9\-]+:)?[a-zA-Z0-9\-]+>
  4. <text:changed\-region\s*>.*?<\/text:changed\-region>
  5. <office:annotation\s*>.*?<\/office:annotation>
  6. <text:bookmark-ref\s*>.*?<\/text:bookmark-ref>
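If you prefer scripting over repeated manual replaces, the fixpoint loop over the six expressions can be sketched in Java (class and sample names are mine, not part of the article):

```java
import java.util.regex.Pattern;

public class OdtContentCleaner {

    // the six regular expressions listed above
    private static final Pattern[] PATTERNS = {
            Pattern.compile("[a-zA-Z0-9\\-]+:[a-zA-Z0-9\\-]+=\"[^\"]*\""),
            Pattern.compile("<([a-zA-Z0-9\\-]+:)?[a-zA-Z0-9\\-]+\\s*/>"),
            Pattern.compile("<([a-zA-Z0-9\\-]+:)?[a-zA-Z0-9\\-]+\\s*>\\s*</([a-zA-Z0-9\\-]+:)?[a-zA-Z0-9\\-]+>"),
            Pattern.compile("<text:changed-region\\s*>.*?</text:changed-region>"),
            Pattern.compile("<office:annotation\\s*>.*?</office:annotation>"),
            Pattern.compile("<text:bookmark-ref\\s*>.*?</text:bookmark-ref>")
    };

    /** Applies all six replacements over and over until nothing changes anymore. */
    public static String clean(String xml) {
        String previous;
        do {
            previous = xml;
            for (Pattern p : PATTERNS) {
                xml = p.matcher(xml).replaceAll("");
            }
        } while (!xml.equals(previous));
        return xml;
    }
}
```

Running clean() on a linearized content.xml strips attributes and empty elements but leaves the text content, which then diffs cleanly.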

Finally we use the “Pretty print (libxml)” function of XML Tools to format the XML files. Now it is possible to compare the two files with a text comparison tool and you will see the real text changes.

Bernhard Mähr @ OPITZ-CONSULTING

Categories: English, Uncategorized

camunda BPM – Mocking subprocesses with BPMN Model API

A common way to call a reusable subprocess is to use a call activity in the BPMN 2.0 model. By using a call activity it is only necessary to add the process key of the subprocess to call and the version of it to the call activity properties. Thus, the modeling can be continued. Apart from this it is possible to define process variables to pass between the main and the subprocess.

But during unit testing the main process and all subprocesses referenced by the defined process keys must exist in the process engine repository.

The easiest way to solve this problem is to replace the defined process key with the key of a mock process that exists in the repository. But it is not advisable to change a process model for testing purposes only: it takes time to undo these changes when the real subprocess is completed, and such changes could be forgotten because everything is already tested successfully.

Creating a mock process with the same process key as the real subprocess is not convenient if there are more than a few subprocesses, which is often the reality.

A handy alternative since version 7.1 of camunda BPM is the use of the BPMN Model API.
It makes it possible to create, edit and parse BPMN 2.0 models as pure Java code.

Let’s look at an example

The following process model consists of a main process with two call activities.

Main process with two call activities

To have a reusable solution, a helper method is created and used by the test.
It creates a model instance by using BPMN Model API and deploys it in the given process engine repository as shown below.

 /**
  * Create and deploy a process model with one logger delegate as service task.
  *
  * @param origProcessKey
  *          key of the subprocess to mock
  * @param mockProcessName
  *          process name of the mock
  * @param fileName
  *          file name without extension
  */
 private void mockSubprocess(String origProcessKey, String mockProcessName,
     String fileName) {
   // build a minimal executable process: start -> logging service task -> end
   BpmnModelInstance modelInstance = Bpmn.createExecutableProcess(origProcessKey)
       .name(mockProcessName)
       .startEvent().name("Start Point")
       .serviceTask().name("Log Something for Test")
           .camundaClass(MockLoggerDelegate.class.getName())
       .endEvent().name("End Point")
       .done();
   // deploy the model instance under the given file name
   repositoryService().createDeployment()
       .addModelInstance(fileName + ".bpmn", modelInstance)
       .deploy();
 }

The primary goal of this test is to ensure that the main process ends successfully. Therefore a model instance for each call activity is created and deployed in the given repository. The main process itself is deployed via the @Deployment annotation. The following code snippet illustrates the implementation.

 @Test
 @Deployment(resources = "mainProcess.bpmn")
 public void shouldEnd() {

   // mock first sub process (file name is a placeholder)
   this.mockSubprocess("firstSubProcessKey", "Mocked First Sub Process",
       "mockedFirstSubProcess");

   // mock second sub process (file name is a placeholder)
   this.mockSubprocess("secondSubProcessKey", "Mocked Second Sub Process",
       "mockedSecondSubProcess");

   // start main process (process key is a placeholder)
   ProcessInstance mainInstance = runtimeService().startProcessInstanceByKey(
       "mainProcess");

   // verify that the main process has ended
   assertThat(mainInstance).isEnded();
 }


The created model instances look identical – each consists of a start event, a service task which references a delegate, and an end event. The following code snippet shows the simple implementation of the used delegate.

public class MockLoggerDelegate implements JavaDelegate {

  private final Logger LOGGER = Logger.getLogger(MockLoggerDelegate.class
      .getName());

  @Override
  public void execute(DelegateExecution execution) throws Exception {"\n\n ..." + MockLoggerDelegate.class.getName()
        + " invoked by " + "processDefinitionId="
        + execution.getProcessDefinitionId() + ", activityId="
        + execution.getCurrentActivityId() + ", activityName='"
        + execution.getCurrentActivityName() + "'" + ", processInstanceId="
        + execution.getProcessInstanceId() + ", businessKey="
        + execution.getProcessBusinessKey() + ", executionId="
        + execution.getId() + " \n\n");
  }
}


Of course, it’s possible to individualize these mocks depending on your test case. For example, you could create a delegate for each subprocess which sets specific process variables. This example only demonstrates the capability of this solution.

Keep in mind that it is not recommended to replace your process models by using the BPMN Model API. But it is very useful to solve small problems in a simple way – with just a few lines of Java code. After completing a subprocess, it is advisable to test the interaction with the main process, too.

And of course, do not forget to write automated integration tests ;-)

IoT prototype – low-level thoughts about camunda BPM and Drools Expert (part 6)

As already mentioned in the second part of this series, building systems of this type consists of several components. In this part we describe the integration of a BPMN 2.0 capable workflow management system and a business rule engine in our IoT prototype.

Why do we need such things?

Currently we live in times of a knowledge society and fast-changing business requirements. Business flexibility is needed more than ever, and this also requires flexible business processes in companies. This can be realized through the cooperation of IT and business departments to develop automated BPMN 2.0 models.

But this is not appropriate in an environment with many complex business rules which change multiple times within a short period. In such cases, business departments cannot wait for the next planned release of an application. This scenario is predestined for the use of a business rule engine, which allows to externalize business rules from an application and to develop and manage them in a unified way.

Thus, a process engine in combination with a business rule engine makes it possible to have modifiable automated processes and application-independent development of business rules.

Many companies have already recognized the need for a process engine and a business rule engine. Accordingly, we focused on improving our experience in integrating technologies that already exist in many companies with IoT. From the perspective of a company, it could be a key factor to keep investments already made in IT safe and to integrate them with IoT for new market potential.

Based on our focus and expertise, we decided to use camunda BPM as the BPMN 2.0 capable workflow management system and Drools Expert by JBoss as the business rule engine.

Use of camunda BPM

We have embedded the process engine inside our Spring application. In this way we could still use our Spring expertise. The integration went smoothly in a few easy steps, which are described briefly below.

After adding the necessary project dependencies to our Maven project, we extended the web.xml with a DispatcherServlet and configured the embedded camunda ProcessEngine and the ProcessApplication inside it as shown below.

 <!-- database transactions manager -->
 <tx:annotation-driven />
 <bean id="transactionManager"
       class="org.springframework.orm.hibernate4.HibernateTransactionManager">
   <property name="sessionFactory" ref="sessionFactory" />
 </bean>

 <!-- hibernate config (bean classes follow the standard Spring/Hibernate setup; adjust to your versions) -->
 <bean id="sessionFactory"
       class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
   <property name="dataSource" ref="dataSource" />
   <property name="hibernateProperties">
     <props>
       <prop key="hibernate.dialect">${hibernate.dialect}</prop>
       <prop key="hibernate.default_schema">${hibernate.default_schema}</prop>
       <prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
       <prop key="hibernate.format_sql">${hibernate.format_sql}</prop>
     </props>
   </property>
   <property name="packagesToScan" value="com.opitz.iotprototype.entities" />
 </bean>

 <!-- datasource -->
 <bean id="dataSource"
       class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy">
   <property name="targetDataSource">
     <bean class="org.springframework.jdbc.datasource.SimpleDriverDataSource">
       <property name="driverClass" value="${jdbc.driverClassName}" />
       <property name="url" value="${jdbc.url}" />
       <property name="username" value="${jdbc.user}" />
       <property name="password" value="${jdbc.pass}" />
     </bean>
   </property>
 </bean>

 <!-- camunda process engine configuration -->
 <bean id="processEngineConfiguration"
       class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
   <property name="processEngineName" value="default" />
   <property name="dataSource" ref="dataSource" />
   <property name="transactionManager" ref="transactionManager" />
   <property name="databaseSchemaUpdate" value="true" />
   <property name="jobExecutorActivate" value="false" />
 </bean>

 <!-- embedded camunda process engine -->
 <bean id="processEngine"
       class="org.camunda.bpm.engine.spring.ProcessEngineFactoryBean">
   <property name="processEngineConfiguration" ref="processEngineConfiguration" />
 </bean>

 <!-- process application -->
 <bean id="processApplication"
       class="org.camunda.bpm.engine.spring.application.SpringServletProcessApplication"
       depends-on="processEngine" />

 <!-- camunda process engine services -->
 <bean id="repositoryService" factory-bean="processEngine"
       factory-method="getRepositoryService" />
 <bean id="runtimeService" factory-bean="processEngine"
       factory-method="getRuntimeService" />
 <bean id="taskService" factory-bean="processEngine"
       factory-method="getTaskService" />
 <bean id="historyService" factory-bean="processEngine"
       factory-method="getHistoryService" />
 <bean id="managementService" factory-bean="processEngine"
       factory-method="getManagementService" />

Next we configured a deployable process archive via the descriptor file processes.xml, as shown below.

<?xml version="1.0" encoding="UTF-8" ?>
<process-application
    xmlns="http://www.camunda.org/schema/1.0/ProcessApplication">

  <process-archive name="plug-switch-process">
    <property name="isDeleteUponUndeploy">false</property>
    <property name="isScanForProcessDefinitions">true</property>
  </process-archive>

</process-application>


That was all that was needed to extend our application with camunda BPM. Afterwards we started with the development of our BPMN 2.0 model. We realized the use cases of reacting to the presence & absence of users. The main steps in both cases are

  • check if user is first/last,
  • switch kitchen plugs on/off,
  • switch special plugs on/off (injected as business rules),
  • switch user-assigned plugs on/off and
  • finally set the current user state.
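As an illustration of the first step – a hypothetical sketch, not the actual delegate code from the prototype – the "first/last user" check boils down to asking whether any other user is currently present:

```java
import java.util.Map;

public class PresenceCheck {

    /** The arriving user is the first one if nobody else is currently online. */
    public static boolean isFirstArrival(Map<String, Boolean> online, String arrivingUser) {
        for (Map.Entry<String, Boolean> e : online.entrySet()) {
            if (!e.getKey().equals(arrivingUser) && e.getValue()) {
                return false; // somebody else is already present
            }
        }
        return true;
    }

    /** The leaving user is the last one if nobody else stays online. */
    public static boolean isLastDeparture(Map<String, Boolean> online, String leavingUser) {
        return isFirstArrival(online, leavingUser); // same condition, seen from the other side
    }
}
```

In the real process this decision would sit in a delegate behind the first gateway.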

Finally, the BPMN 2.0 model looks as shown below:

PlugSwitchProcess modeled with camunda BPM

After completing the BPMN 2.0 model, we started to enrich it with Java implementations. Therefore we defined so-called delegates (in some cases with field injection) in the DispatcherServlet and used CDI to link these delegates with our model. The following code snippet shows a few definitions.

 <!-- camunda delegate services -->
 <bean id="applyFirstUser" class="com.opitz.iotprototype.delegates.ApplyFirstUserDelegate" />
 <bean id="switchKitchen" class="com.opitz.iotprototype.delegates.SwitchKitchenDelegate" />

Next we linked the delegates with the BPMN 2.0 model via the camunda Modeler properties view. The following image illustrates this step.

camunda BPMN 2.0 Modeler properties view

For the case that a building has no kitchen, we modeled a boundary error event as a business error, as shown below.

camunda boundary error event


Use of Drools Expert

We embedded the rule engine inside our Spring application by adding the necessary project dependencies to our Maven project, too. Afterwards we added business rule tasks to the BPMN 2.0 process model for the use of the rule engine, as shown below.

camunda BPM Business Rule Task

The implementations are linked as delegates, too. First a KnowledgeBase (production memory) with a corresponding business rule file (*.drl) is created. We defined business rule files for both cases and added these files to our project as resources. The following code snippet shows the part for reacting to the presence of the user ‘jack’.

package com.opitz.iotprototype

import com.opitz.iotprototype.entities.User;
import java.util.HashSet;

rule "Switch ON special rooms for jack"
    when
        u : User( username == "jack" )
    then
        HashSet<String> specials = new HashSet<String>();
        specials.add( "meetingroom" );
        specials.add( "office123" );
        insert( specials );
        System.out.println( "## drools: special rule to switch ON special rooms for jack ##" );
end

Of course, it’s possible to store these files elsewhere, outside the project. The next steps were to create a new session (working memory) from the production memory and to add the current data, so-called facts, to the working memory. The business rule engine then applies its pattern matcher to both memories and returns the matched results in the so-called agenda. Afterwards we filter the resulting plug names out of the session. The following image illustrates the mentioned parts of Drools Expert.

Drools Expert overview


Accumulated Experience

We have noticed that such heterogeneous systems need some new management tasks, e.g. device management, and even more alignment among different areas. Another point is that we embedded camunda BPM inside our Spring application and communicate with the process engine via the Java API. Alternatively, it’s possible to separate camunda BPM and Drools Expert from Spring and use the camunda REST API to communicate with the process engine.

In our prototype, we have only a few business rules, so the time spent parsing these few rules is not adequate. A rule engine pays off for complex rule sets, roughly more than 25 rules.

Thus, whenever a device triggers an event, it’s processed by Oracle CEP and fired via REST to our Spring application. Next it’s forwarded to camunda BPM, which uses Drools Expert to determine the special rules for the given user and executes them. As you can see, the integration of well-established technologies like Java, Spring and REST with the world of IoT is really possible.

Finally, the interaction between already established technologies and IoT enables new business and technical possibilities for companies and will soon become even more important.


If you would like to check the source code, look on GitHub for our project.

Or check out the other parts of this series:

IoT prototype – Retrospective. What did we learn? What did we miss? (part 5)

Okay, so controlling our lights now works automatically based on entering and leaving the building – that’s great! But what did we learn from all this? What do we believe can be transferred to a general concept found in (almost) all possible IoT projects?


What did we learn?

Communication takes more than one way

Meaning, there won’t be just TCP/IP based traffic. Rather, there will be diverse forms of communication, all having different characteristics concerning error correction, message validation, energy consumption, range, bandwidth, protocol support, … Satellite-based communication is just as unnecessary for a UPS parcel being delivered in a metropolitan area as 3G-based communication is useless for a container on a freight ship.

This means that for every piece of data we send or receive (or intend to), we need to look at its transport path. If the transport is not ensured by another component, we need to be aware that data may not always reach its destination and that sending lots of data may be rather expensive (in terms of energy or money).

Thing data can be diverse

Thankfully this can be immediately handed off to our Big Data guys. They have enough knowledge about handling heterogeneous forms of data and creating value from them. Sending data to devices, on the other hand, could become difficult. Here generic instruction structures could help, which can then either be interpreted by the device or transformed into a special protocol form the device understands. Interfaces & gateways between devices and servers won’t be rare.

Event Processing can help keep complexity to a minimum

Making sure every level of our architecture only sends the necessary data to the next layer is key to making sure nothing gets more data than really necessary. Sending every single temperature reading of every sensor in a managed facility to headquarters is both unnecessary and costly. Having a gateway that processes events (temperature readings) and only sends an event to headquarters if something abnormal happens is obviously the better solution. This can be directly compared to reporting structures in companies: a manager only reports information to his superiors if he thinks they will find it relevant. Of course, the higher levels must be able to modify this reporting behavior if needed.
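The gateway filtering idea can be sketched in a few lines of Java (a hypothetical threshold filter, not code from our prototype):

```java
/** Sketch of a gateway-side event filter: only abnormal readings are escalated upstream. */
public class TemperatureGateway {

    private final double min;
    private final double max;

    public TemperatureGateway(double min, double max) {
        this.min = min;
        this.max = max;
    }

    /** Returns true if the reading is outside the normal band and should go to headquarters. */
    public boolean shouldEscalate(double reading) {
        return reading < min || reading > max;
    }
}
```

Everything inside the normal band stays local at the gateway; only the exceptions travel up, just like a manager's report.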

Device Management is essential

While some companies already see what happens if you don’t manage your employees’ smartphones the way they should be managed, not managing all network-enabled devices within your company in 2020 could be a major risk. One should always be able to report the necessary metadata about the devices, such as location, energy state, configuration, workload, time until expected replacement, …

IoT shows, more than ever before, the necessity of a project-based organization

If one intends to start one’s own system or expand an existing one in the IoT domain, things will, more than ever before, be different than last time. This means good project management is essential to the success of a new system. Making sure risks have been reflected upon, development will be successful, and solutions will deliver the expected results are goals that are never easy to achieve. So if one doesn’t have the necessary manpower or expertise, one should always ask for help. Of course OC intends to do just that: help its customers achieve their goals.

Known concepts can be applied

Thankfully, not everything is new. Star vs. mesh vs. bus based communication strategy? These structures are already known, with all their pros and cons considered in different areas. Error correction in new protocols can learn from existing ones. Data handling can make use of Big Data, BI and Data Warehouse concepts. Device management can learn from current experiences in the mobile device environment. And the integration of different sensor networks follows the same rules as the integration of enterprise information systems in general.

What did we miss?

Transaction based communication

Let’s consider this example: in a smart home, the home knows what food is available and what is not. Now it could offer its inhabitant the service of always making sure dinner is possible. So if the fridge is empty and the inhabitant declares the inability to purchase something before he/she gets home, the home would order food from a known service in the area that the inhabitant enjoys. Now the home orders pizza and the service provider receives the request. Even though on a technical level the home knows the order has been received (TCP based communication), a functional order approval should be returned as well. Now, if this approval is not returned, should the home order somewhere else? How long should it wait before ordering somewhere else? And what if it orders somewhere else and then receives the approval after all?

Such use cases were not covered in our prototype. We had (very) dumb things: they take commands and either don’t send a response at all (433 MHz) or do so only on a technical level (Philips Hue with REST API).

But in the case of transactions, there need to be functional messages in both directions.

Working with changing environments

Our prototype was stationary. That doesn’t mean we didn’t try it in different environments, but once set up, it didn’t move. A mobile IoT application, however, would have to work with changing network connectivity and adapt accordingly. Sometimes communication may not be possible at all, so it needs to cache requests for an unknown period of time and sync once a connection is possible again. This could happen, for example, when technicians keep moving in and out of reception because they often work in basements. Oracle offers a good solution for such data synchronization issues: Database Mobile Server takes care of syncing data between several mobile devices and a main database.
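The cache-and-sync behavior can be sketched like this (a toy Java model of the idea, not Database Mobile Server code; names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

/** Commands issued while offline are queued and flushed in order once connectivity returns. */
public class OfflineCommandQueue {

    private final Deque<String> pending = new ArrayDeque<>();
    private final List<String> sent = new ArrayList<>();
    private boolean connected = false;

    /** Sends immediately when connected, otherwise caches the command. */
    public void send(String command) {
        if (connected) {
            sent.add(command);
        } else {
            pending.add(command);
        }
    }

    /** On reconnect, flush everything that piled up, in original order. */
    public void setConnected(boolean connected) {
        this.connected = connected;
        while (connected && !pending.isEmpty()) {
            sent.add(pending.poll());
        }
    }

    public List<String> sentCommands() {
        return sent;
    }
}
```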

Being even more generic

We built a prototype, so of course it could have been done better. Our rules for changing lights based on user events are written in a Drools file that is deployed with the entire application. This means all rules must be known at deployment time. What would be nice is a generic event-to-action framework, where users can tie generic events – selected from a pool of possible events (e.g. temperature from online services, rain probability, time of sunrise/sunset, user presence events, …) – to actions concerning lights and light groups. A user could then pick & mix events like this: “if I get home and the sun has already set, do X”.
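A minimal sketch of such an event-to-action registry could look like this (a hypothetical design, not part of the prototype):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Users tie named events to arbitrary actions at runtime instead of at deployment time. */
public class EventActionRegistry {

    private final Map<String, List<Runnable>> bindings = new HashMap<>();

    /** Binds one more action to the given event name. */
    public void bind(String event, Runnable action) {
        bindings.computeIfAbsent(event, k -> new ArrayList<>()).add(action);
    }

    /** Runs all actions bound to the event; unknown events are ignored. */
    public void fire(String event) {
        for (Runnable action : bindings.getOrDefault(event, List.of())) {
            action.run();
        }
    }
}
```

A "pool of possible events" would then just be the set of event names some producer (weather service, presence scanner, …) is known to fire.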

Having a diverse set of things to work with

We only had the 433 MHz plugs and the users’ smartphones, plus our server & scanning Pi, as things. We did not (yet at least) connect any heating systems, A/Cs, fridges etc. to our system. Of course, with every new device type the complexity rises, since those things can often interact with many other things and enable us to do cool new things. E.g. start playing the same music I just heard on my phone / in my car once I get home and am still alone. Opening my top floor windows once it’s getting cool outside during the summer days and starting the fan so the hot air gets blown out. Or even more advanced things like collaborative automatic music selection. Imagine five people standing in one room. The smart home could access all their music profiles on known portals such as Spotify, iTunes, Google Music, … and find their intersecting set, which would then be the music to be played that everyone likes. Or a group meeting room that knows who enters it and sets the temperature according to the people’s preferences from past settings in their personal offices, so that no one freezes and no one breaks a sweat.


We built this prototype to learn from the experiences we made and to see what lies ahead if we try to play with IoT. This wasn’t a full-blown project, but it still helped us understand the trend better and learn more about it. We set up an architecture that we believe can work for an enterprise IoT structure:

IoT architecture

We believe this structure is what it takes to work with a large-scale network of devices, their data and their possible actions. While many concepts come together here (BPM, [No]SQL, BigData, BI, Device Mgmt, ServiceGateways, Event Processing, …), it is still a realistic project concept that shouldn’t be avoided. Rather, companies should bring their divisions together in a creative space, explain to everybody the concepts and ideas behind IoT, and then see what their departments could imagine working on, both in their own domain as well as across departments. If you want help or guidance with your project idea or problem solving, contact us! We’ll be happy to work with you on your first IoT project!


Other parts of this series:

Categories: camunda BPM, English, IoT, Uncategorized

IoT prototype – low-level thoughts about the 433mhz communication (part 4)

This prototype started off as a personal project, so it’s not a big surprise that the 433 MHz plugs were chosen as the first tech. They are in every household, and I wanted to retrofit what was already there to become a bit smarter. Being able to group, manage and remotely control these plugs is an idea many have had before, and there are several ways of controlling them out there.

But while these primitive plugs are maybe not the most elegant way of controlling things, they are still just what they are: primitive things that are a great way to learn how things look once you leave the ground of TCP based, error-corrected communication over the web and step into the real world with all its different environments. No automatic error correction, just plain old radio signals. So if I wanted to keep track of which lights are on and which are off, I needed to make sure the plugs would receive my message. I read a bit about radio signal strengths and the limits set by the government. The 433 MHz emitters are cheap, but that’s also why they can be problematic. But there turned out to be a really easy fix: a 433 MHz antenna.

While the default setup was only able to send its signal a few meters at most before the signal became too weak, adding the middle piece of a coat hanger as an antenna increased the range dramatically. It has something to do with half the length of the wavelength; I stopped reading the forums here, because it got way into physics and it worked for me, so I could turn my mind to the more important task, the software. The point is, the signal is so strong now that in about 200 tests, not once did my lamp miss my signal. Also, I send the same signal three times within a second when trying to change a plug’s state, so if one gets lost, the other two will probably get through.


Native stack

In order to control the Pi’s GPIO pins, I needed a mediator between my Spring application and the hardware. Luckily, somebody had already created a few libraries that manage the most fundamental of problems: sending the codes out via the GPIO pins:

This allowed me to write some Java Native Interface adapters that access the shared C library and use its functionality. I had to rewrite some code, however, because the library did not support all DIP switch configurations I wanted. A 433 MHz Elro chip based plug has 2×5 binary DIP switches, so e.g. 01101 00101 would be an address. But the library only allowed one bit to be 1 in the second group: no 01110 11101 was allowed, just 01101 10000, for example. I wanted to have more addresses at my disposal, so I changed the code a bit, but not too much (see the GitHub changelog from September 2013).
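For illustration, parsing such a 2×5 DIP address into its two integer groups is straightforward (a hypothetical helper of mine, not part of the library):

```java
/** Converts a DIP switch setting like "01101 00101" into its integer groups. */
public class DipAddress {

    public static int[] parse(String dip) {
        String[] groups = dip.trim().split("\\s+");
        int[] result = new int[groups.length];
        for (int i = 0; i < groups.length; i++) {
            // each group is a plain binary number, most significant switch first
            result[i] = Integer.parseInt(groups[i], 2);
        }
        return result;
    }
}
```

With all ten switches usable, 2^10 = 1024 distinct addresses are possible instead of the handful the original library allowed.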

So once this worked and the JNI methods were in place and had access to the libraries (which was quite a hassle, believe me), I was able to control any plug in my house from Java. Yay! But there were still some steps ahead. Before I continue, let’s look at what I learned from the native work:

  • C is a language for the really intense ones. I can’t imagine being as productive in C as I am in Java. But maybe that’s just a matter of practice.
  • Things in the IoT context can be of all different shapes and sizes. So integration and adapters are tools one cannot get around. There will have to be clean and well-documented interfaces between the different worlds. I wish I had REST on this level…
  • Cheap devices and cheap hardware open the gates for great solutions. But it’s not the 3€ chip in the power outlet that will make the difference, it’s the software based solution and service built around it.

To keep on reading check the other posts about our prototype:

Categories: English, IoT, Uncategorized

IoT prototype – low level thoughts on Oracle CEP Embedded (part 3)

In our prototype we used an Oracle CEP Embedded application running on a separate Raspberry Pi to scan the network for arriving/leaving devices and link them to our users. We could have left this logic in our Spring server, running in its service layer, but we wanted to learn more about Oracle CEP, and this seemed a good way to start. What needed to be done:

  • Notice new devices on the network
  • Link network devices to users already created in the system (on the Spring server / in its DB)
  • Send the server a notification about a user that arrived or left the premises
  • Allow for short periods of users missing (e.g. a phone rebooting or a short connection loss)
  • Send all found network nodes to the server on a regular basis so we can link users to existing network devices in our frontend application

Okay, so once we had these necessities settled, we started checking out a few tutorials about CEP. There’s an Eclipse plugin that one uses to build the CEP application, and it’s all about event flows. There are three central files (plus your code, that is) that one needs to look at:

Oracle CEP important files

The context.xml file is all about defining all sorts of beans and is where the framework generates the Event Processing Network graphic from. This is how the two look (code & generated design):




What we then do is this:

  1. We scan the network every 15 seconds for all network nodes (phones, tablets, PCs, laptops, …) and stream them along. Each of the nodes is considered an event.
  2. The networkNodeOutputAdapter sends those nodes to the server every few minutes as a bulk list, containing when each was last seen etc.
  3. The UserNodeBean gets a HashMap from our Spring server that links users to devices (which one has to configure, of course; basically it is “this iPhone XY is mine”).
  4. The UserDeviceProcessingBean then filters all events to only pass along those nodes that actually belong to a user.
  5. The stateCalculatingBean then compares those found nodes with the users and sees if any user has a changed state. If a node was found that hasn’t been seen in a while, the state changes from offline to online. Of course, this also applies the other way around, but with a certain delay (2-5 minutes depending on the preferences).
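The core of step 5 can be sketched as plain Java (a simplified model of what a bean like the stateCalculatingBean does; names and the grace period are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

/** A user goes online as soon as their device is seen, and offline only after a
    grace period without sightings, so a rebooting phone does not flap the state. */
public class UserStateCalculator {

    private final long gracePeriodMillis;
    private final Map<String, Long> lastSeen = new HashMap<>();

    public UserStateCalculator(long gracePeriodMillis) {
        this.gracePeriodMillis = gracePeriodMillis;
    }

    /** Records a sighting of one of the user's devices. */
    public void deviceSeen(String user, long timestampMillis) {
        lastSeen.put(user, timestampMillis);
    }

    /** Online means: seen at least once, and within the grace period. */
    public boolean isOnline(String user, long nowMillis) {
        Long seen = lastSeen.get(user);
        return seen != null && nowMillis - seen <= gracePeriodMillis;
    }
}
```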



Java vs. CQL

CQL (Continuous Query Language) is something that Oracle uses to filter events. While this is surely a great tool for people that are used to SQL, it wasn’t anything for me, so I stuck to my Java beans, which do the same job just fine. But I bet for database specialists this is a welcome tool.

Notable things about OCEP:

Though I am usually a big fan of open source projects and prefer them over proprietary products, I was rather fond of this one. The installation was very easy, and even though I’m usually a big fan of IntelliJ, if I must use something else I prefer Eclipse over JDeveloper, so this was a welcome change. Also, the trial-and-error cycles are really short: since one doesn’t need to redeploy a whole lot, it only takes a few seconds to republish code changes to the CEP server. And once it works in the VM (I used a VM created by Oracle which had everything already set up for SOA development), one can almost directly pull the .jar onto the Pi, start the server there and see it take off. No difficult configuration necessary. Having an SSH connection to both Pis lets me still work on one computer and not have to switch between machines.

Also, I found this really easy to learn, since it doesn’t have as many components as e.g. the BPM frameworks of these days. I was able to model my process with the EPN tool but could still keep all my actual logic and magic within my Java code.

If you’d like to check out the source code, look for my project on GitHub.

To keep on reading check the other posts about our prototype:

IoT prototype – an architectural overview (part 2)

With our prototype we had a few goals:

  1. Learn about the technologies involved in the IoT space
  2. Follow good architectural principles
  3. Get the attention of visitors at different events (and trigger their curiosity)

Of course, there were also the functional requirements: effectively knowing whether a user is present, managing different lamps and devices, and implementing rules that determine which actions should be taken when certain events are triggered.

We have done many projects in the Java enterprise application space, so naturally we looked at frameworks in this domain. Having worked with Spring MVC before, it seemed to suit our needs just fine: a good environment to deploy a well-structured REST API and to have services that take care of the logic and handle different situations. And by using Hibernate as an interface to an H2 database, we didn’t have to worry about our database model.
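As a rough illustration of the service layer behind such a REST API, here is a minimal sketch with the Spring and Hibernate annotations left out so it stands alone. All class and method names are hypothetical, not the prototype’s actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a device service as the REST controller would use it.
public class DeviceService {
    public static class Device {
        public final long id;
        public final String name;
        public boolean on;
        public Device(long id, String name) {
            this.id = id;
            this.name = name;
        }
    }

    // In the prototype a Hibernate-backed repository would sit here;
    // a plain in-memory list keeps the sketch self-contained.
    private final List<Device> devices = new ArrayList<>();

    public Device register(long id, String name) {
        Device d = new Device(id, name);
        devices.add(d);
        return d;
    }

    // Would be exposed through the REST API, e.g. as a PUT on a
    // device-state resource; returns false for unknown devices.
    public boolean switchDevice(long id, boolean on) {
        for (Device d : devices) {
            if (d.id == id) {
                d.on = on;
                return true;
            }
        }
        return false;
    }
}
```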

The frontend was supposed to be multi-platform, and just as we were starting our project, the first alpha version of the Ionic framework came out, a framework based on AngularJS, a very popular and powerful MVC JavaScript framework. Since this was all about learning new technologies, it got thrown into the mix.

Let’s talk hardware: a Raspberry Pi and a 433 MHz sender module attached to its GPIO pins formed the base. Of course, some radio-controlled power plugs were needed as well to test the system.

We used JNI to create the connection between the 433 MHz module attached to the GPIO pins and our Spring application. This closes the gap between the native C code and our platform-independent Java application.
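A minimal sketch of what such a JNI bridge can look like. The library name, the native method signature and the code-word format are assumptions for illustration, not the prototype’s actual code:

```java
// Sketch of the JNI bridge to the 433 MHz sender module.
public class RadioTransmitter {
    static {
        try {
            // Loads the compiled C code (e.g. libradio433.so) that drives
            // the GPIO pin timing on the Pi.
            System.loadLibrary("radio433");
        } catch (UnsatisfiedLinkError e) {
            // Off the Pi the native library is absent; transmit() would fail.
        }
    }

    // Implemented in C: modulates the GPIO pin with the plug's radio protocol.
    private native void transmit(String codeWord);

    // Many radio-controlled plugs derive their code word from DIP-switch
    // settings (group + device + on/off suffix); building the word in Java
    // keeps the C side as small as possible. The format here is assumed.
    public static String codeWord(String group, String device, boolean on) {
        return group + device + (on ? "FF" : "F0");
    }

    public void switchPlug(String group, String device, boolean on) {
        transmit(codeWord(group, device, on));
    }
}
```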

So this was a good place to start. However, due to other developments, we had a few more things to look at. Camunda BPM was to be integrated into Spring, managing the process of reacting to a user state change, to show the capability of Camunda’s BPM framework running on a Pi. Oracle’s CEP was also asked for, since Oracle recently got involved in the whole IoT trend and uses CEP on its gateway devices to perform some logic before sending events to backends in the cloud. We were curious about this, so we included this technology as well, although as a separate module, not within Spring. We needed a second Raspberry Pi for this, since two Java applications on one Pi is a bit much. This wasn’t too bad, though, since it underlines the idea of IoT: several devices talking to each other and creating an interesting application through their interaction. Lastly, we decided to include Drools as well, to model our business rules in a unified way. We integrated Drools into Camunda to access the rules from our user state change process.
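To give an idea of the kind of business rule involved, here is one expressed in plain Java for illustration. In the prototype the rules live in Drools (DRL) files fired from the Camunda process; the class, rule content and action strings here are made up:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java illustration of the kind of rule we model in Drools.
public class PresenceRules {
    public static class UserStateChange {
        public final String user;
        public final boolean online;
        public UserStateChange(String user, boolean online) {
            this.user = user;
            this.online = online;
        }
    }

    // Rule 1: a user coming online after dusk lights the hallway.
    // Rule 2: a user going offline switches off that user's rooms.
    public static List<String> evaluate(UserStateChange change, boolean darkOutside) {
        List<String> actions = new ArrayList<>();
        if (change.online && darkOutside) {
            actions.add("switch on hallway lights");
        }
        if (!change.online) {
            actions.add("switch off lights in " + change.user + "'s rooms");
        }
        return actions;
    }
}
```

Keeping the rules in Drools instead of hard-coding them like this means they can be changed without redeploying the application.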

Putting all of this together, the project architecture (Maven modules) looks like this:




Now we have a system that can control lights based on users’ presence or absence (or rather their smartphones’ presence or absence, which in today’s world correlates with the user in 99% of cases).

To summarize all of this, I found a sketch I created at the beginning of the project rather fitting, so here it is:

iot doku pieces - Page 3


A few important sidenotes:

  • Where possible we use REST APIs. We like to think this is a potential common language that many TCP/IP-capable devices could speak. Anything lower-level would then be managed with a gateway between the lower-level protocols and TCP/IP.
  • Through our service layer we could integrate further technologies, such as ZigBee-controlled devices, and publish them through one uniform REST service.
  • The parts are modular and could be replaced by some other technology (e.g. a native iOS or Android app that integrates the user login/logoff through geofencing logic).

To keep on reading check the other posts about our prototype:

OPITZ CONSULTING’s own IoT prototype to demonstrate capabilities (part 1)

In the last few months there has been so much talk about IoT that it must have been hard not to stumble across news about it. But even though everyone talks about it, it’s hard to get a grasp on things. What is IoT really? Is it a method like BPM, or a concept like Big Data? Or is it a technology that was recently invented and is now ready for use, like WebRTC? For us it is a trend, or a vision of how things could look on a broader scale. But in order to make this vision come true, lots of small things and technologies need to be integrated into a system, work together as a team, and communicate openly with other systems.

But the IoT described in so many talks and posts these days is not the one we need in order to profit from it. There doesn’t need to be a fully automated city that guides its inhabitants’ autonomous vehicles to the right parking spot to save a few minutes of searching before we can join the trend and profit from all of this as well. Any medium-sized company can sit down, think about the opportunities that arise from recent technical developments, and consider how to use them to make their own business a bit smarter or more controllable, or how to create new services that haven’t been seen before. Many such examples already exist:

Bildschirmfoto 2014-06-18 um 09.11.57

Once many companies do this, lots of small systems will arise that can start talking to each other and provide services that can be consumed by one another, closing the gaps between them and leading to the ultimate goal.

Bildschirmfoto 2014-06-18 um 09.12.31

Our motivation

So why are we at OPITZ CONSULTING so eager about IoT? First, it’s necessary to know as much as you can about a technology trend everybody is talking about. But while doing our research, we also found that we already have most of the pieces needed to build this puzzle called IoT in our skill set: Big Data, BPM and Adaptive Case Management (ACM), BI, enterprise applications, as well as project management skills are all key parts of a successful IoT story. What we lack as a software company is deep knowledge of building the little sensors and pieces that connect to the net and allow us to sense the environment or act within it. But thankfully these are only the enabling technology, and we don’t need to know how to build them in order to take part in this development. There are lots of hardware-producing companies out there; we focus on the software, the part where the magic happens.

Let’s build a prototype!

Still, we wanted to know more about the underlying parts, so we decided to develop a prototype integrating some key pieces such as BPM, event processing, Java, some native code running on the low-level parts of devices, a communication stack other than the by-now well-known TCP/IP, and so on. We also wanted to build a prototype that shows the actual potential of IoT, not just a technology demo of a small children’s toy robot being controlled by a smartphone app. The ultimate goal, after all, is generating some form of value for our customers.

There is a great infographic about IoT (also the source of the pictures above) that shows the expected business impact for several categories such as transport or buildings & infrastructure. This infographic was created by Harbor Research, a consulting firm that has been around since 1984 and has always focused on what is now hyped as the Internet of Things. So neither the idea nor the concepts are new; rather, the market has only recently realized the potential.

A prototype in this area would be both interesting on a personal level and relevant from a business perspective. Curiously, our prototype addresses one of their example compound applications, what they call “Smart Buildings + Mobility”.

Bildschirmfoto 2014-06-18 um 09.16.15

Our prototype is an event- and rule-based home/office light automation system that reacts to the presence and absence of users and controls lights and other simple devices accordingly. While this started as a hobby project to control radio-controlled power plugs from a smartphone, it is now a pretty good example of what is possible with IoT technologies. We work with different technologies: 3G, Wi-Fi, 433 MHz radio and maybe soon ZigBee. Through this we learned all sorts of things about the heterogeneous environment our devices are deployed in.

To give you an idea of what our system does, let’s consider two examples:

  • A family using our system has all their lights set up with this concept. Now if any one of them returns home from work, school or college, their smartphone logs into the local Wi-Fi. This is registered by our network-scanning utility, and the device is mapped to a user. An event is triggered, saying that the user “Shelly” has just returned from school and is now home. A user state change process is triggered in our system, and the business rules are checked to see if any apply. Shelly is the first user home, but it’s already beginning to get dark outside, so all lights in the entryway are lit up, as well as the kitchen and the living room. Shelly could also set up a rule that makes sure her private room’s lights are turned on as well.
  • A worker in an insurance company leaves his workplace. He shares his office with two coworkers. Both left work early, so he’s the last one to leave the office. Once his smartphone logs off from the company’s Wi-Fi, the user goes “missing”. After 5 minutes he is considered offline, and an event is fired again. The rules state that any uninhabited office should not be lit up, so all lights in this office turn off. Since no one else on the floor is still at work, the entire 15th floor’s lights are shut down as well.

Our prototype was supposed to be able to handle such situations. To see how we did it, check the other posts going into greater detail about some parts of the prototype:



Images from the Harbor Research infographic

Categories: camunda BPM, English, IoT, Mobile, Uncategorized

OMG released formal version of CMMN 1.0

In May 2014, the OMG released version 1.0 of the Case Management Model and Notation (CMMN). You can access the documents associated with it under

It is great to see that the specification also includes the “CMMN – Claims Management Example” (page 78) from our “ACM in Practice” poster.



Categories: ACM, Uncategorized

Managing BYOD risks with the Oracle Mobile Security Suite

Many employees currently use their private devices in a corporate environment and store company data on them. This trend, known as “Bring Your Own Device” (BYOD), poses many risks for companies, which the Oracle Mobile Security Suite tries to manage. The Oracle Mobile Security Suite was developed by the start-up Bitzer Mobile, later acquired by Oracle and integrated into its product range.

Challenges of BYOD

When employees use their own mobile devices in a corporate environment, company data is exposed to more risks than on company-owned devices. If, for example, an employee configures the corporate Exchange server on his own smartphone, he can access his e-mails, calendar and contacts at any time. While this increases availability and productivity, it also brings some risks. If the smartphone is lost, the credentials and the data stored on it are lost as well. This applies equally to private and company devices, of course, but with private devices the company’s ability to act is limited. Since private and business data on the smartphone are not clearly separated, the company is not allowed to delete data from the smartphone, because that would also delete the employee’s private data. Furthermore, the company cannot enforce policies for the business data, or monitor compliance with them, without also affecting the private data.

Features of the Oracle Mobile Security Suite

The Oracle Mobile Security Suite addresses these problems by making it possible to clearly separate private and business data. The starting point is a so-called container that encapsulates the business apps and their data. Corporate applications are installed only inside the container and are therefore executed in a secured environment. All access by apps from within the container is encrypted through a kind of VPN tunnel, and access by private applications into the container is not possible. The container is, of course, password-protected (integration with an enterprise Active Directory or similar is possible), and all data inside the container is stored encrypted.

Oracle Mobile Security Suite

To equip a device with the Oracle Mobile Security Suite, the container app has to be installed first. At first glance the container app looks like a normal app; after starting it, you first have to authenticate successfully, and then the corporate workspace appears. This corporate workspace contains an app catalog and the installed apps. The app catalog is used to obtain further applications within the container. By default, a secure browser and mail, calendar and contacts apps are available in the app catalog. Custom apps can also be added to the app catalog, provided that the compiled but unsigned app is available.

Apps are added to the app catalog via the Mobile Security Administrative Console. On both iOS and Android devices, this bypasses the platform-specific app store. On Apple devices, therefore, no app review by the iOS team is necessary to list an app in the app catalog.

Oracle Mobile Security Suite

The Mobile Security Administrative Console allows the administration of permissions, apps and installed containers. It is possible to lock or wipe individual containers. When a container is wiped, all private data on the mobile device remains untouched; only the data inside the container is deleted. For this deletion to be carried out, the mobile device must have a network connection, but the container must not be running (push notification). A permission system makes it possible to control the access of specific employees or roles to apps or other functions based on various parameters (e.g. location, time of day, network connection). This makes it possible, for example, to grant an employee access to particularly critical apps only while he is on the company campus or in the company Wi-Fi network. There is also a way to upload new versions of apps and force the update onto employee devices.

More information about the Mobile Security Suite is available at:


