On June 6, the DOAG 2013 IMC Summit took place in Mainz under the motto "Infrastruktur meets Middleware". I gave a talk there on "Orchestrator": an IT paradigm shift in the age of cloud computing, which I regard as an extension of the "DevOps" concept. Here you can find my abstract and my presentation:
Orchestration is a huge symphony of the most diverse components in IT. The "Orchestrator" is therefore more than a classic administrator. He represents a new generation of experts in the age of cloud computing, facing changed challenges in a distributed, heterogeneous and ever more complex IT world.
This talk shows that traditional IT approaches and measures alone are not sufficient to answer new technical, but also organizational, questions in IT. Although experts may look at the problem from different perspectives, they mostly arrive at a common solution: orchestration!
Topics such as (hardware and software) virtualization, cloud computing, SOA, etc., but also the infrastructure itself, are affected by this IT paradigm shift. After a short introduction to the problem, several solution approaches are presented. A particular focus is placed on the challenges and solutions that arise from the perspective of the IT infrastructure.
Link to my presentation:
End-to-end monitoring of BPEL process instances across composite borders is a great feature of Oracle SOA/BPM Suite 11g. It is shown in nearly every pre-sales presentation, and if you know SOA Suite 10g or have worked with other kinds of distributed, heterogeneous systems, it is a real improvement.
But when you implement large process chains, you might realize that the newly won process transparency can raise new challenges. Imagine you have a root process which creates several instances of sub-processes. In such a case, without doing any extra work, you will get one flow trace for the process and all of its sub-process instances. For large process chains you need to consider the following facts:
- Transparency: Although it shows an end-to-end view of the whole execution tree, trying to find a faulted sub-process might be a real challenge. It doesn't matter whether you start the search from the root process instance or from one of the sub-processes – the flow trace always displays all components of the execution context. And when you click on a sub-process and go back to the flow trace, you might have to expand all child nodes again and again.
- DataSetTooLargeException: When your flow trace becomes longer and longer, you will observe that there is a maximum size for the audit trail that can be displayed by Enterprise Manager. Usually it results in a
java.lang.RuntimeException: oracle.soa.management.facade.DataSetTooLargeException: Requested audit trail size is larger than threshold … chars
For large execution trees, sub-process instances might not be displayed or you might not be able to see things in detail.
- Low Memory: It is not only the visible representation of your instance which struggles. A huge audit trail implicitly means that the memory needed for executing your process instance grows. It can grow to such an extent that your process instance crashes because it runs out of memory.
- Purging: With large flow traces you should always keep an eye on the capacity of your soa-infra database. Why is that? You should usually have purging routines installed to keep your system healthy – the regular deletion of "old" instances from the dehydration store. In a nutshell, the purging routines eliminate "completed" instances. If one of your sub-process instances goes into a faulted state, all processes in the same execution context are excluded from purging (unless you set the ignore_state parameter to true – see the sketch after this list). This means that although 99 percent of your instances have been executed correctly and could be purged, the whole instance data, which can be huge as stated earlier, is kept in your dehydration store.
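As a hedged illustration of the purging point: assuming the 11g purge scripts are installed in your SOAINFRA schema and the Python cx_Oracle driver is available, a call to the looped purge procedure with ignore_state might look roughly like this (credentials, dates and the exact parameter list are placeholders and depend on your patch level):

import datetime
import cx_Oracle

# connect as the SOA infrastructure schema owner (placeholder credentials)
conn = cx_Oracle.connect("DEV_SOAINFRA", "password", "dbhost:1521/orcl")
cur = conn.cursor()
cur.callproc("soa.delete_instances", keywordParameters={
    "min_creation_date": datetime.datetime(2013, 1, 1),
    "max_creation_date": datetime.datetime(2013, 6, 1),
    "batch_size":        20000,  # instances deleted per loop iteration
    "max_runtime":       60,     # minutes before the procedure stops itself
    "ignore_state":      True,   # also purge flows containing faulted instances
})
conn.commit()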
So, how to deal with all these challenges? There are two relatively small changes which we describe in the following two posts:
1) Splitting large flow traces by setting a new execution context
2) Using the CompositeInstanceTitle property and composite sensors to simplify instance identification in Enterprise Manager
Knowledge-driven processes are typically unpredictable in their execution. The experts working on them decide on the next best action to take. This is in contrast to traditional BPM, in which all possible paths of a process are predetermined and modeled into the process. Case management is a way to control and implement these unstructured processes. With the poster below we'd like to bring some of the key aspects of Adaptive Case Management (ACM) onto one page.
Feel free to download the PDF version if you are interested (login required):
- What is ACM?
- Why should I use ACM?
- What can ACM user interfaces look like?
- What are the main building blocks of an ACM solution?
- How to visualize ACM cases with CMMN 1.0?
Send your feedback with #acmposter:
Last Friday I visited an outstanding conference on BPM – BPMCon 2013 in Berlin. BPMCon is hosted by Camunda Services GmbH, the company behind the Camunda BPM platform. Camunda BPM is now an independent fork of the well-known Activiti BPM platform. I noticed that the community behind Camunda is constantly growing. I enjoyed the chance to meet a lot of enthusiastic Java developers who are "coding" BPM. I also enjoyed the opportunity to speak to the core developers and committers of the platform as well as to Camunda customers.
At the beginning, a brilliant and provocative keynote was given by Gunter Dueck, with a very funny metaphor of cats and dogs (IT guys vs. "normal" people). Then the key features of Camunda were presented by the two founders, Jakob Freund and Bernd Rücker. They argued that Camunda BPM is a lightweight and developer-friendly (especially for Java developers) BPM solution. The development of the Camunda BPM platform focuses on BPM process modeling and execution and does not try to offer too many features out of the box. It is also open source and developed in an agile fashion from release to release.
More success stories were presented by a couple of companies, e.g. HASPA, Zalando, myToys and the patent department of Switzerland. Especially Zalando seems to be doing some innovative work, but I'm unsure whether they share their work back with the community…
Last but not least, we saw interesting tools in the field of process mining, e.g. Fluxicon and the emerging start-up Cupenya. Both got much attention from the audience. SAP people also presented an integrated "end-to-end" process monitoring and control system.
Another highlight of the conference was the fishbowl discussion – a new discussion format to me. Changing discussants were involved in an interesting talk about the general problems of companies that start to adopt BPM and switch to "process-driven" organizations.
I enjoyed the conference, especially the industry feedback, as well as the possibility to get direct information on the future development of the Camunda BPM engine.
Great conference! I'm already looking forward to next year!
OOW 2013 is over and we're heading home, so it is time to lean back and reflect on the impressions we took away from the conference.
First of all: OOW was great! It was a pleasure to be a part of it. As already mentioned in our last blog article, it was the biggest OOW ever. In parallel to the conference, the America's Cup took place in San Francisco and Oracle Team USA won. An amazing job by the team, and again congratulations from our side!
Back to the conference. The main topics for us are:
- Oracle SOA / BPM Suite 12c
- Adaptive Case Management (ACM)
- Big Data
- Fast Data
Below we go into a little more detail on the key takeaways regarding these points:
Oracle SOA / BPM Suite 12c
During the five days at OOW, first details of the upcoming major releases of Oracle SOA Suite 12c and Oracle BPM Suite 12c were introduced. Some new key features are:
- Managed File Transfer (MFT) for transferring big files from a source to a target location
- Enhanced REST support by introducing a new REST binding
- Introduction of a generic cloud adapter, which can be used to connect to different cloud providers, like Salesforce
- Enhanced analytics with BAM, which has been totally reengineered (BAM Console now also runs in Firefox!)
- Introduction of templates (OSB pipelines, component templates, BPEL activity templates)
- EM as a single monitoring console
- OSB design-time integration into JDeveloper (Really great!)
- Enterprise modeling capabilities in BPM Composer
These are only a few points from what is coming with 12c. We are really looking forward to the new release coming out, because this seems to be really great stuff. The suite is becoming more and more integrated. From 10g to 11g it was an evolution in terms of developing SOA-based applications. With 12c, Oracle continues on this path – very impressive.
Adaptive Case Management
Another fantastic topic was Adaptive Case Management (ACM). The Oracle PMs did a great job, especially at the demo grounds, in showing the upcoming Case Management UI (which will be available in 11g with the next BPM Suite MLR patch), the roadmap and the differences compared to traditional business process modeling. They were very busy during the conference because a lot of partners and customers were interested.
Big Data
Big Data is one of the current hype topics. Because of the huge amounts of data from different internal and external sources, handling this data becomes more and more challenging. Companies need to analyze their data to optimize their business. The challenge here: the amount of data is growing daily! To store and analyze the data efficiently, a scalable and flexible infrastructure is necessary. Here it is important that hardware and software are engineered to work together. Therefore, several new features of Oracle Database 12c, like the new In-Memory option, were presented by Larry Ellison himself. On the hardware side, new server machines like the Fujitsu M10 and Oracle's new M6-32, based on the SPARC M6 processor, were announced. The performance improvements when using one of these hardware components in connection with the improved software solutions were really impressive. For more details, please take a look at our previous blog post.
Regarding Big Data, Oracle also introduced their Big Data architecture, which consists of:
- Oracle Big Data Appliance that is preconfigured with Hadoop
- Oracle Exadata, which stores a huge amount of data efficiently to achieve optimal query performance
- Oracle Exalytics as a fast and scalable business analytics system
Analysis of the stored data can be performed using SQL by streaming the data directly from Hadoop to an Oracle Database 12c. Alternatively, the analysis can be implemented directly in Hadoop using "R". In addition, Oracle BI tools can be used to analyze the data.
Fast Data
Fast Data is a complementary approach to Big Data. A huge amount of mostly unstructured data comes in via different channels with a high frequency. The analysis of these data streams is also important for companies, because the incoming data has to be analyzed for business-relevant patterns in real time. Therefore, these patterns must be identified efficiently and with high performance. Here, in-memory grid solutions in combination with Oracle Coherence and Oracle Event Processing demonstrated very impressively how efficient real-time data processing can be.
One example of a Fast Data solution shown during OOW was the analysis of Twitter streams regarding customer satisfaction. Feeds with negative words like "bad" or "worse" were filtered, and after a defined threshold had been reached within a certain timeframe, a business event was triggered.
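To make this pattern concrete, here is a minimal Python sketch of such a threshold detection over a sliding time window. This is purely illustrative – the OOW demo used Oracle Event Processing and Coherence, not this code, and all names and numbers below are made up:

from collections import deque
import time

NEGATIVE_WORDS = {"bad", "worse", "worst"}  # hypothetical filter list
THRESHOLD = 100       # negative tweets needed to trigger the business event
WINDOW_SECONDS = 60   # length of the sliding time window

negative_hits = deque()  # arrival timestamps of negative tweets in the window

def on_tweet(text):
    """Filter negative tweets and fire a business event once the threshold is reached."""
    if not NEGATIVE_WORDS.intersection(text.lower().split()):
        return
    now = time.time()
    negative_hits.append(now)
    # evict hits that have fallen out of the sliding window
    while negative_hits and negative_hits[0] < now - WINDOW_SECONDS:
        negative_hits.popleft()
    if len(negative_hits) >= THRESHOLD:
        trigger_business_event(len(negative_hits))
        negative_hits.clear()

def trigger_business_event(count):
    print("ALERT: %d negative tweets within %d seconds" % (count, WINDOW_SECONDS))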
Another key trend in the IT market is of course Cloud Computing and what it means for companies and their businesses. Oracle announced their Cloud strategy and vision – companies can focus on their real business while all of their applications are available via the Cloud. This also includes Oracle Database and Oracle WebLogic, so that companies can also build, deploy and run their own applications within the Cloud. Three different approaches have been introduced:
- Infrastructure as a Service (IaaS)
- Platform as a Service (PaaS)
- Software as a Service (SaaS)
With the IaaS approach, only the infrastructure components are managed in the Cloud. Customers are very flexible regarding memory, storage or the number of CPUs, because those parameters can be adjusted elastically. The PaaS approach means that, besides the infrastructure, the platforms necessary for running applications (such as databases or application servers) are also provided within the Cloud. Here customers can also decide whether installation and management of these platform components should be done by Oracle. The SaaS approach is the most complete one, since all applications a company uses are managed in the Cloud. Oracle is planning to provide all of their applications, like ERP systems or HR applications, as Cloud services.
In conclusion, this seems to be a very forward-thinking strategy, which opens up new possibilities for customers to manage their infrastructure and applications in a flexible, scalable and future-oriented manner.
As you can see, our OOW days have been very, very interesting. We collected a lot of helpful information for our projects. The innovations presented at the conference are great, and being part of this was even greater! We are looking forward to next year's conference!
Oracle Open World 2013 started today and San Francisco is ready to rock. This year approx. 60,000 visitors are attending this great event – a 20% increase compared with last year. Very impressive! From my point of view there is no other event where you can get so much useful information in just five days. Keynotes with the latest announcements, many customer cases, sessions, networking, demos, hands-on labs and of course personal meetings with members of the product management team – a perfect combination to be prepared for our daily business and to be successful in our projects.
This year our schedule is full with the following topics: Big Data / Fast Data, Real-Time Analytics, Event Processing, Governance with Oracle's Enterprise Repository, large file transfers, Business Intelligence, WebCenter, Oracle ADF, Internet of Things (IoT), Mobile and of course SOA & BPM. This is going to be a very busy week.
In his welcome keynote, Oracle CEO Larry Ellison was in a good mood because Oracle Team USA had won two America's Cup races that day. Overall, he made the following announcements:
Oracle Database In-Memory Option
With Oracle's new In-Memory option, he explained, queries run at least 100 times faster, INSERTs of rows run 3 to 4 times faster, UPDATEs are 2 times faster and JOINs of tables are at least 10 times faster. Furthermore, reports run 20 times faster without predefined cubes. How is that possible? As Larry Ellison described it, transactions run faster in row format and analytics run faster in column format. Oracle 12c stores the same data in row AND column format simultaneously – it is a dual-format in-memory database. But how can you do more things in less time? By replacing the indexes with the in-memory columnar technology. Indexes work well for predictable access patterns and are mostly used for analytic queries, but the administrator needs to decide what to index and what not to index. Larry Ellison explained that the column store replaces the analytic indexes, which means less tuning and less administration.
Oracle In-Memory requires zero application changes and imposes no restrictions on SQL. It is turned on with the following three steps (a sketch follows the list):
- Configure Memory Capacity (inmemory_size = XXX GB)
- Configure tables & partitions to be in memory (ALTER TABLE …)
- Drop analytic indexes
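As a rough sketch of what these three steps might look like in SQL, executed here from Python via cx_Oracle (the syntax follows the In-Memory option as announced; sizes, table and index names are made up, and the new inmemory_size parameter only takes effect after an instance restart):

import cx_Oracle

# placeholder connection; the statements require DBA privileges
conn = cx_Oracle.connect("sys", "password", "dbhost:1521/orcl", mode=cx_Oracle.SYSDBA)
cur = conn.cursor()
cur.execute("ALTER SYSTEM SET inmemory_size = 100G SCOPE=SPFILE")  # 1) reserve column-store memory
cur.execute("ALTER TABLE sales INMEMORY")                          # 2) mark a table for the column store
cur.execute("DROP INDEX sales_analytic_idx")                       # 3) drop a now redundant analytic index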
Afterwards, Larry Ellison showed a live demonstration on a two-socket server with a database table of 3 billion rows about searches that people have done on Wikipedia. A traditional database with an index achieved 2,005 million row scans per second. Without the index the result was much lower – only 5 million row scans per second. Finally, after enabling the In-Memory option, the same test achieved an outstanding value – 7,151 million row scans per second.
Big Memory Machine M6-32
The second announcement was a new member of Oracle's Engineered Systems family, the Big Memory Machine M6-32. It contains 32 terabytes of DRAM and 32 SPARC M6 chips. Larry Ellison said: "It's the fastest machine in the world for databases stored in memory". The live demonstration with this machine was again impressive – 341,072 million row scans per second.
The third announcement was that the M6-32 is also available in SuperCluster form.
Oracle Database Backup, Logging, Recovery Appliance
The last announcement was the one with a definitely distinctive name: the "Oracle Database Backup, Logging, Recovery Appliance". It is architected to protect critical business data. While the database is running, the deltas/changes are shipped to the appliance. The backup appliance is designed to back up databases (not files) and it doesn't have to be in the same data center. The key characteristics of the product are:
- Real-time log shipping
- Fast restore to any point in time
- Real time log & change deltas
Oracle is also offering the product as a public cloud service. It allows backing up databases directly to the cloud while the data is encrypted at the source.
The first day was already great and we are looking forward to taking a lot of useful content home with us.
weblogic.server.ServerLifecycleException: Cannot get to the relevant ServerRuntimeMBean for server MGSRV
Lately, I got the following error:
weblogic.server.ServerLifecycleException: Can not get to the relevant ServerRuntimeMBean for server MGSRV.
… weblogic.management.scripting.ScriptException: Error occured while performing shutdown : Error shutting down the server : Can not get to the relevant ServerRuntimeMBean for server MGSRV.
Use dumpStack() to view the full stacktrace
It happened when I wanted to shut down a managed server via the WebLogic Server script "stopManagedWebLogic.sh". In addition, the Node Manager utility was not able to shut down the managed server via the administration console.
While reviewing the issue, I came across an Oracle document:
WebLogic Managed Server shutdown failing when using SSL t3s Admin Server address as ADMIN_URL (Doc ID 851065.1)
The main reason for this issue is: “JMX clients using secure protocols were not able to invoke operations on MBeans registered in the Domain Runtime MBeanServer of the AdminServer. Authentication of JMX clients was not correctly performed.”
It seems that this is a bug (Bug 8359946):
“Not able to shutdown the managed serves using t3s protocol in the WLST scripts. The same script with t3 protocol is working fine. Adminserver is shutting down fine with either protocol. Not able to shutdown the managed server using WLST. Getting, connecting to successfully connected to Admin Server ‘adminserver’ that belongs to domain…”
You can reproduce it with the following steps:
1) Create a domain for WLS 10.3 with one Admin Server and one Managed Server.
2) Enable SSL and configure the Demo Identity and Demo Trust keystores on both the Admin Server and the Managed Server.
3) Set ADMIN_URL=t3s://localhost:7002 and add -Dweblogic.security.TrustKeyStore=DemoTrust to the server start arguments.
4) Start the Admin Server.
5) Start the Managed Server.
6) Stop the Managed Server. Result: the error shown above (a WLST sketch of the failing call follows).
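For reference, a minimal WLST (Jython) sketch of the failing shutdown call – user, password, port and server name are placeholders:

# run with: java weblogic.WLST stop_ms.py
connect('weblogic', 'welcome1', 't3s://localhost:7002')   # secure admin URL triggers the bug
shutdown('MGSRV', 'Server', force='false', block='true')  # fails with the ServerLifecycleException above
disconnect()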
What is the solution?
Quick and dirty solution:
You can kill the managed server process with kill -9 and use the administration console with Node Manager for start/stop.
- Advantage: You have a uniform start/stop method for managed servers via the administration console.
- Disadvantage: Automatic start/stop of managed server(s) via script(s) and the operating system is not possible.
The bug is fixed in 12.1.1; for previous WebLogic Server versions, patches are available:
WLS Version Patch Number
10.0.1 Patch 8589531
10.0.2 Patch 8359946
10.3.0 Patch 8359946
Please note that patches are applied per WLS installation, not per domain. That is, if you apply this patch to one WLS installation, all servers from all domains in that installation will have this patch. On the other hand, if you have a managed server on another machine in a domain (that is, set up with its own WLS installation), you need to install this patch on that other machine as well. Generally, patches can only be applied while the server is not running, because WLS locks the needed files while it is running. If, however, you are able to apply a patch while WLS is running, you must restart WLS before the patch takes effect.