Using JMS Unit-of-Order in a High Availability Environment



The WLS JMS Unit-of-Order (UOO) feature is well documented in various papers and blogs. It enables message producers to group messages into a single unit that is processed sequentially, in the order the messages were created. Until processing of a message is complete, the remaining unprocessed messages of that Unit-of-Order are blocked. This behavior makes UOO essential in cases where a specific processing order must be adhered to.
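To illustrate the semantics, the per-unit blocking behavior can be modeled in a few lines. This is a purely illustrative sketch, not WebLogic's implementation; all names are made up:

```python
from collections import defaultdict, deque

# Toy model of Unit-of-Order semantics: within one unit, messages are
# delivered strictly in arrival order; different units do not block
# each other. Illustrative only -- not WebLogic's implementation.
unit_queues = defaultdict(deque)  # one FIFO per UOO key

def enqueue(uoo, payload):
    unit_queues[uoo].append(payload)

def deliver_next(uoo):
    # Only the head of a unit is deliverable; the remaining messages
    # stay blocked until processing of the head completes.
    return unit_queues[uoo].popleft()

for n in (1, 2, 3):
    enqueue("KEY1", "msg%d" % n)
enqueue("KEY2", "other")

assert deliver_next("KEY1") == "msg1"   # strict order within the unit
assert deliver_next("KEY2") == "other"  # KEY2 is not blocked by KEY1
assert deliver_next("KEY1") == "msg2"
```

The point of the model is only that ordering is scoped to the unit, not to the whole queue.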

But what happens in a High Availability environment using distributed queues when a node breaks down? How can the processing order be guaranteed under these circumstances? This article is a practice report answering these questions.

Preparing the test


For our tests we are using an OSB cluster with two nodes and an Admin Server; the WLS version is 10.3.6.

Configuring the JMS Server

Use the Admin Console (Admin Server) to perform the following steps:

  1. Define a data source (e.g. jdbc/jmsDS)
  2. Define a persistence store on each OSB cluster node using the data source from Step 1


- JDBCStore4JMS1 targets osb_server1 (migratable)

- JDBCStore4JMS2 targets osb_server2 (migratable)


  3. Define a JMS Server on each OSB cluster node targeting a migratable target. Note: When a JMS server is targeted to a migratable target, you have to use a custom persistence store that is configured and targeted to the same migratable target.


  4. Define a JMS Module targeting both JMS Servers; define an appropriate connection factory and a distributed queue.



  5. Pointing at the distributed queue, you can now see both member queues and the messages they hold.


Performing the test


A message producer sends 14 messages using UOO KEY1, KEY2, and KEY3 in the following way:

  • 3 messages with UOO: KEY1
  • 2 messages with UOO: KEY2
  • 9 messages with UOO: KEY3

The following was observed:

  1. The messages were received by the queue and distributed to the configured persistence stores
  2. Messages with the same UOO were stored in the same persistence store
  3. After a cluster node was shut down, all new messages with the same UOO as those processed by the JMS Server (and the persistence store) pinned to the disconnected node were rejected; all other messages continued to be enqueued (and dequeued)
  4. After the JMS Server (and its persistence store) pinned to the shut-down node was migrated to another working node – which took approximately 10 seconds – all rejected messages were processed and the distributed queue regained full functionality.
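Observation 2 above is consistent with the UOO key being hashed to pick a member queue, so that one unit stays pinned to one JMS server and its store. A minimal sketch of that idea (illustrative only; WebLogic's actual routing policy is configurable and more involved):

```python
# Illustrative model: pick a distributed-queue member by hashing the
# Unit-of-Order key, so the same key always maps to the same member.
MEMBERS = ["osb_server1", "osb_server2"]

def member_for_uoo(uoo_key):
    # Deterministic hash (sum of character codes) so the mapping is
    # stable across runs; real implementations use better hash functions.
    digest = sum(ord(c) for c in uoo_key)
    return MEMBERS[digest % len(MEMBERS)]

messages = [("KEY1", n) for n in range(3)] \
         + [("KEY2", n) for n in range(2)] \
         + [("KEY3", n) for n in range(9)]

placement = {}
for uoo, _ in messages:
    placement.setdefault(uoo, set()).add(member_for_uoo(uoo))

# Every unit ends up pinned to exactly one member (and one store).
assert all(len(members) == 1 for members in placement.values())
```

With such a pinning scheme, losing a member necessarily blocks exactly the units hashed to it until the member (or its store) is migrated, which matches observations 3 and 4.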




The processing order of messages can be guaranteed in a clustered environment when using JMS Unit-of-Order.

If a cluster node crashes, the distributed queue is unable to receive messages of the affected UOOs for approximately 10 seconds, which should in most cases be a sufficiently small amount of time.


Microservices architectures – Thoughts from a SOA perspective

Thoughts by: Sven Bernhardt, Richard Attemeyer, Torsten Winterberg, Stefan Kühnlein, Stefan Scheidt


A frequently discussed topic these days is the Microservices architectural paradigm. Discussions on various internet blogs and forums show a trend: proponents of this approach never tire of emphasizing how Microservices differ from a holistic SOA approach when it comes to breaking up or avoiding monolithic software architectures.

For this reason it’s time for the Cattle Crew team to take a closer look at this emerging architectural style and the corresponding discussions from a different perspective.

Microservices Architectures

Among others, Martin Fowler published a blog post about what is characteristic of Microservices and of applications built on the foundation of this architectural style [1]. According to this and other blog posts (see also [2], [3]), the goal of a Microservices approach is to prevent software systems from becoming monolithic, inflexible and hard to manage, by splitting a system into modular, lightweight and highly cohesive services. Applications built on this architecture should ensure agility with respect to changing business requirements, because the affected services of an application can simply be adapted and redeployed independently of other components.

Effectively, a Microservice is an in itself cohesive, atomic application that fulfills a dedicated purpose. It therefore encapsulates all needed elements, e.g. UIs and logic components, may also have its own separate persistent store, and may run in a separate JVM, to minimize impairment of other services. Furthermore, the implementation technology may vary from service to service. For each service the best-fitting technology should be used; there should be no restrictions on the technologies chosen.

To ensure consistency and compatibility with already existing components in case of changes, and to guarantee seamless release management of changed components, a Continuous Delivery procedure is indispensable for success. In addition, implementation efficiency benefits from the Microservices approach, because different components can be developed in parallel. Communication between the services, where needed, is done via lightweight protocols such as HTTP. Well-defined interfaces depict the service contracts.

Where there is light, there is also shadow…

One of the basic questions we asked ourselves when discussing the Microservices approach is how to determine, or which metrics to use to evaluate, whether a service is a Microservice or not. Following from that, it would be interesting to know what a service is once it no longer qualifies as a Microservice: is it immediately a monolithic service?

A clear definition of the differentiating and unique characteristics of a Microservice cannot be found. Metrics like lines of code or number of classes are not appropriate characteristics, so something vague, like specific business functionality, has to serve as the distinctive mark of a real Microservice. But that is not really measurable…

Besides the missing clarification of the Microservice term as such, building business applications using a modular Microservices architecture entails higher complexity than a classical monolithic approach. From conception through delivery to operations, this higher complexity may raise the following challenges:

  • The right level of service granularity: neither too coarse-grained nor too fine-grained
  • Comprehensible service distribution for scalability reasons and the corresponding monitoring
  • Complex testing procedures because of loose coupling and service distribution
  • Consistent and tolerant error handling, because a referenced service might be down for maintenance reasons (timeouts, retry mechanisms, etc.)
  • Reliable message delivery
  • Consistent transaction handling, because cross-service communication over non-transactional protocols like HTTP requires complex compensation mechanisms
  • Guarantee of data synchronization and consistency, when services have their own exclusive persistent stores
  • Sophisticated service lifecycle management, because every service has its own lifecycle, including challenges like how to deal with incompatible interface changes
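The error-handling challenge in particular (timeouts, retries when a referenced service is down) shows up in almost every distributed codebase. A minimal retry-with-backoff sketch; the helper names are invented for illustration and not tied to any framework:

```python
import time

# Minimal retry-with-exponential-backoff sketch for calling a
# possibly-down service. `call` is any function that raises on
# failure; all names here are illustrative.
def call_with_retries(call, attempts=3, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff

failures = {"left": 2}
def flaky_service():
    # Fails twice (e.g. service down for maintenance), then recovers.
    if failures["left"] > 0:
        failures["left"] -= 1
        raise ConnectionError("service down for maintenance")
    return "ok"

assert call_with_retries(flaky_service) == "ok"
```

Real systems would add jitter, per-error-type handling and a circuit breaker on top of this basic pattern.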

Another risk we see with a naive implementation of the Microservices architecture is a fall-back to the days of distributed objects: calling a multitude of services is not a good idea because of latency and (missing) stability.

Microservices and SOA

When dealing with the Microservices approach, it is conspicuous that one can often find at least one paragraph stating something like “Microservices vs. SOA”, depicting Microservices and SOA as conceptually different. In this context the idea of SOA is often reduced to technology terms like WS-* or ESB and therefore labeled heavyweight. This definition of SOA only covers possible implementation details; from a conceptual perspective, SOA is also an architectural paradigm that makes no premises regarding implementation details or the technologies to use for a concrete SOA implementation.

When looking at the preceding sections describing Microservices-based architectures and their challenges, it can be stated that very similar concepts and challenges arise in SOA-style architectures, because characteristics of service-oriented architectures are loose coupling, standards-based and stable service contracts, distributed services, and flexibility as well as agility regarding changing business requirements. The resulting challenges are nearly the same. For this reason, in our opinion, the two approaches are essentially not so different.

SOA-style architectures have historically often used SOAP-style communication. But in this regard we are observing a change: the number of REST-style SOA services is growing. This trend is mainly driven by the increasing need for multi-channel support, e.g. for mobile devices, where REST-style communication using a JSON-based data format is the preferred variant. The big software vendors, e.g. Oracle, have also recognized this trend and have therefore extended their out-of-the-box support for REST services.

In system architectures that are based on the SOA paradigm, an ESB is often used to integrate different enterprise applications. Classically it takes care of routing, protocol and data transformations, and service virtualization. Point-to-point integrations between systems can thus be avoided, which makes an IT system landscape more flexible when adding new applications and services. In our opinion this cannot be called heavyweight, and it is indispensable for increasing agility.

Furthermore, the Microservices proponents say nothing about service governance. This is also fine for some sorts of SOA services. We often differentiate services into public and private services. Systems consisting of multiple components are often organized as a set of collaborating private services [4]. These interfaces are not published enterprise-wide and therefore do not have to adhere to the stricter policies applied to public services. The official/public interfaces of a system are, in contrast, published as public services.

Thus, from our perspective, Microservices are nothing new, but rather an implementation of the concept of private services.


Microservices architectures primarily focus on a local, application-based scope and provide a very flexible and modular approach for developing easy-to-extend applications. In summary it can be said that the Microservices architectural paradigm seems to deliver great benefits, though it must be stated that the approach is not a completely new concept. Compared to that, a SOA approach has a farsighted, global scope aiming at the enterprise level. From a SOA perspective, Microservices could be understood as private services, which do not necessarily expose their functionality for reuse in other systems or services. But that is fine, because in service-oriented architectures one does not expose a service when there is no need for it – that is, no need for reuse in other applications or services.

Taking all points discussed in this article into account, we would recommend avoiding discussions about the differences between Microservices and SOA. Instead, it should be evaluated whether and how a coexistence of these two very similar approaches is possible, to deliver the most value for system architectures, making IT system landscapes more flexible and thereby promoting business agility.







Impressions from the ADF Community Meeting

Last Tuesday, the meeting of the German ADF Community took place at the CVC of the Oracle office in Berlin. This time the focus was less on the technical subtleties of the framework and more on the topic of “sales”. Due to this focus, besides well-known participants, many new faces from the sales departments of the various partner companies were present. OPITZ CONSULTING was represented by no fewer than three participants and thus, by headcount alone, made a clear commitment to ADF. The varied program left nothing to be desired.

How agilely the ADF Community can react to new requirements was already demonstrated by the first talk, on “Enterprise Mobility”. Due to flight connection problems the speaker could not be physically present, so a web and phone conference was set up on short notice. The group largely agreed that mobile solutions will play a stronger role in the enterprise sector in the future, and sees ADF well positioned in this context thanks to the Mobile Application Framework (MAF) and the acquisition of Bitzer. Following this trend, the second contribution of the day also revolved around MAF: Michael Krebs of esentri gave a very vivid talk on “Oracle MAF and the positioning of a mobile enterprise strategy”. In the afternoon, further talks followed on “ADF in the context of Oracle Fusion Middleware” (Ingo Prestel, Oracle) and “Oracle ADF and Forms modernization” (Andreas Gaede, PITSS).


Jochen Rieg of virtual7 moderated an interactive session on “Oracle ADF as a basis for modern enterprise applications”. The participants worked out business use cases, USPs of ADF, and statistics on the industries in which ADF projects are already being implemented. In this context it became clear that most partner companies currently implement projects for customers in the public sector. Since we had “doctored” the statistics ourselves, there was no reason not to trust them on this point. Besides the exciting sessions, the group discussed possible sales channels for ADF in a very lively and constructive manner. The participants agreed that the technology has clear advantages compared to other frameworks but is still too little known in the market. The numerous activities of the ADF Community will certainly contribute to changing this in the future. First signs can already be seen: for example, at this year’s conference of the German Oracle User Group, a dedicated slot is reserved for the ADF Community for the first time.

All in all, it was once again a very successful meeting of the ADF Community. The thematic focus on “sales” led to interesting discussions. We are happy to be part of this lively community and look forward to attending the next community meeting.


SOA & BPM Suite 12c – Experience Reports and Live Demos for Architects and Developers

  The new Oracle 12c versions are here!

SOA Suite & BPM Suite launch events by Oracle Platinum Partner OPITZ CONSULTING

    23.10.14 Düsseldorf | 28.10.14 München

Dear Sir or Madam,

The SOA Suite is Oracle’s solution for system integrations of all kinds. The BPM Suite offers everything for process automation and for building workflow solutions with BPMN 2.0.

Experience, in a compact evening program in a relaxed atmosphere with drinks, snacks and a subsequent dinner, why we as a project company consider the new 12c versions of both suites a great success.

We will keep it marketing-free, show many live demos, and will gladly discuss the new products in any depth during the breaks.

These topics await you:

  • Oracle SOA Suite & BPM Suite 12c: What do the new versions bring me?
  • New SOA Suite features: Experiences from a SOA Suite 11g upgrade to 12c
  • BPM Suite 12c live: Building workflows, including complex forms
  • Internet of Things: Live demo with Raspberry Pi, SOA Suite, BPM Suite and Oracle Event Processing 12c


Register now free of charge:
October 23, 2014, Oracle Deutschland, Düsseldorf office
October 28, 2014, Oracle Deutschland, Munich office

Details and registration

We look forward to your visit!

With kind regards,

Torsten Winterberg
Business Development & Innovation


OPITZ CONSULTING, Oracle Platinum Partner

Short recap on OFM Summer Camps 2014

Last week the Oracle Fusion Middleware Summer Camps took place in Lisbon. More than 100 participants attended the event, learning about the new features and enhancements arriving with the recently released FMW 12c. In four parallel tracks the highlights of the new major release were presented to the attendees; hands-on labs allowed them to get a first impression of the new platform features and the markedly increased productivity delivered by the enhanced, consolidated tooling.

The four tracks had different focuses regarding the new features of the 12c release of the Oracle Middleware platform:

  • SOA 12c – focusing on application integration, including Oracle Managed File Transfer (MFT), and fast data with Oracle Event Processing (OEP)
  • BPM 12c – focusing on Business Process Management, the new enhanced Business Activity Monitoring (BAM) and Adaptive Case Management (ACM)
  • SOA/BPM 12c (Fast track) – Combined track, covering the most important enhancements and concepts with reference to SOA and BPM 12c
  • Mobile Application Framework (MAF) Hackathon – Development of mobile applications using the newly released MAF (formerly known as ADF mobile)

The main topics addressed by the new OFM 12c release are:

  • Cloud integration
  • Mobile integration
  • Developer’s performance
  • Industrial SOA

Cloud integration

Integrating cloud solutions into grown IT system landscapes is complex. With SOA Suite 12c, Oracle provides a coherent and simple approach for integrating enterprise applications with existing cloud solutions. For this purpose, new JCA-based cloud adapters, e.g. for integrating with Salesforce, as well as a Cloud SDK are available. Service Bus can be used in this context to take care of transformation and routing, and forms the backbone of a future-oriented, flexible and scalable cloud application architecture.

Mobile integration

Mobile enablement of enterprise applications is a key requirement and a necessity for application acceptance today. The new JCA REST adapter can be used to easily REST-enable existing applications. In combination with Oracle MAF and Service Bus, Oracle provides a complete mobile suite, enabling seamless development of new mobile innovations.

Developer’s performance

To enhance development performance, the new SOA and BPM Quickinstalls are introduced. Using them, developers can have a complete SOA or BPM environment installed in 15 minutes (see the blog post of my colleague). Furthermore, new debugging possibilities, different templating mechanisms (SOA project templates, custom activity templates, BPEL subprocesses and Service Bus pipeline templates) as well as JDeveloper as the single IDE deliver a very good development experience.

Industrial SOA

Industrializing SOA is a main goal when starting a SOA initiative: transparent monitoring and management and a robust, scalable and performant platform are key to successfully implementing SOA-based applications and architectures. These points are addressed by the new OFM 12c release through the following features:

  • Lazy Composite Loading – Composites will be loaded on demand and not at platform startup
  • Modular Profiles – Different profiles are provided, which enable only the features currently needed (e.g. only BPEL)
  • Improved Error Hospital and Error Handling
  • Optimized Dehydration behaviour
  • Integrated Enterprise Scheduler (ESS)

Further main enhancements that were introduced regarding SOA and BPM Suite 12c were:

  • Oracle BPM Suite 12c: Definition of Business Architecture, including definition of Key Performance Indicators (KPI) and Key Risk Indicators (KRI) to provide an integral overview from a high-level perspective; ACM enhancements in the direction of predictive analytics
  • Oracle BAM 12c: Completely re-implemented in ADF; allows operational analytics based on the defined KPIs and KRIs
  • Oracle MFT: Managed File Transfer solution for transferring big files from a specified source to a defined target; integration with SOA/BPM Suite 12c can be done by new JCA-based MFT adapters

Looking back, a great and very interesting week lies behind me, providing a bunch of new ideas and impressions of the new Fusion Middleware 12c release. I’m looking forward to using some of this great new stuff soon in real-world projects.

Special thanks to Jürgen Kress for the excellent organization of the event! I’m already looking forward to the next SOA Community event…

IT-Security (Part 7): WebLogic Server, Roles, Role Mapping and Configuring a Role Mapping Provider

Key words: IT-Security, WebLogic Server, Authorization, authorization process, Role Mapping, Roles and  XACML Role Mapping Provider

Let’s continue with the Authorization topic. We discussed the authorization process and its main components, such as the WebLogic Security Framework and the Security Provider. Now we look at the Security Provider’s subcomponents: Role Mapping and Security Policies.

The Role Mapping: Is access allowed?

Role Mapping providers help to clarify whether a user has the adequate role to access a resource. With this role information, the Authorization provider can answer the “is access allowed?” question for WebLogic resources.[1]

The Role Mapping Process

Role mapping is the process whereby principals are dynamically mapped to security roles at runtime. The WebLogic Security Framework sends request parameters to the specific Role Mapping provider that is configured for a security realm, as part of an authorization decision. Figure 1 (Role Mapping Process) presents how the Role Mapping providers interact with the WebLogic Security Framework to create dynamic role associations. The result is a set of roles that apply to the principals stored in a subject at a given moment.[2]



Figure 1 Role Mapping Process

Let’s review each part again[3]:

  • The request parameters include information such as the subject of the request and the WebLogic resource being requested.
  • The Role Mapping provider contains a list of the roles. For instance, if a security policy specifies that the requestor is entitled to a particular role, the role is added to the list of roles that are applicable to the subject.
  • In response, the WebLogic Security Framework receives the list of roles.
  • These roles can then be used to make authorization decisions for protected WebLogic resources, as well as for resource container and application code. I’m going to discuss that in part 9.
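The steps above can be modeled in a few lines. The following toy sketch of the role-mapping decision uses invented policy data and names purely for illustration; it has nothing to do with the real WebLogic SSPI:

```python
# Toy model of role mapping: given a subject's principals and
# per-resource role policies, compute the applicable roles, then
# answer "is access allowed?". Invented data; not the WebLogic SSPI.
ROLE_POLICIES = {
    # resource -> role -> principals granted that role
    "someResource": {"Admin": {"weblogic", "Administrators"},
                     "Operator": {"ops_group"}},
}
ACCESS_ROLES = {
    # resource -> roles allowed to access it
    "someResource": {"Admin"},
}

def map_roles(resource, principals):
    # A role applies if any of the subject's principals is granted it.
    policies = ROLE_POLICIES.get(resource, {})
    return {role for role, granted in policies.items() if granted & principals}

def is_access_allowed(resource, principals):
    # Authorization step: do the mapped roles intersect the allowed roles?
    return bool(map_roles(resource, principals) & ACCESS_ROLES.get(resource, set()))

assert is_access_allowed("someResource", {"weblogic"})
assert not is_access_allowed("someResource", {"ops_group"})  # wrong role
```

The split between `map_roles` and `is_access_allowed` mirrors the division of labor between the Role Mapping provider and the Authorization provider described above.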

Configuring a Role Mapping Provider

WebLogic Server includes the XACML Role Mapping provider and the DefaultRoleMapper. In addition, you can use a custom Role Mapping provider in your security realm. By default, most configuration options for the XACML Role Mapping provider are already defined. However, you can set Role Mapping Deployment Enabled, which specifies whether this Role Mapping provider imports information from the deployment descriptors of Web applications and EJBs into the security realm. This setting is enabled by default. In order to support Role Mapping Deployment Enabled, a Role Mapping provider must implement the DeployableRoleProvider SSPI. The XACML Role Mapping provider stores roles in the embedded LDAP server.[4] The XACML Role Mapping provider is the standard Role Mapping provider for the WebLogic Security Framework. To configure a Role Mapping provider:

  • In the Change Center of the Administration Console, click Lock & Edit


Figure 2 Change Center

  • In the left pane, select Security Realms and click the name of the realm you are configuring.


Figure 3 Domain Structure: Click Security Realms



Figure 4 Summary of Security Realms


  • Select Providers > Role Mapping. The Role Mapping Providers table lists the Role Mapping providers configured in this security realm


Figure 5 myrealm: Role Mapping

  • Click New. The Create a New Role Mapping Provider page appears.


Figure 6 WebLogic Server default Role Mapping Provider: XACMLRoleMapper

  • In the Name field, enter a name for the Role Mapping provider. From the Type drop-down list, select the type of the Role Mapping provider (e.g. DefaultRoleMapper or XACMLRoleMapper) and click OK.


Figure 7 a New Role Mapping Provider: Default_1


  • Select Providers > Role Mapping and click the name of the new Role Mapping provider to complete its configuration.



Figure 8 Role Mapping Configuration

  • Optionally, under Configuration > Provider Specific, set Role Deployment Enabled if you want to store security roles that are created when you deploy a Web application or an Enterprise JavaBean (EJB) (See Figure 8 Role Mapping Configuration).
  • Click Save to save your changes.
  • In the Change Center, click Activate Changes and then restart WebLogic Server.

XACML Role Mapping Provider

As discussed above, a WebLogic security realm is configured by default with the XACML Role Mapping provider. It implements XACML 2.0, the standard access control policy markup language (the eXtensible Access Control Markup Language). The WebLogic XACML Role Mapping provider's data is saved as a .dat file, available at e.g. $Domain-Home/XACMLRoleMapper.dat, and it has the following options (see Figure 8 Role Mapping Configuration):

  • Name: The name of your WebLogic XACML Role Mapping provider.
  • Description: The description of your WebLogic XACML Role Mapping provider.
  • Version: The version of your WebLogic XACML Role Mapping provider.
  • Role Deployment Enabled: Indicates whether this Role Mapping provider stores roles that are created while deploying a Web application or EJB.

You can see the file structure in the following example: XACMLRoleMapper.dat contains different users and groups. Each user is assigned particular roles, policies and associated resources. For example, the description of the group and user “Administrators” is shown below:


Figure 9 XACMLRoleMapper.dat: description of Group and User “Administrators”

As you can see, a policy contains a Description, a Target and a Rule. Each element is associated with different attributes and in this form constitutes an “authorization matrix” that helps the application server decide about a user or a group. To be continued…


See also the previous parts of the IT-Security and Oracle Fusion Middleware series:


[1] Oracle® Fusion Middleware Securing Oracle WebLogic Server 11g Release 1 (10.3.6), E13707-06

[2] Oracle® Fusion Middleware Understanding Security for Oracle WebLogic Server 11g Release 1 (10.3.6), E13710-06

[3] Oracle® Fusion Middleware Understanding Security for Oracle WebLogic Server 11g Release 1 (10.3.6), E13710-06

[4] Oracle® Fusion Middleware Securing Oracle WebLogic Server 11g Release 1 (10.3.6), E13707-06

Finding differences in two Open-Office-Writer documents

If you write documents and get feedback from different people on different versions, it is a great pain to merge the documents and changes together. Microsoft Word has a comparison functionality that works quite well, but the function to compare documents in Open Office Writer has never worked for me the way I expected.

Fortunately OO stores documents as a zip file containing XML files. The main content of the document is in the file content.xml. After changing the extension of the OO Writer document to zip, it is possible to open the file with your favorite zip application and extract the content.xml file. If you do this for both versions, you can compare the two files with your favorite text comparison tool, and you will see … hmmm yes… thousands of changes. This happens especially if the documents have been edited with different versions of Open Office or Libre Office. Most of the changes are not relevant for your comparison.

So we would like to eliminate the changes we are not interested in, to get an overview of the real changes.

We will do this using Notepad++, the tool I use most at work. Additionally, we need the XML Tools plugin to format the document. Both are free.

We open both versions of content.xml with Notepad++ and first apply “Linearize XML” from XML Tools to both files.

In the next step we replace these six regular expressions with an empty string. This is done repeatedly until no further replacement is possible:

  1. [a-zA-Z0-9\-]+:[a-zA-Z0-9\-]+="[^"]*"
  2. <([a-zA-Z0-9\-]+:)?[a-zA-Z0-9\-]+\s*/>
  3. <([a-zA-Z0-9\-]+:)?[a-zA-Z0-9\-]+\s*>\s*</([a-zA-Z0-9\-]+:)?[a-zA-Z0-9\-]+>
  4. <text:changed\-region\s*>.*?<\/text:changed\-region>
  5. <office:annotation\s*>.*?<\/office:annotation>
  6. <text:bookmark-ref\s*>.*?<\/text:bookmark-ref>

Finally we use the “Pretty print (libxml)” function of XML Tools to get the XML files formatted. Now it is possible to compare the two files with a text comparison tool, and you will see the real text changes.
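If you need to repeat this comparison often, the replace-until-nothing-changes step can also be scripted outside Notepad++. A sketch in Python using the six patterns above (illustrative; it assumes the XML has already been linearized):

```python
import re

# The six cleanup patterns from the article, applied repeatedly until
# no further replacement is possible (a fixpoint), as in Notepad++.
PATTERNS = [
    r'[a-zA-Z0-9\-]+:[a-zA-Z0-9\-]+="[^"]*"',   # attributes
    r'<([a-zA-Z0-9\-]+:)?[a-zA-Z0-9\-]+\s*/>',  # empty elements
    r'<([a-zA-Z0-9\-]+:)?[a-zA-Z0-9\-]+\s*>\s*</([a-zA-Z0-9\-]+:)?[a-zA-Z0-9\-]+>',
    r'<text:changed\-region\s*>.*?<\/text:changed\-region>',
    r'<office:annotation\s*>.*?<\/office:annotation>',
    r'<text:bookmark-ref\s*>.*?<\/text:bookmark-ref>',
]

def strip_noise(xml):
    previous = None
    while previous != xml:  # loop until a full pass changes nothing
        previous = xml
        for pattern in PATTERNS:
            xml = re.sub(pattern, "", xml, flags=re.DOTALL)
    return xml

sample = '<text:p text:style-name="P1">Hello</text:p><text:s/>'
print(strip_noise(sample))  # -> <text:p >Hello</text:p>
```

Attributes and empty elements are stripped, while elements with real text content keep their (now attribute-free) tags, so a diff of two cleaned files shows only the text changes.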

Bernhard Mähr @ OPITZ-CONSULTING published at

