The Scheer PAS Process Engine


When it comes to automating and orchestrating workflows, or even complex end-to-end business processes, it is beneficial to look under the hood of potential solution providers and understand how the process concept is supported in practice.

This includes not only the implementation phase, i.e. designing the process with a standardized modelling notation, but especially the way processes are executed. Most business processes involve asynchronous processing, require the persistence of state information, and demand proper error handling. It is important that the underlying technological concepts of the engines executing the processes support these requirements.

In this blog post, I will explain the basic concept of the Scheer PAS Process Engine.

Process Design and Implementation

The Scheer PAS Designer is the Low-Code tool of the PAS platform for designing and implementing end-to-end processes. The modelling of the process is done in BPMN (Business Process Model and Notation), a standardized graphical notation that provides a visual representation of the steps and activities involved in a business process. BPMN has the advantage that it is easily understood by process experts, business users and developers, which makes it a powerful notation not only for modelling but also for automating processes. Automating a process with Scheer PAS always means drawing a BPMN model in the first step. The figure below shows a simple example using the most common notation elements:

[Figure: a simple example process in the Scheer PAS Designer]

During process implementation, the modeler connects these elements (task, gateway, event, ...) with the activities to be carried out. These can be pre-built functional building blocks from libraries, user forms, or specific logic captured in a variety of ways (script, mapping diagram, activity diagram), for which Scheer PAS brings its own Low-Code editors.

The Process Execution

Process models are translated by Scheer PAS into semantically equivalent state machines, a powerful approach for defining systems with complex and dynamic behaviour. A state machine describes the behaviour of a system that can exist in different states and transition between them based on certain conditions or events. Executing these state machines is where the Scheer PAS Runtime comes into play. Related to BPMN process execution, the key elements of this process state machine can be described as follows:

States: A state represents a specific condition or situation that a process instance can be in at a given point in time.

Transitions: Transitions define the movement of a process instance from one state to another in response to a trigger or event.

Events: Events are occurrences or stimuli that can trigger a transition from one state to another.

Actions: Actions are tasks or operations associated with a state or a transition.

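To make the terminology above tangible, here is a minimal, generic state machine sketch in Python. It is not the Scheer PAS implementation (the state names, the transition table and the persistence hook are invented for illustration), but it shows how states, transitions, events and actions relate, and why persisting the state makes an instance resumable.

```python
import json

class ProcessInstance:
    """A minimal state machine for one process instance (illustrative only)."""

    # Transition table: (current state, event) -> (next state, action)
    TRANSITIONS = {
        ("Order Received", "validate"): ("Order Validated", "check_order_data"),
        ("Order Validated", "approve"): ("Order Approved", "notify_customer"),
        ("Order Validated", "reject"):  ("Order Rejected", "notify_customer"),
    }

    def __init__(self, instance_id, state="Order Received", data=None):
        self.instance_id = instance_id
        self.state = state          # well-defined state at all times
        self.data = data or {}      # instance-related process data

    def handle(self, event, payload=None):
        """Trigger a transition in response to an event."""
        try:
            next_state, action = self.TRANSITIONS[(self.state, event)]
        except KeyError:
            raise ValueError(f"Event '{event}' not allowed in state '{self.state}'")
        print(f"[{self.instance_id}] {self.state} --{event}--> {next_state} (action: {action})")
        self.state = next_state
        if payload:
            self.data.update(payload)
        self.persist()

    def persist(self):
        """Persist state and data so the instance can be resumed after a crash."""
        with open(f"instance_{self.instance_id}.json", "w") as fh:
            json.dump({"state": self.state, "data": self.data}, fh)

# Several instances can run in parallel, each at a different stage:
a = ProcessInstance("A-001")
b = ProcessInstance("B-002")
a.handle("validate", {"customer": "ACME"})
b.handle("validate")
a.handle("approve")
```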

Following this state machine approach gives our process engine a set of advantages when it comes to executing a business process:

Asynchronous processing: The state machine concept is inherently asynchronous. This means that it is not necessary for previously started process instances to have ended in order to start a new process instance. In other words, several process instances can run in parallel, each one at a different stage of the process.

Well-defined state: Each running process instance is in a well-defined state, at all times. It is always clear and transparent which actions have been performed, which events have occurred and what is still to happen.

Data safety: Information about the current state of a process instance, as well as instance-related process data, can be persisted. This means that even if a running process instance is aborted for unforeseeable reasons (e.g. system crash, power outage, …), the data is not lost, and the process instance can be restarted in exactly the same state with the same data.

In addition to the process state machine for executing the BPMN process, the Scheer PAS process engine also contains a service state machine, which encloses the process state machine.

This service state machine consists of a small set of states surrounding the process state machine, defining the Initialized, Aborted, Error and Done states of the process. This allows transitions from each process state into the Error, Aborted or Done state without the need to explicitly model these options in the BPMN. The main advantage of this approach is how potential errors can be handled. Let’s assume the following example:

One activity in the BPMN represents the task of connecting to an SAP system and fetching data about a business partner. For some reason, the SAP system is not available at the time the process is executed. If the developer has special requirements for error handling and has modelled them in the implementation of the process, then this explicit error handling defines what will happen. An example of explicit error handling could be a timer event that triggers a reconnect to the SAP system on the next working day. The original process proceeds as soon as the connection can be established.

If, however, there are no specific requirements, and the potential error is not explicitly taken care of – which is mostly the case in practice – then the process automatically transitions into the Error state of the service state machine. This prevents the process from getting stuck in some undefined intermediate state. There are now two options for proceeding from this state: the activity can be repeated manually once the SAP system is available again, or an automatic retry can be configured in which the process engine retries the SAP connection at regular intervals.
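How such an automatic retry can work is sketched below. This is a generic illustration, not Scheer PAS configuration syntax: the fetch_business_partner placeholder, the retry count and the interval are all invented for the example.

```python
import time

def fetch_business_partner(partner_id):
    """Placeholder for the SAP call that may fail while the system is down."""
    raise ConnectionError("SAP system not reachable")

def run_with_retry(activity, *args, retries=5, interval_seconds=60):
    """Retry a failed activity at regular intervals instead of leaving the
    instance stuck; after the last attempt it stays in the Error state."""
    for attempt in range(1, retries + 1):
        try:
            return activity(*args)
        except ConnectionError as exc:
            print(f"Attempt {attempt}/{retries} failed: {exc}")
            if attempt == retries:
                raise          # instance remains in the Error state, data is kept
            time.sleep(interval_seconds)

# Example (commented out because the placeholder always fails):
# run_with_retry(fetch_business_partner, "BP-4711", retries=3, interval_seconds=10)
```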

In general, this approach allows the developer to identify errors that occur in production, fix the process implementation (including proper error handling where reasonable), redeploy the improved service and restart the process instances from the state in which they went into error. All this without losing data or actions that were processed in the affected instances before the error occurred (see Data safety above), and without affecting other running instances of the process.

REST Interface for processes

Within the Scheer PAS platform, each implementation of a process is built and deployed as an individual service in a Docker container. This has the advantage that each process runs separately and is not affected by errors or maintenance of other services/processes.

To access the process, the Scheer PAS Process Engine automatically generates a REST interface for the process state machine. This REST API provides endpoints to gather information on BPMN process instances and their state(s), and to trigger transitions. This approach also makes it possible to realize inter-process communication via REST APIs, to build a custom UI layer on top of the process execution, or to manage and publish process APIs via Scheer PAS API Management.
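As an illustration of what working against such a generated REST interface can look like, here is a hedged sketch that lists instances and triggers a transition. The host name, paths, payload and token handling are hypothetical; the real endpoints are defined by the interface generated for your own service.

```python
import requests

BASE_URL = "https://pas.example.com/services/order-process/api"   # hypothetical base URL
HEADERS = {"Authorization": "Bearer <access-token>"}              # token obtained via Keycloak

# List running instances of the BPMN process and inspect their state
resp = requests.get(f"{BASE_URL}/instances", headers=HEADERS, timeout=10)
resp.raise_for_status()
for instance in resp.json():
    print(instance["id"], instance["state"])

# Trigger a transition on one instance, e.g. from another process or a custom UI
resp = requests.post(
    f"{BASE_URL}/instances/4711/events/approve",                  # hypothetical path
    json={"approvedBy": "jane.doe"},
    headers=HEADERS,
    timeout=10,
)
resp.raise_for_status()
```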

Outlook: Scaling of Processes

With one of the upcoming Scheer PAS releases we will move to a Kubernetes platform architecture. In addition to separate services for each process, this will allow easy and automated scaling, i.e. several replicas of the same service/process running in parallel. An individual process can be deployed several times, depending on the required resources, to handle more load in parallel or to provide a high-availability setup.

The highlight is: Even when there are several replicas of the same service, each executing different instances of the same process, the persisted data and state information will be synchronized between the replicas. This means if one replica of the service crashes, another replica will pick up the started process instances at the point where they were stopped and process them further.

I will explain this unique and powerful concept a bit more in detail in another blog post.

Dr. Christian Linn
Head of Product Development
Process Automation: It's not a Circle, it's a Spiral!

From Homer to Corner

It is becoming increasingly difficult (some may say almost impossible) to find a company which has not in some way delved into AI. It is outright impossible to find one whose marketing materials do not mention “AI” somewhere. Still, another term gets thrown around even though AI constantly tries to insert itself into it. That term is “Process Automation”.

It’s strange to think that the first time the word “automaton” was used, it was used by a poet, not an engineer. Homer used the word to describe doors that open by themselves and a wheeled tripod that moves on its own. Still, unlike today, it did not involve any sense of autonomy, and for some, sadly, it still does not.

Let’s be honest…


Whenever someone throws around the term “Process Automation”, the first thing that comes to everyone’s mind is a big stack of papers which involve some kind of repetitive work. To be fair, it is firmly based on our collective experience, or at least on perceived time spent on getting rid of the aforementioned stack of papers.


Fig. Perceived time spent on repetitive work across countries (Clockify.me)

It's not just about those same old tasks gobbling up all our time. What's even more surprising is how many things we end up doing over and over again just because no one read Homer's poetry. From Germany (5 hours and 9 minutes per week) to Japan (2 hours and 56 minutes per week), people are spending time not only on repetitive but also on duplicate work.


To get back on track: of course there is repetition and duplication of work in every enterprise. Process automation in its bare essentials can get rid of some of it, saving time and money. Also, as previously mentioned, anyone can throw in the term AI to “enhance” or “further improve” process automation, but the problem is more often than not rooted somewhere deeper.

Back on the predetermined track


For Homer, for James Watt, and for a lot of visionaries and company leaders, automating any kind of work involves some kind of control mechanism (physical or virtual) that automatically follows a sequence of operations or responds to predetermined instructions. In other words, whoever wants to reduce that hypothetical (or very real) stack of papers must create a repeating flow of operations and triggers.

Around four years ago an event occurred that pushed every industry into overdrive. Rapid digitization was a must for some, and the end for others. However terrifying that event was, it got a lot of us thinking about getting rid of the papers entirely. It got us thinking about automation without the word predetermined. It got us excited about the fact that the stacks of paper could be gone forever. However, it took almost two years for the real missing piece of this paperless, stackless and never-again-repetitive puzzle to appear.


Of course, it was the massive increase in AI popularity. Not just “an AI”, since AI was popular even before ChatGPT became a thing, but generative AI, which promised the long-forgotten autonomy. Homer’s automatic doors could now suddenly open when they “feel” like they should, his wheeled tripod could not just move but also choose where and when to move, and the stack of repetitive jobs could truly disappear, not just be reduced. Why is there a “could” in each example?

It starts with an idea.


True process automation in any kind of work cannot (or at least shouldn’t) be generated from a bucket of existing workflows. Its creation and details are too specific to be automated, and its benefits are too specialized to be easily foreseen by an AI. In other words, to reap the true benefits of process automation, one cannot just throw a bunch of variables at the screen, or into a prompt, and expect results.

Still, putting everything on the screen provides a basic but structured overview to start the brainstorming. The entire idea of process automation thrives on brainstorming, since the majority of the work is creative. Much like psychologists lead their patients to conclusions they already carry within them, simply putting the business problems and the processes entangled with them in one place shapes the new and creative ideas that help to solve them.

Just like at the inception of the word automation (even though now we’re talking about IT systems), it’s not only the engineers who need to look at the problems and conjure ideas. It’s the poets of the enterprise world. AI, as helpful as it may one day be, cannot be the creative lead on process automation, as some marketing materials would lead one to believe. Generative AI learning revolves around rehashing old ideas, which is an immediate block to creativity. And that creativity is almost always required when automating processes in an enterprise.

It's not a circle, it’s a spiral!


Still, some basic jobs in an enterprise are circular, and they could, and maybe even should, be automated in the good old “getting rid of the paperwork” way. A few percent saved here and a few there all add up and save a lot of time for the company, but why does it still feel like pushing progress into a corner?

Process automation, with or without AI, cannot push any enterprise into real digital transformation. It helps, it speeds things up, it keeps the stacks of paper to a minimum, but it does not change anything. The processes should not just be automated, and they should not be thrown at the board while AI tries to mine the data for suggestions… they should be rethought.

Yes, rethinking your business takes a lot of tries, and it can sometimes seem like going in circles... but it’s not a circle, it’s a spiral. Automating what you already have and do will surely reduce the radius of the circle, but observed from another dimension (literally), it is the equivalent of moving down the spiral. Broadening one’s views and rethinking the core processes is a long and difficult task, but with the right platform underneath for support, it can actually lead away from the corner towards an agile future.

Ivan Tadic
Technical Content Marketing Manager

Identity & Access Management @ Scheer PAS

IAM is crucial for maintaining security, compliance, and efficiency in modern IT environments, especially as organizations deal with an increasing number of digital identities and diverse systems.  

That’s why it is not surprising that many prospects ask us how identity and access management works in the Scheer PAS platform. In this blog post I want to shed some light on this topic by explaining the basic concepts and how they are realised in Scheer PAS.

Authentication and Authorization – what’s the difference?

When talking about Identity and Access Management, one can distinguish two separate aspects:  

Authentication is the process of verifying the identity of a user, system, or entity. It ensures that the person or system claiming to be a particular identity is, in fact, who or what it claims to be. When you log in to an online account by entering your username and password, the system checks these credentials against stored information to confirm your identity.  

Authorization is the process of granting or denying access to specific resources or actions based on the authenticated user's identity and their permissions. Once a user's identity is confirmed through authentication, authorization determines what actions or resources that user is allowed to access. After logging into an email account (authentication), authorization determines whether you have permission to read, send, or delete emails, based on your user role or privileges. 

OAuth, OpenID Connect, SAML, …?

For exchanging authentication and authorization information, several standard methods and protocols exist. The most widely used are OAuth, OpenID Connect and SAML (Security Assertion Markup Language).

OAuth is primarily designed as an authorization protocol to define user access for specific resources hosted by a service provider without exposing the user's credentials. It is an open standard, commonly used for securing APIs and authorizing third-party applications to access user data. 

OpenID Connect (OIDC) is a specific authentication layer built on top of OAuth 2.0. While OAuth is primarily focused on authorization, OpenID Connect extends it to provide information about the end user, including authentication details in the form of JSON Web Tokens (JWTs). These JWTs are digitally signed and can be verified by the intended recipient. 

SAML is an XML-based standard primarily designed for exchanging authentication data between parties, particularly in the context of web browser single sign-on (SSO). It is often used in enterprise environments.  
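Of the three, the JSON Web Tokens used by OpenID Connect are the easiest to make tangible in code. Here is a minimal verification sketch using the PyJWT library; the JWKS URL, issuer and audience are placeholders for whatever your identity provider publishes.

```python
import jwt                      # PyJWT
from jwt import PyJWKClient

ISSUER = "https://idp.example.com"                                # placeholder issuer
jwks_client = PyJWKClient("https://idp.example.com/.well-known/jwks.json")  # placeholder JWKS URL

def verify_id_token(token: str) -> dict:
    """Verify the signature and standard claims of an OIDC JSON Web Token."""
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="my-client",          # placeholder client id
        issuer=ISSUER,
    )
```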

Single-Sign-On, Multi-Factor-Authentication

Modern Identity & Access Management solutions provide additional convenience and security features that are nowadays de facto standard for enterprise applications. The most important ones are Single-Sign-On and Multi-Factor-Authentication.  

Single Sign-On (SSO) is an authentication process that allows a user to access multiple applications or services with a single set of login credentials (such as username and password). The main idea behind SSO is to simplify the user experience by reducing the need to remember and enter different usernames and passwords for each application.  

The goal of Multi-Factor Authentication (MFA) or Two-Factor Authentication (2FA) is to increase security by requiring users to provide multiple forms of identification to access a system or application. Typically, the two factors are: Username and password as a first factor and a code from a mobile app as a second factor. This significantly improves security by reducing the risk of unauthorized access, even if login credentials are compromised. 

What are we doing with Scheer PAS?

In the Scheer PAS platform, we use Keycloak as the central IAM solution. Keycloak is one of the most popular enterprise-ready open source IAM solutions and provides a rich feature set: it supports Single Sign-On, Multi-Factor Authentication and several protocols such as OpenID Connect, OAuth and SAML.

In Scheer PAS, we rely on OAuth 2 and OpenID Connect as state-of-the-art authentication and authorization methods – both for Scheer PAS internal components like PAS Designer, Administration, etc., as well as for custom integration services, APIs and applications built with the Scheer PAS Designer.  

[Figure: the Identity & Access Management approach in Scheer PAS]

Let’s assume, for example, that a user wants to access a custom-made application. The browser on the user’s end device forwards the user to the login page provided by Scheer PAS Keycloak. When the user enters their credentials, an authentication request is sent to Keycloak, which checks that the user’s identity exists and the credentials are correct. The browser then receives an authorization token from Keycloak containing information about the access rights for the requested resource, e.g. an application for lead management. The user is redirected to the application URL, sending along the authorization token, whose signature is verified and whose access rights are checked. If the required access rights exist, the application is opened in the browser.
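For developers, the flow described above boils down to standard OpenID Connect calls against Keycloak. The sketch below shows the non-interactive variant (client credentials grant) instead of the browser redirect; the base URL, realm, client and application endpoint are placeholders, not actual Scheer PAS values.

```python
import requests

KEYCLOAK_BASE = "https://pas.example.com/keycloak"    # placeholder base URL
REALM = "scheer-pas"                                  # placeholder realm name

# 1. Obtain a token from Keycloak's standard OIDC token endpoint
token_resp = requests.post(
    f"{KEYCLOAK_BASE}/realms/{REALM}/protocol/openid-connect/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "lead-management-app",           # placeholder client
        "client_secret": "<secret>",
    },
    timeout=10,
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# 2. Call the protected application/API with the bearer token;
#    the resource verifies the token's signature and access rights.
app_resp = requests.get(
    "https://pas.example.com/app/lead-management/api/leads",   # placeholder URL
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
app_resp.raise_for_status()
```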

In terms of authentication features, Scheer PAS also supports Single-Sign-On, to allow users to access multiple applications or services with a single set of login data, and Two-Factor Authentication to enhance security. 

For managing user identities there are two options: either customers store the user information within Scheer PAS Keycloak itself, or the Keycloak instance is connected to external user federation via LDAP (e.g. Active Directory) or to external identity providers via OpenID Connect 1.0 and SAML 2.0. Scheer PAS also provides an easy-to-use application to manage access rights and enhance user information, encapsulating the technical complexity of Keycloak.

In summary, Scheer PAS provides built-in Identity & Access Management with state-of-the-art convenience and security features, while at the same time allowing high flexibility for adjustments to customer-specific needs, e.g. the integration of several external identity providers. For securing access to APIs, Scheer PAS also includes a separate API Gateway. But more on this, and about API Management in general, in one of the next blog posts – stay tuned by following Scheer PAS on LinkedIn.

Dr. Christian Linn
Head of Product Development
Code to the moon

Now we need more than 145,000 lines of code...


On July 20th, 1969, at 20:17 UTC, the Apollo Lunar Module Eagle landed on the Moon as part of the Apollo 11 mission. Just six hours and 39 minutes later, Neil Armstrong etched his name in history as the first human to set foot on the Moon. Later, he described this majestic event as "one small step for a man, one giant leap for mankind." While the world marveled at the astronauts, only a few marveled at the 145,000 lines of code that safely guided them to the Moon and back.

"Software Engineer"


Margaret Hamilton, a lead computer scientist on the mission, played a pivotal role in creating those 145,000 lines of code. She not only contributed to this remarkable feat but also coined the term "Software Engineer" to describe her work; unlike today, software development in the '60s closely resembled an engineering task. Commended for inspiring women to pursue engineering careers, Hamilton also relied on core rope memory, ironically woven mostly by women, to significantly expand the memory available on the Apollo Lunar Module.

To grasp the code's sheer size, look at the famous photograph of Margaret Hamilton standing next to a stack of handwritten code created by her team for the Apollo mission computers. The code's impressiveness extended beyond size: it featured dynamic software loading for adaptability, priority scheduling for error handling, rope memory programming to increase usable memory, and rigorous documentation and verification.

[Image: a core rope memory module]

Although deemed a success, it took Margaret and her team several months to write, test, and document the code. Focused solely on the code, this team of creative computer scientists played a small but critical role in the entire mission, primarily ensuring the safe landing of the Lunar Module Eagle. Their emphasis on the code, written in assembly for efficient execution, allowed the team to iron out bugs and enable the astronauts to complete their mission.

How did they do it?

Focus on the task!

Just think about it! Large enterprises today face challenges similar to those of the Apollo 11 team. Their businesses venture into an unknown, ever-changing market, where operational circumstances change rapidly, and their "astronauts" need constant support from "computer scientists." Comparing this unique and dangerous ‘60s mission to the everyday life of a large enterprise might seem unfair—unfair not to the Apollo 11 mission, but the other way around.

Flying closer to the Sun


Similar to Apollo 11, enterprise IT systems soared to the cloud. While Apollo journeyed far beyond the clouds, IT systems for large companies didn't just make a brief visit; they permanently moved to the cloud to enable further development and stay competitive. This shift required significant work, not only for migration but also for laying the foundation for future endeavors. Many companies completely reorganized both their company and IT structures after moving to the cloud.

Blurring the lines


Margaret and her team weren't just excellent computer scientists; their main task was writing software to safely transport astronauts to the moon and back, requiring knowledge of physics and astronomy. Similarly, today's modification or creation of new IT systems supporting business operations demands collaboration between rare IT specialists and businesspeople. Creating "fusion teams" that include businesspeople from different branches alongside IT specialists has enabled some enterprises to expedite development and yield more focused end products.

Error handling


Not to diminish the impressive work of the computer scientists' team on Apollo 11, the fast-moving world we live in demands faster and more efficient error handling than the methodology Margaret helped develop. In IT enterprise systems and general company management, errors are unavoidable. Therefore, more agile methodologies, followed by flexible and adaptable IT systems, have become necessary for any enterprise striving for success.

No time for focus

The original team needed several months to create the software that would land the Eagle on the moon, mostly limited by the technology of the time. While technology limitations still exist, they no longer significantly hinder development. RAD (Rapid Application Development) serves as a basis for the majority of current work but is being overtaken by the necessity of creating new applications to keep businesses growing. Instead of handwriting 145,000 lines of code, enterprises now require agile technologies that don't involve writing any code at all. Low-Code and No-Code application development are already essential for many enterprises to succeed.

In essence, modern business environments allow no time to focus on creating comprehensive, impressive stacks of code just to complete a single mission. Still, a modern equivalent of a cutting-edge IT system limited only by technology, similar to the one the Apollo 11 computer engineering team created, requires agility and flexibility. Although everything has changed from 1969 to today, some things have stayed the same.

Conclusion

To wrap up our journey from Apollo 11 to today's enterprise IT: "Yes, now we need more than 145,000 lines of code, but we do not have the time to write them!" Margaret Hamilton and her team faced the tech challenges of the '60s but left a lasting mark on how we approach innovation. Their knack for focus, teamwork, and careful workmanship still shapes our IT world, where staying nimble and flexible is key. In comparison, the success an enterprise seeks nowadays can seem as unreachable as the moon was back in the '60s, and the technology as limiting as punch cards were, but the key to unlocking it lies in embracing change.

The story of Apollo 11 isn't just about the past; it is something we still encounter. We're reminded to adapt, work together across different fields, and keep pushing for efficiency. While our tools have evolved, the heart of innovation remains the same. As we navigate the tech universe, let's remember Apollo 11 as a guide, urging us to explore new horizons in the ever-changing landscape of technology and business.

Ivan Tadic
Technical Content Marketing Manager
On the cloud

Moving up in IT World


As your company moves up in the world, so grows the need to upgrade your IT infrastructure. Of course, upgrading one's own infrastructure keeps getting more expensive and complicated, so why not clear out the server rooms? Whether it is windy or not, the business atmosphere is mainly cloudy these days, so why not move some of your infrastructure to the place where the sun may shine upon your systems?

Where do you plug in?


Transferring your entire IT system infrastructure to a cloud-based solution certainly isn't an easy task. It also leaves a valid question: “Where do you plug in?” Less metaphorically speaking, easy access, not only for you as a manager but also for your applications and services, is a must-have. Thankfully, cloud service providers thought about this, and so, even before moving to the cloud became a real trend, there were APIs.

Extremely short description

What is an API?


Do not worry, this will not get any more technical than it needs to. The easiest way to describe an API is by comparing it to an ethernet cable hanging out from your applications. The cable is infinitely long and can basically connect to everything. Now that you’ve got a master’s degree in most of the computer sciences, moving your IT systems to the cloud would not mean that you and your clients/users would lose any ease of access to the services and applications. Furthermore, you would not need to worry about the physical IT system infrastructure and its safety.

Here is a slightly more technical definition:

What is an API?

An API, or application programming interface, is a set of defined rules that enable different applications to communicate with each other. It acts as an intermediary layer that processes data transfers between systems, letting companies open their application data and functionality to external third-party developers, business partners, and internal departments within their companies.

The definitions and protocols within an API help businesses connect the many different applications they use in day-to-day operations, which saves employees time and breaks down silos that hinder collaboration and innovation. For developers, API documentation provides the interface for communication between applications, simplifying application integration.

Source

Now that you’re already rethinking your entire IT system strategy, enough of the questions. Let’s imagine a scenario and have fun in the clouds.

Having fun in the clouds


With your head already in the clouds, let’s take a look at some APIs your applications and services could use if you decided to go through with the transition. Some of them are really useful, some you would rather develop yourself, and some are just weird, but they all paint a good picture of cloud applications and services and their possibilities.

1. REST Countries


Considering creating an application or service that requires global country data? Look no further—this API has you covered. Sustained by donations, this complimentary API offers details on a country's currency, capital, region, language, and more. Try clicking on this link to get more info about Germany!
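For example, fetching the basic facts about Germany from your own code could look roughly like this (the v3.1 endpoint and field names follow the public REST Countries documentation at the time of writing and may change):

```python
import requests

# Ask REST Countries for Germany; no API key is required
resp = requests.get("https://restcountries.com/v3.1/name/germany", timeout=10)
resp.raise_for_status()
germany = resp.json()[0]

print(germany["name"]["common"])               # Germany
print(germany["capital"][0])                   # Berlin
print(germany["region"])                       # Europe
print(list(germany["currencies"].keys())[0])   # EUR
print(list(germany["languages"].values())[0])  # German
```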

2. Open Weather API

You've likely encountered this API in action on your phone before. It supplies weather data for over 200,000 cities. Additionally, you can leverage the API to retrieve historical weather data for your application, enabling tasks such as analysis or prediction. The best part: if your application or service calls it fewer than 1,000 times per day, it's free.

3. Bored API


Enhance your personal website with the Bored API, ensuring perpetual engagement for users. Upon request, it provides a random activity suggestion, and you can customize parameters such as the type of activity and the number of participants. Keep your users entertained and intrigued! You can actually try this one in your browser on this link.

4. Pokemon API


There is a small chance that your business apps and services will need every detail about every Pokemon that is out there but still… Explore the comprehensive world of Pokemon data with this API, consolidating information in one accessible place. With over 250,000,000 API calls served monthly, it allows you to request details by sending the Pokemon's name and receive a JSON response (the code that you’re seeing but your applications could make it look nice) containing all relevant information. Plus, the best part—no API key required. Here is all the info your services could retrieve for free about my favourite Pokemon.

5. News API

Maybe your applications and services would benefit from having a list of news about a certain topic from all around the world?

For integrating news data into your project, consider using this API, preferred by over 500,000 developers worldwide. It facilitates the discovery of articles and breaking news headlines from various sources and blogs across the web. Of course, not everything is free (some would say that nothing in life is) so you can get an API key on this link.

Interlude

I have already explained how APIs work, and of course, moving your IT system infrastructure to the cloud does not mean that the data your services and applications serve should not be monitored and secured. By locking your APIs behind API keys and monitoring how those keys are used, you actually get more security than by hand-picking who may pull information from the on-premises servers that are racking up your company's electricity and servicing bills.

 

And the last but not least (although the weirdest):

6. Chuck Norris API


Access a plethora of hand-curated Chuck Norris facts with this free JSON API. Not only does it offer a collection of amusing anecdotes, but it also features integration with Slack and Facebook Messenger. For instance, you can effortlessly retrieve a random Chuck Norris joke in JSON format. Enjoy the humor and seamless integration! Just to prove how API-s can be awesome, each time you click on this link, there is a new joke there just for you!
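And if clicking is not enough, the same random-joke endpoint can be called from code. A minimal sketch, following the public api.chucknorris.io documentation:

```python
import requests

# No API key needed; every call returns a new random joke
resp = requests.get("https://api.chucknorris.io/jokes/random", timeout=10)
resp.raise_for_status()
print(resp.json()["value"])   # the joke text is returned in the "value" field
```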

It's all fun and games until...

...you realize you need to manage the access to your API-s

Jokes, Pokemon facts, and weather data aside: since a lot of companies are moving their IT infrastructure to cloud solutions, there is a rising need for software that helps them manage all the connection points to their applications and services. In other words, managing the APIs your company uses is not an easy task, but it can be. More on that on this link.

Ivan Tadic
Technical Content Marketing Manager
Dos and Don'ts

One ERP to Rule Them All?


Enterprise Resource Planning (ERP) systems play a pivotal role in modern businesses, helping streamline processes and improve efficiency. However, managing multiple ERPs can lead to fragmentation and inefficiencies. To overcome this challenge, companies often embark on an ERP consolidation journey. So if your company is already on its journey or is thinking about starting one, let's first jump into the key strategies for driving a successful ERP consolidation.

The Imagined Journey to ERP Consolidation


Before we explore the dos and don'ts, let's take a look at the idealized path that many organizations think they will go through. In this ideal scenario, different regional and global ERP systems are streamlined into a single, cohesive global ERP system. The benefits are mostly clear: streamlined operations, improved data management, and enhanced decision-making capabilities.

Side note: If you want more info on the imagined journey towards consolidation (and even on the implementation stage), check out this link.

The Real Journey to ERP Consolidation


However, the reality of consolidation is always more complex. Organizations have to deal with multiple ERP systems, each with its own unique characteristics and challenges. The road to consolidation can involve a mix of regional ERPs, legacy systems, and newly acquired solutions. This complexity requires careful planning, execution, and the right platform.

Strategic Planning Assumptions


Gartner identifies several factors that can drive or hinder the success of ERP consolidation. To be more direct, this is what they have to say about the global-instance ERP approach and organizations revisiting their own architectures:

"By 2026, more than 40% of large organizations with a global instance ERP approach will revisit their instance architecture."

Tomas Kienast, Dixie John, 14th September 2023, Driving a Successful ERP Consolidation Strategy, Gartner Application Innovation & Business Solutions Summit 2023, London, UK

On the positive side, agility in adapting to new business needs and the increasing adoption of Software as a Service (SaaS) and platform capabilities can be catalysts for consolidation. On the flip side, a lack of a business-led focus on Enterprise Resource Planning strategy and the belief that a centralized core ERP is the best way to manage IT workloads can be roadblocks.

Retrospective look

If you want to take a look at some of the older Gartner assumptions and see if they actually got some things right, check out this link.

Dos and Don'ts of ERP Consolidation


Do...

  • Assess Operational and Strategic Fitness Factors: When embarking on ERP consolidation, it's crucial to assess both operational and strategic fitness factors. Consider whether consolidation aligns with the organization's long-term goals and whether it can adapt to evolving business needs.
  • Recognize and Act on the Typical Challenges: ERP consolidation is not without its challenges. Be prepared to tackle issues such as data migration, user resistance, and integration complexities. Identifying and addressing these challenges proactively is crucial for success.
  • Keep Composable Principles in Mind: Understand that not all ERP components are equal. Analysts like Gartner suggest categorizing them into Systems of Record, Systems of Differentiation, and Systems of Innovation. This approach allows you to tailor your consolidation strategy to specific business needs.
  • Align Business Capabilities and ERP Strategy: Ensure that your ERP strategy aligns with the business's capabilities. Different business units may have unique requirements, so flexibility should be balanced with enterprise standardization. There is some valuable research on this topic on this link if you want some more information.
  • Assess Standardization vs. Flexibility Goals: Finding the right balance between standardization and flexibility is critical. Each organization's ideal balance may differ depending on its industry, size, and specific needs.

Don't...

  • Restrict Architectural Options: Avoid the mistake of limiting your architectural options prematurely. There is a range of ERP architecture options, from independently operated systems to a single system with multiple instances, each with its own pros and cons.
  • Assume New Tech Solves Old Problems: While new technologies can offer innovative solutions, they don't automatically resolve all existing issues. Evaluate how new tech fits into your consolidation strategy rather than blindly adopting it.
  • Ignore Bad Factors: Consider factors that may lead you to rethink consolidation or standardization. These could include cultural, political, or other motivators that impact your organization's unique dynamics.

Remember: Consolidation Is Not the Goal — Extract Value Out of Consolidation!


ERP consolidation is a complex journey that requires careful planning and execution. By following the dos and don'ts outlined in this blog post, organizations can embark on a successful consolidation strategy. However, the path to consolidation isn't one-size-fits-all, and leveraging Application Composition Platforms like Scheer PAS can significantly aid in achieving your consolidation goals. Consolidating systems with microservices and interconnected services, as opposed to building a monolithic ERP solution, has several advantages, particularly in the context of a digital transformation. Some of them are:

      • Scalability
      • Enhanced Maintenance and Upgrades
      • Interoperability
      • Reduced Risk of Failure
      • Customization

By incorporating Scheer PAS into your ERP consolidation strategy, you can harness the power of automation, streamline business processes, and ensure a successful transition to a consolidated ERP system interconnected with microservices. Scheer PAS acts as an enabler, supporting the journey toward a more efficient, integrated, and value-driven Enterprise Resource Planning landscape.

In summary, ERP consolidation is not just about reducing the number of systems; it's about extracting maximum value and enhancing business capabilities. With a clear understanding of the dos and don'ts, and the use of Application Composition Platforms like Scheer PAS, organizations can transform their ERP landscape, making it more agile, efficient, and aligned with their long-term business goals.

Ivan Tadic
Technical Content Marketing Manager
Gone in 45 minutes

When Technical becomes Financial


On August 1st, 2012, the world of high-frequency equities trading witnessed a catastrophic event that sent shockwaves through the financial industry. Knight Capital, a prominent trading firm, suffered losses to the tune of $462 million in a matter of minutes. This disaster, dubbed the "Knightmare," was attributed to a single critical mistake by a sysadmin. Let’s delve into the events surrounding “the Knightmare” and find the key factors that led to this colossal financial meltdown. Or in other words, how one technical debt can turn into a financial one in a matter of minutes.

The Role of Automated Trading Systems


Automated systems responsible for executing high-frequency trades were at the heart of Knight Capital's trading operations. 

These systems were designed to handle vast volumes of trading activity at speeds which human traders cannot even perceive. Sounds impressive, right?

Among these systems was one called "SMARS," which played a central role in “the Knightmare”. SMARS' primary function was to receive "parent" orders to buy equities, which it then converted into "child" orders that executed the actual purchase of shares. Additionally, an outdated feature called "Power Peg" was lurking within SMARS, although it had not been used for years. Crucially, this feature had not been removed.

The Fatal Upgrade


In 2012, Knight Capital decided to upgrade SMARS to interface with the New York Stock Exchange's "Retail Liquidity Program (RLP)." The objective was to replace the obsolete Power Peg code with the new RLP code. However, during the deployment of this upgrade, a critical error occurred.

A technician responsible for the upgrade failed to copy the new code to one of the eight SMARS computer servers. Tragically, there was no secondary technician to review this deployment, and no one at Knight Capital realized that the Power Peg code was still present on the eighth server. There were no written procedures in place that mandated such a review.

The Catastrophic Consequences


When the server missing the SMARS upgrade went live on August 1st, 2012, chaos ensued. Orders sent to this server triggered the outdated and malfunctioning Power Peg code. As a result, the server started generating "child" orders for significantly more shares than Knight Capital or its clients intended to purchase.

This led to extreme price fluctuations in some stocks, leaving Knight with a massive financial burden, as they held shares that nobody wanted at prices nobody was willing to pay.

Ultimately, Knight Capital incurred a staggering $462 million in losses.

The U.S. Securities and Exchange Commission (SEC) did not spare Knight Capital in its assessment of “the Knightmare”. The SEC criticized the company for its lax risk management practices and its inability to detect problematic trades. Furthermore, the poor software development and deployment process was brought into sharp focus. On top of that, Knight Capital was fined $12 million by the SEC. (Details of which you can find HERE)

Knight Capital lacked written code development and deployment procedures for SMARS, a glaring oversight when other parts of the company had established protocols. The absence of a requirement for a secondary technician to review code deployments in SMARS only exacerbated the situation. Additionally, there were no written protocols for accessing unused code on production servers, nor were there procedures for testing such code to ensure proper functionality.

In the aftermath of the Knightmare, it becomes evident that addressing technical debt is not merely an option but a necessity for organizations operating in today's technology-driven world. One promising solution is embarking on a journey of digital transformation using an application composition platform.

Lessons Learned

An application composition platform empowers organizations to modernize their existing applications and systems by decomposing monolithic architectures (equivalent to putting all eggs into one basket) into smaller, modular components. This transformation enables companies to eliminate technical debt gradually while creating a more flexible and scalable IT infrastructure.

By leveraging an application composition platform, Knight Capital could have taken the following steps:

  • Identification of Technical Debt: The platform would assist in identifying areas of technical debt within the SMARS system, including the outdated Power Peg feature.
  • Componentization: The monolithic SMARS application could be broken down into smaller, manageable components, making it easier to isolate and remove obsolete code.
  • Rigorous Testing: Each component could be rigorously tested to ensure functionality and security, reducing the risk of introducing errors during updates.
  • Continuous Improvement: With the ability to update components independently, Knight Capital could have continuously improved its systems, reducing the risk of catastrophic failures.
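None of this requires exotic tooling. As a purely generic illustration (the hostnames, version endpoint and build field are invented, and this is not a description of Knight Capital's or any vendor's actual tooling), even a small automated check that every server in a group reports the same build before traffic is enabled would have flagged the inconsistent rollout:

```python
import requests

# Hypothetical version endpoints of the eight servers in the group
SERVERS = [f"https://smars-{i:02d}.internal.example/version" for i in range(1, 9)]

def deployed_versions(urls):
    """Collect the build version each server reports."""
    versions = {}
    for url in urls:
        versions[url] = requests.get(url, timeout=5).json()["build"]
    return versions

versions = deployed_versions(SERVERS)
if len(set(versions.values())) != 1:
    # One server is still running old code (e.g. Power Peg) - block the rollout
    raise RuntimeError(f"Inconsistent deployment detected: {versions}")
print("All servers run the same build - safe to enable trading.")
```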

To Sum it up

The Knightmare of August 1st, 2012, serves as a stark reminder of the perils of technical debt and the critical importance of managing it effectively. To prevent such disasters, organizations must address technical debt as an ongoing priority.

Digital transformation through the use of an application composition platform represents a forward-looking solution. It enables organizations to not only manage technical debt but also adapt and evolve their technology landscape in response to changing market conditions. In the ever-evolving landscape of financial technology and high-frequency trading, digital transformation is the path forward—a way to mitigate risk, enhance agility, and ensure that a Knightmare scenario remains firmly in the past.

Source

Ivan Tadic
Technical Content Marketing Manager

Scheer PAS at Gartner Application Innovation & Business Solutions Summit


Scheer PAS recently made a significant mark in the heart of London's tech scene, demonstrating how the delivery of transformative solutions and successful strategies is paramount in the pursuit of a Composable Enterprise. For those who couldn't join us at Gartner Application Innovation & Business Solutions Summit, we're here to provide a recap of the highlights, with a special focus on Toyota Germany's Composable Architecture journey.

Toyota Germany's Composable Architecture Journey


One of the standout moments of the event was the spotlight on Toyota Germany's remarkable journey towards a Composable Architecture. Their story is one of transformation success, as they navigated the challenging path of legacy reduction and modernization. This journey serves as an example of what's possible when a company is committed to embracing change and innovation.

  • Transformation Success: Toyota Germany's achievement in reducing legacy systems and embracing modernization showcased the power of well-executed transformation strategies. Their story demonstrated how thoughtful planning, clear objectives, and the right technology partners can lead to remarkable outcomes.
  • Fusion-Teams and Hybrid Approach: Innovation was a key theme of Toyota Germany's journey. They highlighted the significance of fusion-teams, which combine diverse talents and perspectives to drive transformation. This hybrid approach, blending traditional and modern methods, proved to be instrumental in Toyota's success.
  • iPaaS, Automation, and Low-Code Solutions: The journey towards Composable Enterprise involved harnessing the potential of technology. Toyota Germany's adoption of iPaaS (Integration Platform as a Service), process automation, and Low-Code solutions exemplified their commitment to staying at the forefront of digital transformation. These technologies streamlined processes, increased efficiency, and empowered their workforce.
  • Real Challenges and Tangible Benefits: Toyota Germany's case study addressed the real challenges they encountered. It provided an honest look at the obstacles faced during their journey, demonstrating that the path to Composable Enterprise is not without its difficulties. However, the tangible benefits they reaped, such as improved agility and enhanced customer experiences, showcased the rewards of their efforts.

Key Conference Insights


In addition to Toyota Germany's success story, the Scheer PAS team actively participated in various aspects of the conference, including attending keynotes and engaging in discussions. Several critical topics were at the forefront of the event, shaping the future of business:

  • Low-Code Application Development: The conference placed significant emphasis on Low-Code Application Development as a transformative tool for businesses. The potential to empower non-developers to create applications and automate processes was a recurring theme.
  • AI in Integration: Artificial Intelligence (AI) in Integration emerged as a key discussion point. AI's role in enhancing decision-making, optimizing workflows, and automating tasks garnered significant attention.
  • Process Automation: Process Automation remained a focal point, highlighting how businesses can streamline operations, reduce errors, and enhance productivity by automating routine tasks and workflows.
  • Composability: Composability, a critical concept for future-proofing businesses, took center stage. The ability to adapt, integrate, and evolve quickly in response to changing circumstances was recognized as a vital strategy.

Beyond the Conference


Of course, our team could not let the opportunity of being in London go to waste, so after gathering as much inspiration as possible at the conference, our journey took us to some of the most beautiful parts of London, where the real team-building took place.


AI - A Buzzword with a Cool Connection


Even after setting everything up and seeing the benefits of having an Application Composition Platform in your business, there is still someone on the payroll draining your resources (and wasting their own talent and time) just by watching the stream of your business data and monitoring for any inconsistencies.

Having AI do the monitoring and alerting you or your staff of anything unusual or different will not only save you time but also help your business in general.

Because, let's be honest...

AI was, and still is, the buzzword of the last couple of years, and with good reason. From automation to process monitoring, all the way to regulatory compliance and innovative user interfaces, AI can do a lot of things.

Now, having an Application Composition Platform that already covers a wide range of use cases by itself might not seem to fit into the "AI story." If a platform can already cover your Low-Code application development, process automation and mining, and system integration even without much technological expertise, where does the "A" tie in?


Imagine that your business staff (alongside the help from your IT department) is already working on such a platform and managing to fill all the nooks and crannies of your business. Then AI wants to jump in. Where would you put it?

Somewhere between the IT department and the business department, AI fits perfectly. Just by having an OpenAI connector, your business staff can use natural language to describe the intricacies of your business workflows and create intelligent process automation. But that is just the start.


Your users (or clients) would benefit from having someone help them 24/7, but your budget does not allow more recruitment. That is where the beauty of OpenAI’s natural language interaction comes in, with enhanced chatbots and support. And let’s face it: while the majority of your IT department is relaxing, the great idea someone from your core business staff had usually has to wait, because they’re not skilled enough to pro-code something so customized. Then the "A" joins the "I," and your business does not have to wait.


Still, there is someone who has to wait for slow regulations and sign tons of documents. That someone is you. Because your business has to comply with different regulations. Such a repetitive yet mandatory task can drain your energy quickly, forcing you to think less about improving your business and more about how you forgot to sign the J51-385 document. Since it is repetitive and takes your or your staff’s time, why not let AI cover all of the compliance regulations and signing? Or does that heap of documents excite you for the working day?

Now that AI is covering the majority of the monotonous work, your staff does not have to deal with it anymore. What’s next? You've addressed all the inconsistencies, dealt with business staff not wanting to become experts in programming, covered your users with support, and now you want to move things further. Don't refocus your relieved staff on everything at once, because there is still much more that AI can do for your business.

  1. Training and Onboarding: Just by having an OpenAI connector, AI can develop interactive and engaging training modules that use natural language interactions to onboard new users and train them on the platform's functionalities, saving a ton of time and money.
  2. Collaborative Process Design: The integration can facilitate collaborative process design and modification by allowing multiple users to interact with the system using natural language, ensuring better alignment and communication among team members. Let's be honest: playing Chinese Whispers with your staff is slowing you down.
  3. Data Insights and Analytics: Nobody understands why data analysts love staring at huge chunks or streams of characters and numbers. By analyzing large volumes of textual data, AI can provide valuable insights to users regarding process performance, bottlenecks, trends, and potential improvements. Let your staff stare at your future, not the data that represents your past.
  4. Process Documentation: Because let’s face it, it is difficult enough to make something work, let alone document the entire process. The OpenAI connector can assist in generating comprehensive process documentation, summaries, and reports, so users can easily communicate process details and updates with stakeholders through well-structured and coherent language (see the sketch after this list).
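As an illustration: this is not the Scheer PAS OpenAI connector itself, just a minimal sketch of the kind of call that sits behind such a documentation helper, using the OpenAI Python SDK with an example model name and prompt.

```python
from openai import OpenAI

client = OpenAI()   # expects OPENAI_API_KEY in the environment

# Example workflow description, as a business user might phrase it
process_description = (
    "Order received -> credit check -> if approved, create delivery in ERP; "
    "if rejected, notify the customer and close the order."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",   # example model name
    messages=[
        {"role": "system",
         "content": "You write concise process documentation for business stakeholders."},
        {"role": "user",
         "content": f"Document this workflow step by step:\n{process_description}"},
    ],
)
print(response.choices[0].message.content)
```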

Where "A" meets the eye

All of these things are within your reach just by using the OpenAI connector on a platform that is already comprehensive. And as AI gets smarter, and as you have more time to think about the things that matter for your business, more ideas for using AI in it will come to you. You can't punch the clock to get more time, but you can automate the rest, and that should be pretty easy for you once the "A" meets your business's "I".

Ivan Tadic
Technical Content Marketing Manager
Scheer PAS Blog

Shaping Shifting Skies: Microservices' Puzzle in Business Evolution

Scheer PAS Composability Blog post picture 1 Microservices

Imagine working on a puzzle in which every piece fits in any place. Does that seem daunting?

It might, because creating the big picture is often difficult and puzzle pieces in business are rarely distinct enough to help. Just think about fitting together the pieces of sky. So why would a puzzle in which every piece fits in every place be helpful in any way?

In just the last couple of years, the figurative sky of the business world has shattered so quickly that many companies failed to pick up the pieces of the puzzle they had almost finished in the years prior. The market changed so rapidly and drastically that many of the top players disappeared simply because their leaders could not see "the sky" for all of the missing puzzle pieces. The pieces changed, their edges transformed, and nothing could be fitted anywhere anymore. Speaking less metaphorically but on the same subject, airlines were hit the worst, along with the hospitality industry, energy service providers, the automobile industry, and specialty retailers (in this exact order - Source).

There was talk about the "New Normal", which not only consumers but businesses as well needed to accept. Getting back to the metaphor, the sky that figuratively shattered was part of that "New Normal". The most difficult pieces of the puzzle just weren't fitting anymore; new horizons appeared for some, while others were left to pick up the pieces – no pun intended. If you have read our blog post about failed monolithic systems (which you should), you're probably already seeing the big picture – again, no pun intended.

Forcefully trying to fit all the pieces of a business puzzle back as they were became a pointless effort, and the transition from monolithic systems to ones that use the flexibility of microservices made this pointless effort the "Old Normal". As the big picture (or the market) constantly changes, instead of reorganizing and putting together all of the puzzle pieces from scratch, why not simply accept the new picture, which is changing so fast that it has basically become a video? Videos are taking over photos. Right?

Scheer PAS Composability Blog post picture 2 Microservices

To get back to the topic: monolithic systems are a thing of the past. The majority of businesses that rely on them will sooner or later face the fact that, in order to benefit from changes in the market, they have to change as well. One way of implementing microservices is to create them from scratch, but that requires both manpower and time. The other way, and in the opinion of many analysts the right way, is to use Low-Code app development, integration with legacy systems to ease the transition, and microservices that all communicate with each other.

Basically, by making the business puzzle pieces smaller and able to fit everywhere, industry leaders have transitioned from seeing "the big picture" (a.k.a. the future of their business) to watching the video of the current, ever-changing market, with all of their pixels (interconnected microservices) reacting in real time to show the entire scene.

However, there is a problem.

Scheer PAS Composability Blog post picture 3 Microservices

Creating microservices is a time-consuming process that requires close coordination between the business and IT sides of any company. Low-Code development can speed up the process, alongside the creation of fusion teams. Still, the transition to microservices is long and difficult, especially if each service (or piece of the puzzle) has to be created from scratch.

Reusing previous pieces and borrowing them from others is the key to streamlining this process and actually taking advantage of microservices. Composability is something that cannot be achieved without reusability (Source).

Basically, to finish the metaphor, microservices are essentially small puzzle pieces that fit in all places of the "big picture" of a business, and no matter how the market changes, these puzzle pieces change with it while keeping their ends open for connection.

Nowadays almost everyone is at a crossroads, choosing between holding on to existing systems, which may be familiar and easy to use but cannot change or adapt, and going through the transition to a microservice architecture (which is empowered by reusability).

Beyond streamlining the transition to a microservice architecture, there are many direct benefits to actually reaching that goal.

Scheer PAS Composability Blog post picture 4 Microservices
  1. Rapid Application Development and Communication:

Microservices empower businesses to develop applications more swiftly than ever before. Each microservice, like a specialized puzzle piece, serves a specific function within the larger system. This modularity allows teams to work independently on different microservices, resulting in faster development cycles. Furthermore, these microservices communicate seamlessly with each other, enabling the sharing of data and resources. Imagine if each piece of your puzzle could self-adjust its position and seamlessly exchange information with its neighboring pieces (a brief sketch of what this looks like in practice follows after this list).

  2. Scalability without Overhaul:

In the monolithic world, expanding your business often meant restructuring the entire system. With microservices, growth is less daunting. When your business expands, you can add new microservices tailored to the new requirements, without disrupting existing services. Like adding new puzzle pieces to an already assembled section, scaling becomes a matter of extending your solution, not overhauling it.

  3. Efficient Resource Utilization:

Just as the right puzzle piece fits perfectly in its designated spot, microservices allow for precise resource allocation. Each service can be optimized for its specific task, ensuring efficient use of computational power and memory. This fine-grained resource management means your investments yield maximum returns, much like a perfectly completed puzzle.

  4. Resilience and Continuity:

Picture a puzzle with a missing piece – the entire picture may remain incomplete. Similarly, a failure in a monolithic system can lead to a complete breakdown. Microservices, on the other hand, offer a safety net. If one microservice encounters an issue, the rest can continue functioning. It's akin to a puzzle with interconnected sections – if one section has a gap, the rest remain intact.

  5. Adaptability to Change:

The business landscape is like a dynamic video, always evolving. Microservices mirror this dynamism. As market conditions change, microservices can evolve, adapt, and even be replaced, all without disrupting the entire system. Imagine a puzzle that can morph its pieces to create a new image, allowing your business to stay aligned with market trends.
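
To ground the metaphor a little, here is a minimal sketch of two independent services whose "ends stay open for connection": an inventory service exposes a small REST endpoint, and an order service calls it over HTTP, degrading gracefully if its neighbour is unavailable. This is plain Python using Flask and requests, not Scheer PAS tooling, and all names, ports, and data are hypothetical.

```python
# Minimal illustration only: two hypothetical services, not Scheer PAS code.
from flask import Flask, jsonify
import requests

# --- Service 1: a tiny "inventory" microservice -----------------------------
inventory_app = Flask("inventory")
STOCK = {"item-42": 7}  # hypothetical sample data owned by this service

@inventory_app.route("/stock/<item_id>")
def stock(item_id):
    # The service exposes its data only through this small, stable API.
    return jsonify({"item": item_id, "on_hand": STOCK.get(item_id, 0)})

# --- Service 2: an "order" microservice that calls the inventory service ----
def can_fulfil(item_id: str, quantity: int) -> bool:
    # If the inventory service is down, this service degrades gracefully
    # instead of taking the whole system with it.
    try:
        reply = requests.get(f"http://localhost:5001/stock/{item_id}", timeout=2)
        return reply.json()["on_hand"] >= quantity
    except requests.RequestException:
        return False  # fail safe: assume the order cannot be fulfilled right now

if __name__ == "__main__":
    # Run the inventory service; the order service would run as its own process.
    inventory_app.run(port=5001)
```

Each service can be developed, deployed, scaled, and replaced on its own, which is exactly the property described in the list above.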

In essence, microservices are the ultimate evolution of the puzzle metaphor. Each piece is specialized, adaptable, and seamlessly integrated, enabling your business to navigate the complex puzzle of the market with agility and precision.

In our exploration of the intricate world of business transformation, we've likened it to solving a complex puzzle. Each piece fits intricately into the bigger picture, embodying the agility and adaptability that modern businesses need. Today, we unveil an exciting addition that seamlessly aligns with this puzzle-solving journey: the Scheer PAS Asset Repository.

What is the Scheer PAS Asset Repository?

Scheer PAS Composability - Asset Drawer

The Scheer PAS Asset Repository offers pre-designed puzzle pieces that fit seamlessly into your business architecture, accelerating development. The Asset Repository provides access to a collection of agile microservices designed to enhance efficiency.

Collaboration is encouraged through the Asset Repository, allowing you to share your unique microservices within the Designer platform. This fosters innovation and ongoing evolution of the puzzle pieces.

The Asset Drawer, an extension of the Asset Repository, acts as a catalog of diverse microservices. It's easy to explore, select, and integrate these assets, and updating them is effortless, ensuring your puzzle is always current.

Clear and comprehensive documentation, provided by the Publish Assets wizard, makes integration smooth for developers using your microservices.

The Asset Drawer's seamless integration of updated microservices reflects the adaptable nature of business, enabling you to navigate changing market dynamics with ease.

Want to know more? Click here

Continuing the Microservices Journey

As we conclude this chapter of our exploration, the Scheer PAS Asset Repository emerges as a natural extension of the microservices ethos we've been discussing. It isn't just a tool; it's an embodiment of the principles that empower modern businesses to evolve, innovate, and succeed.

So, whether you're already well into your microservices journey or just embarking on it, the Asset Repository is here to offer a helping hand – a collection of puzzle pieces that enrich your business's tapestry of success.

Ivan Tadic
Technical Content Marketing Manager