Bpmn.io and Natural Language Processing

We had an interesting talk by processtalks.com at the Camunda User Group Berlin this week. They combine bpmn.io with NLP technology to create a language-controlled modeling tool.

I do not know if this is the right forum, but I was wondering whether you see any applications or scenarios where a language-controlled modeler would make sense to you?

Wanna check this out? [https://youtu.be/NJe9wopyMn4]

From a technical point of view I find it spectacular, but I would appreciate some feedback on market scenarios.

What do you think?


Host of the CUG Berlin


Hi Björn!

I joked for some years about how natural language BPMN modeling would work in a process workshop-style setting where many people meet to source a workflow collaboratively. I imagined how you could shout at the computer if you disagreed with what your colleague modeled, and it would revert the change. Imagine the dynamics that develop if you have to persuade the computer your ideas are the better ones :wink:. We even planned an April Fools' Day joke about this once, although we did not follow up on it. Just this year, though, we did present WASDENN. Money quote:

[…] We actively explore voice recognition for future versions, giving you more freedom to interact with the assistant. You can look forward to vocally disagreeing with WASDENN’s proposals, sharing your excitement, or command it to your desire on the voice line.

Jokes aside, this highlights some of the challenges that NLP in the BPMN modeling domain needs to overcome to be a replacement for regular editor interactions:

(1) You need to be as clear and efficient modeling your diagram with language as you are with a proper tool (i.e. bpmn.io tooling). Which language, exactly? Many people do not even know the names of the symbols they are drawing.

(2) Natural language is ambiguous (example: "or"), and one point of BPMN is to formalize things to get rid of most of that ambiguity. I am not sure how NLP will address this. Trying to fix errors, being precise, and still not being understood by an AI can be a smash-your-screen moment. There have been too many puns about this one.

(3) All kinds of assistance and error recovery built into a good editor must be rebuilt for an NL interface. And it has to be done well, or you fail (cf. (2)).

(4) Misunderstanding needs to be accounted for (cf. (1), (2)). The opportunity is that the editor could identify ambiguity and ask for clarification or suggest what could be done.
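To make point (4) concrete, here is a toy sketch (entirely hypothetical, not how any existing tool works) of an NL editor that spots an ambiguous connective in a modeling instruction and asks a clarifying question instead of guessing:

```python
# Toy sketch: map ambiguous connectives to clarification questions.
# The word list and questions are invented for illustration only.
AMBIGUOUS = {
    "or": "Did you mean an exclusive choice (XOR gateway) or an inclusive one (OR gateway)?",
    "then": "Is this a strict sequence, or may the steps run in parallel?",
}

def clarification_questions(instruction: str) -> list[str]:
    """Return one follow-up question per ambiguous word found."""
    words = instruction.lower().replace(",", " ").split()
    return [question for word, question in AMBIGUOUS.items() if word in words]

questions = clarification_questions("Check stock, then ship or cancel the order")
# One question each for "then" and "or"
```

A real editor would of course need parsing rather than word matching, but the interaction pattern (detect, ask, then commit) is the point.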

From the editor builder perspective, I firmly believe that we are very, very early in the NLP technology adoption cycle. I also believe NLP in modeling will change the way we build our editors substantially (cf. (1) and (4)).

I do see NLP having a place where people cannot interact with the tool in the normal manner, e.g. because they are visually or motor impaired. I've recently seen an inspiring tweet on how a blind person uses the iPhone to interact. For blind people specifically, you'd need language output, too, which is a challenge of its own. Thinking in this direction, NLP could open the door to BPMN editing for more people.

I think it will take a long time until NLP replaces keyboard and mouse interactions in a well-crafted BPMN editor. I’m happy to be proven wrong, of course.


Dear Nico,
thanks for your feedback; these are all fair comments. Let me try to add my two cents here.

First, let me set the context: NLP and speech recognition are two different technologies that are currently at very different maturity levels. While NLP has evolved amazingly fast in the last five years, speech technology is still in its early stages. Hence, communicating with a system through text should be considered a less challenging problem than doing the same by voice. We witnessed this when we implemented both in the Process Talks technology.

I believe your analysis only considers one dimension: NLP as a substitute for the click&drag'n'drop way of modeling. NLP interaction indeed faces the problems you mention, and perhaps it will be hard to completely replace traditional click&drag'n'drop. But until we get there, have you thought about a hybrid interaction, where the modeler can either communicate in natural language or work in the traditional way? Wouldn't that bring interesting additive value?

But more importantly, I believe you are missing the great number of opportunities that arise when you raise the interaction to a higher level, as happens when you simply explain your process to the technology in natural language. Let me try to enumerate the ones I believe are important:

  1. By not touching the final diagram but instead communicating your modeling edits, you leave room for the technology to assist you in finding the best model. For instance, even if you are an experienced modeler, you may sometimes be tempted to create BPMN fragments that are not compliant with the BPMN 2.0 standard, or that are simply impossible to execute. If you instead ask the technology to make the edits for you, it can take care of always returning a correct, BPMN 2.0 compliant diagram.

  2. Similarly, when modelers are no longer responsible for laying out the model, they are relieved of that task. Imagine you have a complex process model with 100 tasks (maybe created by someone else) and now you have to incorporate a slight modification. Wouldn't it be cool to state the modifications in a couple of sentences and let the technology come back with the new process model, completely laid out for you? So not only modeling, but also the maintenance of a repository of process models becomes very easy with such technology.

  3. By the same token as the two previous items, now think about collaboration. By raising the interaction to sentences that are not necessarily incorporated into the process model directly, you open the door to real collaboration without the burden of always having to agree! In our technology, concurrent edits are very easy because you simply send a batch of modeling commands, and the technology incorporates them into the model if they are compatible. If you contradict others, you will be notified; if not, you model as if you were alone. Cool, isn't it?

  4. The language of interaction can be adapted per market segment. Imagine that processes in the healthcare sector are modeled differently from those in logistics. You can grasp this simply by analysing the sentences that did not result in successful editing commands, and react accordingly. This cannot be done easily when the interaction is click&drag'n'drop, since what you see from the user is precisely that: clicks and drags. So by switching to natural language interaction, a new avenue of optimization opens up in front of you: evolve the language toward the one your customers use.
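The concurrent-editing idea in item 3 can be sketched in a few lines. This is my own minimal toy (an invented data model, not Process Talks' actual API): commands from several users are merged, and only those that contradict an already-accepted command are rejected.

```python
# Minimal sketch of merging concurrent modeling commands.
# The model is just a dict: element name -> element type (invented format).
def merge_commands(model: dict, commands: list[dict]) -> tuple[dict, list[dict]]:
    """Apply compatible commands in order; return the new model and the rejected commands."""
    model = dict(model)
    rejected = []
    for cmd in commands:
        if cmd["op"] == "add" and cmd["name"] not in model:
            model[cmd["name"]] = cmd["type"]
        elif cmd["op"] == "remove" and cmd["name"] in model:
            del model[cmd["name"]]
        else:  # e.g. two users adding the same element concurrently
            rejected.append(cmd)
    return model, rejected

model, rejected = merge_commands(
    {"Review order": "task"},
    [
        {"op": "add", "name": "Ship goods", "type": "task"},  # user A
        {"op": "add", "name": "Ship goods", "type": "task"},  # user B, conflicts
        {"op": "remove", "name": "Review order"},             # user A
    ],
)
# model == {"Ship goods": "task"}; user B would be notified via `rejected`
```

A real system would need smarter compatibility rules, but the "accept if compatible, notify otherwise" flow is the essence of the claim.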

There are other advantages, but for me those four stand out from the rest.

Another comment: NLP is not only an aid for interaction! There are other dimensions to explore. An important one is mining unstructured data (BTW, see this prediction from Gartner about the need for this technology in the future). There is the use case of organizations describing their processes in plain text. We have been able to create a technology that extracts process models from unstructured data (textual documents). This shows that NLP may really have an impact in that dimension too, offered as a jump start to modeling.
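To give a feel for the "jump start" idea, here is a deliberately naive illustration (my own toy, far simpler than real process extraction): each sentence of a textual description becomes a candidate task, connected in order by sequence flows.

```python
# Naive text-to-process sketch: one sentence -> one candidate task.
import re

def extract_tasks(description: str) -> list[str]:
    """Split a plain-text process description into candidate tasks."""
    sentences = re.split(r"[.!?]\s*", description)
    return [s.strip().capitalize() for s in sentences if s.strip()]

tasks = extract_tasks("receive the claim. check the policy. pay or reject.")
flows = list(zip(tasks, tasks[1:]))  # sequence flows between consecutive tasks
# tasks -> ["Receive the claim", "Check the policy", "Pay or reject"]
```

Actual extraction has to handle gateways, actors, and anaphora, which is exactly where the NLP research effort goes; this merely shows the input/output shape of the use case.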

Finally, I agree with your comment on the potential for NLP to assist people with disabilities. In fact, we have run some sessions with blind people who were able to create simple models with Process Talks. And yes, NLG (Natural Language Generation) is a must for them, because that's the only way to read the diagram. We have incorporated an NLG technique in our modeling assistant that describes your process model in simple words. Actually, for blind people it is more a matter of adapting the modeling front-ends to the technology they use for screen reading (e.g. JAWS, NVDA), because they are quite efficient working this way.
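The "describe your process model in simple words" idea can be sketched like this (an invented rendering, not the actual NLG in Process Talks): a linear task list is turned into one plain sentence that a screen reader such as JAWS or NVDA could read aloud.

```python
# Toy NLG: render an ordered task list as a simple sentence.
def describe(tasks: list[str]) -> str:
    """Describe a linear process model in one plain-language sentence."""
    if not tasks:
        return "The process is empty."
    if len(tasks) == 1:
        return f"The process consists of one step: {tasks[0]}."
    steps = ", then ".join(tasks[:-1])
    return f"The process starts with {steps}, and ends with {tasks[-1]}."

print(describe(["Receive order", "Check stock", "Ship goods"]))
# The process starts with Receive order, then Check stock, and ends with Ship goods.
```

Real diagrams branch and loop, so a full NLG component needs to verbalize gateways too; the linear case just shows the output format.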

All the best,

Josep Carmona
Process Talks


Hi @CarmonaJosep. Thank you so much for jumping in and adding some more background on NLP and your application specifically.

I do see great potential, especially in the area of hybrid applications: in areas where people do not know exactly what they want or cannot express it precisely. However, they can already talk about it, so others (or an NLP-interfaced computer) can help out. I mentioned that in my previous post already, too:

In the end, someone translating what I explained into a picture works in real life to get to a shared understanding, too. Why would it not work for human <-> computer interaction?

The devil is in the details, though. At least that is what I've learned building decent BPMN modeling tooling over the last few years.

Looking forward to seeing how the whole topic evolves.