DOI: 10.1145/3646548.3672584
Research article · Open access

Modelling Engineering Processes in Natural Language: A Case Study

Published: 02 September 2024

Abstract

Engineering process management aims to formally specify processes that are executable, measurable, and controllable. Common representations include text-based domain-specific languages (DSLs) and graphical notations such as the Business Process Model and Notation (BPMN). The specification itself can be seen as a Software Product Line (SPL), building upon concepts such as tasks, UI forms, fields, and actions. Domain experts provide the requirements for processes but often lack the programming skills to formalize them in a process specification language. We present an interactive SPL application prototype that allows domain experts to model simple processes in natural language. Our framework for the reliable generation of formal specifications with Large Language Models (LLMs) supports the machine translation from natural language to a JSON-based process DSL. In this case study, five domain experts were asked to model a process of their choice through natural-language interactions; the user interface corresponding to the generated process DSL was rendered as immediate feedback. We documented their perceived translation quality and interviewed them on their impressions of this methodology. An average user-assessed performance rating of 68% was achieved. Even though modelling strategies differed greatly between individuals, the tool was able to adequately capture the majority of instructions, leaving an overall positive impression on the participants. Greater context awareness and additional conventional interaction elements were identified as the main improvements needed for a production-ready implementation.
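
To make the abstract's pipeline concrete, the following is a minimal, hypothetical sketch of what a JSON-based process DSL instance and a validation step might look like. The schema, field names (`process`, `tasks`, `form`, `fields`, `actions`), and the example process are illustrative assumptions by the editor, not the paper's actual DSL; the paper's framework for reliable generation is described in its companion work [8].

```python
import json

# Hypothetical instance of a JSON-based process DSL, loosely modelled on
# the concepts named in the abstract (tasks, UI forms, fields, actions).
SPEC = json.loads("""
{
  "process": "vacation_request",
  "tasks": [
    {
      "name": "submit_request",
      "form": {
        "fields": [
          {"label": "Start date", "type": "date"},
          {"label": "End date", "type": "date"}
        ],
        "actions": ["submit", "cancel"]
      }
    }
  ]
}
""")

def validate(spec: dict) -> list[str]:
    """Return a list of schema violations (empty list = valid).

    A reliable-generation pipeline would run a check like this on every
    LLM output and feed any violations back to the model for a retry.
    """
    errors = []
    if "process" not in spec:
        errors.append("missing 'process' name")
    for i, task in enumerate(spec.get("tasks", [])):
        if "name" not in task:
            errors.append(f"task {i}: missing 'name'")
        form = task.get("form", {})
        for j, field in enumerate(form.get("fields", [])):
            if "label" not in field or "type" not in field:
                errors.append(f"task {i}, field {j}: needs 'label' and 'type'")
    return errors

print(validate(SPEC))  # an empty list: the example spec is well-formed
```

Constraining the LLM to emit only instances that pass such a check (rather than free-form text) is the general idea behind "reliable generation"; the rendered UI the participants saw would be derived from a validated spec like this one.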


Published In

SPLC '24: Proceedings of the 28th ACM International Systems and Software Product Line Conference
September 2024, 103 pages
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. Domain-specific Languages
  2. Generative Artificial Intelligence
  3. Large Language Models
  4. Process Management
  5. Process Modelling
  6. Reliable Code Generation

Conference

SPLC '24
Overall Acceptance Rate: 167 of 463 submissions, 36%
