ENASE 2023 Abstracts


Area 1 - Challenges and Novel Approaches to Systems and Software Engineering (SSE)

Full Papers
Paper Nr: 20
Title:

Multivocal Literature Review on Non-Technical Debt in Software Development: An Exploratory Study

Authors:

Hina Saeeda, Muhammad O. Ahmad and Tomas Gustavsson

Abstract: Earlier research has focused on technical debt (TD), while numerous issues connected to non-technical aspects of software development (SD), equally worthy of “debt” status, have been neglected. At the same time, these types of debt regularly pose significant challenges that must be addressed, demonstrating that the debt metaphor can be used to reason about elements other than technical ones. This motivates us to coin the new umbrella term “Non-Technical Debt” (NTD) to investigate people, process, culture, social, and organizational concerns under one cover. All types of debt are similar in some ways, and they are often caused by risky decisions. Therefore, ignoring any one dimension of debt can have severe consequences for the successful completion of SD projects. This study investigates recent literature on the current state of knowledge about NTD, its causes, and mitigation strategies. We identified 40 primary studies out of 110 records published until April 2022. Using a thematic analysis approach, we found five NTD types (i.e., people, process, culture, social, and organizational). We further identified their accumulation causes and discussed remedies for mitigation.

Paper Nr: 77
Title:

Rethinking Certification for Higher Trust and Ethical Safeguarding of Autonomous Systems

Authors:

Dasa Kusnirakova and Barbora Buhnova

Abstract: With the increasing complexity of software permeating critical domains such as autonomous driving, new challenges are emerging in the ways the engineering of these systems needs to be rethought. Autonomous driving is expected to continue gradually taking over all critical driving functions, which adds to the complexity of certifying autonomous driving systems. In response, certification authorities have already started introducing strategies for the certification of autonomous vehicles and their software. But even with these new approaches, certification procedures are not fully catching up with the dynamism and unpredictability of future autonomous systems, and thus may not necessarily guarantee compliance with all requirements imposed on these systems. In this paper, we identify a number of issues with the proposed certification strategies, which may impact the systems substantially. For instance, we emphasize the lack of adequate reflection of software changes occurring in constantly changing systems, and the low support for systems’ cooperation needed for the management of coordinated moves. Other shortcomings concern the narrow focus of the awarded certification, neglecting aspects such as the ethical behaviour of autonomous software systems. The contribution of this paper is threefold. First, we discuss the motivation for the need to modify the current certification processes for autonomous driving systems. Second, we analyze current international standards used in the certification processes against requirements derived from those imposed on dynamic software ecosystems and on autonomous systems themselves. Third, we outline a concept for incorporating the missing parts into the certification procedure.

Short Papers
Paper Nr: 58
Title:

Modelling Adaptive Systems with Nets-Within-Nets in Maude

Authors:

Lorenzo Capra and Michael Köhler-Bussmeier

Abstract: Systems able to dynamically adapt their behaviour are gaining growing attention as a way to raise service quality while reducing development costs. On the other hand, adaptation is a major source of complexity and calls for suitable methodologies during the whole system life cycle. A challenging point is the system’s structural reconfiguration in response to particular events like component failure/congestion. This situation is so common in modern distributed systems that it has led to ad-hoc extensions of known formal models (e.g., the pi-calculus). But even with syntactic sugar, these formalisms remain far removed from everyday programming languages. This work aims to bridge the gap between theory and practice by introducing an abstract machine for the “nets-within-nets” paradigm. Our encoding is in the well-known Maude language, whose rewriting logic semantics ensures the mathematical soundness needed for analysis and an intuitive operational perspective.

Paper Nr: 65
Title:

Reverse Engineering of OpenQASM3 Quantum Programs to KDM Models

Authors:

Luis Jiménez-Navajas, Ricardo Pérez-Castillo and Mario Piattini

Abstract: The development of quantum computing is growing substantially. This brings us closer to practical solutions based on quantum software that address problems not computable by classical software in a practical timeframe. Hence, some companies will need to adapt their development practices and, consequently, their information systems to take advantage of quantum computing. Unfortunately, there is still a lack of tools, frameworks, and processes to support the evolution of current systems towards the combination of the quantum and classical paradigms in information systems. This paper therefore presents a reverse engineering technique to generate abstract models based on the Knowledge Discovery Metamodel (KDM) by analyzing quantum software written in OpenQASM3. The main implication is that KDM models represent, in a technology-agnostic way, the different components and interrelationships of quantum software. These models can then be used to restructure and redesign the target hybrid information system.

Paper Nr: 92
Title:

Digital Twins for Trust Building in Autonomous Drones Through Dynamic Safety Evaluation

Authors:

Danish Iqbal, Barbora Buhnova and Emilia Cioroaica

Abstract: The adoption process of innovative software-intensive technologies raises complex trust concerns in different forms and shapes. Perceived safety plays a fundamental role in technology adoption, and it is especially crucial for innovative software-driven technologies characterized by a high degree of dynamism and unpredictability, such as collaborating autonomous systems. These systems need to synchronize their maneuvers in order to collaboratively react to unpredictable incoming hazardous situations. That, however, is only possible in the presence of mutual trust. In this paper, we propose an approach for machine-to-machine dynamic trust assessment for collaborating autonomous systems that supports trust building based on the concept of dynamic safety assurance within the collaborative process among software-intensive autonomous systems. In our approach, we leverage the concept of digital twins, which are abstract models fed with real-time data and used in the run-time dynamic exchange of information. The information exchange is performed through the execution of specialized models that embed the necessary safety properties. More specifically, we examine the possible role of Digital Twins in machine-to-machine trust building and present their design in supporting dynamic trust assessment of autonomous drones. Ultimately, we present a proof of concept of direct and indirect trust assessment by employing the Digital Twin in a use case involving two autonomous collaborating drones.

Paper Nr: 96
Title:

Programming Language Identification in Stack Overflow Post Snippets with Regex Based Tf-Idf Vectorization over ANN

Authors:

Aman Swaraj and Sandeep Kumar

Abstract: Software Question-Answer (SQA) sites such as Stack Overflow (SO) comprise a significant portion of a developer’s resources for knowledge sharing. Owing to their mass popularity, these SQA sites require an appropriate tagging mechanism to better facilitate discussion among users. An intrinsic part of predicting these tags is predicting the programming languages of the code segments associated with the questions. Usually, state-of-the-art models such as BERT and embedding-based algorithms such as word2vec are preferred for text classification tasks. However, for code snippets, which differ from natural language in both syntactic and semantic composition, embedding techniques might not yield results as precise as traditional methods. Faced with this predicament, we propose a regex-based tf-idf vectorization approach followed by chi-square feature reduction over an ANN classifier. Our method achieves an accuracy of 85% over a corpus of 232,727 Stack Overflow code snippets, which surpasses several baselines.
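
The pipeline named in the title maps almost directly onto standard library components. The following is a minimal sketch of that chain in Python with scikit-learn (regex tokenization, tf-idf, chi-square feature selection, a small ANN); the token pattern, k, layer sizes, and data are illustrative assumptions, not the authors' settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline

snippets = ['printf("hi");', "print('hi')", 'System.out.println("hi");']
labels = ["c", "python", "java"]

clf = Pipeline([
    # A custom regex keeps operator/punctuation tokens, which are strong language cues.
    ("tfidf", TfidfVectorizer(token_pattern=r"[A-Za-z_]\w*|[^\sA-Za-z_\d]+")),
    ("chi2", SelectKBest(chi2, k=5)),  # keep the 5 most class-informative features
    ("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
])
clf.fit(snippets, labels)
print(clf.predict(['println("x");']))  # likely ['java'] given the println token
```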

Paper Nr: 42
Title:

Constraint-Logic Object-Oriented Programming with Free Arrays of Reference-Typed Elements via Symbolic Aliasing

Authors:

Hendrik Winkelmann and Herbert Kuchen

Abstract: Constraint-logic object-oriented programming is a young programming paradigm that aims to bring constraint-solving techniques to an audience more accustomed to imperative programming. A prototypical language of this paradigm, Muli, allows for the use not only of primitive-typed free variables, but also of free objects and free arrays of primitive-typed elements. In the work at hand, we extend the current version of Muli so that it supports free arrays of arrays and free arrays of objects. We do so by utilizing the concept of symbolic aliasing. Our evaluation shows that the presented approach can speed up program validation and test case generation, as well as the solving of complex constraint satisfaction problems.

Paper Nr: 60
Title:

Towards Synthesis of Code for Calculations Using Their Specifications

Authors:

Advaita Datar, Amey Zare, Venkatesh R and Asia A

Abstract: Banking, Financial Services, and Insurance (BFSI) software is calculation intensive. In general, these calculations are formally specified in spreadsheets, known as Calculation Specification (CS) sheets. CS sheets describe the calculation inputs and the business logic applied to these inputs to compute the calculation output(s). Additionally, an illustration of the calculation is provided with at least one valid value for each calculation input. However, the manual implementation of code corresponding to such CS sheets remains effort-intensive and tedious. This includes writing database queries to retrieve values for calculation inputs from the enterprise database and converting these queries and the corresponding business logic to code. We propose a novel idea to synthesize code corresponding to CS sheets that will i) automatically identify the calculation inputs, ii) formulate a Programming By Example (PBE) specification for each calculation input, where the PBE input is the textual description of the calculation input and the PBE output is the valid value provided in the calculation’s illustration, iii) then, for each PBE specification, a) synthesize a set of possible database queries and b) manually review them and mark the intended query, and finally iv) generate code, in the desired target language, for all intended queries and the business logic specified in CS sheets.

Paper Nr: 111
Title:

Impact of Machine Learning on Software Development Life Cycle

Authors:

Maryam Navaei and Nasseh Tabrizi

Abstract: This research presents an overall summary of the publications to date on applied Machine Learning (ML) techniques in the different phases of the Software Development Life Cycle (SDLC): Requirements Analysis, Design, Implementation, Testing, and Maintenance. We performed a systematic review of research studies published from 2015 to 2023 and found that the Software Requirements Analysis phase has the fewest published papers; in contrast, Software Testing is the phase with the most.

Paper Nr: 112
Title:

Leveraging Transformer and Graph Neural Networks for Variable Misuse Detection

Authors:

Vitaly Romanov, Gcinizwe Dlamini, Aidar Valeev and Vladimir Ivanov

Abstract: Understanding source code is a central part of finding and fixing software defects in software development. In many cases, software defects are caused by incorrect usage of variables in program code. Over the years, researchers have developed data-driven approaches to detect variable misuse. Most existing modern approaches are based on the transformer architecture, trained on millions of buggy and correct code snippets to learn the task of variable misuse detection. In this paper, we evaluate an alternative for variable misuse detection: graph neural network (GNN) architectures. A popular benchmark dataset, a collection of functions written in the Python programming language, is used to train the models presented in this paper. We compare the GNN models with the transformer-based model CodeBERT.

Area 2 - Systems and Software Engineering (SSE) for Emerging Domains

Full Papers
Paper Nr: 94
Title:

Git Workflow for Active Learning: A Development Methodology Proposal for Data-Centric AI Projects

Authors:

Fabian Stieler and Bernhard Bauer

Abstract: As soon as Artificial Intelligence (AI) projects grow from small feasibility studies to mature projects, developers and data scientists face new challenges, such as collaboration with other developers, versioning data, or traceability of model metrics and other resulting artifacts. This paper suggests a data-centric AI project with an Active Learning (AL) loop from a developer perspective and presents “Git Workflow for AL”: a methodology proposal to guide teams on how to structure a project and solve implementation challenges. We introduce principles for data, code, as well as automation, and present a new branching workflow. The evaluation shows that the proposed method is an enabler for fulfilling established best practices.

Short Papers
Paper Nr: 19
Title:

Evaluation of Contemporary Smart Contract Analysis Tools

Authors:

Baocheng Wang, Shiping Chen and Qin Wang

Abstract: Smart contracts are an innovative technology built into Blockchain 2.0 that enables the same program (business logic) to run on multiple nodes with consistent results. Smart contracts are widely used in current Blockchain systems such as Ethereum for different purposes, such as transferring cryptocurrencies. However, smart contracts can be vulnerable due to the intentional or unintentional injection of bugs, and due to the immutable nature of the Blockchain, any bugs or errors become permanent once published, which can lead to smart contract developers and users suffering significant economic losses. To avoid such problems, it is necessary to perform vulnerability detection on smart contracts before deployment, and a large number of analysis tools have emerged to ensure their security. However, the quality of the analysis tools that currently exist on the market varies widely, and there is a lack of systematic quality assessment of these tools. Our research aims to fill this gap by conducting a systematic evaluation of existing smart contract analysis tools.

Paper Nr: 27
Title:

A Platform Selection Framework for Blockchain-Based Software Systems Based on the Blockchain Trilemma

Authors:

Jan Werth, Nabil El Ioini, Mohammad H. Berenjestanaki, Hamid R. Barzegar and Claus Pahl

Abstract: Blockchains are used in many software systems to provide trusted storage. The selection of the appropriate software architecture stack in distributed systems is generally driven by scalability, security, and decentralization as central qualities. In the blockchain domain, these are known as the blockchain trilemma, as they oppose each other. We select the most popular blockchain platforms based on these trilemma properties and other indicators to provide a platform review. Specific metrics are derived from the overall goals and applied to the platform options. This serves as a basis to create a selection framework that facilitates the choice of the best possible platform for a given system architecture. The selection framework is evaluated through a use case.

Paper Nr: 89
Title:

mHealthSwarm: A Unified Platform for mHealth Applications

Authors:

Ben Philip, Yasmeen Alshehhi, Mohamed Abdelrazek, Scott Barnett, Alessio Bonti and John Grundy

Abstract: Mobile health (mHealth) applications are ubiquitous and offer several benefits, such as easier access to one’s health and wellness data through smartphones. However, their growth in popularity has also introduced several challenges for both end-users and developers. Users face challenges around the poor user experience (UX) introduced by the need to install several apps, and around limited customizability, which is exacerbated by limited control over app functionality. While features common across different apps may not directly affect individual developers, the fact that one app may provide a better implementation of a feature leads to wasted developer effort. A single platform that can satisfy all user needs does not currently exist, and given the diversity of the mHealth domain, a single app would be far too complex if it did. In this paper, we present a new approach and platform for mHealth apps, micro-mHealth apps, and discuss it as an alternative to the current mHealth app development model, which we believe could improve the current state of mHealth app development and adoption. We are currently evaluating our prototype with several micro-mHealth apps built using features common in the mHealth apps available in commercial app stores.

Paper Nr: 99
Title:

Blockchain Technology in Medical Data Processing: A Study on Its Applications and Potential Benefits

Authors:

Olga Siedlecka-Lamch and Sabina Szymoniak

Abstract: Blockchain is a digital ledger technology that uses a decentralised, distributed database to record and validate transactions. It allows multiple parties to access the same information and make changes to it securely and transparently without the need for a central authority. Blockchain technology has the potential to streamline and improve efficiency in many industries and sectors. One such application is the processing of medical data. The use of blockchain is associated with the need to meet many challenges related to the scalability of processing and storing large amounts of medical data, their security, and interoperability. This article presents an original idea for storing and processing medical data by combining blockchain technology with relational databases. Such a combination brings positive effects in terms of protecting patients’ privacy, increasing trust in the system, and increasing the efficiency and effectiveness of medical data management. In the proposed model, blockchain technology ensures security, immutability, and transparent medical data storage. A relational database, on the other hand, facilitates the processing and sharing of data. The model includes patient data, their insurance, bills, healthcare workers, doctors, nurses, and data related to the treatment process: visits, referrals, releases, test results, diagnoses, and medications.

Paper Nr: 21
Title:

Multi-Step Reasoning for IoT Devices

Authors:

José M. Blanco and Bruno Rossi

Abstract: Internet of Things (IoT) devices are constantly growing in numbers, forecasted to reach 27 billion in 2025. With such a large number of connected devices, energy consumption concerns are a major priority for the upcoming years. Cloud / Edge / Fog Computing are critically associated with IoT devices as enablers for data communication and coordination among devices. In this paper, we look at the distribution of Semantic Reasoning between IoT devices and define a new class of reasoning, multi-step reasoning, that can be associated at the level of the edge or fog node in the context of IoT devices. We conduct an experiment based on synthetic datasets to evaluate the performance of multi-step reasoning in terms of power consumption, memory, and CPU usage. Overall we found that multi-step reasoning can help in reducing computation time and energy consumption on IoT devices in the presence of larger datasets.

Paper Nr: 61
Title:

A Study on Hybrid Classical: Quantum Computing Instructions for a Fragment of the QuickSI Algorithm for Subgraph Isomorphism

Authors:

Radu-Iulian Gheorghica

Abstract: The purpose of the research presented in this paper is to replace classical computing instructions in a fragment of the QuickSI algorithm for subgraph isomorphism (Shang and collaborators, 2008; Lee and collaborators, 2012) with quantum computing instructions that serve the same purpose but have much better performance in terms of execution times. The key results are quantum circuits that can replace specified instructions in the QuickSI algorithm source code. The quantum circuits play the role of oracles, which are composed of gates that manipulate qubits. The following sections present three quantum computing approaches: two for graph creation and one for generating truly random numbers.

Paper Nr: 85
Title:

A Goal-Oriented Requirements Engineering Approach for IoT Applications

Authors:

Deepika Prakash and Naveen Prakash

Abstract: Requirements Engineering of IoT systems has the twin objectives of specifying the functionality as well as the communication objectives of the system. Existing goal-oriented and use-case approaches were not developed to bring out the communication objectives of systems. Consequently, when these techniques are applied, communication remains subordinate to functionality. We integrate both objectives in the notion of a GOT, a GOal of Things. The GOT model represents the structure of GOTs, and an instance of this model is the requirements specification of an IoT system. The accompanying GOT process provides three ways of GOT reduction. We illustrate its application to an Accident Reporting System. The GOT proposal is compared with a use-case-oriented approach and a goal-oriented approach.

Paper Nr: 103
Title:

Web Platform for Job Recommendation Based on Machine Learning

Authors:

Iuliana Marin and Hanoosh Amel

Abstract: After three years of dealing with a global medical catastrophe, our society is attempting to re-establish normalcy. While companies are still struggling to get back on track, workers have grown afraid to seek new jobs, either because the jobs offer low pay or because of an uncertain schedule. The result is a disconnected environment that does not merge, even though it appears to. The proposed approach creates a suitable recommender system for those looking for jobs in data science. First-hand information is gathered by collecting Indeed.com’s data science job listings, analysing the top talents that employers value, and generating job suggestions by matching a user’s skills to the openings that have been listed. This process of job suggestion assists users in concentrating on the positions where they have the greatest chance of succeeding, rather than applying to every position in the system. With the aid of this recommendation system, a recruiter’s burden is decreased because it lowers the number of unsuitable candidates.

Paper Nr: 108
Title:

An Innovative Approach to Develop Persona from Application Reviews

Authors:

Dylan Clements, Elysia Giannis, Fi Crowe, Mike Balapitiya, Jason Marshall, Paul Papadopoulos and Tanjila Kanij

Abstract: Software end users are diverse by nature, and their different facets influence the way they use software. An understanding of the users and their needs is achieved by engaging with the users during requirements engineering. However, recruiting users during the requirements engineering phase can sometimes be very challenging. An accessible way to understand a user’s perspective and traits is through user application reviews. This research paper proposes an innovative approach to developing user personas from a data set of e-commerce application user reviews by using GPT-3 and PATHY. This enables development teams to see different demographic data, as well as the overall frustrations and expectations that users of their platform possess, so developers know how to enhance their software solutions. This is also helpful to developers of new e-commerce applications.

Area 3 - Systems and Software Quality

Full Papers
Paper Nr: 10
Title:

Live Code Smell Detection of Data Clumps in an Integrated Development Environment

Authors:

Nils Baumgartner, Firas Adleh and Elke Pulvermüller

Abstract: Code smells in software systems create maintenance and extension challenges for developers. While many tools detect code smells, few provide refactoring suggestions, and only some support live detection in an integrated development environment. We present a tool for the live detection of data clumps in Java with generated suggestions and semi-automatic refactoring. To achieve this, our research examines projects and their associated abstract syntax trees and analyzes the types of variables. Thereby, we aim to detect data clumps, a type of code smell, and generate suggestions to counteract them. We implemented our approach to live data clump detection as a plugin for the IntelliJ integrated development environment. Live detection achieved a median time of less than 0.5 s for the ArgoUML software project, which we analyzed as an example. From over 1500 investigated files, our approach detected 125 files with data clumps, while CBSD (Code Bad Smell Detector) detected 97; 92 of the detected files were the same for both approaches. We combined the manual steps for refactoring, resulting in a semi-automatic elimination of data clumps.
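
To make the smell concrete: a data clump is the same group of data items (fields or parameters) recurring together in several places. A toy Python check of that notion follows; the paper's detector works on IntelliJ's abstract syntax trees and is far more elaborate, and the method names and three-item threshold here are illustrative.

```python
# Toy data-clump check: a clump here is a group of 3+ parameters shared by
# at least two methods. The real detector also covers fields, not just parameters.
from itertools import combinations

methods = {
    "drawRect":  {"x", "y", "width", "height"},
    "moveRect":  {"x", "y", "width", "height"},
    "clearArea": {"x", "y", "width"},
}

clumps = {}
for (m1, p1), (m2, p2) in combinations(methods.items(), 2):
    shared = frozenset(p1 & p2)
    if len(shared) >= 3:                    # common data-clump threshold
        clumps.setdefault(shared, set()).update({m1, m2})

for group, locations in sorted(clumps.items(), key=lambda kv: sorted(kv[0])):
    print(f"possible data clump {sorted(group)} in {sorted(locations)}")
```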

Paper Nr: 11
Title:

Visualizing Dynamic Data-Flow Analysis of Object-Oriented Programs Based on the Language Server Protocol

Authors:

Laura Troost, Jonathan Neugebauer and Herbert Kuchen

Abstract: Although studies have emphasized the effectiveness of analyzing data-flow coverage as opposed to branch coverage in the area of testing, there is still a lack of appropriate tools. We propose an approach to visualize the data flows of programs within code editors based on the Language Server Protocol (LSP). For this purpose, we define extensions of the LSP to increase usability in the given application. Furthermore, we present a prototype with implementations of a language server as well as two language clients, for IntelliJ IDEA and Visual Studio Code. Moreover, we outline how the different components can interact effectively based on the LSP to enable the analysis and visualization of data flows. We evaluate our prototype based on various benchmarks.

Paper Nr: 32
Title:

Predictive Power of Two Data Flow Metrics in Software Defect Prediction

Authors:

Adam Roman, Rafał Brożek and Jarosław Hryszko

Abstract: Data flow coverage criteria are widely used in software testing, but there is almost no research on low-level data flow metrics as software defect predictors. Aims: We examine two such metrics in this context: dep-degree (DD), proposed by Beyer and Fararooy, and a new data flow metric called dep-degree density (DDD). Method: We investigate the importance of DD and DDD in SDP models. We perform a correlation analysis to check if DD and DDD measure different aspects of the code than the well-known size, complexity, and documentation metrics. Finally, we perform experiments with five different classifiers on nine projects from the Unified Bug Dataset to compare the performance of the SDP models trained with and without data flow metrics. Results: 1) DD is noticeably correlated with many other code metrics, but DDD is not correlated or is very weakly correlated with the other metrics considered in this study; 2) both DD and DDD are highly ranked in the feature importance analysis; 3) SDP models that use DD and DDD perform better than models that do not use data flow metrics. Conclusions: The data flow metrics DD and DDD can be valuable predictors in SDP models.
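
For intuition, here is a back-of-the-envelope reading of the two metrics. This is a sketch under the assumption that DD counts resolved use-def dependencies and that DDD divides DD by the number of operations; Beyer and Fararooy's exact definition operates over the control-flow graph, which this straight-line example ignores.

```python
# Toy DD/DDD computation: each use of an operand that has a prior definition
# contributes one dependency; DDD normalizes by program size.
program = [
    ("a", []),            # a = input()
    ("b", []),            # b = input()
    ("c", ["a", "b"]),    # c = a + b
    ("d", ["c", "a"]),    # d = c * a
]

dd = 0
defined = set()
for target, uses in program:
    dd += sum(1 for u in uses if u in defined)  # one dependency per resolved use
    defined.add(target)

ddd = dd / len(program)                         # dep-degree density
print(f"DD = {dd}, DDD = {ddd:.2f}")            # DD = 4, DDD = 1.00
```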

Paper Nr: 84
Title:

Towards Automated Prediction of Software Bugs from Textual Description

Authors:

Suyash Shukla and Sandeep Kumar

Abstract: Every software system deals with issues such as bugs, defect tracking, task management, and customer queries throughout its lifecycle. An issue-tracking system (ITS) tracks issues and manages software development tasks. However, it has been noted that the inferred issue types often mismatch the issue title and description. Recent studies showed machine learning (ML) based issue type prediction to be a promising direction for mitigating manual issue type assignment problems. This work proposes an ensemble method for issue-type prediction using different ML classifiers. The effectiveness of the proposed model is evaluated over 40302 manually validated issues of thirty-eight Java projects from the SmartSHARK data repository, which has not been done earlier. The textual description of an issue is used as input to the classification model for predicting the type of issue. We employed the term frequency-inverse document frequency (TF-IDF) method to convert textual descriptions of issues into numerical features. We compared the proposed approach with other widely used ensemble approaches and found that it outperforms them with an accuracy of 81.41%. Further, we compared the proposed approach with existing issue-type prediction models in the literature; the results show that it performed better than these models.
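
The abstract does not name the base learners, so the scikit-learn sketch below only illustrates the general shape of such a TF-IDF plus voting ensemble for issue-type prediction; the classifiers, data, and hyperparameters are invented for illustration.

```python
# Illustrative TF-IDF + soft-voting ensemble; not the paper's exact setup.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

issues = ["crash when saving file", "add dark mode option", "refactor login module"]
types = ["bug", "feature", "task"]

model = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("nb", MultinomialNB()),
                    ("rf", RandomForestClassifier(n_estimators=50, random_state=0))],
        voting="soft",                 # average the predicted class probabilities
    ),
)
model.fit(issues, types)
print(model.predict(["crash on startup"]))  # likely ['bug']
```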

Paper Nr: 102
Title:

ASMS: A Metrics Suite to Measure Adaptive Strategies of Self-Adaptive Systems

Authors:

Koen Kraaijveld and Claudia Raibulet

Abstract: In the last two decades, research in Self-Adaptive Systems (SAS) has proposed various approaches for endowing a software system with the ability to change itself at runtime in terms of self-adaptation strategies. For the wider adoption of these strategies, there is a need for a framework and tool support to enable their analysis, evaluation, comparison, and, eventually, their selection in overlapping cases. In this paper, we take a step in this direction by proposing a comprehensive metric suite, the Adaptive Strategies Metric Suite (ASMS), to measure the design and runtime properties of adaptive strategies for SAS. ASMS consists of metrics that can be applied through both static and dynamic code analysis. The metrics pertaining to static code analysis have been implemented as a plugin for the Understand tool.

Paper Nr: 113
Title:

Specification Based Testing of Object Detection for Automated Driving Systems via BBSL

Authors:

Kento Tanaka, Toshiaki Aoki, Tatsuji Kawai, Takashi Tomita, Daisuke Kawakami and Nobuo Chida

Abstract: Automated driving systems (ADS) are a major trend, and the safety of such critical systems has become one of the most important research topics. However, ADS are complex systems that involve various elements. Moreover, it is difficult to ensure safety using conventional testing methods due to the diversity of driving environments. Deep Neural Networks (DNN) are effective for the object detection processing that takes diverse driving environments as input. A method such as Intersection over Union (IoU), which defines a threshold value for the discrepancy between the bounding box of the inference result and the bounding box of the ground-truth label, can be used to test the DNN. However, such tests make it difficult to establish to what extent the DNN meets the specifications of the ADS. Therefore, we propose a method for converting formal specifications of ADS written in Bounding Box Specification Language (BBSL) into tests for object detection. BBSL is a language that can mathematically describe the specification of OEDR (Object and Event Detection and Response), one of the tasks of ADS. Using these specifications, we define specification-based testing of object detection for ADS. We then show that this testing is more safety-conscious for ADS than tests using IoU.

Short Papers
Paper Nr: 8
Title:

Schfuzz: Detecting Concurrency Bugs with Feedback-Guided Fuzzing

Authors:

Hiromasa Ito, Yutaka Matsubara and Hiroaki Takada

Abstract: It is challenging to detect concurrency bugs with fuzzing, for two main reasons. First, manifesting them by exploring the input space is inefficient because they only occur under specific interleavings. Second, re-running an input that detected a bug in a fuzzing campaign does not necessarily reproduce the bug, because typical runtimes do not schedule threads deterministically. This research proposes Schfuzz, a novel approach for detecting concurrency bugs with feedback-guided fuzzing. The approach executes programs under test deterministically, based on test cases generated by fuzzers. In addition, it feeds back dynamic memory-access orders to aid fuzzers in detecting concurrency bugs more efficiently and effectively. We evaluate Schfuzz with a hand-made motivating example and four benchmark programs from SCTBench (Thomson et al., 2016). The results show that it can detect concurrency bugs more efficiently and effectively than traditional feedback-guided fuzzing.
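
The core trick, as described, is that the program under test is executed deterministically from the fuzzer-generated test case. One way to picture this is to let input bytes double as the thread schedule, so replaying a test case replays the interleaving; the toy sketch below is hypothetical and is not Schfuzz's actual mechanism or API.

```python
# Toy illustration: the fuzzer's bytes pick which thread runs next, so
# identical test cases yield identical interleavings.
def run_deterministic(threads, schedule_bytes):
    live = list(threads)
    for b in schedule_bytes:
        if not live:
            break
        t = live[b % len(live)]        # each byte deterministically picks a thread
        try:
            next(t)                    # run exactly one step of that thread
        except StopIteration:
            live.remove(t)

def worker(name, steps):
    for i in range(steps):
        print(f"{name} step {i}")
        yield                          # a yield marks one schedulable step

# Same bytes => same interleaving => a bug found once is reproducible.
run_deterministic([worker("A", 2), worker("B", 2)], b"\x00\x01\x01\x00")
```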

Paper Nr: 15
Title:

An Empirical Evaluation of System-Level Test Effectiveness for Safety-Critical Software

Authors:

Muhammad N. Zafar, Wasif Afzal and Eduard P. Enoiu

Abstract: Combinatorial Testing (CT) and Model-Based Testing (MBT) are two recognized test generation techniques, yet evidence of their fault detection effectiveness and comparisons with the industrial state of practice are still scarce, more so at the system level for safety-critical systems, such as those found in trains. We use mutation analysis to perform a comparative evaluation of CT, MBT, and industrial manual testing in terms of fault detection effectiveness, using an industrial case study of a safety-critical train control management system. We examine the fault detection rate per mutant and the relationship between mutation scores and structural coverage using Modified Condition/Decision Coverage (MC/DC). Our results show that CT 3-ways, CT 4-ways, and MBT provide higher mutation scores. MBT did not perform better in detecting ‘Logic Replacement Operator-Improved’ mutants when compared with the other techniques, while manual testing struggled to find ‘Logic Block Replacement Operator’ mutants. None of the test suites were able to find ‘Time Block Replacement Operator’ mutants. CT 2-ways was found to be the least effective test technique. The MBT-generated test suite achieved the highest MC/DC coverage. We also found a generally consistent positive relationship between MC/DC coverage and mutation scores for all test suites.

Paper Nr: 18
Title:

Carbon-Box Testing

Authors:

Sangharatna Godboley, G. M. Rani and Sindhu Nenavath

Abstract: Combinatorial testing tools can be used to generate test cases automatically. Existing methodologies such as Random Testing still leave room for achieving better branch coverage, because boundary values, which are corner cases, are often not considered, resulting in low branch coverage. In this paper, we present a new type of testing named Carbon-Box Testing. The name reflects the blend of Black-Box testing techniques with a lightweight White-Box testing technique. We show the strength of our proposed method, Dictionary Testing, in enhancing branch coverage. In Dictionary Testing, we statically trace the input variables and their dependent values and use them as test inputs. Since utilizing the statically extracted values alone is insufficient for achieving maximal branch coverage, we also use Random Testing to generate test inputs. The initial values are real-time Linux process ids, and we then perform mini-fuzzing with basic arithmetic operations to produce more test inputs. Pairwise testing, or 2-way testing, is a well-known black-box combinatorial testing technique; it requires a set of test inputs to which it applies its mechanism to produce new test inputs. Our main proposed approach generates test inputs for achieving branch coverage from Random testing values, Dictionary testing values, and a combination of both, with and without pairwise testing values. We evaluated the effectiveness of our proposed approach in several experimental studies against baselines. The experimental results show that, on average, the fusion of Random and Dictionary tests with Pairwise testing performs best. Hence, this paper presents a new technique, a healthy combination of two black-box and one white-box testing techniques, which leads to Carbon-Box Testing.
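
A compact way to see how dictionary values, random values, and pairwise (2-way) combination fit together is sketched below. This shows the general mechanism only, not the paper's tool; a real harness would delegate pair covering to a generator such as PICT.

```python
# Pools mix statically traced "dictionary" values with random ones, then
# tests are greedily kept until every 2-way value pair is covered.
import itertools, random

random.seed(0)
dictionary = {"x": [0, 100], "y": [-1, 255]}   # statically traced values
pools = {k: v + [random.randint(-1000, 1000)] for k, v in dictionary.items()}
pools["z"] = [random.randint(-1000, 1000) for _ in range(3)]  # purely random

params = sorted(pools)
uncovered = {((p1, v1), (p2, v2))
             for p1, p2 in itertools.combinations(params, 2)
             for v1 in pools[p1] for v2 in pools[p2]}

tests = []
for candidate in itertools.product(*(pools[p] for p in params)):
    assignment = dict(zip(params, candidate))
    newly = {pair for pair in uncovered
             if all(assignment[p] == v for p, v in pair)}
    if newly:                                  # keep tests that cover new pairs
        tests.append(assignment)
        uncovered -= newly

print(len(tests), "tests;", len(uncovered), "pairs left uncovered")
```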

Paper Nr: 55
Title:

A Comparison of Source Code Representation Methods to Predict Vulnerability Inducing Code Changes

Authors:

Rusen Halepmollası, Khadija Hanifi, Ramin F. Fouladi and Ayse Tosun

Abstract: Vulnerability prediction is a data-driven process that utilizes previous vulnerability records and their associated fixes in software development projects. Vulnerability records are rarely observed compared to other defects, even in large projects, and are usually not directly linked to the related code changes in the bug tracking system. Thus, preparing a vulnerability dataset and building a prediction model is quite challenging. Many studies propose software metrics-based or embedding/token-based approaches to predict software vulnerabilities over code changes. In this study, we aim to compare the performance of two different approaches in predicting code changes that induce vulnerabilities. The first approach is based on an aggregation of software metrics, while the second is based on an embedding representation of the source code using an Abstract Syntax Tree and skip-gram techniques. We employed Deep Learning and popular Machine Learning algorithms to predict vulnerability-inducing code changes. We report our empirical analysis over code changes in the publicly available SmartSHARK dataset, which we extended by adding real vulnerability data. The software metrics-based code representation shows better classification performance than the embedding-based representation in terms of recall, precision, and F1-score.
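
For the embedding-based side, the AST plus skip-gram representation can be approximated with off-the-shelf word2vec over linearized AST node sequences; the gensim sketch below (toy corpus, invented sizes, not the paper's setup) shows the general idea of turning a code change into one averaged vector.

```python
# Skip-gram word2vec over linearized AST node sequences, averaged into one
# vector per code change.
import numpy as np
from gensim.models import Word2Vec

ast_sequences = [
    ["FunctionDef", "arguments", "Assign", "Name", "Call", "Return"],
    ["FunctionDef", "arguments", "If", "Compare", "Return", "Return"],
]

w2v = Word2Vec(ast_sequences, vector_size=16, window=3, min_count=1, sg=1)  # sg=1: skip-gram

def change_vector(node_sequence):
    return np.mean([w2v.wv[tok] for tok in node_sequence], axis=0)

print(change_vector(["Assign", "Call", "Return"]).shape)  # -> (16,)
```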

Paper Nr: 70
Title:

A New Approach for Software Quality Assessment Based on Automated Code Anomalies Detection

Authors:

Andrea Biaggi, Umberto Azadi and Francesca A. Fontana

Abstract: Methods and tools to support quality assessment and code anomaly detection are crucial to enable software evolution and maintenance. In this work, we aim to detect an increase or decrease in code anomalies by leveraging the concept of microstructures, which are relationships between entities in the code. We introduce a tool pipeline, called Cadartis, which uses an innovative immune-inspired approach for code anomaly detection, tailored to the organization’s needs. This approach has been evaluated on 3882 versions of fifteen open-source projects belonging to three different organizations, and the results confirm that it can recognize a decrease or increase in code anomalies (anomalous status). The pipeline has been designed to automatically learn patterns of microstructures from previous versions of existing systems belonging to the same organization, to build a personalized quality profiler based on its codebase. This work represents a first step towards new perspectives in the field of software quality assessment, and it could be integrated into continuous integration pipelines to profile software quality during the development process.

Paper Nr: 86
Title:

Using Bigrams to Detect Leaked Secrets in Source Code

Authors:

Anton V. Konygin, Andrey V. Kopnin, Ilya P. Mezentsev and Alexandr A. Pankratov

Abstract: Leaked secrets in source code lead to information security problems. It is important to find sensitive information in the repository as early as possible and neutralize it. By now, there are many different approaches to leaked secret detection without human intervention; often, these are heuristic algorithms using regular expressions. Recently, more and more approaches based on machine learning have appeared. Nevertheless, the problem of detecting secrets in code remains relevant since the available approaches often give a large number of false positives. In this paper, we propose an approach to leaked secret detection in source code based on machine learning using bigrams. This approach significantly reduces the number of false positives. The model showed a false positive rate of 2.4% and a false negative rate of 1.9% on the test dataset.
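
A plausible minimal rendering of the bigram idea (the paper's exact features and classifier are not given in the abstract): character 2-grams tend to separate high-entropy, secret-like strings from ordinary identifiers. The tokens and labels below are fabricated examples.

```python
# Illustrative character-bigram classifier; AKIA... is AWS's documented
# example key and the other secret-like token is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

strings = ["AKIAIOSFODNN7EXAMPLE", "ghp_x7Qz9fKpL2mN4rTv",  # secret-like
           "user_name", "MAX_RETRIES"]                      # benign identifiers
labels = [1, 1, 0, 0]                                       # 1 = leaked secret

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 2)),   # character bigrams
    LogisticRegression(max_iter=1000),
)
model.fit(strings, labels)
print(model.predict(["aws_secret_key", "9hT2qLx8ZkPw3vRn"]))
```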

Paper Nr: 106
Title:

Investigate How Developers and Managers View Security Design in Software

Authors:

Asif Imran

Abstract: Software security requirements have traditionally been considered a non-functional attribute of the software. However, as more software started to provide services online, existing mechanisms of using firewalls and other hardware to secure software have lost their applicability. At the same time, under the current world circumstances, cyber-attacks on software are ever increasing. As a result, it is important to consider the security requirements of software during its design. To design security into the software, it is important to obtain the views of the developers and managers of the software, and to evaluate whether their viewpoints match or differ regarding security. Conducting this communication through a specific model will enable the developers and managers to eliminate any doubts on security design and adopt an effective strategy to build security into the software. In this paper, we analyzed the viewpoints of developers and managers regarding security design. We interviewed a team of 7 developers and 2 managers, who worked in two teams to build a real-life software product that was recently compromised by a cyber-attack. We obtained their views on the reasons for the successful attack by the malware and took their recommendations on the important aspects to consider regarding security. Based on their feedback, we coded their open-ended responses into 4 codes, which we recommend using for other real-life software as well.

Paper Nr: 16
Title:

VeriCombTest: Automated Test Case Generation Technique Using a Combination of Verification and Combinatorial Testing

Authors:

Sangharatna Godboley

Abstract: We propose VeriCombTest, a combination of Verification and Combinatorial Testing. We experimented with 38 C programs from the RERS challenge repository. Verification (CBMC) produced 940 test cases, and Combinatorial Testing (PICT) populated a total of 42053 test cases. Notably, for 40 programs, PICT consumed only 2.6 minutes to populate the test inputs, whereas CBMC, a static symbolic executor, consumed 546.99 minutes to generate the test inputs. We performed mutation analysis for this work: VeriCombTest killed 355 more mutants than the baseline. VeriCombTest is a fully automated tool.

Paper Nr: 22
Title:

Studying Synchronization Issues for Extended Automata

Authors:

Natalia Kushik and Nina Yevtushenko

Abstract: The paper presents a study of synchronization issues for one of the non-classical state models; synchronization is a state identification problem widely used in the area of Model-Based Testing (MBT) and run-time verification/monitoring. We consider Finite Automata (FA) augmented with context variables and their related updates when transitions are executed. For such Extended Automata (EA), we define the notions of merging and synchronizing sequences that serve as reset words in MBT, and show that under certain conditions, and when every context variable is defined over a ring, it is possible for the extended automata of the studied class to ‘repeat’ the necessary and sufficient conditions established for classical automata. Otherwise, in the general case, the problem can be reduced to deriving reset words for classical FA that represent corresponding EA slices.
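
For the classical FA case that the general problem reduces to, a synchronizing (reset) word can be found by breadth-first search over subsets of states. A small self-contained sketch on a toy Černý-style automaton follows (illustrative only, not the paper's EA construction).

```python
# BFS over subsets of states finds a shortest synchronizing (reset) word.
# Toy automaton: 'a' cycles the states, 'b' merges state 3 into state 0.
from collections import deque

delta = {0: {"a": 1, "b": 0}, 1: {"a": 2, "b": 1},
         2: {"a": 3, "b": 2}, 3: {"a": 0, "b": 0}}

def synchronizing_word(delta):
    start = frozenset(delta)
    queue, seen = deque([(start, "")]), {start}
    while queue:
        states, word = queue.popleft()
        if len(states) == 1:            # all states merged: word resets the FA
            return word
        for x in ("a", "b"):
            nxt = frozenset(delta[s][x] for s in states)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + x))
    return None                         # automaton is not synchronizable

print(synchronizing_word(delta))        # a shortest reset word for this FA
```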

Paper Nr: 24
Title:

SmartMuVerf: A Mutant Verifier for Smart Contracts

Authors:

Sangharatna Godboley and P. R. Krishna

Abstract: Smart contracts are the logical programs holding the properties in Blockchain. Blockchain technologies enable society to move towards trust-based applications. Smart contracts are prepared between parties to hold their deals. If the deal held by a smart contract is complex and non-trivial, there is a high chance of attracting issues and losing assets, and these contracts often involve expensive assets. This necessitates the verification and testing of smart contracts. Since the source code of a smart contract is available, it is reasonable to apply verification and testing techniques. Traditionally, mutation testing has been observed to be one of the important testing techniques, and fault-based testing is a good mechanism to perform; however, mutation testing suffers from issues of time and cost. Looking at these issues, we introduce a new technique for mutation verification of smart contracts. In this paper, we present an approach for measuring the mutation score using a verification approach. We experimented with a total of 10 smart contracts.

Paper Nr: 43
Title:

MODINF: Exploiting Reified Computational Dependencies for Information Flow Analysis

Authors:

Jens Van der Plas, Jens Nicolay, Wolfgang De Meuter and Coen De Roover

Abstract: Information Flow Control is important for securing applications, primarily to preserve the confidentiality and integrity of applications and the data they process. Statically determining the flows of information for security purposes helps to secure applications early in the development pipeline. However, a sound and precise static analysis is difficult to scale. Modular static analysis is a technique for improving the scalability of static analysis. In this paper, we present an approach for constructing a modular static analysis for performing Information Flow Control for higher-order, imperative programs. A modular analysis requires information about data dependencies between modules. These dependencies arise as a result of information flows between modules, and therefore we piggy-back an Information Flow Control analysis on top of an existing modular analysis. Additionally, the resulting modular Information Flow Control analysis retains the benefits of its modular character. We validate our approach by performing an Information Flow Control analysis on 9 synthetic benchmark programs that contain both explicit and implicit information flows.

Paper Nr: 90
Title:

Timed Transition Tour for Race Detection in Distributed Systems

Authors:

Evgenii Vinarskii, Natalia Kushik, Nina Yevtushenko, Jorge López and Djamal Zeghlache

Abstract: The paper is devoted to detecting output races in distributed systems. We perform such detection by testing their implementations. As the underlying model for our test generation strategy, we consider a Timed Finite State Machine (TFSM, for short), where each input/output transition is augmented with a timed guard and an output delay. A potential output race can thus be simulated as an output delay mutant; this formalism is introduced in the paper. In order to build a test suite, we adapt a well-known test generation strategy, the transition tour method. The novelty of the proposed method relies on choosing appropriate timestamps for inputs, yielding a timed transition tour. We discuss its fault coverage for output race detection. As an application case study, we consider a Software Defined Networking (SDN) framework where the system under test is represented by the composition of a controller and a switch. Experimental results show that the timed transition tour can detect races in the behavior of the widely used ONOS controller.

Paper Nr: 114
Title:

Enhancing Unit Tests in Refactored Java Programs

Authors:

Anna Derezińska and Olgierd Sobieraj

Abstract: Refactoring provides systematic changes to program code in order to improve its quality. These changes may also require modifications to the unit tests associated with a refactored program. Developer environments assist with many code refactoring transformations and also support some modifications of the tests. However, two popular environments for Java programs have been found unable to update these tests satisfactorily for all refactorings. The flaws in refactoring, the adaptation of tests after refactoring, and possible improvements are discussed. A tool extension has been introduced that integrates with refactoring in the Eclipse environment and maintains the corresponding tests. For selected refactorings, additional test cases can also be created to increase code coverage and improve the testing of a refactored program. Experiments have been conducted to evaluate the proposed solutions and verify their limitations.

Paper Nr: 116
Title:

An Empirical Study on the Relationship Between the Co-Occurrence of Design Smell and Refactoring Activities

Authors:

Lerina Aversano, Mario L. Bernardi, Marta Cimitile, Martina Iammarino and Debora Montano

Abstract: Due to the continuous evolution of software systems, their architecture is subject to damage and the formation of numerous design issues. This empirical study focuses on the co-occurrence of design smells in software systems and refactoring activities. To this end, a detailed analysis is carried out of the data relating to the presence of design smells, the use of refactoring, and the consequences of such use. Specifically, the evolution of 17 different types of design smells in five open-source Java software projects has been examined. Overall, the results indicate that developers do not apply refactoring to design smells. This work also offers new and interesting insights for future research in this field.

Area 4 - Theory and Practice of Systems and Applications Development

Full Papers
Paper Nr: 2
Title:

Human-Centered Design for the Efficient Management of Smart Genomic Information

Authors:

Alberto García S., Mireia Costa, Ana León, Jose F. Reyes and Oscar Pastor

Abstract: Genomics is a massive and complex domain that requires great effort to extract valuable knowledge. Due to the reduction in sequencing costs and the advent of Next Generation Sequencing, the amount of publicly available genomics data has increased notably. These data are complex and heterogeneous, which makes the development of intuitive and usable tools critical. However, bioinformatics tools have often been developed without considering usability and User Interface design. As a result, there are relevant usability problems that complicate the work of bioinformaticians. Human-Centered Design (HCD) is a design approach that grounds the User Interface design process in the needs and desires of users, and it can be a suitable solution to improve the usability of new genomics tools. This work shows how intuitive and usable bioinformatics tools can be produced using HCD principles.

Paper Nr: 4
Title:

Security Tools’ API Recommendation Using Machine Learning

Authors:

Zarrin Tasnim Sworna, Anjitha Sreekumar, Chadni Islam and Muhammad Ali Babar

Abstract: Security Operation Center (SOC) teams manually analyze numerous tools’ API documentation to find appropriate APIs to define, update, and execute incident response plans for responding to security incidents. Manually identifying security tools’ APIs is time-consuming and can slow down security incident response. To mitigate this manual process’s negative effects, automated API recommendation support is desired. The state-of-the-art automated security tool API recommendation uses a Deep Learning (DL) model. However, DL models are environmentally unfriendly and prohibitively expensive, requiring huge time and resources (denoted as “Red AI”). Hence, “Green AI”, considering both efficiency and effectiveness, is encouraged. Given that SOCs’ incident response is hindered by cost, time, and resource constraints, we assert that Machine Learning (ML) models are likely to be more suitable for recommending suitable APIs with fewer resources. Hence, we investigate ML models’ applicability for effective and efficient security tools’ API recommendation. We used 7 real-world security tools’ API documentation, 5 ML models, 5 feature representations, and 19 augmentation techniques. Compared to the state-of-the-art DL-based approach, our Logistic Regression model with word- and character-level features reduces 95.91% CPU core hours, 97.65% model size, and 291.50% time, and achieves 0.38% better accuracy, which provides cost-cutting opportunities for industrial SOC adoption.
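
The winning model as described (logistic regression over combined word- and character-level features) has a very small footprint; here is a scikit-learn sketch with invented SOC task descriptions and API labels.

```python
# Logistic regression over combined word- and character-level TF-IDF features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, make_pipeline

tasks = ["block ip address on firewall", "quarantine infected host",
         "search alerts by source ip"]
apis = ["firewall.block_ip", "edr.isolate_host", "siem.search_alerts"]

model = make_pipeline(
    FeatureUnion([
        ("word", TfidfVectorizer(analyzer="word")),
        ("char", TfidfVectorizer(analyzer="char", ngram_range=(2, 4))),
    ]),
    LogisticRegression(max_iter=1000),
)
model.fit(tasks, apis)
print(model.predict(["block source ip on firewall"]))  # likely 'firewall.block_ip'
```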

Paper Nr: 5
Title:

What's in a Persona? A Preliminary Taxonomy from Persona Use in Requirements Engineering

Authors:

Devi Karolita, John Grundy, Tanjila Kanij, Humphrey Obie and Jennifer McIntosh

Abstract: Personas have been widely used during requirements engineering-related tasks. However, their presentation, composition, level of detail, and other characteristics vary greatly by domain of use. To better understand these, we formed a curated set of nearly 100 personas from 41 academic papers and analysed their similarities and differences. We then used our analysis to formulate a preliminary taxonomy of personas used for Requirements Engineering-related tasks. We describe the key findings from our analysis with examples, present our preliminary taxonomy, and discuss ways the taxonomy can be used and further improved.

Paper Nr: 6
Title:

Time-Constrained, Event-Driven Coordination of Composite Resources’ Consumption Flows

Authors:

Zakaria Maamar, Amel Benna and Nabil Otsmane

Abstract: This paper discusses the composition of primitive resources in preparation for their run-time consumption by business processes. This consumption is, first, subject to time constraints impacting the availability of primitive resources and, second, dependent on events impacting the selection of primitive resources. To address primitive resources’ disparate time availabilities, which could lead to conflicts, a coordination approach is designed, developed, and tested using Allen’s time algebra and a simulated dataset. The approach produces composite resources’ consumption flows on-the-fly after discovering time relations between primitive resources that ensure their availability and, hence, their assignment to business processes. Implementation results demonstrate the technical feasibility of the approach and identify time-related obstacles that could prevent primitive resources’ availability. Solutions addressing these obstacles are also reported in the implementation results.
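
For readers unfamiliar with Allen's time algebra, the discovery step amounts to classifying pairs of availability intervals into Allen's relations. A minimal helper follows (only a few of the 13 relations spelled out, purely illustrative):

```python
# Minimal Allen-relation classifier for two closed intervals (start, end).
def allen_relation(a, b):
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:
        return "before"
    if a2 == b1:
        return "meets"
    if a1 == b1 and a2 == b2:
        return "equals"
    if b1 < a1 and a2 < b2:
        return "during"
    if a1 < b1 < a2 < b2:
        return "overlaps"
    return "other"              # remaining relations follow the same pattern

# A resource available 9-11 and one available 11-13 can be chained, not overlapped.
print(allen_relation((9, 11), (11, 13)))  # -> meets
```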

Paper Nr: 29
Title:

Compaction of Spacecraft Operational Models with Metamodeling Domain Knowledge

Authors:

Kazunori Someya, Toshiaki Aoki and Naoki Ishihama

Abstract: Due to the difficulty of performing repairs during flight, spacecraft are operated according to operational scenarios tested before launch. Operational models, a type of SysML activity diagram, can be used to depict these scenarios. Although it aids communication between engineers and stakeholders, the activity diagram can rapidly grow rather large. An extensive operational model is therefore challenging to review, which could result in serious issues. Hence, to make operational models easier to evaluate, they should be made concise. This study offers a metamodel that provides stereotypes that can succinctly characterize spacecraft operational scenarios. First, a mind map was used to depict the domain knowledge of spacecraft operations. Second, a stereotype metamodel was created by extracting common knowledge from the mind map. Utilizing stereotypes, the size of operational models can be decreased; however, crucial review-related data could be lost. The shrinking stereotypes therefore ensure that crucial information is not lost. Several trials showed that the number of elements of operational models with stereotypes, as well as their size, was reduced by almost half compared with the original ones, allowing for a simplified review process and boosting trust in the accuracy of the operational scenarios.
Download

Paper Nr: 33
Title:

Requirements Elicitation and Modelling of Artificial Intelligence Systems: An Empirical Study

Authors:

Khlood Ahmad, Chetan Arora, Mohamed Abdelrazek, John Grundy and Muneera Bano

Abstract: Artificial Intelligence (AI) systems have gained significant traction in the recent past, creating new challenges in requirements engineering (RE) when building AI software systems. RE practices for AI have not been studied much, and empirical studies are scarce. Additionally, many AI software solutions tend to focus on technical aspects and ignore human-centred values. In this paper, we report on a case study of eliciting and modelling requirements using our framework and a supporting tool for human-centred RE for AI systems. Our case study is a mobile health application for encouraging people with type-2 diabetes to reduce their sedentary behaviour. We conducted our study with three experts from the app team: a software engineer, a project manager and a data scientist. We found that most human-centred aspects were not originally considered when developing the first version of the application. We also report on other insights and challenges faced in RE for the health application, e.g., frequently changing requirements.
Download

Paper Nr: 49
Title:

Toward a Deep Contextual Product Recommendation for SO-DSPL Framework

Authors:

Najla Maalaoui, Raoudha Beltaifa and Lamia L. Jilani

Abstract: Today’s demand for customized service-based systems requires that industry understand the context and the particular needs of its customers. Service-Oriented Dynamic Software Product Line (SO-DSPL) practices enable companies to create individual products for every customer by providing an interdependent set of features representing web services that are automatically activated and deactivated depending on the running situation. Such product lines are designed to support self-adaptation to new contexts and requirements. Users configure personalized products by selecting desired features based on their needs. However, with large feature models, users must understand the functionality of features, the impact of their gradual selections, and their current context in order to make appropriate decisions; they therefore need guidance in configuring their product. To tackle this challenge, users can express their product requirements in textual language, and a recommended product is generated with respect to the described requirements. In this paper, we propose a deep-neural-network-based recommendation approach that provides personalized recommendations to users, easing the configuration process. In detail, our recommender system is based on a deep neural network that predicts the relevant features of the recommended product for each user, taking into consideration their requirements, contextual data and previously recommended products. To demonstrate the performance of our approach, we compared six different recommendation algorithms in a smart home case study.
Download

Paper Nr: 56
Title:

A Distance-Based Feature Selection Approach for Software Anomaly Detection

Authors:

Suravi Akhter, Afia Sajeeda and Ahmedul Kabir

Abstract: A software anomaly refers to a bug, defect or anything else that causes the software to deviate from its normal behavior. Anomalies should be identified properly to build more stable and error-free software systems. There are various machine-learning-based approaches for anomaly detection. For proper anomaly detection, feature selection is a necessary step that helps to remove noisy and irrelevant features and thus reduces the dimensionality of the given feature vector. Most existing feature selection methods rank the given features using different selection criteria, such as mutual information (MI) and distance. These methods, especially MI-based ones, fail to capture feature interaction during the ranking/selection process for larger feature dimensions, which degrades the discrimination ability of the selected feature set. Moreover, it is problematic to decide on the appropriate number of features from the ranked feature set to obtain acceptable performance. To solve these problems, in this paper we propose Anomaly Detection for Software Data (ADSD), a feature subset selection method that is able to capture interactive and relevant feature subsets. Experimental results on 15 benchmark software defect datasets and two bug severity classification datasets demonstrate the performance of ADSD in comparison to four state-of-the-art methods.
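The abstract does not spell out ADSD’s selection criterion, so as a rough illustration of distance-based subset selection that accounts for feature interaction, the following Python sketch greedily grows the subset maximizing the distance between class centroids; the criterion and all names are our assumptions, not the paper’s method.

import numpy as np

def class_separation(X, y, subset):
    """Distance between class centroids in the candidate feature subspace."""
    Xs = X[:, subset]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    return np.linalg.norm(c0 - c1)

def greedy_select(X, y, k):
    """Greedily grow the subset that maximizes class separation, so
    interactions with already-chosen features influence each new pick."""
    selected = []
    for _ in range(k):
        remaining = [f for f in range(X.shape[1]) if f not in selected]
        best = max(remaining, key=lambda f: class_separation(X, y, selected + [f]))
        selected.append(best)
    return selected

X = np.random.rand(200, 20)            # stand-in software metrics
y = np.random.randint(0, 2, 200)       # defective vs. non-defective
print(greedy_select(X, y, 5))          # indices of the chosen feature subset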
Download

Paper Nr: 80
Title:

Methods for Model-Driven Development of IoT Applications: Requirements from Industrial Practice

Authors:

Benjamin Nast and Kurt Sandkuhl

Abstract: The Internet of Things (IoT) has become a crucial topic in research and industry over recent years. Enterprises often fail to create business value from IoT technology because they have difficulties defining organizational integration. Model-Driven Development (MDD) is considered an effective technique for IoT application development. We argue that methods for MDD should comprise organizational integration as well as system development and integration. This paper aims to provide an overview of the current state of research on MDD of IoT applications. For this purpose, we conducted a structured literature review (SLR). A research gap was identified, as no specific research could be found on MDD of IoT applications with a focus on both organizational and system aspects. We also derived requirements from an industrial use case. The main contributions of this paper are (a) requirements of small and medium-sized enterprises (SMEs) for methodical and technical IoT development support, derived from a use case, (b) the results of a systematic literature analysis in this field, and (c) an initial structure for the methodical support and an initial architecture for the accompanying tool support.
Download

Paper Nr: 82
Title:

Managing Domain Analysis in Software Product Lines with Decision Tables: An Approach for Decision Representation, Anomaly Detection and Resolution

Authors:

Nicola Boffoli, Pasquale Ardimento and Alessandro N. Rana

Abstract: This paper proposes an approach to managing domain analysis in Software Product Lines (SPLs) using Decision Tables (DTs) that are adapted to the unique characteristics of SPLs. The adapted DTs enable clear and explicit representation of the intricate decisions involved in deriving each software product. Additionally, a method is presented for detecting and resolving anomalies that may disrupt proper product derivation. The effectiveness of the approach is evaluated through a case study, which suggests that it has the potential to significantly reduce development time and costs for SPLs. Future research directions include investigating the integration of SAT solvers or other methods to improve scalability in specific cases, and conducting empirical validation to further assess the effectiveness of the proposed approach.
Download

Paper Nr: 95
Title:

Improved Business Analysis by Using 3D Models

Authors:

David Kuhlen and Andreas Speck

Abstract: The use of 3D objects may enhance requirements engineering procedures. Different approaches using 3D modelling in requirements engineering are described in the academic literature, and the state of research shows objectives that can be attained by using 3D modelling. This provides a good basis for formulating a reference model that helps project managers plan the use of 3D modelling in requirements engineering; for this, project managers need to know why, how and when to use 3D modelling. The findings from the literature were combined with an experiment to elaborate a recommendation for using 3D models in requirements engineering. The experiment was combined with a survey in a requirements engineering lecture at IU International University of Applied Sciences. The experiment shows 3D modelling using LEGO® SERIOUS PLAY® to be a well-liked method that seems to foster motivation and collaboration. However, a pre-analysis performed before using the regular LEGO® SERIOUS PLAY® method showed no significant effect in improving the analysis. A reference model is proposed to guide the use of 3D modelling in requirements engineering; in this proposal, especially the phases of requirements elicitation and solution design benefit from 3D modelling techniques.
Download

Paper Nr: 100
Title:

Features and Supervised Machine Learning Based Method for Singleton Design Pattern Variants Detection

Authors:

Abir Nacef, Sahbi Bahroun, Adel Khalfallah and Samir Ben Ahmed

Abstract: Design patterns codify standard solutions to common problems in software design and architecture. Given their importance in improving software quality and facilitating code reuse, much research has been devoted to their automatic detection. In this paper, we focus on singleton pattern recovery by proposing a method that can identify both orthodox implementations and non-standard variants. The recovery process is based on specific data created using a set of relevant features. These features are specific pieces of information defining each variant, extracted from the Java program by syntactic and semantic analysis. We build on the singleton analysis and the features proposed in our previous work (Nacef et al., 2022) to create structured data containing combinations of feature values defining each singleton variant, used to train a supervised Machine Learning (ML) algorithm. The goal is not limited to detecting the singleton pattern, but extends to specifying the implemented variant as well as incoherent structures that inhibit the pattern’s intent. We use different ML algorithms to create the Singleton Detector (SD) and compare their performance. The empirical results demonstrate that our method, based on features and supervised ML, can identify any singleton implementation with the specific variant’s name, achieving 99% precision and recall. We compared the proposed approach to similar studies, namely DPDf and GEML; the results show that SD outperforms these state-of-the-art approaches by more than 20% in precision on evaluation data constructed from different repositories (the PMART, DPB and DPDf corpora).
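As a hedged illustration of the general recipe (the paper’s exact feature set is defined in Nacef et al., 2022 and not reproduced here), the sketch below trains a scikit-learn classifier on invented boolean variant features; both the features and the labels are our assumptions.

from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per class: [private constructor, static instance
# field, static accessor method, lazy initialization]
X_train = [[1, 1, 1, 1],   # lazy singleton
           [1, 1, 1, 0],   # eager singleton
           [0, 1, 1, 0],   # non-standard variant
           [0, 0, 0, 0]]   # not a singleton
y_train = ["lazy", "eager", "variant", "none"]

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print(clf.predict([[1, 1, 1, 1]]))  # -> ['lazy']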
Download

Short Papers
Paper Nr: 26
Title:

Supervised Machine Learning for Recovering Implicit Implementation of Singleton Design Pattern

Authors:

Abir Nacef, Sahbi Bahroun, Adel Khalfallah and Samir Ben Ahmed

Abstract: An implicit or indirect implementation of the Singleton design pattern (SP) is a programming structure whose purpose is to restrict instantiation to a single object without actually using the SP. Such structures may not be faulty or errant, but they can negatively impact software quality, especially if used in inappropriate contexts. To improve source code quality, injecting the SP is sometimes mandatory; to make this possible, the specific structures must be identified and automatically detected. However, due to their vague and abstract nature, they can be implemented in various ways, which is not conducive to automatic and accurate detection. This paper presents the first method dedicated to the automatic detection of Singleton Implicit Implementations (SII) based on supervised Machine Learning (ML) algorithms. In this work, we define the different variants of SII; based on these detailed definitions, we propose relevant features and create a dataset named FTD (Feature Train Data) according to the corresponding variant. Using Long Short-Term Memory (LSTM) models trained on the FTD data, we extract feature values from Java programs. We then create another dataset, named SDTD (Singleton Detector Train Data), containing feature-combination values to train the ML classifier. We address the automatic detection of SII with different ML algorithms for the classification task, such as KNN, SVM, Naive Bayes and Random Forest. Based on different public Java corpora, we create and label a dataset named SDED (Singleton Detector Evaluating Data), used for evaluating and choosing the appropriate ML model. The empirical results demonstrate the ability of our technique to automatically detect SII.
Download

Paper Nr: 38
Title:

Software Engineering Comments Sentiment Analysis Using LSTM with Various Padding Sizes

Authors:

Sanidhya Vijayvargiya, Lov Kumar, Lalita B. Murthy, Sanjay Misra, Aneesh Krishna and Srinivas Padmanabhuni

Abstract: Sentiment analysis for software engineering (SA4SE) is a research domain with huge potential, with applications ranging from monitoring the emotional state of developers throughout a project to deciphering user feedback. There are two main approaches to sentiment analysis for this purpose: a lexicon-based approach and a machine-learning-based approach. Extensive research has been conducted on the former; hence, this work explores the efficacy of the ML-based approach through an LSTM model for classifying the sentiment of text. Three different datasets, StackOverflow, JIRA, and AppReviews, have been used to ensure consistent performance across multiple applications of sentiment analysis. This work analyzes how LSTM models perform sentiment prediction across the various kinds of textual content produced in the software engineering industry, in order to improve the predictive ability of the existing state-of-the-art models.
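A minimal Keras sketch of the kind of classifier and padding step the study varies; the vocabulary size, layer widths and three-class output are our assumptions, as the abstract does not give the exact architecture.

from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

PADDING_SIZE = 50                           # the study compares several sizes
sequences = [[4, 17, 9], [12, 3]]           # tokenized developer comments
X = pad_sequences(sequences, maxlen=PADDING_SIZE)

model = Sequential([
    Embedding(input_dim=10000, output_dim=64),
    LSTM(64),
    Dense(3, activation="softmax"),         # negative / neutral / positive
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")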
Download

Paper Nr: 44
Title:

Mapping Process Mining Techniques to Agile Software Development Perspectives

Authors:

Cyrine Feres, Sonia A. Ghannouchi and Ricardo Martinho

Abstract: Agile Software Development (ASD) processes have surfaced as an effective alternative for more efficient software project management. They concentrate on a set of informal best practices instead of a standardised process, making it difficult to determine the degree of real implementation in an organization. Process Mining (PM) can play a key role in such analysis by discovering the software development process model followed in a certain set of software projects, and by analysing event logs that record the projects’ executed tasks. These discovered processes can then be compared to standardised ASD methods such as Scrum and eXtreme Programming (XP), and improved accordingly. Motivated by this, we present in this paper a literature review revealing the state of the art of Process Mining and its usage in ASD processes, organized as a correlation between the three main research areas of PM (discovery, conformance, and enhancement) and the main ASD process perspectives, including organisational/team, control-flow, quality, time, cost & risk, and data. We then analyse and discuss the results of this review quantitatively and qualitatively, and identify future research opportunities accordingly.
Download

Paper Nr: 46
Title:

A Dynamic Service Placement in Fog Infrastructure

Authors:

Mayssa Trabelsi, Nadjib M. Bendaoud and Samir Ben Ahmed

Abstract: The Internet of Things (IoT) is a key technology that improves connectivity between applications and devices across different geographical locations. However, IoT devices, particularly those used for monitoring, have stringent timing requirements that Cloud Computing might not be able to satisfy. Fog computing, which uses fog nodes close to IoT devices, can solve this problem. In this paper, we propose a Dynamic Service Placement (DSP) algorithm for Fog infrastructures. DSP’s objective is to dynamically place the services emitted by applications, one at a time and in real time, on Fog nodes. The algorithm chooses the fog node with the lowest response time over the infrastructure and dynamically places the incoming service on it. The algorithm was implemented in the iFogSim simulator, and its performance was evaluated and compared to other algorithms. DSP showed very encouraging results, as it minimized the average response times and the application placement rate, thus lowering infrastructure usage and energy consumption.
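The core greedy rule can be illustrated in a few lines of Python; the node and service attributes below are our simplification of what a simulator such as iFogSim models, not the paper’s actual data structures.

def estimated_response_time(node, service):
    # network latency plus a simple processing-delay estimate on the node
    return node["latency"] + service["mips"] / node["free_mips"]

def place(service, nodes):
    """Place one incoming service on the node with the lowest estimate."""
    best = min(nodes, key=lambda n: estimated_response_time(n, service))
    best["free_mips"] -= service["mips"]     # reserve capacity dynamically
    return best["name"]

nodes = [{"name": "fog-1", "latency": 2.0, "free_mips": 800},
         {"name": "fog-2", "latency": 5.0, "free_mips": 2000}]
print(place({"mips": 400}, nodes))           # services arrive one at a time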
Download

Paper Nr: 51
Title:

Empirical Analysis for Investigating the Effect of Machine Learning Techniques on Malware Prediction

Authors:

Sanidhya Vijayvargiya, Lov Kumar, Lalita B. Murthy, Sanjay Misra, Aneesh Krishna and Srinivas Padmanabhuni

Abstract: Malware is used to attack computer systems and network infrastructure; classifying malware is therefore essential for stopping hostile attacks. In the aftermath of COVID-19, the virtual presence of individuals has greatly increased. From money transactions to personal information, everything is shared and stored in cyberspace, which has led to more numerous and more innovative malware attacks. Advanced packing and obfuscation methods are being used by malware variants to gain access to private information for profit, so there is an urgent need for better software security. In this paper, we identify the best ML techniques that can be used in combination with various ML and ensemble classifiers for malware classification. The goal of this work is to identify the ideal ML pipeline for detecting the family of a malware sample. Many previous works have been plagued by imbalanced datasets and a lack of feature selection. The best tools for describing malware activity are application programming interfaces (APIs), but creating API call attributes that allow classification algorithms to achieve high accuracy is challenging. The dataset used to validate the proposed method includes API call count histogram features extracted by dynamic analysis. The experimental results demonstrate that the proposed ML pipeline can effectively and accurately categorize malware, producing state-of-the-art results.
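As a rough sketch of one such pipeline (the classifiers compared in the paper are not listed in the abstract, so the choice below is illustrative): scale the API-call count histograms and fit an ensemble classifier with scikit-learn on stand-in data.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X = np.random.poisson(3, size=(500, 100))   # stand-in API call histograms
y = np.random.randint(0, 2, 500)            # benign vs. malware sample

pipe = make_pipeline(StandardScaler(), GradientBoostingClassifier())
print(cross_val_score(pipe, X, y, cv=5, scoring="f1").mean())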
Download

Paper Nr: 52
Title:

Detecting Outliers in CI/CD Pipeline Logs Using Latent Dirichlet Allocation

Authors:

Daniel Atzberger, Tim Cech, Willy Scheibel, Rico Richter and Jürgen Döllner

Abstract: Continuous Integration and Continuous Delivery are best practices used in the context of DevOps. By using automated pipelines for building and testing small software changes, possible risks are intended to be detected early. Those pipelines continuously generate log events that are collected in semi-structured log files; in practice, these log files can amass 100,000 events and more, and the relevant sections must be tagged manually by the user. This paper presents an online learning approach for detecting relevant log events using Latent Dirichlet Allocation. After grouping a fixed number of log events into a document, our approach prunes the vocabulary to eliminate words without semantic meaning. A sequence of documents is then described as a discrete sequence by applying Latent Dirichlet Allocation, which allows the detection of outliers within the sequence. By integrating the latent variables of the model, our approach provides an explanation of its predictions. Our experiments show that our approach is sensitive to the choice of its hyperparameters in terms of the number and choice of detected anomalies.
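The following gensim-based sketch shows the general mechanics under our own simplifications; the document grouping, vocabulary and distance-based outlier score are assumptions, not the paper’s exact design.

import numpy as np
from gensim import corpora, models

docs = [["build", "started"], ["tests", "passed"],
        ["build", "started"], ["segfault", "core", "dumped"]]
dictionary = corpora.Dictionary(docs)                 # pruned vocabulary
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary)

def topic_vector(bow, k=2):
    """Dense topic-mixture vector for one document of grouped log events."""
    v = np.zeros(k)
    for topic, p in lda.get_document_topics(bow, minimum_probability=0.0):
        v[topic] = p
    return v

vecs = np.array([topic_vector(b) for b in corpus])
scores = np.linalg.norm(vecs - vecs.mean(axis=0), axis=1)  # deviation from
print(scores.argmax())                                      # the typical mixture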
Download

Paper Nr: 68
Title:

From Descriptive to Predictive: Forecasting Emerging Research Areas in Software Traceability Using NLP from Systematic Studies

Authors:

Zaki Pauzi and Andrea Capiluppi

Abstract: Systematic literature reviews (SLRs) and systematic mapping studies (SMSs) are common studies in any discipline for describing and classifying past work, and for informing a research field of potential new areas of investigation. This last task is typically achieved by observing gaps in past work and hinting at the possibility of future research in those gaps. Using an NLP-driven methodology, this paper proposes a meta-analysis to extend current systematic methodologies of literature reviews and mapping studies. Our work leverages a Word2Vec model, pre-trained in the software engineering domain, combined with a time series analysis. Our aim is to forecast future trajectories of research outlined in systematic studies, rather than just describing them. Using the same dataset as our own previous mapping study, we were able to go beyond descriptively analysing the gathered data or merely ‘guessing’ future directions. In this paper, we show how recent advancements in the field of our SMS, together with the use of time series, enabled us to forecast future trends in the same field. Our proposed methodology sets a precedent for exploring the potential of language models coupled with time series in the context of systematically reviewing the literature.
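The time-series half of this pairing can be illustrated with a toy forecast; the counts below are invented, and the paper’s actual pipeline additionally uses Word2Vec similarity to group semantically related terms before extrapolating.

import numpy as np

# invented yearly mention counts of a traceability-related topic
years = np.array([2016, 2017, 2018, 2019, 2020, 2021])
counts = np.array([3, 4, 6, 9, 11, 15])

slope, intercept = np.polyfit(years, counts, 1)   # simple linear trend
forecast_2023 = slope * 2023 + intercept
print(round(forecast_2023, 1))                     # extrapolated interest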
Download

Paper Nr: 74
Title:

Text Mining Studies of Software Repository Contents

Authors:

Bartosz Dobrzyński and Janusz Sosnowski

Abstract: Issue tracking systems comprise data that are useful in evaluating and improving software development processes. Revealing and interpreting this information is a challenging problem that needs appropriate algorithms and tools. For this purpose, we use text mining schemes adapted to the specificity of the software repository. They are based on a detailed analysis of the dictionaries used, which comprise Natural Language Words (NLW) and are enhanced with the specialized entities found in issue descriptions (e.g., emails, code snippets, technical names); these entities are defined with specially developed regular expressions. The pre-processed texts are submitted to original text mining (machine learning) algorithms. This approach has been verified in commercial and open-source projects and has revealed possible development improvements.
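For illustration, two such patterns might look as follows; these are deliberately simpler than the specially developed expressions the paper refers to.

import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
CODE  = re.compile(r"`[^`]+`|\w+\(\)")   # backtick spans or foo() calls

text = "Ping john.doe@example.com, parse() throws after `config.load`."
print(EMAIL.findall(text))   # ['john.doe@example.com']
print(CODE.findall(text))    # ['parse()', '`config.load`']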
Download

Paper Nr: 81
Title:

Software Code Smells and Defects: An Empirical Investigation

Authors:

Reuben Brown and Des Greer

Abstract: Code smells indicate weaknesses in software design that may slow down development or increase the risk of bugs or failures in the future. This paper investigates the correlation of code smells with defects within classes. The method uses a tool to automatically detect code smells in selected projects and then assesses their correlation with the number of defects found in the code. Most existing articles find that software modules/classes with more smells tend to have more defects. While the experiments in this paper, covering a range of languages, agreed with this, the correlation was found to be weak. There remains a need for further investigation of the types of code smells that tend to indicate or predict defects. Future work will perform more detailed experiments by investigating a larger quantity and variety of software systems, as well as more granular studies into the types of code smells and defects arising.
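A typical way to compute such a correlation (the paper’s exact statistic is not stated in the abstract, so Spearman’s rank correlation is our assumption), with made-up per-class counts:

from scipy.stats import spearmanr

smells  = [0, 2, 3, 5, 8, 1, 4]   # detected code smells per class
defects = [1, 1, 2, 2, 5, 0, 1]   # defects recorded per class

rho, p = spearmanr(smells, defects)        # rank correlation and significance
print(f"rho={rho:.2f}, p={p:.3f}")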
Download

Paper Nr: 88
Title:

Sustainability-Driven Meetings as a Way to Incorporate Sustainability into Software Development Phases

Authors:

Ana Carolina Moises de Souza, Daniela Cruzes and Letizia Jaccheri

Abstract: Software sustainability has been a trending topic in academia over the last decade. Studies on software sustainability propose models, frameworks, or practices that can be applied in industry, but most of these proposals are still not systematically adopted there. There is therefore an opportunity to create a structured meeting format to support the concrete adoption of sustainability practices in software development. This paper provides an overview of these frameworks and how they can help facilitate sustainability-driven meetings (SusDM). To this end, we present practical examples and a workflow for preparing such a meeting by applying existing sustainability frameworks in SusDM. In this position paper, our hypothesis is that the contributions of such meetings may include improving software developers’ knowledge of sustainable software engineering, discovering and prioritizing new sustainability requirements, and implementing software sustainability practices.
Download

Paper Nr: 93
Title:

A Study on Early & Non-Intrusive Security Assessment for Container Images

Authors:

Mubin U. Haque and Muhammad A. Babar

Abstract: The ubiquitous adoption of container images to virtualize software contents brings significant attention to their security configuration, due to intricate and evolving security issues. Early security assessment of container images can prevent and mitigate security attacks on containers, enabling practitioners to realize a secure configuration. Security tools that operate in an intrusive manner during early assessment raise critical concerns about their applicability where container image contents are considered highly sensitive. Moreover, the sequential steps and manual intervention required to use such tools negatively impact the development and deployment of container images. In this regard, we empirically investigate the effectiveness of Open Container Initiative (OCI) properties combined with Machine Learning (ML) models to assess security without peeking inside container images. We extracted OCI properties from 1,137 real-world container images and investigated six traditional ML models with different OCI properties to identify the optimal ML model and its generalizability. Our empirical results show that ensemble ML models provide the optimal performance for assessing container image security when the model is built with all the OCI properties. Our empirical evidence can guide practitioners in the early security assessment of container images in a non-intrusive way, reducing the manual intervention required to assess container image security with security tools.
Download

Paper Nr: 97
Title:

Toward a Goal-Oriented Methodology for Artifact-Centric Process Modeling

Authors:

Razan Abualsaud, Hanh N. Tran, Ileana Ober and Minh K. Nguyen

Abstract: Process management systems aim to enable flexible yet systematic control of the execution of a process on the basis of its model. A process model should therefore encompass all the information necessary for achieving the monitoring goals of the operational process. To date, research in process modeling has tended to focus on developing process modeling languages to represent process models; however, it lacks concrete guidelines for modelers to systematically define such models with adequate detail to enable effective control of process execution. In this study, we address this gap by proposing a goal-oriented methodology for systematically modeling processes. Our methodology is dedicated to Artifact-Centric Process Modeling (ACPM), an emerging approach that combines data and process in a holistic manner and is thus more suitable for modeling complex unstructured processes. We illustrate the proposed methodology by applying it to model a portion of the Rational Unified Process (RUP), with the intention of enhancing traceability at execution time.
Download

Paper Nr: 105
Title:

Extracting Queryable Knowledge Graphs from User Stories: An Empirical Evaluation

Authors:

Ayodeji Ladeinde, Chetan Arora, Hourieh Khalajzadeh, Tanjila Kanij and John Grundy

Abstract: User stories are brief descriptions of a system feature told from a user’s point of view. During requirements elicitation, users and analysts co-specify these stories using natural language. A number of approaches have tried to use Natural Language Processing (NLP) techniques to extract different artefacts, such as domain models and conceptual models, and reason about software requirements, including user stories. However, large collections of user story models can be hard to navigate once specified. We extracted different components of user story data, including actors, entities and processes, using NLP techniques and modelled them with graphs. This allows us to organise and link the structures and information in user stories for better analysis by different stakeholders. Our NLP-based automated approach further allows the stakeholders to query the model to view the parts of multiple user stories of interest. This facilitates project development discussions between technical team members, domain experts and users. We evaluated our tool on user story datasets and through a user study. The evaluation of our approach shows an overall precision above 96% and a recall of 100%. The user study with eight participants showed that our querying approach is beneficial in practical contexts.
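As a toy illustration of the extract-then-query idea (a regular expression stands in for the paper’s NLP techniques, and all names below are ours), using networkx:

import re
import networkx as nx

STORY = re.compile(r"As an? (?P<actor>.+?), I want to (?P<process>\w+) (?P<entity>.+)")

G = nx.DiGraph()
for s in ["As a customer, I want to track my order",
          "As an admin, I want to export sales reports"]:
    m = STORY.match(s)
    G.add_edge(m["actor"], m["entity"], process=m["process"])

# query: everything the 'customer' actor interacts with, and how
print([(e, G["customer"][e]["process"]) for e in G["customer"]])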
Download

Paper Nr: 7
Title:

A Perspective from Large vs Small Companies Adoption of Agile Methodologies

Authors:

Manuela Petrescu and Simona Motogna

Abstract: To offer a perspective on the adoption of Agile methodologies in large versus small companies, we performed a study in which we discussed and analyzed the answers of 31 interviewees from 14 companies. We carefully selected the data set to be representative of all actors in the market in terms of company size, business model, role balance, and gender balance. We found that most of the companies adopted Agile methodologies in a more or less extensive manner. By far the most used ceremony was the daily meeting; other ceremonies were sometimes merged or removed from the methodology implementation. We also investigated how companies adopted the Agile mindset by analyzing how the Agile Manifesto was implemented in these companies, even though this was not in the scope of our study. From a scientific point of view, this study contributes, from a methodological perspective, to a better understanding of Agile adoption in different types of companies and of the challenges related to Agile practices and processes. For practitioners, the results presented in this paper offer evaluation means and solutions for Agile adoption.
Download

Paper Nr: 14
Title:

LibSteal: Model Extraction Attack Towards Deep Learning Compilers by Reversing DNN Binary Library

Authors:

Jinquan Zhang, Pei Wang and Dinghao Wu

Abstract: The need for Deep Learning (DL) based services has rapidly increased in the past years. As part of this trend, the privatization of Deep Neural Network (DNN) models has become increasingly popular: authors give customers or service providers direct access to their created models and let them deploy the models on devices or infrastructure outside the authors’ control. Meanwhile, the emergence of DL compilers makes it possible to compile a DNN model into a lightweight binary for faster inference, which is attractive to many stakeholders. However, distilling the essence of a model into a binary that is free to be examined by untrusted parties creates a chance to leak essential information: with only the DNN binary library, it is possible to extract the neural network architecture using reverse engineering. In this paper, we present LibSteal, a framework that leaks DNN architecture information by reversing the binary library generated by a DL compiler, recovering an architecture similar or even equivalent to the original. The evaluation shows that LibSteal can efficiently steal the architecture information of victim DNN models. After training the extracted models with the same hyper-parameters, we can achieve accuracy comparable to that of the original models.
Download

Paper Nr: 31
Title:

An OWL Multi-Dimensional Information Security Ontology

Authors:

Ines Meriah, Latifa A. Rabai and Ridha Khedri

Abstract: To deal with problems related to Information Security (IS) and to support requirements activities and architectural design, a full conceptualisation of the IS domain is essential. Several works have proposed IS ontologies capturing partial views of the IS domain; these ontologies suffer from incompleteness of concepts, lack of readability and portability, and dependence on a specific IS sub-domain. Following a rigorous and repeatable process, we systematically developed a comprehensive IS ontology. This process includes four steps: concept extraction, XML generation, OWL generation and dimensional view extraction. The obtained ontology is multidimensional and portable, and it supports ontology modularization. It is presented in both XML and OWL formats and comprises 2660 security concepts and 331 security dimensions.
Download

Paper Nr: 34
Title:

A Step to Achieve Personalized Human Centric Privacy Policy Summary

Authors:

Ivan Simon, Sherif Haggag and Hussein Haggag

Abstract: Online users continuously come across privacy policies for the services they use. Due to the complexity and verbosity of these policies, the majority of users skip the tedious task of reading the policy and simply accept it. Without reading and evaluating the document, users risk giving up all kinds of rights to their personal data and, for the most part, are unaware of the data sharing and handling processes. Efforts have been made to address the complex and lengthy structure of privacy policies by creating a standardized machine-readable format that web browsers can process automatically, by building a repository of crowdsourced summarized versions of some privacy policies, and by using natural language processing to summarize the policies. PrivacyInterpreter is one unique tool that acknowledges human-centric factors while summarising a policy: it generates a personalised summary of the privacy policy, providing relevant information to address the user’s privacy concerns. This paper presents the conceptualization of PrivacyInterpreter and implements a proof-of-concept model using a configured RoBERTa (base) model to classify a privacy policy and produce a summary based on privacy aspects that reflect users’ privacy concerns.
Download

Paper Nr: 41
Title:

Evaluation of Approaches for Documentation in Continuous Software Development

Authors:

Theo Theunissen, Stijn Hoppenbrouwers and Sietse Overbeek

Abstract: With the adoption of values, principles, practices, tools and processes from Agile, Lean, and DevOps, knowledge preservation has become a serious issue because documentation is largely left out. We identify two questions that are relevant for knowledge acquisition and distribution concerning design decisions, rationales, and reasons for code changes. The first concerns which knowledge is required upfront to start a project. The second concerns continuation after initial development and addresses which knowledge is required by those who deploy, use or maintain a software product. We evaluate two relevant approaches for alleviating these issues, ‘Just enough Upfront’ and ‘Executable Documentation’, with a total of 25 related artifacts. For the evaluation, we conducted a case study supported by a literature review, organizational and project metrics, and a survey, looking into both closed source code and closed classified source code. We draw two conclusions. First, git commit messages typically record what has been changed but not why the source code was changed; design decisions, rationales, and reasons for code changes should be saved as close as possible to the source code, for instance with Git pull requests. Second, knowledge about a software product is not only written down in artifacts but is also a social construction between team members.
Download

Paper Nr: 45
Title:

Efficient Academic Retrieval System Based on Aggregated Sources

Authors:

Virginia Niculescu, Horea Greblă, Adrian Sterca and Darius Bufnea

Abstract: On account of the extreme expansion of scientific research paper databases, the usage of search and recommender systems in this area has increased, as they can help researchers find appropriate papers by searching enormous indexed datasets. Depending on where papers are published, there might be strict policies that force the author to add the needed metadata, but there are still others for which the metadata are incomplete. As a result, many current solutions for searching and recommending papers are biased towards a certain database. This paper proposes a retrieval system that can overcome these problems by aggregating data from different databases in a dynamic and efficient way. Extracting data from different sources dynamically, and not only statically from a certain database, is important for assuring a complete interrogation, but at the same time it incurs complex operations that may affect the performance of the system. Performance can be maintained by using a carefully designed architecture that relies on tools allowing a high level of parallelization. The main original characteristic of the system is its hybrid interrogation of static data (stored in databases) and dynamic data (obtained through web interrogations).
Download

Paper Nr: 57
Title:

Software Vulnerability Prediction Knowledge Transferring Between Programming Languages

Authors:

Khadija Hanifi, Ramin F. Fouladi, Basak G. Unsalver and Goksu Karadag

Abstract: Developing automated and smart software vulnerability detection models has been receiving great attention from both the research and development communities. One of the biggest challenges in this area is the lack of code samples for all the different programming languages. In this study, we address this issue by proposing a transfer learning technique to leverage available datasets and generate a model that detects common vulnerabilities across different programming languages. We use C source code samples to train a Convolutional Neural Network (CNN) model; then, we use Java source code samples to adapt and evaluate the learned model. We use code samples from two benchmark datasets: the NIST Software Assurance Reference Dataset (SARD) and the Draper VDISC dataset. The results show that the proposed model detects vulnerabilities in both C and Java code with an average recall of 72%. Additionally, we employ explainable AI to investigate how much each feature contributes to the knowledge transfer mechanism between C and Java in the proposed model.
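A conceptual Keras sketch of this kind of transfer; the layer sizes, vocabulary and freezing strategy are assumptions, as the abstract does not describe the paper’s exact CNN or adaptation step.

from tensorflow.keras import Sequential, layers

base = Sequential([
    layers.Embedding(20000, 64),
    layers.Conv1D(128, 5, activation="relu"),
    layers.GlobalMaxPooling1D(),
])                                   # feature extractor trained on C samples

base.trainable = False               # freeze the transferred convolutional part
model = Sequential([
    base,
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # vulnerable vs. not vulnerable
])
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(...) would then run on the (smaller) Java token dataset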
Download

Paper Nr: 66
Title:

A Reflection on the Use of Systemic Thinking in Software Development

Authors:

Paolo Ciancarini, Mirko Farina, Artem Kruglov, Giancarlo Succi and Ananga Thapaliya

Abstract: The research examines the value and potential usefulness of using systemic thinking, which looks at the interconnectedness of things, to comprehend the complexities of software development projects and the technical and human factors involved. It considers two different aspects of systemic thinking - psychological and sociological - and posits that these can assist in understanding how software teams function and attain their objectives, as well as the goals of the entities for which they work. Our research aims to provide a novel contribution to the field by investigating the use of systemic thinking in software development teams and organizations. We evaluate the reliability and validity of the survey applied to different groups of relevant participants, relate our findings to existing literature, and identify the most representative factors of systemic thinking. Despite the popularity of various factors that fall under the umbrella of ’systems thinking’, there is limited understanding of their effectiveness in improving organizational performance or productivity, particularly when it comes to psychological and sociological systemic factors. The relationship between the use of systems thinking and organizational performance is often based on anecdotal evidence, rather than the identification and application of specific factors. Our work emphasizes the importance of understanding and applying such factors in order to build a solid foundation for the effective use of system dynamics and systems thinking tools, which is crucial for software development teams.
Download

Paper Nr: 67
Title:

Clouds Coalition Detection for Business Processes Outsourcing

Authors:

Amina Ahmed Nacer, Mohammed Riyadh Abdmeziem and Claude Godart

Abstract: Companies outsourcing their business processes (BPs) to the cloud must be sure that the sensitive information included in their BPs is protected. While several existing methods split the process model into a collection of BP fragments before a multi-cloud deployment, thereby minimizing the likelihood of a coalition, a risk still remains. In this paper we propose an approach for detecting malicious cloud providers that initiate or participate in a coalition. To do so, we rely on decoy processes having the same structure and amount of data as a real process, but with decision-making strategies and data sets that are completely different (fake) from the real ones. Our objective is to detect unexpected exchanges of messages between malicious clouds that may signify an attempt to initiate or participate in a coalition. The reputation level of each cloud initiating or joining a coalition is modified accordingly.
Download

Paper Nr: 73
Title:

Impact of COVID-19 on the Factors Influencing on-Time Software Project Delivery: An Empirical Study

Authors:

Mahmudul Islam, Farhan Khan, Mehedi Hasan, Farzana Sadia and Mahady Hasan

Abstract: The objective of this paper is to investigate the impact of COVID-19 on the factors influencing on-time software project delivery in different Software Development Life Cycle (SDLC) models, such as the Agile, Incremental, Waterfall, and Prototype models, and to identify how the crucial factors change with respect to demographic information. This study was conducted using a quantitative approach: we surveyed software developers, project managers, software architects, QA engineers and other roles using a Google form, and used Python for data analysis. We received 72 responses from 11 different software companies in Bangladesh. Based on these, we find that Attentional Focus, Team Stability, Communication, Team Maturity, and User Involvement are the most important factors for on-time software project delivery in different SDLC models during COVID-19. In contrast, before COVID-19, Team Capabilities, Infrastructure, Team Commitment, Team Stability and Team Maturity were found to be the most crucial factors; Team Maturity and Team Stability are common important factors both before and during COVID-19. We also identified changes in the impact level of the factors with respect to demographic information such as experience, company size, and the SDLC models used by participants. Attentional Focus is the most important factor for experienced developers, while for freshers all factors are almost equally important. This study finds a significant change among the factors for on-time software project delivery before and during COVID-19.
Download

Paper Nr: 91
Title:

Ontology of Online Management Tools Aimed at Artificial Management Implementation: An Example of Use in Software Design

Authors:

Olaf Flak

Abstract: After the first age of robotics in mechanical processes, the rapid development of computer science and the Internet suggests that AI will take over team management in the future. Both the rapid development of artificial intelligence in business management and the need for an adequate ontology to represent the organizational world have created a significant research gap. The resulting research problem is whether it is possible to create a comprehensive, coherent and formalized methodological concept of the management sciences that allows the design and implementation of real artificial management. The aim of the paper is to present the solution to the ontological part of this research problem, and to show the use of such an ontology to replace the human manager with an artificial manager. The paper describes the definition of ontologies and the considerations for their creation in various software applications, and presents the results of theoretical and practical research on a theoretical concept, called the system of organizational terms, which contains an ontology of organizational reality that meets the requirements of ontology-creation practice for software and enables the design and implementation of artificial managers.
Download

Paper Nr: 101
Title:

Toward a Modeling Language Prototype for Modeling the Behavior of Wireless Body Area Networks Communication Protocols

Authors:

Bethaina Touijer

Abstract: Modeling and evaluating the behavior of the medium access control (MAC) protocols of wireless body area networks (WBANs) through the model-checker toolset UPPAAL-SMC necessitates a level of expertise that many MAC protocol designers do not have. To facilitate the use of UPPAAL-SMC, we propose a model-driven engineering (MDE) approach that uses a modeling method (MM) as a starting point and UPPAAL-SMC as a target, and back. In this paper, we use the ADOxx platform to define a domain-specific modeling language (DSML) for WBANs, named the WBAN modeling language (WBAN-ML), to model the behavior of WBAN MAC protocols. The result of a prototype implementation of WBAN-ML is also presented.
Download

Paper Nr: 110
Title:

Various Shades of Teaching Agile

Authors:

Necmettin Ozkan, Sevval Bal and Mehmet Ş. Gök

Abstract: In parallel with the increasing demand for Agile in industry and academia, many lecturers have started teaching Agile Software Development in various programs. Teaching Agile at universities involves constraints, challenges and opportunities faced by both students and lecturers. Agile courses have been taught at universities using different approaches that can mainly be divided into two categories: teaching Agile in an agile way and teaching Agile in a conventional way. As the name suggests, Agile should be taught in an agile way, which is a challenging and still-developing endeavour. Despite the significance of Agile and of Agile teaching, there is a lack of theoretical and comprehensive studies on teaching and learning Agile in an agile way; the existing literature seems to be focused on practical and limited contexts, such as case studies. In this study, we recommend and present various agile ways to teach Agile by providing decision-tree-like paths, with their rationales, for a course design. We aim to assist educators who are interested in teaching Agile within a higher education course while designing their courses.
Download

Paper Nr: 117
Title:

Tendencies in Database Learning for Undergraduate Students: Learning In-Depth or Getting the Work Done?

Authors:

Emilia-Loredana Pop and Manuela-Andreea Petrescu

Abstract: This study explores and analyzes the learning tendencies of second-year students from different lines of study enrolled in a Databases course. We collected 79 answers from 191 enrolled students, which were analyzed and interpreted using thematic analysis. The participants provided two sets of answers, collected anonymously at the beginning and at the end of the course, allowing us to gather clear data regarding their interests and to identify their tendencies. We looked into their expectations and whether they were met, and concluded that the students want to learn only database basics; their main challenges were related to the course homework. We combined the information from the answers related to 1) other database-related topics they would like to learn, 2) how they plan to use the acquired information, and 3) overall interest in learning other database-related topics. The conclusion was that students prefer learning only the basic information that helps them achieve their goals: creating an application or using databases at work. For these students, “getting the work done” is preferred to “learning in-depth”.
Download