ENASE 2016 Abstracts


Area 1 - Mobile Software and Systems

Short Papers
Paper Nr: 61
Title:

Preventing Hospital Acquired Infections through a Workflow-based Cyber-physical System

Authors:

Maria Iuliana Bocicor, Arthur-Jozsef Molnar and Cristian Taslitchi

Abstract: Hospital acquired infections (HAI) are infections acquired within the hospital from healthcare workers, patients or the environment, but which have no connection to the initial reason for the patient’s hospital admission. HAI are a serious worldwide problem, leading to increased mortality rates and duration of hospitalisation, as well as a significant economic burden on hospitals. Although clear preventive guidelines exist, studies show that compliance with them is frequently poor. This paper details the software perspective of an innovative, business-process-based cyber-physical system that will be implemented as part of a European Union-funded research project. The system is composed of a network of sensors mounted at different sites around the hospital, a series of wearables used by the healthcare workers and a server-side workflow engine. For better understanding, we describe the system through the lens of a single, simple clinical workflow that is responsible for a significant portion of all hospital infections. The goal is that, when completed, the system will be configurable in the sense of facilitating the creation and automated monitoring of those clinical workflows that, when combined, account for over 90% of hospital infections.

Area 2 - Service Science and Business Information Systems

Full Papers
Paper Nr: 27
Title:

Are Suggestions of Coupled File Changes Interesting?

Authors:

Jasmin Ramadani and Stefan Wagner

Abstract: Software repositories include information which can be made available for bug fixing or maintenance using repository mining. The identification of coupled changes has been proposed several times. Yet, existing studies focus on the found couplings and ignore feedback from developers. We investigate three development projects and their repositories to find files that frequently change together to support the software developers. We complement the coupled-file information with details from the issue tracking system and the project documentation. We contrast our findings with feedback from the developers about how interesting our findings are for them. We found that the small size of the repositories made an insightful analysis difficult. The response to coupled changes from both experienced and inexperienced developers was mostly neutral. They accepted most of the additional attributes we presented. Furthermore, developers also suggested other issues to be relevant, e.g., the context of the coupled changes and the way they are presented, which we did not cover in this study. Therefore, coupled change analysis research will need to take presentation and context information into account.
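As a minimal illustration of the kind of co-change mining this line of work builds on, the following sketch counts file pairs that appear together in the same commits. The file names, commit history and support threshold are invented for the example; this is not the authors' tooling.

```python
from collections import Counter
from itertools import combinations

def coupled_files(commits, min_support=2):
    """Count how often each pair of files changes together.

    `commits` is a list of file-name sets, one per commit; pairs occurring
    at least `min_support` times are reported as coupled.
    """
    pair_counts = Counter()
    for files in commits:
        for pair in combinations(sorted(files), 2):
            pair_counts[pair] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_support}

# Hypothetical commit history: each set is the files touched by one commit
history = [
    {"ui.py", "model.py"},
    {"ui.py", "model.py", "db.py"},
    {"db.py"},
]
print(coupled_files(history))  # {('model.py', 'ui.py'): 2}
```

In practice such counts are extracted from the version control log and, as the paper suggests, enriched with issue tracking and documentation details before being shown to developers.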

Paper Nr: 38
Title:

Cloud Computing Adoption, Cost-benefit Relationship and Strategies for Selecting Providers: A Systematic Review

Authors:

Antonio Carlos Marcelino de Paula and Glauco de Figueiredo Carneiro

Abstract: Context: Cloud computing has been one of the most promising computing paradigms in industry, providing a customizable and resourceful platform to deploy software. There are a number of competing providers and available services that allow organizations to access computing services without owning the corresponding infrastructure. Goal: Identify the main characteristics of opportunities to migrate to the cloud, the respective challenges and difficulties, as well as factors that affect the cost-benefit relationship of such adoption. Method: This paper presents a systematic literature review to compare reported strategies of organizations to migrate to and adopt cloud computing, and their perception of the cost-benefit of this adoption. Results: The overall data collected from these studies indicates that a significant part of the companies perceived an inclination towards the innovation adoption process, influenced by technological, organizational and environmental contexts. Conclusion: Due to the variety of strategies, approaches and tools reported in the primary studies, it is expected that the results of this systematic literature review will help in establishing knowledge on how companies should adopt and migrate to the cloud, how the cost-benefit relationship can be evaluated, and how providers can be selected. These findings can be a useful reference to develop guidelines for an effective use of cloud computing.

Short Papers
Paper Nr: 4
Title:

Semi-automatic Generation of OrBAC Security Rules for Cooperative Organizations using Model-Driven Engineering

Authors:

Irvin Dongo and Vanea Chiprianov

Abstract: In an environment of increasing cooperation and interoperability, organizations share resources and services between them to increase their return on investment. But to control the use of shared resources, it is necessary to apply access control policies which are related to how organizations control and secure their scenarios of cooperation. In this paper, we perform a Systematic Literature Review on the current solutions to define access control policies for cooperative organizations. As a result, we identify limitations such as manual negotiation for establishing policies. To address these limitations, we introduce the Semi-Automatic Generation of Access Rules Based on OrBAC (SAGARBO) component which allows semi-automatic generation of security rules based on Model-driven engineering. This reduces negotiation time and the work of the security administrator.

Paper Nr: 9
Title:

Extended Change Identification System

Authors:

Parimala N. and Vinay Gautam

Abstract: Schema evolution leads to multiple versions of the data warehouse schema. We address the issue of whether the information required by the decision maker is present in some version of the data warehouse by checking all the versions for the existence or absence of the required information. The user specifies the sought information using business terms. We build a Delta Ontology which captures the ontology information in terms of a mapping between business terms and schema terms. The Delta Ontology is built for the modifications to the schema as it evolves. We propose an algorithm to search for the information in the latest version of E-Metadata and the Delta Ontology. Our algorithm lists all the versions where the information is available, giving precedence to finding the information in a single version over spreading it across versions. The decision maker is also informed if the information is totally missing.
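The core lookup this abstract describes can be pictured as follows: a business term is mapped to a schema term via the ontology, and each schema version is checked for that term. The mapping, version catalogue and term names below are entirely hypothetical, and the sketch ignores the paper's E-Metadata structure and single-version precedence rules.

```python
def find_versions(business_term, delta_ontology, versions):
    """Return the schema versions containing the schema term that the
    ontology maps a business term to (illustrative sketch only)."""
    schema_term = delta_ontology.get(business_term)
    return [v for v, terms in versions.items() if schema_term in terms]

# Hypothetical business-term-to-schema-term mapping and version catalogue
delta_ontology = {"revenue": "sales_amount"}
versions = {
    "v1": {"sales_amount", "customer_id"},
    "v2": {"turnover", "customer_id"},
    "v3": {"sales_amount", "region"},
}
print(find_versions("revenue", delta_ontology, versions))  # ['v1', 'v3']
```
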

Area 3 - Software Engineering

Full Papers
Paper Nr: 10
Title:

Breaking the Boundaries of Meta Models and Preventing Information Loss in Model-Driven Software Product Lines

Authors:

Thomas Buchmann and Felix Schwägerl

Abstract: Model-driven software product line engineering is an integrating discipline for which tool support has become available recently. However, existing tools are still immature and have several weaknesses. Among others, limitations in variability, caused by meta model restrictions, and unintended information loss are not addressed. In this paper, we present two conceptual extensions to model-driven product line engineering based on negative variability, namely alternative mappings and surrogates. Alternative mappings allow for unconstrained variability, mitigating meta model restrictions by virtually extending the underlying multi-variant domain model. Surrogates prevent unintended information loss during product derivation based on a context-sensitive product analysis, which can be controlled by a declarative OCL-based language. Both extensions have been implemented in FAMILE, a model-driven product line tool that is based on EMF, provides dedicated consistency repair mechanisms, and completely automates application engineering. The added value of alternative mappings and surrogates is demonstrated by a running example.

Paper Nr: 15
Title:

Extending UML/MARTE-SAM for Integrating Adaptation Mechanisms in Scheduling View

Authors:

Mohamed Naija and Samir Ben Ahmed

Abstract: The profile for Modeling and Analysis of Real-Time and Embedded systems (MARTE) defines a framework for annotating non-functional properties of embedded systems. In particular, the SAM (Schedulability Analysis Model) sub-profile offers stereotypes for annotating UML models with the information needed to carry out a scheduling phase. However, SAM does not allow designers to specify data to be used in the context of adaptive systems development. It is in this context that we propose an extension of the MARTE profile, and especially of the Schedulability Analysis Modeling sub-profile, to include adaptation mechanisms in the scheduling view. We illustrate the advantages and effectiveness of our proposal by modeling a FESTO case study as an Adaptive Real-Time and Embedded system.

Paper Nr: 19
Title:

A Methodology for Model-based Development and Safety Analysis of Transport Systems

Authors:

Simon Hordvik, Kristoffer Øseth, Jan Olaf Blech and Peter Herrmann

Abstract: We present a method to engineer the control software of transport systems and analyze their safety using the Reactive Blocks framework. The development benefits from the model-based approach and makes the analysis of the systems at design time possible. The software is analyzed for freedom from collisions and other spatiotemporal properties by combining test runs of already existing devices, to find out their physical constraints, with the analysis of simulation runs using the verification tool BeSpaceD. This allows us to discover potential safety hazards already during the development of the control software. In particular, we introduce a methodology for the engineering and safety analysis of transportation systems and demonstrate its practical usability by means of a demonstrator based on Lego Mindstorms.

Paper Nr: 41
Title:

RA2DL-Pool: New Useful Solution to Handle Security of Reconfigurable Embedded Systems

Authors:

Farid Adaili, Olfa Mosbahi, Mohamed Khalgui and Samia Bouzefrane

Abstract: Security of software and hardware components has become a major concern nowadays. This paper focuses on adaptive component-based control systems following the reconfiguration architecture analysis and design language (denoted RA2DL). Despite its efficiency, an RA2DL component can be compromised due to its lack of security mechanisms. This paper proposes a new method for modeling the security of RA2DL components, and argues for a more comprehensive treatment of an important security aspect through several mechanisms such as authentication and access control. We propose a new RA2DL architecture in which pools are containers of sets of RA2DL components characterized by similar properties. The proposed approach is applied to a real case study dealing with a Body-Monitoring System (BMS).

Short Papers
Paper Nr: 5
Title:

Towards Semantical DSMLs for Complex or Cyber-physical Systems

Authors:

Blazo Nastov, Vincent Chapurlat, Christophe Dony and François Pfister

Abstract: MDE is nowadays applied in the context of software engineering for complex or cyber-physical systems, to build models of physical systems that can then be verified and simulated before they are built and deployed. This article focuses on the direct formal verification and simulation of the dynamic semantics of DSMLs. By “direct”, we mean without transforming the DSML description into an automata-like one. This paper presents xviCore, a metamodeling language to create DSMLs equipped with an abstract syntax, a concrete syntax and a dynamic semantics. We exemplify xviCore by an integration of a metamodeling language and a formal behavioral modeling language, based on the blackboard design pattern. Formal verification techniques based on Linear Temporal Logic (LTL) and the Temporal Boolean Difference can then be applied, as demonstrated by the proposed approach.

Paper Nr: 16
Title:

Systematic Mapping Study of Ensemble Effort Estimation

Authors:

Ali Idri, Mohamed Hosni and Alain Abran

Abstract: Ensemble methods have recently been used for prediction in the data mining area in order to overcome the weaknesses of single estimation techniques. This approach consists of combining more than one single technique to predict a dependent variable and has attracted the attention of the software development effort estimation (SDEE) community. An ensemble effort estimation (EEE) technique combines several existing single/classical models. In this study, a systematic mapping study was carried out to identify papers on EEE techniques published in the period 2000-2015 and to classify them according to five criteria: research type, research approach, EEE type, single models used to construct EEE techniques, and rule used to combine single estimates into an EEE technique. Publication channels and trends were also identified. Among the 16 studies selected, homogeneous EEE techniques were the most investigated. Furthermore, machine learning single models were the most frequently employed to construct EEE techniques, and two types of combiner (linear and non-linear) have been used to obtain the prediction value of an ensemble.
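The two combiner families the mapping study distinguishes can be sketched in a few lines: a linear rule such as the arithmetic mean, and a non-linear rule such as the median. The estimates below come from three hypothetical single models; real EEE techniques combine calibrated models, not constants.

```python
import statistics

def linear_combiner(estimates):
    # Arithmetic mean: a linear combination rule with equal weights.
    return sum(estimates) / len(estimates)

def nonlinear_combiner(estimates):
    # Median: a commonly used non-linear combination rule.
    return statistics.median(estimates)

# Effort estimates (person-hours) from three hypothetical single models
single_estimates = [120.0, 150.0, 90.0]
print(linear_combiner(single_estimates))     # 120.0
print(nonlinear_combiner(single_estimates))  # 120.0
```

The mean is sensitive to a single outlying model, while the median discards it, which is one reason both rule types appear in the literature.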

Paper Nr: 20
Title:

A Novel R-UML-B Approach for Modeling and Code Generation of Reconfigurable Control Systems

Authors:

Raja Oueslati, Olfa Mosbahi, Mohamed Khalgui and Samir Ben Ahmed

Abstract: This research paper deals with the modeling and code generation of Reconfigurable Control Systems (RCS) following the UML and B methods. Reconfiguration means dynamic changes of the system behavior at run-time, according to well-defined conditions, to adapt the system to its environment. A reconfiguration scenario is applied as a response to user requirements or to any possible evolution in the environment. We assign a Reconfiguration Agent (RA) to the RCS to apply automatic reconfiguration. A new approach, called R-UML-B, is proposed. It consists of three complementary phases: UML specification, B specification and simulation. The first phase models the RCS with UML class and state diagrams. The second phase translates the UML specification into a B specification according to well-defined rules and the R-UML-B formalism, defining the Behavior, Control, Listener, Database and Executive modules of the RCS. Then, we determine the refinement model and generate C code from the B abstract model. We verify the RCS by following the B method in order to guarantee the consistency and correctness of the specification, refinement and code generation levels. The third phase imports the generated C code into a simulator, named B Simulator, in order to test and validate the proposed approach. All the contributions of this work are applied to the benchmark production system EnAS.

Paper Nr: 25
Title:

An Empirical Study of Two Software Product Line Tools

Authors:

Kattiana Constantino, Juliana Alves Pereira, Juliana Padilha, Priscilla Vasconcelos and Eduardo Figueiredo

Abstract: In the last decades, software product lines (SPL) have proven to be an efficient software development technique in industry due to their capability to increase quality and productivity and to decrease cost and time-to-market through extensive reuse of software artifacts. To achieve these benefits, tool support is fundamental to guide industries during the SPL development life-cycle. However, many different SPL tools are available nowadays, and the adoption of the appropriate tool is a big challenge for industry. In order to support engineers in choosing a tool that best fits their needs, this paper presents the results of a controlled empirical study to assess two Eclipse-based tools, namely FeatureIDE and pure::variants. This empirical study involved 84 students who used and evaluated both tools. The main weakness we observed in both tools is the lack of adequate mechanisms for managing variability, such as for product configuration. As strengths, we observed the automated analysis and the feature model editor.

Paper Nr: 28
Title:

Source and Test Code Size Prediction - A Comparison between Use Case Metrics and Objective Class Points

Authors:

Mourad Badri, Linda Badri and William Flageol

Abstract: Source code size, in terms of SLOC (Source Lines of Code), is an important parameter of many parametric software development effort estimation methods. Moreover, test code size, in terms of TLOC (Test Lines of Code), has been used in many studies to indicate the effort involved in testing. This paper aims at comparing empirically the Use Case Metrics (UCM) method, a use case model based method that we proposed in previous work, and the Objective Class Points (OCP) method in terms of early prediction of SLOC and TLOC for object-oriented software. We used both simple and multiple linear regression methods to build the prediction models. An empirical comparison, using data collected from four open source Java projects, is reported in the paper. Overall, results provide evidence that the multiple linear regression model, based on the combination of the use case metrics, is more accurate in terms of early prediction of SLOC and TLOC than: (1) the simple linear regression models based on each use case metric, and (2) the simple linear regression model based on the OCP method.
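The simple linear regression models the paper compares can be reproduced with ordinary least squares. The sketch below fits SLOC against a single predictor in pure Python; the "transactions" metric, the calibration data and the query project are invented for illustration and do not come from the paper's four Java projects.

```python
def fit_simple_linear(xs, ys):
    # Ordinary least squares fit of y = a + b*x.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical calibration data: use case transactions vs. observed SLOC
transactions = [30, 55, 18, 70, 40]
sloc = [4100, 7200, 2600, 9100, 5300]
a, b = fit_simple_linear(transactions, sloc)
print(round(a + b * 45))  # predicted SLOC for a project with 45 transactions
```

A multiple regression model, as favoured by the paper's results, extends this by fitting one coefficient per use case metric instead of a single slope.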

Paper Nr: 33
Title:

Self-Protection Mechanisms for Web Applications - A Case Study

Authors:

Claudia Raibulet, Alberto Leporati and Andrea Metelli

Abstract: Self-protection mechanisms aim to improve security of software systems at runtime. They are able to automatically prevent and/or react to security threats by observing the state of a system and its execution environment, by reasoning on the observed state, and by applying enhanced security strategies appropriate for the current threat. Self-protection mechanisms complement traditional security solutions which are mostly static and focus on the boundaries of a system, missing in this way the overall picture of a system's security. This paper presents several self-protection mechanisms which have been developed in the context of a case study concerning a home banking system. Essentially, the mechanisms described in this paper aim to improve the security of the system in the following two scenarios: users' login and bank operations. Furthermore, the proposed self-protection mechanisms are presented through the taxonomy proposed in (Yuan, 2014).

Paper Nr: 34
Title:

AWSM - Agile Web Migration for SMEs

Authors:

Sebastian Heil and Martin Gaedke

Abstract: Migrating legacy desktop applications to web applications is an important and challenging task for SME software companies. Due to their limited resources, migration should be integrated into ongoing development processes. Existing research in this area does not consider recent paradigm shifts in web development. Therefore, our work is dedicated to supporting SME software providers in migrating to modern web applications while integrating this into ongoing development. This paper outlines our idea and presents a roadmap towards achieving this goal.

Paper Nr: 43
Title:

Multi-variant Model Transformations — A Problem Statement

Authors:

Felix Schwägerl, Thomas Buchmann and Bernhard Westfechtel

Abstract: Model Transformations are a key element of Model-Driven Software Engineering. As soon as variability is involved, transformations become increasingly complicated. The lack of support for variability in model transformations impairs the acceptance of approaches to organized reuse such as software product lines. In this position paper, the general problem of multi-variant model transformations is formulated for MOF-based, XMI-serialized models. A simplistic case study is presented to specify the input and the expected output of such a transformation. Furthermore, requirements for tool support are defined, including a standardized representation of both multi-variant model instances and variability information, as well as an execution specification for multi-variant transformations. A literature review reveals that the problem is weakly identified and often solved using ad-hoc solutions; there exists no tool providing a general solution to the proposed problem statement. The observations presented here may serve for the future development of standards and tools.

Paper Nr: 49
Title:

CURA: Complex-system Unified Reference Architecture - Position Paper: A Practitioner View

Authors:

Ethan Hadar and Irit Hadar

Abstract: Constructing an enterprise-level solution requires integration of existing, modified, and new modular technologies. A customer-specific solution is instantiated from a reference implementation owned by the services organization, as a result of multiple products and their reference designs created by the R&D organization. Yet, the disciplines of R&D and enterprise architecture differ in their analysis and design processes, artifacts, and semantics, leading to mismatches in product design, knowledge and requirements interpretation. The Complex-system Unified Reference Architecture (CURA) was developed as a common platform for both field and R&D practices. This methodology binds a 4-layered structure and a 4-phased architecture process, controlling the solution architecture lifecycle from reference design to reference implementation and solution instantiation, and fits both agile and DevOps methodologies. The presented version of CURA was tested and implemented with several customers as a lean and minimal blueprinting approach, serving as part of the architectural deliverables. CURA can be adjusted to other visual binding notations such as the UML and TOGAF modeling languages, and can scale up to system-of-systems design.

Paper Nr: 50
Title:

Evaluating the Evaluators - An Analysis of Cognitive Effectiveness Improvement Efforts for Visual Notations

Authors:

Dirk van der Linden and Irit Hadar

Abstract: This position paper presents the preliminary findings of a systematic literature review of applications of the Physics of Notations: a recently dominant framework for assessing the cognitive effectiveness of visual notations. We present our research structure in detail and discuss some initial findings, such as the kinds of notations the PoN has been applied to, whether its usage is justified and to what degree users are involved in eliciting requirements for the notation before its application. We conclude by summarizing and briefly discussing further analysis to be done and valorization of such results as guidelines for better application.

Paper Nr: 51
Title:

A Human-centred Framework for Combinatorial Test Design

Authors:

Maria Spichkova and Anna Zamansky

Abstract: This paper presents AHR, a formal framework for combinatorial test design that is Agile, Human-centred and Refinement-oriented. The framework (i) allows us to reuse test plans developed at an abstract level at more concrete levels; (ii) has a human-centric interface providing queries and alerts whenever the specified test plan is incomplete or invalid; (iii) involves analysis of the testing constraints within combinatorial testing.
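The basic objects of combinatorial test design can be illustrated with a toy parameter model: tests are value combinations, and constraints rule some of them out. The parameters, values and constraint below are invented; this sketch enumerates the full Cartesian product and does not reflect AHR's refinement or query mechanisms.

```python
from itertools import product

# Hypothetical parameter model for a login feature
parameters = {
    "browser": ["firefox", "chrome"],
    "os": ["linux", "windows"],
    "account": ["guest", "admin"],
}

def valid(test):
    # An assumed testing constraint: guest accounts are not run on windows.
    return not (test["account"] == "guest" and test["os"] == "windows")

names = list(parameters)
tests = [dict(zip(names, values)) for values in product(*parameters.values())]
valid_tests = [t for t in tests if valid(t)]
print(len(tests), len(valid_tests))  # 8 6
```

Detecting when constraints leave a plan incomplete or empty, as AHR's alerts do, amounts to analysing which parts of this product the constraints exclude.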

Paper Nr: 52
Title:

A Research Agenda on Visualizations in Information Systems Engineering

Authors:

Jens Gulden, Dirk van der Linden and Banu Aysolmaz

Abstract: Effectively using visualizations in socio-technical artifacts like information systems and software yields a number of challenges, such as ensuring that they allow for all necessary information to be captured, that visualizations can be efficiently and correctly read, and, perhaps most importantly, that communication is fostered, leading to a shared understanding instead of misunderstandings and communication breakdowns. While over the last years many strides have been made to propose visualizations for specific purposes (such as modeling language notations, software interfaces, visual methods, and games), there has been less attention to frameworks and guidelines meant to support the people making such visualizations. When taking a closer look at the deficiencies in research on visualizations in information systems today, it turns out that a deeper understanding of the mental processes behind comprehending visualizations, and of the way humans are cognitively affected by visualizations, is required in order to gain advanced theoretical underpinnings for the creation and use of visualizations in information systems. In this paper we build towards a research agenda on visualization in information systems engineering by identifying a number of relevant requirements for research to address, of fundamental, methodical and tool nature.

Paper Nr: 57
Title:

Developing Green and Sustainable Software using Agile Methods in Global Software Development: Risk Factors for Vendors

Authors:

Nasir Rashid and Siffat Ullah Khan

Abstract: Global software development (GSD) is gaining momentum due to the potential benefits it offers. GSD aims at delivering remarkable software through a widely distributed pool of experts, with reduced effort, minimum cost and time. In recent years, GSD developers have reshaped the development processes and have adopted agile techniques and green engineering principles to cope with frequent changes in requirements, accelerate development in short increments, and produce energy-efficient and sustainable software. However, the adoption of agile methods for developing sustainable software poses a number of challenges. This paper presents a list of potential challenges/risks, identified through a systematic literature review (SLR), that need to be avoided by GSD vendors using agile methods for the development of green and sustainable software. Our findings reveal eight risk factors faced by GSD vendors in the development of green and sustainable software using agile methods. GSD vendors are encouraged to properly address all the identified factors in general, and the most frequently cited critical risks in particular, such as insufficient system documentation, limited support for real-time systems and large systems, management overhead, lack of customer presence, lack of formal communication and lack of long-term planning.

Paper Nr: 58
Title:

An Enhanced Equivalence Checking Method to Handle Bugs in Programs with Recurrences

Authors:

Sudakshina Dutta and Dipankar Sarkar

Abstract: Software designers often apply automatic or manual transformations to array-handling source programs to improve the performance of the target programs. Verdoolaege et al. (Verdoolaege et al., 2012) have proposed a method to automatically prove equivalence of the output arrays of the source and the generated transformed programs. Unlike other approaches, the method of (Verdoolaege et al., 2012) provides the most sophisticated techniques to validate programs with non-uniform recurrences besides programs with uniform recurrences. However, if the recurrence expressions of the source and the transformed programs refer to more than one base case, some of which are non-equivalent, and if the domains of the output arrays partition based on dependences on different base cases, then some imprecision in the equivalence checking results is observed. The equivalence checker reports the entire index spaces of the output arrays of the source program to be non-equivalent with those of the transformed program, instead of only the portions of the output arrays which depend on the non-equivalent base cases of the programs. In the current work, we have enhanced the equivalence checking method of (Verdoolaege et al., 2012) so that it can precisely indicate the equivalent and non-equivalent portions of the output arrays.

Paper Nr: 59
Title:

Zoetic Data and their Generators

Authors:

Paul Bailes and Colin Kemp

Abstract: Functional or “zoetic” representations of data embody the behaviours that we hypothesise are characteristic of all datatypes. The advantage of such representations is that they avoid the need to implement interpretations of symbolic data at each use in order to realize these characteristic behaviours. Zoetic data are not unheard-of in computer science, but support for them in current software technology remains limited. Even though the first-class function capability of functional languages inherently supports the essentials of zoetic data, the creation of zoetic data from symbolic data would have to be by repeated application of a characteristic interpreter. This impairs the effectiveness of the “Totally Functional” approach to programming, of which zoetic data are the key enabler. Accordingly, we develop a scheme for the synthesis of generator functions for zoetic data which correspond to symbolic data constructors but entirely avoid the need for a separate interpretation stage. This avoidance allows us to achieve a clear separation of concerns between the definitions of datatypes on the one hand and their various applications on the other.

Paper Nr: 63
Title:

Managing Usability and Reliability Aspects in Cloud Computing

Authors:

Maria Spichkova, Heinz W. Schmidt, Ian E. Thomas, Iman I. Yusuf, Steve Androulakis and Grischa R. Meyer

Abstract: Cloud computing provides a great opportunity for scientists, as it enables large-scale experiments that would take too long to run on local desktop machines. Cloud-based computations can be highly parallel, long running and data-intensive, which is desirable for many kinds of scientific experiments. However, to unlock this power, we need a user-friendly interface and an easy-to-use methodology for conducting these experiments. For this reason, we introduce here a formal model of a cloud-based platform and the corresponding open-source implementation. The proposed solution allows scientists to conduct experiments without a deep technical understanding of cloud computing, HPC, fault tolerance, or data management, and still leverage the benefits of cloud computing. In the current version, we have focused on biophysics and structural chemistry experiments, based on the analysis of big data from synchrotrons and atomic force microscopy. The domain experts noted the time savings for computing and data management, as well as the user-friendly interface.

Paper Nr: 64
Title:

End to End Specification based Test Generation of Web Applications

Authors:

Khusbu Bubna

Abstract: Using formal specifications to generate test cases presents great potential for automating testing and enhancing the quality of test cases. However, an important challenge in this direction is that design specifications are at a more abstract level than the implementation, with many important implementation-level details missing. To generate executable test cases, these implementation details must be included at some stage. Though there has been a lot of work in test generation from specifications, the existing methods all suffer from this problem: either the test cases are not executable, or the process involves a non-trivial, manual step of translating the abstract test cases into concrete test cases. In this work, we present an approach to specification-based test generation for web applications that overcomes the above challenge: test generation is completely automated and the test cases are fully executable on a test execution framework (e.g. Selenium RC). Further, our methodology allows generation of multiple sets of concrete test cases from the same formal specification. This makes it possible to use the same abstract specification to generate test cases for a number of versions of the system. Throughout the paper, a case study of a Learning Management System is used to illustrate our approach.

Paper Nr: 65
Title:

An Appropriate Method Ranking Approach for Localizing Bugs using Minimized Search Space

Authors:

Shanto Rahman and Kazi Sakib

Abstract: In automatic software bug localization, source code analysis is usually used to localize buggy code without manual intervention. However, considering irrelevant source code may bias localization accuracy. In this paper, Method-level Bug localization using Minimized search space (MBuM) is proposed to improve accuracy by considering only the source code liable for a bug. The relevant search space for a bug is extracted using the execution trace of the source code. By processing this relevant source code and the bug report, code and bug corpora are generated. Afterwards, MBuM ranks source code methods based on the textual similarity between the bug and code corpora. To do so, a modified Vector Space Model (mVSM) is used, which incorporates the size of a method into the Vector Space Model. Rigorous experimental analysis using different case studies is conducted on two large-scale open-source projects, namely Eclipse and Mozilla. Experiments show that MBuM outperforms existing bug localization techniques.
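The abstract does not give the mVSM formula, but the general flavor of size-aware VSM ranking can be sketched as follows. The logistic size weight and all names here are hypothetical illustrations, not the paper's actual model.

```python
# Hypothetical sketch of size-aware VSM ranking (not the paper's mVSM).
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two term-frequency vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_methods(bug_report, methods):
    """Rank methods by textual similarity to the bug report, scaled by
    a logistic function of method size (token count)."""
    bug_vec = Counter(bug_report.lower().split())
    scored = []
    for name, body in methods.items():
        terms = body.lower().split()
        # Invented size factor: larger methods get a modest boost.
        size_weight = 1.0 / (1.0 + math.exp(-len(terms) / 50.0))
        scored.append((cosine(bug_vec, Counter(terms)) * size_weight, name))
    return [name for _, name in sorted(scored, reverse=True)]

methods = {
    "parse": "parse input token stream error",
    "render": "draw pixels screen frame",
}
ranked = rank_methods("error when parsing input", methods)  # "parse" first
```

Restricting `methods` to those on the failing execution trace is what the abstract means by a minimized search space.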

Paper Nr: 8
Title:

Evolution Taxonomy for Software Architecture Evolution

Authors:

Noureddine Gasmallah, Abdelkrim Amirat and Mourad Oussalah

Abstract: Nowadays, architects face a proliferation of stakeholder requirements and must preserve and ensure the effectiveness of their software, with software evolution as a key solution. Hence, to map the software evolution landscape, there is a great need to articulate the thinking on which efforts to address this issue have been based. In this paper, we propose a framework for a software architecture evolution taxonomy based on four structural dimensions. This framework can both position existing evolution models in the field and highlight gray areas for the future. Mapping over the framework dimensions, a set of quality factors and an investigation covering 67 studies are used to assess the proposals. The results contain a number of relevant findings, including the need to improve software architecture evolution by accommodating predictable changes as well as promoting the emergence of operating mechanisms.

Paper Nr: 21
Title:

Constraints-based URDAD Model Verification

Authors:

Fritz Solms, Priscilla Naa Dedei Hammond and Linda Marshall

Abstract: In Model-Driven Engineering the primary artifact is a technology- and architecture-neutral model called a Platform Independent Model (PIM). Use-Case, Responsibility Driven Analysis and Design (URDAD) is a service-oriented method used to construct a PIM, commonly specified in the URDAD Domain-Specific Language (DSL). In this paper we show that model quality can be verified by specifying a set of quality constraints at the metamodel level, which are used to verify certain consistency, completeness, traceability and simplicity qualities of URDAD models. The set of constraints has been mapped onto the Object Constraint Language (OCL), and a tool to verify these constraints has been developed. The set of constraints is also used by a URDAD model editor to verify aspects of model quality as the model is being developed.

Paper Nr: 24
Title:

Evaluating A Novel Agile Requirements Engineering Method: A Case Study

Authors:

Tanel Tenso, Alex Norta and Irina Vorontsova

Abstract: The use of agile methods during software development is standard practice, and user stories are an established way of breaking complex system requirements into smaller subsets. However, user stories do not suffice for understanding the bigger picture of system goals. While methods exist that try to solve this problem, they lack visual tool support and are too heavy for smaller projects. This article fills the gap by evaluating a novel agile agent-oriented modelling (AAOM) method for requirements engineering. The AAOM method comprises a visual approach to agile requirements engineering that applies goal-model creation techniques taken from agent-oriented modelling and connects goals intuitively to user stories. A case-study-based evaluation explores the applicability of AAOM to requirements engineering in an agile software development process.

Paper Nr: 35
Title:

Validation of Loop Parallelization and Loop Vectorization Transformations

Authors:

Sudakshina Dutta, Dipankar Sarkar, Arvind Rawat and Kulwant Singh

Abstract: Loop parallelization and loop vectorization of array-intensive programs are two common transformations applied by parallelizing compilers to convert a sequential program into a parallel program. Validation of such transformations carried out by untrusted compilers is extremely useful. This paper proposes a novel algorithm for constructing the dependence graph of the generated parallel program. The transformations are then validated by checking equivalence of the dependence graphs of the original sequential program and the parallel program, using a standard and fairly general algorithm reported elsewhere in the literature. This equivalence checker still works even when the parallelizing transformations are preceded by various enabling transformations, except for loop collapsing, which changes the dimensions of the arrays. To address this issue, the present work expands the scope of the checker to handle this special case by informing it of the correspondence between the index spaces of the corresponding arrays in the sequential and parallel programs. The augmented algorithm is able to validate a large class of static affine programs. The proposed methods are implemented and tested against a set of available benchmark programs parallelized by the polyhedral auto-parallelizer LooPo and the auto-vectorizer Scout. During the experiments, a loop-parallelization bug in the LooPo compiler was detected.
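As a toy illustration of the kind of information a dependence graph encodes (not the paper's algorithm), consider flow dependences between iterations of a simple loop: if iteration i reads what iteration i-1 wrote, the loop cannot be parallelized as-is.

```python
# Hypothetical sketch: cross-iteration flow dependences for a 1-D loop
#   for i in 1..n-1: a[write_idx(i)] = a[read_idx(i)] + 1
# (function names and the representation are invented for illustration).

def flow_dependences(n, write_idx, read_idx):
    """Edges (src, dst) where iteration dst reads what iteration src wrote."""
    writes = {write_idx(i): i for i in range(1, n)}
    edges = []
    for i in range(1, n):
        src = writes.get(read_idx(i))
        if src is not None and src < i:
            edges.append((src, i))
    return edges

# a[i] = a[i-1] + 1 → a chain 1→2→3→4: the loop must stay sequential.
chain = flow_dependences(5, lambda i: i, lambda i: i - 1)

# a[i] = a[i] + 1 → no cross-iteration edges: safe to parallelize.
independent = flow_dependences(5, lambda i: i, lambda i: i)
```

An equivalence check between sequential and parallel dependence graphs, as the paper describes, asserts that the transformation preserved exactly this edge structure.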

Paper Nr: 45
Title:

Automatic Refactoring of Component-based Software by Detecting and Eliminating Bad Smells - A Search-based Approach

Authors:

Salim Kebir, Isabelle Borne and Djamel Meslati

Abstract: Refactoring has been proposed as a de facto behavior-preserving means to eliminate bad smells. However, manually determining and performing useful refactorings is a tough challenge, because seemingly useful refactorings can improve some aspects of a software system while making other aspects worse. Therefore, it has been proposed to treat automated object-oriented refactoring as a search-based technique. Nevertheless, a review of the literature shows that automated refactoring of component-based software has not yet been investigated. Recently, a catalogue of component-relevant bad smells has been proposed in the literature, but there is a lack of component-relevant refactorings. In this paper we propose detection rules for component-relevant bad smells as well as a catalogue of component-relevant refactorings. We then rely on these two elements to propose a search-based approach for automated refactoring of component-based software systems by detecting and eliminating bad smells. Finally, we evaluate our approach on a medium-sized component-based software system and assess its efficiency and accuracy.

Paper Nr: 53
Title:

Towards an Engineering Process for Developing Accessible Software in Small Software Enterprises

Authors:

Sandra Sanchez-Gordon, Mary-Luz Sánchez-Gordón and Sergio Luján-Mora

Abstract: This study presents the results of a web accessibility evaluation performed on a sample of six software products developed by small software enterprises in two countries. According to the International Organization for Standardization (ISO), an enterprise, organization, department or project with up to 25 people is considered small. All the products evaluated presented accessibility issues, mainly missing HTML labels, missing alternative texts, and color contrast errors. These results show that small software enterprises need an engineering development process that, taking into account their staff and budget constraints, includes activities for improving the accessibility of their software. We present the current state of ongoing work to define such a process based on ISO/IEC 29110 that includes accessibility-related tasks in each of the following activities: initiation, analysis, design, construction, integration and test, and delivery.

Paper Nr: 60
Title:

Engineering Real-Time Communication Through Time-triggered Subsumption - Towards Flexibility with INCUS and LLFSMs

Authors:

David Chen, René Hexel and Fawad Riasat Raja

Abstract: Engineering real-time communication protocols is a complex task, particularly in the safety-critical domain. Current protocols exhibit a strong tradeoff between flexibility and the ability to detect and handle faults in a deterministic way. Model-driven engineering promises a high level design of verifiable and directly runnable implementations. Arrangements of logic-labelled finite-state machines (LLFSMs) allow the implementation of complex system behaviours at a high level through a subsumption architecture with clear execution semantics. Here, we show that the ability of LLFSMs to handle elaborate hierarchical module interactions can be utilised towards the implementation of testable, safety-critical real-time communication protocols. We present an efficient implementation and evaluation of INCUS, a time-triggered protocol for safety-critical real-time communication that transcends the rigidity imposed by existing real-time communication systems through the use of a high-level subsumption architecture.

Paper Nr: 62
Title:

On Source Code Optimization for Interpreted Languages using State Models

Authors:

Jorge López, Natalia Kushik and Nina Yevtushenko

Abstract: This paper is devoted to code optimization techniques with respect to various criteria. Code optimization is well studied for compiled languages; however, interpreted languages can also benefit from optimization approaches. We present work in progress on how code optimization can be performed effectively for applications developed in interpreted languages. The methods and techniques proposed in the paper rely on the use of formal models, in particular state models. We propose code optimizations based on two different state models, namely weighted tree automata and extended finite automata. The problem of extracting such models is known to be hard, and in both cases we provide recommendations on how such models can be derived for code in an interpreted language. All the optimization techniques proposed in the paper are accompanied by corresponding illustrative examples.