
A Systematic Literature Review on Automated Analysis of Feature Models


List of abbreviations

SPL     Software Product Line
FM      Feature Model
SPLE    Software Product Line Engineering
CSP     Constraint Satisfaction Problem
BDD     Binary Decision Diagram
SAT     Propositional Satisfiability Problem
OWL     Ontology Web Language
RACER   Renamed ABox and Concept Expression Reasoner
BeTTy   Benchmarking and TesTing on the analysis of feature models
FAMA    FeAture Model Analyser
CURE    Configuration Understanding and REmedy
ASs     Atomic Sets
FLAME   FaMa formaL frAMEwork
QBF     Quantified Boolean Formula
QSAT    Quantified SATisfiability
FODA    Feature-Oriented Domain Analysis

Abstract

Software product line engineering is about building a set of related products that share common assets. Feature models are commonly used to manage variability and adaptability in software product lines. Generally, feature models are information models that represent a set of products as a set of features. The purpose of the automated analysis of feature models is the computer-aided extraction of information from feature models. This master thesis delivers a broad literature review on the automated analysis of feature models from January 1990 to December 2016. It contributes by bringing together previous works, pointing out open issues in this thriving area, outlining the automated analysis operations on feature models that have been performed and evaluated, and summarizing these works along their most important factors: the number of features used in the experiments, the number of cross-tree constraints of the studied FMs, the time spent executing each operation, the type of FM, the employed FM (e.g. academic, industrial, or random), the number of FM instances (i.e. the number of generated FM trees used during the experiments for each paper), the employed technique, and the evaluation technique (e.g. case study, performance test).

As a direct result of this SLR, we identify gaps, such as the lack of extended feature models in experiments, and the lack of performed and evaluated operations that require high computational effort during the experimental process.

We also identify a number of challenges, such as the formalization and computational complexity of the operations, the performance comparison of the approaches, and support for extended feature models. We also note that the reviewed studies focused on performing operations more than on evaluating them.

Keywords: Software product lines, feature models, automated analyses, systematic literature review.

1.                   Introduction

This chapter briefly discusses the motivation behind the main questions of the research, followed by the research objectives and the research questions.

1.1. Motivation

SPLs (Software Product Lines) are families of software systems that are modeled by feature models, on which automated analysis operations are performed in order to achieve customer personalization. SPL engineering promotes the production of a family of software products from common features instead of producing them one by one from scratch [Benavides, 2007].

Products in an SPL are differentiated by their features, where a feature is an increment in program functionality. Individual products are specified using features; software product lines are specified using feature models [Benavides, 2007]. A feature model characterizes the information of the existing products of a software product line in terms of features and the relationships between them.

There are several challenges that face a developer during the automated analysis of feature models: (i) some automated analysis techniques do not support all analysis operations, and they focus on low-cost operations rather than on expensive ones that demand a high level of hardware resources; (ii) engineers need support in choosing the appropriate technique, which calls for a systematic review of the published primary reports that examines them in detail and describes their methodology and results; and (iii) engineers may need to know which operations were performed or evaluated in each related paper. Understanding how each technique works and making a detailed comparison helps industry apply automated analysis techniques in its own development environment.

This research aims to conduct a complete and detailed systematic literature review of existing automated analysis techniques for feature models, pointing out performed and evaluated operations, based on a wide search through papers in the SPL field.

1.2. Research Objectives

The objectives will be the following:

  • To review the existing automated analysis operations.
  • To compare how each operation was evaluated according to reviewed papers.
  • To point out the open issues in this field to be addressed in future research.

We conduct a systematic literature review of relevant papers and studies related to SPL and automated analysis operations. The aim of this review is to point out how these operations are performed, their application and domain conditions, and which evaluation methods are used.

1.3. Research Questions

To fulfill the purpose of this research, we translate the objectives that were mentioned in Section 1.2 to the following Research Questions (RQs):

1.3.1. RQ1: Which operations of analysis have been automated and which ones were evaluated? This research question can be divided into two sub-questions as follows:

RQ1.1: Which operations have been performed? In order to answer this question, we catalog each paper with its related operations based on D. Benavides' systematic literature reviews. In addition, we complement this literature review by searching for other relevant papers in this field, together with the experimental results of each paper, to investigate the outcomes and make a comparison.

RQ1.2: What operations of analysis on FMs were evaluated? The goal of this question is to point out which operation(s) were evaluated by each study.

1.3.2. RQ2: What is the experimental design of the evaluation process? The goal of this question is to describe how operations of analysis on feature models are evaluated. We aim to point out the number of FM instances used in each experiment, the number of features of each FM, the type of FM (i.e. basic, extended, cardinality-based, computationally-hard, clone-enabled, or randomly generated), and the execution time of each implemented operation.

1.3.3. RQ3: What are the highlights since the review of Benavides et al. (from 2010 to 2016)? The goal of this question is to describe the main findings of our review in relation to the results of Benavides et al. Since our work extends their paper by providing additional analysis and considering additional relevant studies from 2010 to 2016, we aim at answering the following sub-questions:

RQ3.1: What are the main current contributions in this field?

RQ3.2: Are there significant differences between the periods 1990 to 2009 and 2010 to 2016?

1.3.4. RQ4: What are the main gaps in the literature for this field? The goal of this question is to extract the main gaps from each of the included papers, in order to point out open issues in this field for future analysis.

1.4. Thesis Structure

This thesis is structured as follows. The first chapter presents the main goal of this research and the research questions to be addressed.

The second chapter presents the background and base information on which we rely throughout the research.

The third chapter defines the methodology and the strategy used in the research. The fourth chapter presents the results from our literature review, covering some definitions, the types of feature models, and the automated analysis operations applied to these models. The last chapter summarizes the conclusions of our study and future work.

2.                   Background

In this chapter, we provide a brief review of SPLE, feature models, and the automated analysis operations performed on them.

2.1. Software Product Line Engineering

There is a growing need for developing more complex software systems that can be customized at lower cost, in shorter development time, and with higher quality. In order to satisfy this need, the Software Product Line (SPL) approach has been gradually embraced in the software industry. An SPL is a set of related systems that are built from a set of common software assets and share a managed, common set of features.

Software Product Line Engineering (SPLE) is the methodology for developing a diversity of software products and software-intensive systems, using common techniques and platforms, with lower cost, shorter time to market, and higher productivity, product line scalability, and quality [K. Pohl et al., 2005]. SPLE relies on the explicit modeling of product commonalities and variabilities by using Feature Models (FMs), which will be defined in the next section.

2.2. Feature Model

A Feature Model is a special type of information model that represents the information of all possible products of an SPL in terms of features and relationships among them. It can be analyzed using automated analysis operations [Benavides et al., 2006]. Figure 1 depicts a car feature model example using the basic feature model notation. A car consists of a body, a transmission, an engine, and optionally a cruise control. A transmission is either automatic or manual, and an engine is electric-powered, gasoline-powered, or both.

Figure 1: A car feature model

2.3. Feature Model Notations

During our research, we found five types of proposed FM notations, which are differentiated according to the structure of the features and the relationships between them. We summarize these notations as follows:

2.3.1. Basic Feature Model

Constraints:

– If feature Camera is included, then feature High Resolution must be included.

– If feature Basic is included, then feature GPS is excluded, and vice versa.

Figure 2: A mobile phone feature model with constraints [Benavides et al., 2006]

Figure 2 shows an example of a basic feature model. A basic feature model should be able to represent the following relationships among features:

Table 1 depicts the list of symbols used in basic feature modeling; the graphical symbols (a–f) correspond to the relationships described below.

Table 1. Symbols used in basic feature modeling

a. Mandatory. A child feature is mandatory when it is required to appear whenever its parent feature appears. For instance, it is mandatory for a mobile phone to support calls.

b. Optional. A child feature is optional when it may or may not appear when its parent feature appears. For instance, it is optional to have GPS in a mobile.

c. Alternative. A set of child features is alternative when exactly one child feature can be selected when the parent feature appears. For instance, the screen can only be high resolution, color or basic.

d. Or–relation. A set of child features has an or–relation with its parent when one or more sub-features can be selected when the parent feature appears. For instance, the media support of a mobile can be MP3, camera or both at the same time.

In addition, compatibility rules are allowed.  Compatibility rules refer to two types of additional cross–tree constraints to restrict feature combinations:

e. Excludes. A feature X excludes Y means that if X appears then Y must not appear, and vice versa.

f. Requires. A feature X requires Y means that if X appears then Y must also appear, but not vice versa.
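These relationships have a direct propositional reading, which Section 2.4 exploits for automated analysis. As a minimal illustrative sketch (ours, not taken from a specific reviewed paper), each relationship can be expressed as a Python predicate over Boolean feature selections:

    # Propositional reading of basic feature model relationships.
    # Each predicate returns True when a selection respects the relationship.

    def mandatory(parent, child):
        # child is selected if and only if the parent is selected
        return child == parent

    def optional(parent, child):
        # child may only be selected when the parent is selected
        return (not child) or parent

    def alternative(parent, children):
        # exactly one child iff the parent is selected, none otherwise
        return sum(children) == (1 if parent else 0)

    def or_relation(parent, children):
        # at least one child iff the parent is selected, none otherwise
        return any(children) == parent

    def requires(x, y):
        # if X appears then Y must appear
        return (not x) or y

    def excludes(x, y):
        # X and Y never appear together
        return not (x and y)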

2.3.2. Cardinality-based

A cardinality-based FM is a basic feature model extended with UML-like multiplicities of the form [n, m], where n is called the lower bound and m the upper bound. We introduce the relationships of this notation as follows:

Feature cardinality: a sequence of intervals of the form [n, m], with n as lower bound and m as upper bound. These intervals bound the number of instances of the feature that can be part of a product.

Group cardinality: an interval <n, m>, with n as lower bound and m as upper bound, which determines the minimum and maximum number of child features that can be part of a product when the parent feature is selected. Hence, an alternative relationship is equivalent to a <1..1> group cardinality and an or-relationship is equivalent to <1..N>, where N is the number of features in the relationship, as sketched below.
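As a minimal illustrative check (ours), a group cardinality <n..m> can be validated by counting the selected children:

    def group_cardinality_ok(selected_children, n, m, parent_selected):
        # <n..m>: when the parent is selected, between n and m children
        # must be chosen; when it is not, no child may be chosen.
        count = sum(selected_children)
        return n <= count <= m if parent_selected else count == 0

    # An alternative group is <1..1>, an or-group is <1..N>:
    print(group_cardinality_ok([True, False], 1, 1, True))  # True
    print(group_cardinality_ok([True, True], 1, 1, True))   # False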

An example of a cardinality-based feature model of a mobile phone is given in Figure 3:

Figure 3. Cardinality-based feature model for Mobile phone family [Chang Hwan, 2015]

Table 2 depicts the list of symbols used in cardinality-based feature modeling; the graphical symbols (a–k) correspond to the notations described below.

Table 2. Symbols used in cardinality-based feature modeling [Chang Hwan, 2015]

Now we introduce a brief explanation for each notation as follows.

a. Mandatory: It is a child solitary feature with cardinality [1..1].

b. Optional: It is a child solitary feature with cardinality [0..1].

c. Optional Clonable: It is a child solitary feature with cardinality [0..m], m > 1.

d. Mandatory Clonable: It is a child solitary feature with cardinality [n..m], n > 0 and m > 1.

e. It is a child grouped feature with cardinality [0..1].

f. It is a child grouped feature with cardinality [1..1].

g. It is a feature F with an attribute of type T and a given value.

h. Feature model reference F.

i. Xor-Group: It is a feature group with cardinality <1-1>.

j. Or-Group: It is a feature group with cardinality <1-k>; where k is group size.

k. Feature group with cardinality <i-j>.

2.3.3. Extended Feature Model

Extended FMs include more information about features, such as feature attributes. Figure 4 shows an example whose attribute relationships are:

PRICE = MEDIA.PRICE + SCREEN.PRICE + GPS.PRICE + CALLS.PRICE

PRICE = RESOLUTION.PRICE + COLOR.PRICE + BASIC.PRICE

RTIME = MEDIA.RTIME + SCREEN.RTIME + GPS.RTIME + CALLS.RTIME

RTIME = RESOLUTION.RTIME + COLOR.RTIME + BASIC.RTIME

Figure 4. Extended feature model for an SPL in the HIS domain

In this example, each feature has two attributes: RTIME (response time) and PRICE. Each attribute of a leaf feature takes values in a domain; for instance, the price of a camera can take values between 10 and 12. For parent features, the values of the attributes are the sum of their children's values (RTIME and PRICE). For example, the price of Media is the sum of the prices of its selected children (Camera or MP3) [Benavides et al., 2005].
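As a small illustration of this derived-attribute rule (the concrete numbers below are ours; the text only fixes the camera price between 10 and 12):

    # Derived attributes: a parent's PRICE/RTIME is the sum over its
    # selected children. The concrete values are illustrative only.
    camera = {"PRICE": 10, "RTIME": 2}
    mp3    = {"PRICE": 5,  "RTIME": 1}

    def derived(attribute, selected_children):
        return sum(child[attribute] for child in selected_children)

    # If a product selects both Camera and MP3 under Media:
    print(derived("PRICE", [camera, mp3]))  # 15
    print(derived("RTIME", [camera, mp3]))  # 3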

Despite the consensus about including attributes and attribute relationships in feature models, there is neither a standard notation nor an agreement on the types of attributes that can be included in feature models.

The following concepts are usually used when dealing with feature attributes:

Attribute. An attribute of a feature is any characteristic of a feature that can be measured.

Attribute domain. It specifies the space of possible values for an attribute. Every attribute has an associated domain. Domains can be discrete (e.g. integers, booleans, enumerated values) or continuous (e.g. reals).

Attribute value. The value of an attribute, belonging to its domain. There is a default value in case the feature is not selected. A value can be either a value taken directly from the domain (basic attributes) or an expression combining other attributes of the same or other features (derived attributes).

Attribute relationships. A relationship between one or more attributes of a feature, or between attributes and the feature itself.

2.3.4. Clone-Enabled

A verification problem occurs during a feature model's customization process when a subset of features selected from the feature model is not a valid customizing result; a valid customizing result must satisfy all constraints among the features of the FM.

We need to verify that all constraints among features are satisfied by the customization decision. Otherwise, this decision will be inappropriate for the customization process and will decrease its efficiency [Wei Zhang et al., 2008].

Another problem that may occur during customization is caused by the insertion of cloneable features into the feature model: during customization, the tree structure containing a cloneable feature and all its offspring features can be cloned into many copies, and each copy can be customized individually.

The problem caused by cloneable features is that the original semantics of some constraints among features is lost after a feature is cloned. As a result, the capability of verifying whether a customizing result is valid with respect to the constraints among features is lost.

Table 3 depicts the symbols used in this type of feature model; the graphical symbols (a–k) correspond to the notations described below.

Table 3. Symbols used in clone-enabled feature modeling [Wei Zhang et al, 2008]

Now we introduce a brief explanation for each notation as follows.

a. Mandatory feature with the name "X": this feature must be selected in a customizing result if its parent feature is selected. It must be removed if its parent is removed; if it has no parent feature, it must be selected in all customizing results.

b. Optional feature with the name "Y": if the parent feature is selected, an optional feature can either be selected or removed from the customizing result. It must be removed if its parent is removed. If it has no parent, it can either be selected or not.

c. Mandatory or Optional: a feature of this type can be either mandatory or optional, meaning that it can be replaced by either a mandatory feature or an optional feature.

d. Feature reference: a reference to a feature called Z.

e. Clonable features: when the symbol (an interval [a, b]; 0 < a ≤ b) is placed at the top of a feature, it means that the feature is clonable. The symbol limits the number of the clonable feature's clones to between a and b.

f. Refinement relation between two features: a refinement relation connects two features. The feature placed at the start point of the arrow is called the parent of the feature connected to the end point of the arrow. A feature can have at most one parent feature.

g. Refinement path: denotes a path containing one or more refinement relations and zero or more features, where each feature connects the non-arrow end of one refinement relation to the arrow end of another.

h. Requires constraint between two features: a requires constraint connects two features. The feature at the start point of the arrow is called the requirer and the other the requiree. This constraint means that if the requirer is bound in a customizing result, the requiree must also be bound.

i. Excludes constraint between two features: an excludes constraint connects two features. This constraint means that the two features must not both appear in the same customizing result.

j. Binding predicate among a set of features and binding predicates: the left end connects to a composite constraint or to one of the right ends of another binding predicate. The right ends connect to a group of features and binding predicates, respectively. There are three types of binding predicate: and (denoted by ∧), or (denoted by ∨), and xor (denoted by |).

k. Composite constraint between two binding predicates: there are two types of composite constraint: requires (denoted by →) and excludes (denoted by →←).

2.3.5. Computationally-hard

These are feature models with a complicated structure, i.e. a high number of features and relations; they are used to reveal the performance of tools and operations in pessimistic cases.

2.4. Automated Analysis of Software Product Lines

The process for automating the analysis of SPLs is depicted in Figure 5. First, the model representing the SPL is translated into a logical representation such as propositional logic, constraint programming or description logic, so that it is possible to benefit from off-the-shelf solvers that automatically analyze the representation, thus making it possible to automatically analyze the SPL itself.

Figure 5: The process for analyzing a software product line [Benavides et al., 2010].

Figure 5 depicts a conceptual framework that provides a two-step process. First, the process takes a feature model as input and translates it into a specific representation or paradigm such as propositional logic, constraint programming, description logic or ad-hoc data structures. Then, solvers or specific algorithms are used to automatically analyze the representation and provide the result as output.
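To make this two-step process concrete, the following minimal sketch (ours; it replaces an off-the-shelf solver with brute-force enumeration) encodes the car feature model of Figure 1 as propositional constraints and computes its products:

    from itertools import product as assignments

    FEATURES = ["Car", "Body", "Transmission", "Automatic", "Manual",
                "Engine", "Electric", "Gasoline", "Cruise"]

    def valid(s):
        # s maps each feature name to True/False
        return (s["Car"]                                      # root selected
                and s["Body"] == s["Car"]                     # mandatory
                and s["Transmission"] == s["Car"]             # mandatory
                and s["Engine"] == s["Car"]                   # mandatory
                and ((not s["Cruise"]) or s["Car"])           # optional
                # alternative: exactly one of Automatic/Manual iff Transmission
                and s["Automatic"] + s["Manual"] == (1 if s["Transmission"] else 0)
                # or-relation: at least one of Electric/Gasoline iff Engine
                and (s["Electric"] or s["Gasoline"]) == s["Engine"])

    products = []
    for bits in assignments([False, True], repeat=len(FEATURES)):
        s = dict(zip(FEATURES, bits))
        if valid(s):
            products.append({f for f in FEATURES if s[f]})

    print(len(products))  # 12, matching the count reported for Figure 1

In practice, tools replace this exhaustive enumeration with SAT, CSP or BDD solvers, since the number of assignments grows as 2^n with the number of features n.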

Next, we summarize the operations of analysis and modification, giving a definition and a simple example for each:

Determining if a product is valid. The inputs of this operation are a feature model and a product, and it returns a value determining whether the product belongs to the feature model or not.

For example, the product P below does not belong to the feature model of Figure 1, because the automatic and manual transmissions cannot be in a product at the same time.

P = {Car, Body, Transmission, Automatic, Manual, Engine, Electric}

This operation can be useful for an SPL analyst in order to verify if a product described as a set of features is available in an SPL.

Determining if a feature model is void. The input of this operation is a feature model and the result is a Boolean value determining whether the input feature model is void or not. A feature model is not void if it represents at least one product.

Calculating the number of products. The input of this operation is a feature model and the returned value is the number of products of the feature model. The number of potential products is 2^n − 1, where n is the number of features taken into account. In our example of Figure 1, the number of products of the feature model is 12.

Dead features detection. This operation takes a feature model as input and returns the set of features that never appear in any product of the feature model.

Valid product configuration. This operation takes as input a feature model and a product and it returns a value determining if the product belongs to the feature model or not.

Obtaining all the products. This operation takes as input a feature model and returns all valid products. In the feature model of Figure 1, the result of this operation is the following list of products:

{Car, Body, Transmission, Automatic, Engine, Electric},

{Car, Body, Transmission, Manual, Engine, Electric},

{Car, Body, Transmission, Automatic, Engine, Gasoline},

{Car, Body, Transmission, Manual, Engine, Gasoline},

{Car, Body, Transmission, Automatic, Engine, Electric, Gasoline},

{Car, Body, Transmission, Manual, Engine, Electric, Gasoline},

{Car, Body, Transmission, Automatic, Engine, Electric, Cruise},

{Car, Body, Transmission, Manual, Engine, Electric, Cruise},

{Car, Body, Transmission, Automatic, Engine, Gasoline, Cruise},

{Car, Body, Transmission, Manual, Engine, Gasoline, Cruise},

{Car, Body, Transmission, Automatic, Engine, Electric, Gasoline, Cruise},

{Car, Body, Transmission, Manual, Engine, Electric, Gasoline, Cruise}

Determining if two feature models are equivalent. This operation takes as input two feature models and determines whether these models are equivalent or not.

For example, the feature models of the next figure are equivalent, since the sets of products specified by them are the same: {{A, B, C}, {A, B, C, D}}.

Providing explanations. The input of this operation is a feature model and it returns an explanation when the feature model is void, a dead feature is detected within the feature model or, in general, an error is detected.

Refactoring. A feature model is a refactoring of another feature model if both represent the same set of products but have a different structure. Refactoring is useful to restructure a feature model without changing its semantics.

Product configuration optimization. The inputs of this operation are a feature model and an objective function, and it returns the best product according to the objective function, which decides how good a product is.

Calculating commonality. The inputs of this operation are a feature model and a feature within this feature model and it returns a value that represents the percentage of valid products where the feature appears.

For example, these are the commonality of some features of the feature model of Figure 1:

Commonality(Manual) = 6/12 = 0.5

Commonality(Gasoline) = 8/12 ≈ 0.667

Calculating variability. The only input of this operation is a feature model, and it returns the ratio between the number of products of the feature model and the number of potential products if all features could be combined without restriction.

For example, the variability of the feature model of Figure 1 considering all the features but the root is:

12 / (2^8 − 1) = 12/255 ≈ 0.047
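Continuing the illustrative enumeration sketch from Section 2.4, commonality and variability reduce to simple counting over the set of products:

    def commonality(products, feature):
        # fraction of valid products in which the feature appears
        return sum(feature in p for p in products) / len(products)

    print(commonality(products, "Manual"))    # 6/12 = 0.5
    print(commonality(products, "Gasoline"))  # 8/12 = 0.666...

    n = 8  # features considered, excluding the root
    print(len(products) / (2**n - 1))         # variability = 12/255 = 0.047...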

Filtering a set of products. This operation takes three inputs: a feature model, a set of features Fi to be included, and a set of features Fj to be excluded. It returns the set of products that contain the features of Fi and none of the features of Fj.

For example, the set of products of the feature model of Figure 1 can be filtered with:

Fi = {Manual} and Fj = {Cruise}. The resulting products are:

{Car, Body, Transmission, Manual, Engine, Electric}

{Car, Body, Transmission, Manual, Engine, Gasoline}

{Car, Body, Transmission, Manual, Engine, Electric, Gasoline}
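Continuing the same illustrative sketch, filtering is a direct set comparison over the enumerated products:

    def filter_products(products, included, excluded):
        # keep products containing every feature of Fi and none of Fj
        return [p for p in products
                if included <= p and not (excluded & p)]

    filtered = filter_products(products, {"Manual"}, {"Cruise"})
    print(len(filtered))  # 3, as listed above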

Valid partial configuration. The inputs of this operation are a feature model and a partial configuration, and it returns a value indicating whether the configuration is valid or not, i.e. a partial configuration is valid if it does not include any contradiction.

Atomic sets. The only input of this operation is a feature model, and it returns the atomic sets of the model. An atomic set is a group of features that can be treated as a unit when performing certain analyses.

False optional features. A feature is false optional if it is contained in all the products of the product line even though it is not modeled as a mandatory feature.

Providing corrective explanations. The only input of this operation is a feature model and it returns a corrective explanation when the feature model is void or a dead feature is detected.

Dependency analysis. The inputs of this operation are a feature model and a partial configuration, and it returns another configuration with the features that have to be selected and/or removed as a result of the propagation of constraints in the model.

Generalization. A feature model A is a generalization of another feature model B if the set of products of A is a superset of the set of products of B.

Arbitrary edit. This operation can be applied when there is no explicit relationship between the input models.

Conditionally dead features. A feature is conditionally dead if it becomes dead under certain circumstances, for example when another feature is selected.

Multi-step configuration. A multi-step configuration problem is defined as the process of producing a sequence of intermediate configurations, i.e. a configuration path, going from one feature model configuration to another.

Specialization. A feature model A is a specialization of a feature model B if the set of products of A is a subset of the set of products of B.

Variant Features. This operation takes as input a feature model and returns the set of features that do not appear in all the products.

For example, in the feature model of Figure 1 the set of variant features is:

{ Cruise, Automatic, Manual, Electric, Gasoline }

Retrieving the core features. The input of this operation is a feature model and the output of this operation is the set of features that appear in all the products.

For example, in the feature model of Figure 1 the set of core features is:

{Car, Body, Transmission, Engine}

Unique features. A unique feature appears in only one product.

Reducing the number of possible products. The inputs of this operation are a feature model, a set of features to be selected, and a set of features to be deselected. It applies modifications to the original feature model in order to reduce its number of possible products, depending on the selection (or deselection) of the features.

For example, in the FM of Figure 1, if automatic transmission is selected, the number of possible products is reduced from 12 to 6.

Propagating decisions. The input of this operation is a feature model and the output is a modified feature model where some features are automatically selected (or deselected) by the system according to the relationships of the feature model.

For example, in the FM of Figure 1, if automatic transmission is selected then manual transmission should be automatically deselected.

Simplification. The input of this operation is a feature model and it returns a simplified feature model.

For example, a feature model and its simplified version are two equivalent feature models; after simplification, the simplified feature model does not mix, at the same level, set relationships (alternative and or) with binary relationships (optional and mandatory).

2.5. Constraint Programming

In constraint programming, the programmer states the problem as constraints between variables, and the computer is then responsible for providing the solutions that satisfy the constraints. The solution method depends on the search strategy, so the same problem can be solved using different strategies (well-known search algorithms).

One of the main branches of research in constraint programming is Constraint Satisfaction Problems (CSPs).

Formal definition of CSPs. A CSP is defined as a set of variables, each with a finite domain, and a set of constraints restricting the values of the variables. A solution to a CSP is an assignment of a value from its domain to every variable such that all constraints are satisfied simultaneously.

The output of a CSP solver is a Boolean value that determines whether a given input is satisfiable or not.

If a CSP is satisfiable, the solver can retrieve one solution, all solutions, or the best solution, depending on the case. There are different types of solvers depending on the nature of the variables (e.g. SAT solvers, Binary Decision Diagram (BDD) solvers, general CSP solvers, and search algorithms).
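As a minimal illustration of the generate-and-test idea (ours; real CSP solvers use constraint propagation and intelligent search rather than exhaustive enumeration):

    from itertools import product as assignments

    def solve_csp(variables, domains, constraints):
        # try every combination of domain values and keep the
        # assignments that satisfy all constraints simultaneously
        for values in assignments(*(domains[v] for v in variables)):
            candidate = dict(zip(variables, values))
            if all(c(candidate) for c in constraints):
                yield candidate

    # Two features with 0/1 domains and a requires constraint A -> B:
    solutions = list(solve_csp(
        ["A", "B"],
        {"A": [0, 1], "B": [0, 1]},
        [lambda s: (not s["A"]) or s["B"]]))
    print(solutions)  # [{'A': 0, 'B': 0}, {'A': 0, 'B': 1}, {'A': 1, 'B': 1}]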

2.6. Propositional logic based analysis

Some researchers have proposed the translation of basic feature models into propositional formulas. Recognizing that connection has brought useful benefits since there are many off–the–shelf solvers that automatically analyze propositional formulas.

2.7. Description logic based analyses

Other researchers have proposed the translation of FMs into OWL DL ontology. OWL DL is an expressive, yet decidable sublanguage of OWL (Ontology Web Language). It is possible to use automated tools such as RACER (Renamed ABox and Concept Expression Reasoner) for the automated analyses of FMs.

2.8. Ad-hoc based analyses

There are some proposals in the literature where the conceptual underpinnings are not clearly exposed. These proposals are grouped as ad-hoc solutions. For example, Van Deursen et al. 2002 proposed a feature description language to describe FMs. From this language, an FM algebra is described based on rules over the ASF+SDF Meta-Environment [P. Klint, 1993].

Cao et al. 2003 presented ad-hoc algorithms for the automated analysis of FMs. Their algorithms are based on the translation of basic FMs into data structures that, they claim, are sufficient to obtain all the products as the basis for some other operations. They also present a tool prototype based on their algorithms.

3.                   Systematic Literature Review

3.1. Inclusion and Exclusion Criteria

Based on the research questions, keywords were obtained and used to search the main study sources. The search string was built using the strategy suggested by Chen et al. [39]. Based on this strategy, this study relies on the following search string:

(“automated analysis” OR “systematic literature review” OR “operations”) AND (“software product line”) AND (“engineering” OR “configuration” OR “modeling” OR “framework” OR “structure”)

The term “software product line” ensures that the search covers all SPL foundations, principles, and techniques. The terms “automated analysis”, “systematic literature review” and “operations” restrict the search to work on the automated analysis of such systems. Finally, the terms “engineering”, “configuration”, “modeling”, “framework” and “structure” refer to the main functions and concepts for the development of SPL systems. The primary studies were identified by running the search string on three scientific databases, namely LSI, Google Scholar, and ScienceDirect.

These libraries were chosen because they are some of the most relevant ones in the SPL literature [58]. The search was performed using the specific syntax of each database and considering the title, abstract and keywords.

Inclusion and exclusion criteria are used to exclude studies that are not relevant for answering the research questions. We found it useful to include papers which only mentioned our main focus, the automated analysis of feature models, in introductory sentences in the abstract. This was needed since it is a central concept in the area and is thus frequently used in abstracts without the papers really addressing it any further.


The following two Inclusion Criteria (IC) were used to include studies that are relevant to the research questions.

  • IC1. The publication should be a paper, book, thesis, workshop or conference contribution, and only publications written in English were considered.
  • IC2. Only main studies which present SPL, feature model notations, and automated analysis operations are included. Hence, the title, abstract and keywords should clearly show that the paper contributes performed and evaluated automated analysis operations.

The following three Exclusion Criteria (EC) were used to exclude studies that are not considered relevant to the research questions.

  • EC1. Technical reports that present lessons learned, theses/dissertations, studies that describe events, and studies that are indexes or programs are excluded.
  • EC2. Duplicate papers are excluded. If a main study is published in more than one venue, for example if a conference paper is extended into a journal version, only the latter is considered as a main study. The journal version is preferred since it is expected to be the most useful and complete report.
  • EC3. Papers that do not focus on the automated analysis of feature models are excluded; methods, techniques and other approaches that address SPL in general are excluded as well.

At first, abstracts are read, looking for keywords and concepts that explain the contribution of the paper. If the abstract does not yield meaningful keywords, the introduction or conclusion sections of the paper are scanned.

3.2. Data Extraction and Reporting

Applying the inclusion and exclusion criteria, 21 papers were included and 14 papers were excluded. For each included paper, we analyzed the title, abstract and introduction, searching for the defined keywords. The most relevant papers are the following:

 [Mendonca et al., 2004]

The main goal is to conduct a hardness study of the satisfiability problems that arise when analyzing feature models with satisfiability (SAT) solving techniques, supported by experiments that assess the efficiency of these techniques. Despite the very good performance of SAT solvers in performing SAT checks on large feature models, checking the satisfiability of certain kinds of instances (with large numbers of variables and clauses) may take an infeasible amount of time.

They performed consistency checks on models with up to 10,000 features and a large number of constraints; in the worst-case scenario, the SAT solver took about 0.4 seconds to complete the check. In addition, the authors developed an algorithm for computing valid domains and observed processing times of about 22 seconds for models with 5,000 features and a fairly large number of cross-tree constraints.

 [Gheyi et al., 2000]

The researchers propose a theory for FMs in Alloy; this theory is useful for automatically checking a number of properties with the Alloy Analyzer. For instance, they show how to yield all valid configurations of an FM and how to check whether FM transformations preserve the well-formedness rules of an FM. They also show that the refactoring relation is reflexive and symmetric using a scope of 5 atoms. Finally, they compared G-Theory and R-Theory, which is useful for checking FM refactorings.

 [Benavides et al., 2004]

In this work, the authors provide a set of mapping rules to translate feature models into a CSP. They also provide an algorithm to translate extended feature models into a CSP.

 [Thomas Thüm et al., 2009]

Here the authors argue that arbitrary edits should be avoided and restructured as a sequence of specializations, generalizations, and refactorings, in order to understand the evolution of a feature model.

A refactoring edit means that no new products are added and no existing products are removed; a specialization edit means that some existing products are removed and no new products are added; a generalization edit means that new products are added and no existing products are removed; anything else is an arbitrary edit.

In this paper, researchers present and evaluate an algorithm to determine the relationship between two feature models (i.e., specialization/ refactoring/ generalization/none of these) using satisfiability solvers.

They generated 200 random feature models with the same parameters (maximum number of children = 10; distribution of child group types = (50%, 25%, 25%); probability of an optional child = 50%; number of cross-tree constraints = 0.1 * n; variables per cross-tree constraint = 2–5), each of which underwent the same number of random edits of the same kind; for every 10 features, they create one cross-tree constraint.

[Zhang et al., 2008]

In this paper, the researchers focus mainly on clone-enabled FMs and present a semantics for constraints in clone-enabled feature models, which resolves the problem of what kinds of constraints should be added to a feature model after some features are cloned.

The semantics is composed of two patterns, the generating pattern and the adapting pattern, which determine the type of constraints that should be imposed on a cloneable feature and its clones, and define how an existing constraint should be transformed when the features involved in the constraint are cloned.

After that, they propose a BDD-based approach to verify clone-enabled feature models, an approach that makes efficient use of BDD (binary decision diagram) data structures by taking into account the specific characteristics of feature model verification. According to the experimental results, this BDD-based approach is efficient and can verify complex feature models.

The researchers applied this approach to verify two sets of designed feature models. The first set contains 20 feature models with only binary constraints, where the number of features varies from 10 to 90 and then from 100 to 1,000. The other set contains 20 feature models with both binary and composite constraints, with the number of features also varying from 10 to 1,000.

The BDD-based approach can verify complex feature models with 500 features in 66.7 seconds, and simple feature models with 1,000 features in 37.2 seconds.

[Benavides et al., 2012]

In this paper, they present BeTTy, a framework for Benchmarking and TesTing on the analysis of feature models. BeTTy allows the automated detection of faults in feature model analysis tools, and it supports the generation of motivating test data to evaluate the performance of analysis tools in both pessimistic and average situations.

They also developed the VaMoSAnalizer tool for the automated analysis of feature models. This tool implements some of the analysis operations on feature models, such as determining whether an input feature model is consistent or whether it contains dead features.

They used BeTTy to automatically test up to 18 different analysis operations on feature models. To compare their effectiveness, they employed random search and BeTTy on feature models with 1,000 features, using a CSP solver and measuring the maximum time required to check their consistency.

The results revealed that the hardest random model found required 0.2 seconds to be analyzed, while BeTTy was able to find several models taking between 23.1 and 27.5 minutes to be processed.

Moreover, the hardest feature models generated by BeTTy in the range of 500–1,000 features were harder to process than random models with 10,000 features.

[Benavides, 2011]

This work is a first step toward the development of a widely-accepted test suite to support functional testing in the community of automated analysis of feature models.

The current version of their suite, called the FAMA Test Suite, addresses 7 out of 30 analysis operations on feature models.

 [Benavides, 2007]

FeAture Model Analyser (FAMA) provides engineering support for software product line analysts to automate the analysis of feature models. Furthermore, FAMA can be extended with i) models other than feature models, ii) other operations, and iii) other solvers that will certainly appear in the future. It supports abstraction, basic and extended FMs, and multiple solvers.

The FAMA framework integrates CSP, SAT and BDD Java solvers in order to combine the best of them in terms of performance when performing analysis tasks. The framework can incorporate an unlimited number of new analysis operations and employ an unlimited number of solvers. One of the advantages of FAMA is its ability to automatically select, at execution time, the most efficient solver according to the operation requested by the user.

They focused on a performance comparison of three different types of solvers, working with CSP, SAT and BDD, in order to test how these representations influence the automated analysis of feature models. The comparison results were obtained from the execution of a number of feature models, translated into CSP, BDD and SAT, on the three solvers presented previously: JaCoP (general CSP solver), JavaBDD (BDD solver) and Sat4j (SAT solver).

[Benavides et al. 2005]

They propose an extension to deal with extra-functional features and present a mapping that transforms an extended feature model into a CSP, formalizing extended feature models using constraint programming. They thereby improve current reasoning on feature models and are able to automatically answer several questions on extended feature models. The test was based on the extended feature model of the HIS domain; new features were added and several tests were made on each feature model.


They then discuss several previous attempts to reason on feature models using constraint programming; despite that, there are two main drawbacks in these proposals:

i) none of them associates parameters with features;

ii) none of them uses constraint programming as the reasoning base.

According to them, using constraint programming gives their proposal a more powerful reasoning capacity and greater expressiveness than the others.

They have experimentally concluded that the implementation presented exhibits exponential behavior as the number of features in the FM increases with a constant variability factor (VF).

The tests show that their model performs well for up to 25 features while the VF is kept constant.

[Benavides et al. 2006]

In this paper, they present how a cardinality-based feature model can also be translated into a constraint satisfaction problem. In this way, it is possible to use off-the-shelf tools to automatically perform some operations, such as calculating the number of possible feature configurations and detecting possible conflicts. In addition, they present a performance test between two off-the-shelf Java constraint solvers.

Using CSP solvers, it is possible to automatically perform some operations on an FM, such as calculating the number of possible combinations of features, retrieving configurations following given criteria, calculating the number of features in a given configuration, validating a given FM to detect possible inconsistencies, finding an optimum product on the basis of given criteria (the cheapest, the one with the fewest features, and so forth), or calculating the commonality factor for a given feature and the variability factor of a given FM.

[Benavides et al. 2006]

Benavides et al. present a performance comparison between the three most used solver technologies: CSP, SAT and BDD.

They provide some experimental results: BDD solvers perform faster than CSP and SAT solvers; however, BDD solvers require more memory than CSP and SAT solvers. They ran the experiments using two analysis operations: void feature model and number of products.

[Benavides et al. 2007]

Benavides et al. focus on the detection and explanation of errors on feature models. To this end, the authors propose a framework structured in three levels: a feature model level, a diagnosis level, and an implementation layer where the abstract solution is implemented using constraint programming. The authors use this framework to provide support for the detection of dead features and for explanations (e.g. detailing the causes that make an FM void).

[Benavides et al. 2008]

Benavides et al. propose a method called CURE (Configuration Understanding and REmedy) that allows detecting conflicts in a given configuration and proposes modifications, i.e. it determines which features should be selected and deselected to eliminate the problem.

Their technique is based on translating a feature model into a CSP, adding some extra variables in order to detect and explain the possible errors after applying optimization operations. They also propose some extensions to the method in order to make it more scalable.

As possible extensions, they propose adding cost attributes to features and proposing changes to the configuration that minimize the total cost of the change. The experimental results show the scalability of the approach to feature models with more than 5,000 features.

[Benavides et al. 2010]

This paper is considered a first step toward the automated analysis of the Debian package dependency language. It offers a mapping from Debian models to propositional formulas, enabling their analysis with SAT solvers. It also explains how this helps to execute analysis operations on the repositories, like the detection of anomalies, for example packages that cannot be installed.

Then, it explains how to integrate their approach into the FaMa framework, which will support effective and user-friendly extraction of Debian variability models and their analysis.

[Sergio Segura. 2008]

In this paper, they set the basis for the usage of Atomic Sets (ASs) as a generic technique for the automated analysis of FMs. They first propose a specific algorithm to construct the ASs of an FM. Then, they present a performance test that measures the degree of improvement in time and memory when integrating ASs into CSP-, BDD- and SAT-based solutions.

 [Benavides et al. 2009]

This paper offers three contributions to the study of multi-step configuration for SPLs: i) it introduces a formal model of multi-step SPL configuration and maps this model to CSPs; ii) it shows how solutions to these CSP configuration problems can be derived automatically with a constraint solver; and iii) it presents empirical results demonstrating that the CSP-based technique can solve multi-step configuration problems involving hundreds of features in seconds.

For a feature model with 500 features configured over 3 steps, the worst-case solving time was 3 seconds. The worst-case solving time for feature models configured over 10 steps was 16 seconds. According to these results, the technique should be fast for feature models with hundreds of features.

[Benavides et al. 2012,2015]

This article presents FLAME, a formal framework for the specification of analysis operations on feature models. Its main advantages are: i) its formal semantics, described using the Z specification language, and ii) its high level of abstraction, which enables the reuse of the framework for the formalization of different feature model dialects or even different variability notations.

Furthermore, in order to assure the quality of this framework and to provide a reference implementation that can be used by tool developers, the Z specification has been coded in Prolog and automatically validated using 18,000 test cases automatically generated using metamorphic testing techniques.

The results of the experimental tests help to enhance the framework and to detect inconsistencies both in the previous informal definitions of the analysis operations and in current analysis tools.

[Benavides et al. 2011]

This article reports on the usage of mutation testing to assess the efficiency of an automated test data generator from a user viewpoint.

In this paper, both traditional and class-level mutation operators are applied to FaMa (an open source Java framework). The results are also compared and contrasted with data obtained from some motivating faults found in the literature and in two real tools, FaMa and SPLOT.

[Benavides et al. 2009]

Benavides et al. propose an automated method to solve the multi-step configuration problem, and a mapping of this type of problem to CSP is provided. The idea is to optimize the steps when configuring a feature model, going from an original configuration to another in a given number of steps.

[Benavides et al. 2006]

Benavides et al. propose a set of challenges ahead for the automated analysis of feature models. One of those challenges is feature model debugging and explanations. Later, in 2006, Benavides et al. propose the detection of isolated (dead) features. They propose to use CSP to detect those features using the commonality factor. They also propose a possible optimization using the variation degree.

[Benavides et al. 2013]

Benavides et al. state the problem of finding computationally-hard feature models as an optimization problem and solve it using an evolutionary algorithm. Given a tool and an analysis operation, this algorithm generates input models that maximize the execution time or the memory usage of the tool when executing the operation over the model.

Experimental tests were performed using this evolutionary algorithm on a number of analysis operations and tools. These experiments generated models causing much longer execution times and higher memory consumption than random models of identical or even larger size.

[Benavides et al, 2014]

In this paper, the researchers present FaMa OVM and FaMa DEB, prototypical implementations for the automated analysis of two distinct variability models: Orthogonal Variability Models (OVM) and Debian Variability Models (DEB). For both tools, they focus on the analysis operations provided (such as valid FM, all products, variability, dead features, valid product, number of products, and commonality) and the logical solvers used for the analysis (such as Choco, Sat4j and JavaBDD).

[Benavides et al. 2016]

This work also proposes a new approach to solve analysis problems (refinements of known problems or completely novel ones) by encoding them as Quantified Boolean Formulae (QBF) and solving them with Quantified Satisfiability (QSAT) solvers. QBF can represent more complicated analysis operations that cannot be characterized using propositional formulas.

4.                   Results and Analysis

We start this section by answering the first research question.

4.1. The RQ1 results

We answer this question based on papers cited in the literature from 1990 to 2015.

RQ1: Which operations of analysis have been automated and which ones were evaluated?

Table 4 shows a summary of the performed operations (RQ1.1) and the evaluated ones (RQ1.2) for each relevant study from the literature. Operations are listed horizontally and the primary studies are listed vertically.

It depicts a summary of the studies reporting performance results for analysis operations, where (~) means that the operation has been proposed and (+) means that the operation was evaluated. In the last column, (B) stands for basic feature models, (C) for cardinality-based feature models, (E) for extended feature models, and (CE) for clone-enabled feature models:

Table 4. Summary of operations of the analysis and performance results

[The cell-by-cell content of Table 4 did not survive extraction into plain text. Its columns cover the analysis operations (void feature model, #products, dead features, valid product, all products, equivalent FMs, explanations, refactoring, optimization, commonality, filter, valid partial configuration, atomic sets, false optional features, corrective explanations, dependency analysis, generalization, arbitrary edits, conditional dead features, multi-step configurations, specialization, variant features, core features, unique features) plus the feature model notation; its rows cover the primary studies [4], [9], [6], [5], [8], [11], [12], [13], [14], [23], [16], [17], [21], [22], [24], [25], [29], [27], [30], [31], [32] and [37]. The notation column remains legible for most rows: [4] B, [9] E, [6] CE, [5] B, [8] B, [11] E, [12] B, [13] C, [14] B, [23] B, [16] C, [17] B, [21] B,C, [22] B, [29] B, [30] B, [31] B, [32] B.]

Most papers performed and evaluated simple operations that require minimal resources, while only a few addressed the most complicated operations, partly because of the informal descriptions of those operations. It is therefore desirable to have precise definitions of the operations, as this would facilitate performing the operations and developing frameworks/tools for automated analysis.

Automated analysis of basic or cardinality-based feature models is covered by most of the studies. However, extended feature models, where numerical attributes are found, lack further coverage. When attributes are included in feature models, the analysis becomes more challenging because not only attribute-value pairs must be processed, but more complex relationships can be included, like "feature Camera requires Screen_resolution ≥ 640×480", which may reduce or increase the number of resulting products.

Although there seems to be increasing interest in the analysis of cardinality-based, clone-enabled and extended feature models since 2004, just one paper covered the clone-enabled structure.

Table 4 also shows that support for explanations was discussed and performed in only one study.

We also found 6 studies proposing a formal definition of analysis operations. This tendency seems to have been ascending since 2004, which reflects an increasing interest of the research community in accurately defining and reporting analysis operations.

Table 5 depicts a chronological view of the studied operations. It shows the number of references to each operation found in the literature for each year. Vertically, we list all the years in which primary studies were published; cell shading distinguishes operations addressed by a single study, by two or three studies, and by more than three studies.

Table 5. Number of primary studies referring operations for each year

Years (table rows): 1990, 2002–2016. Operations (table columns): Void feature model through Specialization, as in Table 4.

[Table body: shaded cells indicating whether one, two to three, or more than three studies referred to each operation in each year.]

As illustrated in Tables 4 and 5, 17 out of 24 operations appeared in only one primary study, while 6 operations were treated in more than 11 studies. This suggests, in our opinion, that early authors were quite visionary in predicting the importance of the automated analysis of feature models and in pointing out some of the most-referenced operations. We also remark that 11 operations were referenced heavily between 2005 and 2009, suggesting that the analysis of feature models is an active research field. In recent years, however, we notice a lack of papers in this domain that employ the automated analysis of feature models.

Regarding the notation used, 14 out of 22 primary studies used the basic feature model notation for the analysis of feature models. However, there seems to be increasing interest in the analysis of cardinality-based and extended feature models since 2004, and in the analysis of clone-enabled feature models since 2008.

4.2. RQ2 Results

RQ2: What is the experimental design of the evaluation process?

Table 6 depicts the experimental results of the reviewed papers, where:

NoF represents the number of features used in the experiments; e.g., in the table, each FM used by [Gheyi et al., 2000] consists of 300 features (nodes of the FM tree).

NoC represents the number of cross-tree constraints of the studied FMs.

T is the time spent during the execution of an operation.

ToFM is the type of FM:

  1. Basic,
  2. Extended,
  3. Cardinality-based,
  4. Computationally-hard,
  5. Clone-enabled,
  6. Randomly generated.

EmpFM is the employed FM (e.g. academic, industrial, and random).

NoI is the number of FM instances, i.e. the number of generated FM trees used during the experiments of each paper.

EmpTech refers to the employed technique (e.g. CSP, MOEA, AHP, and BDD-based).

ETech refers to the evaluation technique (e.g. case study, performance test).

NS means that the information is not specified in the paper.

Paper | NoF | NoC | NoI | ToFM | EmpFM | EmpTech | ETech
[Zhang et al., 2008] | 10-1000 | NS | 40 | Basic, clone-enabled | academic | BDD-based | performance test
[Gheyi et al., 2000] | 300 | NS | 2 | Basic | academic | Alloy Analyzer | performance test
[Thomas Thüm et al., 2009] | 10-10000 | 2-5 | 2000 | Basic | Random | SAT solver | performance test
[Benavides et al. 2007] | 50-300 | 0-75 | 200 | Basic, Extended | Random | FAMA | performance test
[Benavides et al. 2005] | 15-25 | NS | 4 | Extended | academic, industrial | CSP | performance test
[Benavides et al. 2005] | 15-25 | NS | NS | Extended | Random | CSP | performance test
[Benavides et al. 2006] | 15-52 | NS | 5 | Cardinality-based | industrial, Random | CSP | performance test
[Benavides et al. 2006] | 50-300 | 1-25 | 200 | Randomly generated | Random | CSP, BDD, SAT | performance test
[Benavides et al. 2009] | >1000 | NS | 6 | Basic, Extended | Random | CSP | performance test
[Benavides et al. 2012] | 10 | 0-3 | 20,000 | Basic, cardinality-based | Random | FLAME | performance test
[Benavides et al. 2012] | 0-1000 | NS | NS | Extended | Random | BeTTy | performance test
[Benavides et al., 2011] | NS | NS | 192 | Extended | Random | FAMA | performance test
[Benavides et al., 2008] | 50-2000 | 0, 0.1, 0.5 | 5000 | Basic, Extended | Random | CSP | performance test
[Benavides et al. 2008] | 386 | 54-104 | 5 | Basic, Extended | NS | FAMA | case study
[Benavides et al. 2011] | NS | NS | NS | Randomly generated | Random | FaMa, SPLOT | performance test
[Sergio Segura. 2008] | 50-300 | 0-.25 | 200 | Randomly generated | Random | FAMA | performance test
[Benavides et al. 2006] | NS | NS | NS | Basic | NS | CSP | case study
[Benavides et al. 2013] | 200-10000 | 10%, 20%, 30%, 40% | 200 | Computationally-hard | Random | CSP, BDD, SAT | performance test
[Benavides et al. 2015] | 10 | 0-3 | 20,000 | Basic, cardinality-based | Random | FLAME | performance test
[Benavides et al. 2016] | 10-20000 | 0.5-10000 | 69800 | Randomly generated | Random | QSAT | performance test

Table 6. Experimental results for each paper

Table 6 shows that, before 2009, far more automatically generated feature models than realistic ones were used in experimental tests.

After 2009, there is increasing concern with evaluating and comparing the performance of different solutions using larger and more complex feature models: cardinality-based, extended, and clone-enabled FMs. Moreover, the research before 2009 employed basic types of solvers (CSP, BDD and SAT) to perform the analysis operations, whereas later work employs advanced, extended and hybrid frameworks.

Performance testing is studied more and more, and recent works show empirical evidence of the computational complexity of some analysis operations. A more rigorous analysis of computational complexity is still needed.

As mentioned in the background section, three main groups of solvers are used for automated analysis: constraint programming, description logic and propositional logic based solvers. From the reviewed studies, we found that constraint programming-based solvers are most often proposed to deal with extended feature models.

Propositional logic–based solvers seem to be much more efficient for finding the number of products but present serious limitations regarding memory consumption.

Description logic–based solvers have not been studied in depth to show their strengths and limitations in automated analysis of feature models.

Finally, we notice that not all solvers and paradigms will perform equally well for all the identified operations.
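To make the propositional approach concrete, the following self-contained Python sketch (our own toy model, not taken from any reviewed paper) encodes a small feature diagram as a propositional formula and derives two operations, void FM and #products, by enumerating models. A real tool would hand the formula to a SAT solver or build a BDD instead of enumerating assignments:

```python
from itertools import product

# Toy feature model (our own example): Root with a mandatory child Screen,
# optional children Camera and GPS, and a cross-tree constraint Camera -> GPS.
FEATURES = ["Root", "Screen", "Camera", "GPS"]

def satisfies(cfg):
    root, screen, camera, gps = (cfg[f] for f in FEATURES)
    return (root                       # the root feature is always selected
            and screen == root         # mandatory relationship: Screen <-> Root
            and (not camera or root)   # optional relationship: Camera -> Root
            and (not gps or root)      # optional relationship: GPS -> Root
            and (not camera or gps))   # cross-tree constraint: Camera -> GPS

# Enumerate every truth assignment and keep the models of the formula.
models = []
for bits in product([False, True], repeat=len(FEATURES)):
    cfg = dict(zip(FEATURES, bits))
    if satisfies(cfg):
        models.append(cfg)

print("void FM:  ", len(models) == 0)  # void iff the encoding is unsatisfiable
print("#products:", len(models))       # prints 3 for this toy model
```

The sketch also shows why the choice of back-end matters: a satisfiability check answers void FM without visiting all models, whereas #products requires counting them, which is where BDD-based approaches trade memory for counting speed.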

4.3. Highlights between 2010 and 2016

RQ3: What are the highlights from Benavides et al. (from 2010 to 2016)?

We summarize the highlights from Benavides between 2010 and 2016 as follows:

  1. The proposed approaches that support automated analyses of FMs rely on translating the FM into well-known formalisms using mapping rules, so that the result can be solved by automated tools or solvers (cf. the propositional sketch above):
  • Propositional logic based analyses.
  • Description logic based analyses.
  • Constraint programming based analyses.

Constraint programming is the only one of these methods that supports the analysis of both cardinality-based FMs and extended FMs, and therefore the optimization operation. On the one hand, the declarative nature of constraint programming makes it easy to give a formal semantics to every analysis operation. On the other hand, current constraint solvers allow building handy, efficient tools that offer support for every operation identified in this survey.

Since some operations can be computationally difficult to perform with constraint programming solvers, a complete reasoning framework should also be able to delegate to other back-ends, such as SAT or BDD solvers.

  • Ad-hoc based analyses.
  2. In some papers, Benavides searched for large-scale variability models in the open source community. As a result, he discovered that the Debian package dependency language can be interpreted as a software product line variability model, that such models can be analyzed automatically in the same style, and that they can be mapped to propositional formulas on which analysis operations are then performed (see the sketch after this list).
  3. Other papers focused on a Test Suite, a set of implementation-independent test cases to validate the functionality of FM analysis tools. This is an efficient and handy mechanism to assist in the development of tools, detecting faults and improving their quality.
  4. In 2015, Benavides addressed the lack of formal definitions of the analysis operations and faced this challenge by developing FLAME, a formal framework for the specification of analysis operations on feature models.
  5. In 2012, Benavides et al. applied benchmarking and testing to the automated analysis of feature models.
  6. In 2013, Benavides et al. presented ETHOM, an evolutionary algorithm for optimized feature model generation.
  7. In 2010, Benavides et al. presented Debian package repositories as software product line models, as a motivation towards automated analysis.
  8. In 2016, Benavides et al. performed traceability analyses between features and assets in software product lines.
  9. In 2011, Benavides et al. performed functional testing of feature model analysis tools.
  10. After 2009, Benavides et al. focused mainly on finding and developing new frameworks and techniques to face these challenges, such as FAMA, FLAME and QSAT, whereas before that they focused on using basic solvers to perform their automated analyses.
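The Debian mapping mentioned in item 2 can be sketched as follows. This is a minimal illustration with made-up package names (real analyses parse the actual Depends/Conflicts fields of a repository's Packages file): a line "a Depends: b, c" becomes the implication a → (b ∧ c), and "a Conflicts: d" becomes a → ¬d.

```python
# Hypothetical Debian-style metadata (made-up package names); real analyses
# would parse a repository's Packages file instead.
DEPENDS = {"editor": ["libgui", "spellcheck"]}  # editor Depends: libgui, spellcheck
CONFLICTS = {"editor": ["legacy-editor"]}       # editor Conflicts: legacy-editor

def is_consistent(installed):
    """Evaluate the propositional encoding on an installation profile:
    pkg -> (all dependencies installed) and pkg -> not(conflict installed)."""
    for pkg in installed:
        if any(dep not in installed for dep in DEPENDS.get(pkg, [])):
            return False  # a Depends clause is violated
        if any(other in installed for other in CONFLICTS.get(pkg, [])):
            return False  # a Conflicts clause is violated
    return True

print(is_consistent({"editor", "libgui", "spellcheck"}))  # True: a valid "product"
print(is_consistent({"editor", "libgui"}))                # False: missing spellcheck
```

Under this reading, a consistent set of installed packages plays the role of a valid product, so the feature model operations of Section 4.1 carry over directly.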

4.4. Main Gaps in this field

RQ4: What are the main gaps in the literature for this field?

We identify several gaps to be addressed in future research. The first is the lack of formal definitions of the analysis operations: most of the reviewed works provide only informal descriptions, leading to misunderstandings and implementation problems in tool development.

The second is the lack of papers performing the operations on clone-enabled feature models; only one paper treated this case.

The third is the lack of information about the feature models employed in experiments, such as the number of features, the type of feature model and the number of instances; some papers did not even specify whether the employed feature model was academic, industrial, or randomly generated.

5.                   Related Works

Research on the automated analysis of feature models in software product lines was active in the last decade. New approaches and techniques have been proposed to handle the challenges that come with variability and with new FM structures, which require high-performance assets to be executed and analyzed, such as the FAMA framework.

Because our work focuses on conducting a literature review on automated analysis, we refer to recent reviews on the analysis of software product lines conducted in this field [Benavides et al, 2010].

A Literature Review 20 Years Later

In this work, Benavides et al. provide a complete literature study on the automated analysis of feature models 20 years after their introduction. The research gathers the previous work and draws conclusions that help advance this area.

The review covers topics that have contributed a set of operations, techniques, tools and experimental results that had not been reviewed until 2010.

6.                   Final Remarks

6.1. Conclusions

In this paper, we present the results of a systematic literature review on automated analysis operations for SPLs. Our study helps shed light on these operations through 36 reviewed papers. We present a detailed summary of all these operations and of how they were performed and evaluated, based on the factors defined above.

We revised the state of the art on the automated analysis of feature models by conducting a structured literature review covering 36 primary studies and outlining the main results. We presented a catalogue of 24 operations identified in the literature, classified according to the related papers, the tools used to perform the analyses, and the results and trends related to the performance evaluation of the published proposals.

This study will help identify candidate tools that can support SPL developers or help them deal with existing SPLs, and it highlights the great cooperative effect that these research areas can offer to the SPL engineering community.

Considering the analysis of current solutions, we conclude that the analysis of feature models is maturing, with a growing number of studies, tools, operations, contributions and empirical works. We also noticed a number of challenges for future research, mainly related to the formalization and computational complexity of the operations, the performance comparison of the approaches, and the support of extended feature models. We also note that the reviewed research focused on performing operations rather than on evaluating them. Most papers performed operations on feature models of basic and cardinality-based design in their experimental process, ignoring other types of feature model, such as extended and clone-enabled ones.

6.2. Future Work

We hope that this mapping study not only highlights the main research topics in automated analysis operations but also provides a background that entices researchers towards future research activities. Many issues remain open or could be improved. In the following, we suggest some research areas and opportunities that may motivate future work.

  • Extending the research to cover more papers in this area.

We focused our research on the papers of Benavides; to expand this research in the future, the search has to be widened to cover papers by other researchers in the same area.

  • Developing adequate community-wide comparison criteria and factors.

In addition to the factors and criteria used in the comparison tables, we suggest adding more, such as the execution time of each experiment, the number of attributes per feature (in the extended FM case), and others that cover the most important properties of the employed FMs and the performed operations.

  • Developing new analysis techniques/tools.

The features and performance of the analysis techniques/tools applied during the experiments can guide the development of new techniques/tools that perform more operations on new FM structures (such as extended and clone-enabled ones).

  • Revising the state of the art and the challenges ahead in the analysis of feature models.

The state of the art and the research agenda have to be reviewed from time to time in order to summarize progress in the area. In addition, rigorous surveys and reviews are still scarce, and there is still a large amount of work to be done in this area.

  • Studying the visualization of feature models.

Despite the importance of current feature model representations and of the search algorithms used to traverse their links and features, one question arises: are boxes and lines the best way of representing feature models?

  • Mapping feature selections in a feature model into other development artifacts (i.e. requirements, architecture, processes, code modules, test cases, documentation, etc.).

We believe that this is an open research area that has to be investigated to leverage automated analysis of feature models in other fields and domains of analysis/development.

  • Formal definition of analysis operations. As we mentioned, most of the proposals define operations only in terms of informal descriptions.
  • Including feature–attribute relationships in the automated analysis of feature models and proposing new analysis operations to maximize the advantages of extended feature models.

Bibliography

  1. David Benavides, Antonio Ruiz-Cortés, Pablo Trinidad and Sergio Segura. A Survey on the Automated Analyses of Feature Models, 2006.
  2. K. Pohl, G. Böckle, and F. van der Linden. Software Product Line Engineering: Foundations, Principles, and Techniques. Springer-Verlag, 2005.
  3. Marcilio Mendonca, Krzysztof Czarnecki, Andrzej Wąsowski. SAT-based Analysis of Feature Models is Easy, 2004.
  4. Rohit Gheyi, Tiago Massoni, Paulo Borba. A Theory for Feature Models in Alloy, 2000.
  5. Thomas Thüm, Don Batory, Christian Kästner. Reasoning about Edits to Feature Models, 2009.
  6. Wei Zhang, Hua Yan, Haiyan Zhao, and Zhi Jin. A BDD-Based Approach to Verifying Clone-Enabled Feature Models' Constraints and Customization, 2008.
  7. W. Zhang, H. Zhao, and H. Mei. A propositional logic-based method for verification of feature models. In J. Davies, editor, ICFEM 2004, volume 3308, pages 115–130. Springer-Verlag, 2004.
  8. D. Benavides, A. Ruiz-Cortés, and P. Trinidad. Coping with automatic reasoning on software product lines. In Proceedings of the 2nd Groningen Workshop on Software Variability Management, November 2004.
  9. D. Benavides. On the Automated Analysis of Software Product Lines Using Feature Models, 2007.
  10. David Benavides, Sergio Segura, Pablo Trinidad, Antonio Ruiz-Cortés. A First Step towards a Framework for the Automated Analysis of Feature Models.
  11. D. Benavides, A. Ruiz-Cortés, and P. Trinidad. Automated reasoning on feature models. In Advanced Information Systems Engineering: 17th International Conference, CAiSE, volume 3520 of Lecture Notes in Computer Science, pages 491–503. Springer-Verlag, 2005.
  12. D. Benavides, A. Ruiz-Cortés, and P. Trinidad. Using constraint programming to reason on feature models. In the Seventeenth International Conference on Software Engineering and Knowledge Engineering, SEKE 2005, pages 677–682, 2005.
  13. D. Benavides, S. Segura, P. Trinidad, and A. Ruiz-Cortés. Using Java CSP solvers in the automated analyses of feature models. LNCS, 4143:389–398, 2006.
  14. D. Benavides, S. Segura, P. Trinidad, and A. Ruiz-Cortés. A first step towards a framework for the automated analysis of feature models. In Managing Variability for Software Product Lines: Working with Variability Mechanisms, 2006.
  15. D. Benavides, S. Segura, P. Trinidad, and A. Ruiz-Cortés. FAMA: Tooling a framework for the automated analysis of feature models. In Proceedings of the First International Workshop on Variability Modelling of Software-intensive Systems (VAMOS), pages 129–134, 2007.
  16. Amador Durán, David Benavides, Sergio Segura, Pablo Trinidad and Antonio Ruiz-Cortés. FLAME: FAMA Formal Framework (v 1.0), 2012.
  17. David Benavides, Jules White, Brian Dougherty, and Douglas C. Schmidt. Automated Reasoning for Multi-Step Feature Model Configuration Problems, 2009.
  18. Amador Durán, David Benavides, Sergio Segura, Pablo Trinidad, Antonio Ruiz-Cortés. FLAME: A Formal Framework for the Automated Analysis of Software Product Lines Validated by Automated Specification Testing, 2015.
  19. Sergio Segura, José A. Galindo, David Benavides, José A. Parejo and Antonio Ruiz-Cortés. BeTTy: Benchmarking and Testing on the Automated Analysis of Feature Models, 2012.
  20. Sergio Segura, J. A. Parejo, Robert M. Hierons, David Benavides, and Antonio Ruiz-Cortés. ETHOM: An Evolutionary Algorithm for Optimized Feature Models Generation (v. 1.3), 2013.
  21. David Benavides, Sergio Segura and Antonio Ruiz-Cortés. Functional Testing of Feature Model Analysis Tools: A Test Suite, 2011.
  22. J. White, D. C. Schmidt, D. Benavides, P. Trinidad and A. Ruiz–Cortés. Automated Diagnosis of Product-line Configuration Errors in Feature Models. September 2008.
  23. P. Trinidad, D. Benavides, A. Durán, A. Ruiz-Cortés, and M. Toro. Automated error analysis for the agilization of feature modeling. Journal of Systems and Software, in press, 2008.
  24. José A. Galindo, David Benavides and Sergio Segura. Debian Packages Repositories as Software Product Line Models. Towards Automated Analysis, 2010.
  25. Sergio Segura. Automated Analysis of Feature Models using Atomic Sets, 2008.
  26. Don Batory, David Benavides and Antonio Ruiz-Cortés. Automated Analyses of Feature Models: Challenges Ahead, 2006.
  27. Sergio Segura, Robert M. Hierons, David Benavides, Antonio Ruiz-Cortés. Mutation Testing on an Object-Oriented Framework: An Experience Report, 2011.
  28. Sergio Segura, David Benavides and Antonio Ruiz-Cortés. Automated Analysis of Feature Models: A Detailed Literature Review, 2009.
  29. D. Benavides, B. Doughtery, D. Schmidt and J. White.  Automated Reasoning for Multi-step Software Product-line Configuration Problems, 2009.
  30. D. Batory, D. Benavides and A. Ruiz-Cortés. Automated Analyses of Feature Models: Challenges Ahead, 2006.
  31. P. Trinidad, D. Benavides and A. Ruiz-Cortés. Isolated Features Detection in Feature Models, 2006.
  32. Ganesh Khandu Narwane, José A. Galindo, Shankara Narayanan Krishna, David Benavides, Jean-Vivien Millo, and S. Ramesh. Traceability Analyses between Features and Assets in Software Product Lines, 2016.
  33. D. Benavides et al. Automated Analysis of Feature Models 20 Years Later: A Literature Review, 2010.
  34. F. Cao, B. Bryant, C. Burt, Z. Huang, R. Raje, A. Olson, and M. Auguston. Automating feature-oriented domain analysis. In International Conference on Software Engineering Research and Practice (SERP’03), pages 944–949, June 2003.
  35. A. van Deursen and P. Klint. Domain–specific language design requires feature descriptions. Journal of Computing and Information Technology, 10(1):1–17, 2002.
  36. P. Klint. A meta-environment for generating programming environments. ACM Trans. Softw. Eng. Methodol., 2(2):176–201, April 1993.
  37. Krzysztof Czarnecki, Chang Hwan Peter Kim. Cardinality-Based Feature Modeling and Constraints: A Progress Report, 2015.
  38.  Fabricia Roos-Frantz, José A. Galindo, David Benavides, and Antonio Ruiz Cortés. Automated Analysis of Diverse Variability Models with Tool Support, 2014.
  39. Lianping Chen, Muhammad Ali Babar. A Systematic Review of Evaluation of Variability Management Approaches in Software Product Lines. Information and Software Technology, 53(4):344–362, 2011.
