Dissertation on Automatic Unified Modeling Language Generation
1. Introduction
1.1. The Problem Statement
This thesis deals with the problem of automatically generating a UML model from natural language software requirement specifications. It describes the development of Auto Modeler, an automated software engineering tool that takes natural language software system requirement specifications as input, performs an automated OO analysis and produces a UML model as output (at present a partial one, i.e. static class diagrams only). The basis for Auto Modeler is described in [2][3].
1.2. Motivation
We conducted a short survey of the software industry in Islamabad in order to determine what kinds of automated software engineering tools software houses require. The results of the survey (see Appendix I for the survey report) indicated that there is demand for a tool such as Auto Modeler: the tools that have already been developed, i.e. [2][3], are either not available on the market or are very expensive and thus out of the reach of most software houses. We therefore decided to build our own tool that the software industry can use to become more productive and competitive. Auto Modeler is not yet ready for commercial use, but it is hoped that future versions will be able to cater to the needs of software houses.
1.3. Background
1.3.1. The Need for Automated Software Engineering Tools: In this era of information technology, great demands are placed on software systems and on everyone involved in the software development life cycle (SDLC). The developed software should not only be of high quality but should also be developed in a minimal amount of time. With regard to quality, the software must be highly reliable, meet the customer's needs and satisfy the customer's expectations.
Automated software engineering tools can assist software engineers and developers in producing high-quality software in a minimal amount of time.
1.3.2. Requirements Engineering: Requirements engineering consists of the following tasks [6]:
· Requirements Elicitation
· Requirements Analysis
· Requirements Specification
· Requirements Validation / Verification
· Requirements Management
Requirements engineering is recognized as a critical task, since many software failures originate from inconsistent, incomplete or simply incorrect system requirements specifications.
1.3.3. Natural Language Requirement Specifications: Formal methods have been used successfully to express requirements specifications, but the customer often cannot understand them and therefore cannot validate them [4]. Natural language is the only common medium understood by both the customer and the analyst [4], so system requirements specifications are often written in natural language.
1.3.4. Object Oriented Analysis: The system analyst must manually process the natural language requirements specification document, perform an OO analysis and produce the results in the form of a UML model, which has become a standard in the software industry. This manual process is laborious, time consuming and prone to errors: some specified requirements might be left out, and problems or errors in the original requirements specification may go undiscovered.
OOA applies the OO paradigm to models of proposed systems by defining classes, objects and the relationships between them. Classes are the most important building blocks of an OO system, and from them we instantiate objects. Once an individual object is created, it inherits the operations, relationships, semantics and attributes identified in its class. Attributes of classes, and hence of objects, hold the values of properties. Operations, also called methods, describe what can be done to an object or class [1].
A relationship between classes/objects can take various forms, such as aggregation, composition, generalization and dependency. Attributes and operations represent the semantics of the class, while relationships represent the semantics of the model [1]. The KRB seven-step method, introduced by Kapur, Ravindra and Brown, proposes how to find classes and objects manually [1]:
1. Identify candidate classes (nouns in the NL text).
2. Define the classes (look for instantiations of classes).
3. Establish associations (capture verbs to create an association for each pair of classes found in steps 1 and 2).
4. Expand many-to-many associations.
5. Identify class attributes.
6. Normalize attributes so that they are associated with the class of objects they truly describe.
7. Identify class operations.
From this process we can see that one goal of OOA is to identify NL concepts that can be transformed into OO concepts, which can then be used to form system models in particular notations. Here we concentrate on UML [1].
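To make these steps concrete, the following sketch shows the kind of OO model that the KRB steps might yield for the requirement "A customer places one or more orders." It is written in Python purely for illustration; the class, attribute and operation names are our own assumptions for this example and are not output produced by Auto Modeler.

# Illustrative only: a hand-built OO model for the requirement
#   "A customer places one or more orders."
from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:                                     # candidate class: noun "order" (step 1)
    order_id: str                                # attribute (steps 5-6)
    total: float = 0.0

@dataclass
class Customer:                                  # candidate class: noun "customer" (step 1)
    name: str                                    # attribute (steps 5-6)
    orders: List[Order] = field(default_factory=list)   # one-to-many association (steps 3-4)

    def place_order(self, order: Order) -> None:         # operation from the verb "places" (step 7)
        self.orders.append(order)

if __name__ == "__main__":
    customer = Customer(name="Alice")
    customer.place_order(Order(order_id="O-1", total=99.0))
    print(len(customer.orders))                  # prints 1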
1.3.5. Natural Language Processing (NLP): If an automatic analysis of the NL requirements document is carried out, it becomes possible not only to find errors in the specifications quickly but also, with the right methods, to generate a UML model from the requirements.
Natural language is inherently ambiguous, imprecise and incomplete; a natural language document is often redundant, and several classes of terminological problems (e.g. jargon or specialist terms) can arise to make communication difficult [2]. Although natural language processing with holistic objectives has proven to be a very complex task, it is possible to extract sufficient meaning from NL sentences to produce reliable models. The complexities of language range from simple synonyms and antonyms to issues as complex as idioms, anaphoric relations and metaphors. Efforts in this area have had some success in generating static object models from fairly complex NL requirement sentences.
1.3.5.1. Linguistic analysis: Linguistic analysis studies NL text at different linguistic levels, i.e. word, sentence and meaning [1].
(i) Word tagging analyses how a word is used in a sentence. In particular, a word's role can change from one sentence to another depending on context (e.g. light can be used as a noun, verb, adjective or adverb, and while can be used as a preposition, conjunction, verb or noun). Tagging techniques specify the word form of each word in a sentence, and each word is tagged with a part of speech (POS); e.g. an NN1 tag denotes a singular noun, while VBB signifies the base form of a verb [1].
(ii) Syntactic analysis applies phrase-marker, or labeled-bracketing, techniques to segment NL text into phrases, clauses and sentences, so that the text is delineated by syntactic/grammatical annotations. This shows how words are grouped and connected to each other in a sentence [1].
(iii) Semantic analysis is the study of meaning. It uses discourse annotation techniques to analyze open-class (content) words and closed-class words (i.e. prepositions, conjunctions, pronouns). The POS tags and syntactic elements mentioned above can be linked in the NL text to create relationships.
Applying these linguistic analysis techniques, NLP tools can carry out morphological, syntactic and semantic processing. The processing of NL text can be supported by a semantic network (SN) and corpora that provide a knowledge base for text analysis.
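As a concrete illustration, the short Python sketch below uses the NLTK library to perform word tagging and a shallow syntactic (chunking) analysis of a sample requirement sentence. NLTK is only one possible toolkit and is not necessarily the one used by Auto Modeler; its default tagger uses Penn Treebank tags (DT, NN, VBZ, ...) rather than the NN1/VBB style tags mentioned above, and the noun-phrase grammar is a simplified assumption of our own.

# Requires the NLTK library; resource names may vary slightly between NLTK versions.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

sentence = "The customer submits an order."

tokens = nltk.word_tokenize(sentence)    # word level: tokenization
tagged = nltk.pos_tag(tokens)            # word tagging (POS), Penn Treebank tagset
print(tagged)
# Expected output (approximately):
# [('The', 'DT'), ('customer', 'NN'), ('submits', 'VBZ'), ('an', 'DT'), ('order', 'NN'), ('.', '.')]

# Shallow syntactic analysis: bracket determiner/adjective/noun sequences as noun phrases.
np_grammar = "NP: {<DT>?<JJ>*<NN.*>}"
chunker = nltk.RegexpParser(np_grammar)
print(chunker.parse(tagged))             # labeled bracketing of noun phrases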
The difficulty of OOA is due not just to the ambiguity and complexity of NL itself, but also to the gap in meaning between NL concepts and OO concepts [1].
1.3.6. From NLP to UML Model Creation: After NLP, the sentences are simplified in order to make the identification of UML model elements from NL elements easier. Simple heuristics are used to identify UML model elements from the natural language text (see Chapter 7); a small illustrative sketch follows the list below:
* Nouns indicate classes.
* Verbs indicate operations.
* Possessive relationships and verbs such as to have, identify and denote indicate attributes.
* Determiners are used to identify the multiplicity of roles in associations.
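The following Python sketch, again using NLTK, is a hypothetical implementation of the first two heuristics (nouns as candidate classes, verbs as candidate operations). It is not Auto Modeler's actual algorithm, and attribute and multiplicity detection are omitted for brevity.

# Hypothetical heuristic extraction of candidate UML elements; illustration only.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def candidate_model_elements(requirement: str):
    """Return (candidate classes, candidate operations) for one requirement sentence."""
    tagged = nltk.pos_tag(nltk.word_tokenize(requirement))
    classes = {word.capitalize() for word, tag in tagged if tag.startswith("NN")}
    operations = {word.lower() for word, tag in tagged if tag.startswith("VB")}
    return classes, operations

classes, operations = candidate_model_elements("The librarian issues a book to the member.")
print(classes)      # expected (approximately): {'Librarian', 'Book', 'Member'}
print(operations)   # expected (approximately): {'issues'}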
1.5. Plan of the thesis
In Chapter 2 we present a brief survey of previous and related work. Chapters 3, 4, 5, 6 and 7 describe the theoretical basis for Auto Modeler. Chapter 8 describes the architecture of Auto Modeler. In Chapter 9 we describe Auto Modeler in action with a case study. In Chapter 10 we present conclusions.