Code Comment Quality Analysis and Improvement Recommendation: An Automated Approach

Author(s):  
Xiaobing Sun ◽  
Qiang Geng ◽  
David Lo ◽  
Yucong Duan ◽  
Xiangyue Liu ◽  
...  

Program comprehension is one of the first and most frequently performed activities during software maintenance and evolution. A program contains not only source code but also comments, and comments are one of the main sources of information for program comprehension. If a program has good comments, developers will find it easier to understand. Unfortunately, in many software systems, owing to developers' poor coding style or hectic work schedules, a number of methods and classes are not written with good comments. This makes it difficult for developers to understand those methods and classes when performing future software maintenance tasks. To deal with this problem, in this paper we propose an approach that assesses the quality of a code comment and generates suggestions for improving it. A user study conducted to assess the effectiveness of our approach shows that our quality assessments are similar to those made by the study participants, that the suggestions provided by our approach are useful for improving comment quality, and that our approach improves the accuracy of previous comment quality analysis approaches.
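
The abstract does not reproduce the paper's quality model, but the general idea can be sketched with simple heuristics: score a comment, and emit a suggestion for each heuristic it fails. All rules, thresholds, and names below are illustrative assumptions, not the authors' model.

```python
import re

# Hypothetical heuristics for scoring a method comment; the rules and
# thresholds are illustrative assumptions, not the paper's actual model.
PLACEHOLDERS = {"todo", "fixme", "xxx", "tbd"}

def comment_quality(comment: str, method_name: str) -> tuple[float, list[str]]:
    """Return a score in [0, 1] and a list of improvement suggestions."""
    words = re.findall(r"[a-z]+", comment.lower())
    # Split a camelCase method name into terms, e.g. "parseHeader" -> {parse, header}
    terms = {t.lower() for t in re.findall(r"[A-Z]?[a-z]+", method_name)}
    score, suggestions = 1.0, []
    if len(words) < 5:
        score -= 0.4
        suggestions.append("Comment is very short; describe behaviour, not just intent.")
    if PLACEHOLDERS & set(words):
        score -= 0.3
        suggestions.append("Comment contains a placeholder (TODO/FIXME); complete it.")
    if terms and not (terms & set(words)):
        score -= 0.3
        suggestions.append("Comment shares no term with the method name; check relevance.")
    return max(score, 0.0), suggestions

score, tips = comment_quality("TODO: fix later", "parseHeader")
print(score, tips)
```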

2012 ◽  
Vol 11 (03) ◽  
pp. 1250018 ◽  
Author(s):  
Leon A. Wilson ◽  
Maksym Petrenko ◽  
Václav Rajlich

Program comprehension is an integral part of the evolution and maintenance of large software systems. As it is increasingly difficult to comprehend these systems completely, programmers have to rely on partial, as-needed comprehension. We study partial comprehension and programmer learning with the use of concept maps as a tool for capturing programmer knowledge during concept location, one of the tasks of software evolution and maintenance and a prerequisite of a software change. We conduct a user study to measure the performance of programmers using concept maps to assist with locating concepts. The results demonstrate that programmer learning occurs during concept location and that concept maps assisted programmers both with capturing what they learned and with successful concept location.
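
The study does not prescribe a data format for concept maps, but a minimal sketch of one, as labelled (concept, relation, concept) triples recorded during exploration, may clarify what is being captured. The map contents here are made up for illustration.

```python
# A minimal concept map as labelled (concept, relation, concept) triples;
# this representation and its contents are assumptions for illustration,
# not the study's actual tool.
concept_map = [
    ("PaymentService", "calls", "TaxCalculator"),
    ("TaxCalculator", "reads", "TaxRateTable"),
    ("PaymentService", "implements", "checkout discount"),  # candidate location
]

def neighbours(concept: str):
    """Concepts directly linked to `concept`, as recorded during exploration."""
    return [(rel, b) for a, rel, b in concept_map if a == concept] + \
           [(rel, a) for a, rel, b in concept_map if b == concept]

print(neighbours("TaxCalculator"))
```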


2019 ◽  
Vol 8 (9) ◽  
pp. 418 ◽  
Author(s):  
Ding ◽  
Fan

In recent years, volunteered-geographic-information (VGI) image data have served as a data source for various geographic applications, attracting researchers to assess the quality of these images. However, these applications and quality assessments generally focus on images associated with a geolocation through textual annotations, which are only part of the images valid for such applications. In this paper, we explore the distribution pattern of the most relevant VGI images of specific landmarks to extend the current quality analysis and to provide guidance for improving the data-retrieval process of geographic applications. Distribution is explored in two aspects: semantic distribution and spatial distribution. In this paper, the term semantic distribution describes how building-image tags and content match each other, yielding three kinds of images: semantic-relevant and content-relevant, semantic-relevant but content-irrelevant, and semantic-irrelevant but content-relevant. Spatial distribution shows how relevant images are distributed around a landmark. The work is divided into three parts: data filtering, retrieval of relevant landmark images, and distribution analysis. For semantic distribution, statistical results show that, on average, 60% of images tagged with a building's name actually depict the building, while 69% of images depicting a building are not annotated with its name. For spatial distribution, we observed that for most landmarks, 97% of relevant building images were located within 300 m of the building.
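
The 300 m spatial check reduces to a geodesic distance test between each photo's location and the landmark. A minimal sketch using the haversine formula is below; the landmark and photo coordinates are made-up examples.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

landmark = (52.5163, 13.3777)                     # illustrative landmark location
photos = [(52.5161, 13.3770), (52.5300, 13.4000)] # illustrative photo geotags

# Spatial-distribution check: which photos fall inside the 300 m band
# that the study found to contain ~97% of relevant images?
inside = [p for p in photos if haversine_m(*landmark, *p) <= 300.0]
print(f"{len(inside)}/{len(photos)} photos within 300 m")
```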


2016 ◽  
Vol 24 (3) ◽  
pp. 329-348 ◽  
Author(s):  
Railya B. Galeeva

Purpose – The purpose of this study is to demonstrate an adaptation of the SERVQUAL survey method for measuring the quality of higher educational services in a Russian university context. We use a new analysis and a graphical technique for the presentation of results.

Design/methodology/approach – The methodology of this research follows the classic SERVQUAL method in terms of data acquisition but provides a new approach for data analysis and presentation of findings. The technique is intended to improve upon the original method by including an importance-quality analysis grid and extending it with an innovative graphical tool for presenting results to decision-makers that is based on area-based ratios rather than difference scores.

Findings – The report includes survey results from two waves of research, conducted in 2009 and 2014, each with 1,000 respondents from 20 branches of study and 11 higher education institutions.

Research limitations/implications – It is argued that the SERVQUAL method can be improved significantly with the proposed technique. However, the validity and reliability of the importance, expectation and perception summary scores need to be further investigated. Also, alternative methods for quality assessment (SERVPERF/HEdPERF) should be tested and compared with the modified SERVQUAL method in Russian and other international education contexts.

Practical implications – Educational service quality assessments allow management to acquire an image of the overall quality of an institution, as well as its strengths and weaknesses, thereby improving its strategic positioning to make improvements. It is hoped that the proposed improvement to the SERVQUAL technique will increase adoption of the method among academic institutions.

Originality/value – The improved SERVQUAL methodology demonstrated in this research replaces the widely criticised "difference scores" with an easily applied graphical display. The methodology also incorporates an importance-quality analysis, providing a new perspective on the SERVQUAL data. The current findings provide valuable insights into the perceived quality of the higher education system of the Republic of Tatarstan, Russia, as given by its student customers.
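
The abstract names an importance-quality grid and ratio-based (rather than difference-based) scoring but does not give formulas. The sketch below is one plausible reading: each attribute gets a quality ratio (perception relative to expectation) and is placed in a grid quadrant by importance. The ratio, thresholds, quadrant labels, and data are all assumptions for illustration.

```python
# Hypothetical importance-quality grid placement. The abstract does not give
# the exact area-based formula, so the ratio used below and the grid
# thresholds are illustrative assumptions, not the paper's method.
attributes = {
    # attribute: (importance, expectation, perception) as 1-7 Likert means
    "teaching staff": (6.5, 6.2, 5.1),
    "facilities":     (5.0, 5.8, 5.6),
}

for name, (imp, expd, perc) in attributes.items():
    quality = perc / expd                     # a ratio instead of a difference score
    if imp >= 5.5 and quality < 0.9:
        zone = "concentrate here"
    elif imp >= 5.5:
        zone = "keep up the good work"
    else:
        zone = "low priority" if quality < 0.9 else "possible overkill"
    print(f"{name}: importance={imp}, quality={quality:.2f} -> {zone}")
```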


2014 ◽  
Vol 2014 ◽  
pp. 1-23 ◽  
Author(s):  
Songqing Yue ◽  
Jeff Gray

Metaprogramming has shown much promise for improving the quality of software by offering programming language techniques to address issues of modularity, reusability, maintainability, and extensibility. Thus far, the power of metaprogramming has not been explored deeply in the area of high performance computing (HPC). There is a vast body of legacy code written in Fortran running throughout the HPC community. In order to facilitate software maintenance and evolution in HPC systems, we introduce a domain-specific language (DSL), SPOT, that can be used to perform source-to-source translation of Fortran programs by providing a higher level of abstraction for specifying program transformations. The underlying transformations are carried out through a metaobject protocol (MOP), and a code generator is responsible for translating a SPOT program into the corresponding MOP code. The design focus of the framework is to automate program transformations through code generation, so that developers only need to specify the desired transformations while remaining oblivious to the details of how the transformations are performed. The paper provides a general motivation for the approach and explains its design and implementation. In addition, it presents case studies that illustrate the potential of our approach to improve code modularity, maintainability, and productivity.
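
The abstract does not show SPOT's syntax, but the shape of the idea, a declarative specification saying *what* to change while an engine performs the textual rewrite, can be sketched. The example below wraps a named Fortran call with timing calls via a regex rewrite; the spec format and the engine are purely illustrative, not the SPOT DSL or its MOP.

```python
import re

# Illustrative source-to-source transformation in the spirit of the approach:
# a declarative spec says *what* to change; the engine performs the rewrite.
# This is not the SPOT DSL itself, whose syntax the abstract does not show.
fortran = """\
program demo
  call solve(a, b)
end program demo
"""

spec = {"wrap_call": "solve",
        "before": "call start_timer()",
        "after": "call stop_timer()"}

def apply_spec(src: str, spec: dict) -> str:
    pattern = re.compile(rf"^(\s*)(call\s+{spec['wrap_call']}\b.*)$", re.MULTILINE)
    # Re-emit the matched call line with the before/after lines at the same indent.
    return pattern.sub(rf"\1{spec['before']}\n\1\2\n\1{spec['after']}", src)

print(apply_spec(fortran, spec))
```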


2021 ◽  
Vol 20 (No.4) ◽  
pp. 511-539
Author(s):  
Abdullah Almogahed ◽  
Mazni Omar

Refactoring is a critical task in software maintenance and is commonly applied to improve system design or to cope with design defects. There are 68 different refactoring techniques, each with a particular purpose and effect. However, most prior studies have selected refactoring techniques based on their common use in academic research without obtaining evidence from the software industry, a shortcoming that points to a clear gap between academic research and industry practice. To bridge this gap, this study used an online survey to identify the refactoring techniques most frequently used by software practitioners in industry, the programming languages they most commonly work with, and the methods by which they apply refactoring techniques. This study contributes to the improvement of software development practices by adding empirical evidence on how software developers use refactoring. The findings should help researchers develop reference models and software tools that guide practitioners in choosing refactoring techniques based on their effect on software quality attributes, thereby improving the quality of software systems as a whole.
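
To make the notion of a "refactoring technique" concrete, here is a before/after sketch of one canonical technique from Fowler's catalogue, Extract Method. The surrounding code is a made-up example, not drawn from the survey.

```python
# Extract Method, before and after; the example code is illustrative only.

# Before: the summation logic is tangled with the printing.
def print_owing_before(orders):
    print("*** Customer owes ***")
    total = 0
    for amount in orders:
        total += amount
    print(f"amount: {total}")

# After: the computation is extracted into a named, reusable method,
# leaving the caller shorter and easier to understand.
def outstanding(orders):
    return sum(orders)

def print_owing_after(orders):
    print("*** Customer owes ***")
    print(f"amount: {outstanding(orders)}")

print_owing_after([10, 20, 12])
```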


Author(s):  
Mehmet S. Aktas ◽  
Mustafa Kapdan

Unnecessary repeated code, also known as code clones, is often poorly documented and difficult to maintain. Code clones can become a significant problem in the software development cycle, since any detected error must be fixed in all occurrences. This condition significantly increases software maintenance costs and the effort and time required to understand the code. This research introduces a novel methodology to minimize or prevent the code cloning problem in software projects. In particular, this manuscript focuses on the detection of structural code clones, defined as similarity in software structure, such as design patterns. Our proposed methodology provides a solution to the class-level structural code clone detection problem. We introduce a novel software architecture that unifies different software quality analysis tools, each of which takes measurements of software metrics for structural code clone detection. We present an empirical evaluation of our approach and investigate its practical usefulness. We conduct a user study in which human judges detect structural code clones in three different open-source software projects; we then apply our methodology to the same projects and compare the results. The results show that our proposed solution achieves high consistency with the judgments reached by the human judges. The outcome of this study also indicates that a uniform structural code clone detection system can be built on top of different software quality tools, where each tool takes measurements of different object-oriented software metrics.
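
One common way to detect class-level structural clones from metric measurements is to compare per-class metric vectors for similarity. The sketch below uses cosine similarity over hypothetical metric vectors; the metric set, values, and threshold are assumptions for illustration, not the paper's calibrated configuration.

```python
from math import sqrt

# Hypothetical class-level metric vectors (e.g. method count, field count,
# fan-out, inheritance depth), as if unified from different measurement tools.
metrics = {
    "OrderController":   [12, 4, 9, 2],
    "InvoiceController": [11, 4, 8, 2],
    "MathUtils":         [30, 0, 2, 1],
}

def cosine(u, v):
    """Cosine similarity between two metric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

THRESHOLD = 0.98  # illustrative assumption, not the paper's calibrated value
classes = sorted(metrics)
for i, a in enumerate(classes):
    for b in classes[i + 1:]:
        sim = cosine(metrics[a], metrics[b])
        if sim >= THRESHOLD:
            print(f"possible structural clone: {a} ~ {b} (similarity {sim:.3f})")
```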


Author(s):  
Jianjun Zhao ◽  
Limin Xiang

Change impact analysis is a useful technique in software maintenance and evolution. Many techniques have been proposed to support change impact analysis at the code level of software systems, but little effort has been made at the architectural level. In this chapter, we present an approach to supporting change impact analysis at the architectural level of software systems, based on the architectural slicing and chopping technique. The main feature of our approach is that it assesses the effect of changes in a software architecture by analyzing the architecture's formal specification, so that the process of change impact analysis can be completely automated.
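
At its core, an architectural impact set can be computed as a forward slice: every component reachable from the changed one through dependency edges. The sketch below does this with breadth-first reachability over a toy component graph; the architecture and edge semantics are made-up simplifications of the chapter's slicing technique.

```python
from collections import deque

# Toy architectural dependency graph: an edge A -> B means component B
# depends on A, so a change to A may impact B. The architecture is invented.
depends_on_me = {
    "Database":      ["OrderService", "ReportService"],
    "OrderService":  ["WebUI"],
    "ReportService": ["WebUI"],
    "WebUI":         [],
}

def impact_set(changed: str) -> set[str]:
    """Forward slice: every component reachable from the changed one."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for succ in depends_on_me.get(node, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

print(impact_set("Database"))  # {'OrderService', 'ReportService', 'WebUI'}
```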


2018 ◽  
Vol 2 (1) ◽  
pp. 10-15
Author(s):  
Rozita Kadar ◽  
Sharifah Mashita Syed-Mohamad ◽  
Putra Sumari ◽  
Nur 'Aini Abdul Rashid

Program comprehension is an important and effort-intensive activity in the software maintenance process. A key challenge for developers in program comprehension is comprehending source code. Nowadays, software systems have grown in size, increasing developers' task of exploring and understanding millions of lines of source code. Meanwhile, source code is a crucial resource for developers to become familiar with a software system, since system documentation is often unavailable or outdated. However, understanding source code is complicated by differing programming styles and insufficient comments. Although many researchers have discussed different strategies and techniques to overcome the program comprehension problem, only shallow knowledge is available about the challenges of understanding a software system by reading its source code. Therefore, this study attempts to overcome these problems by suggesting a suitable comprehension technique based on an ontology approach to knowledge representation. This approach can readily express the concepts and relationships of the program domain. Thus, the proposed work will create a better way to improve program comprehension.
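
An ontology for program knowledge is, at minimum, a set of typed entities and relationships over them. A minimal sketch as subject-predicate-object triples is below; the vocabulary and entities are illustrative assumptions, not the study's ontology.

```python
# Minimal ontology-style representation of program knowledge as
# (subject, predicate, object) triples; the vocabulary is illustrative.
triples = [
    ("Order", "isA", "Class"),
    ("Order.total", "isA", "Method"),
    ("Order.total", "definedIn", "Order"),
    ("Order.total", "calls", "TaxCalculator.rate"),
]

def query(pred: str, obj: str):
    """All subjects related to `obj` by `pred`."""
    return [s for s, p, o in triples if p == pred and o == obj]

# Which program entities are defined in the Order class?
print(query("definedIn", "Order"))  # ['Order.total']
```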


2016 ◽  
pp. 141-149
Author(s):  
S.V. Yershov ◽  
R.M. Ponomarenko
Parallel tiered and dynamic models of fuzzy inference in expert-diagnostic software systems, whose knowledge bases are built on fuzzy rules, are considered. Tiered parallel and dynamic fuzzy inference procedures are developed that speed up computations in a software system for evaluating the quality of scientific papers. Evaluations of the effectiveness of the parallel tiered and dynamic computation schemes are constructed for a complex dependency graph between blocks of fuzzy Takagi–Sugeno rules. A comparative characterization of the efficiency of the parallel tiered and dynamic models is carried out.
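
For concreteness, here is a zero-order Takagi–Sugeno inference step for a single rule block: each rule's firing strength weights a crisp consequent, and the output is the weighted average. The membership functions, rules, and inputs are made up for illustration; in the paper's scheme, independent blocks like this one would be evaluated in parallel tiers of the dependency graph.

```python
# Zero-order Takagi-Sugeno inference for one rule block; the membership
# functions, rules, and consequents are illustrative assumptions.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def ts_block(novelty: float, rigor: float) -> float:
    # Each rule: (firing strength, crisp consequent).
    rules = [
        (min(tri(novelty, 0.0, 0.3, 0.6), tri(rigor, 0.0, 0.3, 0.6)), 0.2),  # both low
        (min(tri(novelty, 0.4, 0.7, 1.0), tri(rigor, 0.4, 0.7, 1.0)), 0.9),  # both high
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(ts_block(0.8, 0.65))  # weighted-average quality score for one paper
```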

