Discussion

Several attempts have been made by different authors to estimate and calculate maintainability, and various techniques, tools and methodologies have evolved with the passage of time; machine learning algorithms are also becoming popular. The literature survey shows that maintainability metrics fall into distinct categories: design metrics, code metrics and specification metrics, the last consisting mainly of complexity metrics.
Design metrics cover both modularity and structural complexity, whereas the larger part of maintainability metrics falls into the category of code metrics. It is therefore important to have a good understanding of the software's characteristics, the metrics themselves and the environment in which the software is to be analyzed. The main aim of metric selection is to choose metrics that are both statistically significant and relevant.
Very little evidence is available on maintainability models for web-based applications [38]. Studies have found that a strong relationship exists between object-oriented software metrics and maintainability.
Conclusion and future work

Results were obtained by gathering, studying and analyzing papers from the surveyed period. Special care was taken to select only relevant studies, covering open source software, closed source software and web applications alike. Only 33 papers were included on the basis of the prediction models they presented; the remaining papers addressed the validation of those models and comparisons between them.
It was observed that little work has been done on the maintainability of web applications and open source software. Future work is to study large versions of open source software such as Open Office, Mozilla and Firefox and to estimate their maintainability using a combination of previously suggested metrics while developing a new maintainability model, and also to apply exploratory factor analysis and principal component analysis to estimate maintainability at a higher level.
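As a sketch of the principal component analysis step suggested above, the following computes component scores from a module-by-metric matrix. The metric names and values are illustrative assumptions, not data from any of the surveyed studies:

```python
import numpy as np

# Hypothetical module-level metrics (rows: modules; columns: e.g. LOC,
# cyclomatic complexity, coupling, comment ratio). Values are illustrative.
X = np.array([
    [120.0,  8.0,  5.0, 0.10],
    [450.0, 21.0,  9.0, 0.05],
    [ 80.0,  4.0,  2.0, 0.20],
    [300.0, 15.0,  7.0, 0.08],
    [600.0, 30.0, 12.0, 0.03],
])

# Principal component analysis: center the data, then take the SVD.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Proportion of variance explained by each component.
explained = s**2 / np.sum(s**2)

# Project each module onto the first principal component; this single
# score can serve as a rough composite maintainability indicator.
scores = Xc @ Vt[0]
print(explained[0])  # strongly correlated metrics load onto one component
```

In practice the metrics would be standardized first (here the raw LOC column dominates the variance), and the retained components would feed a regression or similar maintainability model.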
There are also issues related to the function point metric. Fundamentally, the meaning of the function point, its derivation algorithm, and the rationale behind it may need more research and more theoretical groundwork. There are also many variations in counting function points in the industry, and several major methods other than the IFPUG standard.
Symons presented a function point variant that he termed the Mark II function point (Symons). Some of the minor function point variants include feature points, 3D function points, and full function points. In all, based on the comprehensive software benchmarking work by Jones, the set of function point variants now includes at least 25 functional metrics.
Function point counting can be time-consuming and expensive, and accurate counting requires certified function point specialists. Nonetheless, function point metrics are apparently more robust than LOC-based data with regard to comparisons across organizations, especially studies involving multiple languages and those for productivity evaluation.
Based on a large body of empirical studies, Jones published the book Software Assessments, Benchmarks, and Best Practices; all metrics used throughout the book are based on function points. His study estimates the average number of software defects per function point in the U.S. This number represents the total number of defects found and measured from early software requirements throughout the life cycle of the software, including the defects reported by users in the field.
Jones also estimates the defect removal efficiency of software organizations by level of the capability maturity model (CMM) developed by the Software Engineering Institute (SEI). By applying the defect removal efficiency to the overall defect rate per function point, defect rates for the delivered software were estimated.
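The calculation implied here, applying a defect removal efficiency to an overall injection rate per function point, can be sketched as follows. The figures are hypothetical placeholders, not Jones's published numbers:

```python
# Estimate delivered defects per function point from a total (injected)
# defect rate and a defect removal efficiency (DRE).

def delivered_defect_rate(total_per_fp, removal_efficiency):
    """Defects per function point remaining in the delivered software."""
    return total_per_fp * (1.0 - removal_efficiency)

# e.g. a hypothetical 5 defects/FP injected with 85% removal efficiency
# leaves about 0.75 defects per function point to reach the field:
rate = delivered_defect_rate(5.0, 0.85)
print(rate)
```

The same arithmetic explains why higher CMM levels, with higher removal efficiencies, were estimated to ship markedly fewer defects per function point.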
The time frames for these defect rates were not specified, but it appears that they cover the maintenance life of the software. Another product quality metric used by major developers in the software industry measures the problems customers encounter when using the product. For the defect rate metric, the numerator is the number of valid defects. From the customers' standpoint, however, all problems they encounter while using the software product, not just the valid defects, are problems with the software.
Problems that are not valid defects may be usability problems, unclear documentation or information, duplicates of valid defects (defects that were reported by other customers, for which fixes were available but of which the current customers were unaware), or even user errors. These so-called non-defect-oriented problems, together with the defect problems, constitute the total problem space of the software from the customers' perspective.
PUM is usually calculated for each month after the software is released to the market, and also for monthly averages by year. Note that the denominator is the number of license-months instead of thousand lines of code or function point, and the numerator is all problems customers encountered.
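As described above, PUM divides all customer-reported problems (valid defects plus non-defect-oriented problems) by license-months. A minimal sketch, with illustrative figures:

```python
# Problems per user month (PUM): all customer-reported problems divided
# by license-months (installed licenses times months in the period).

def pum(total_problems, licenses, months):
    """PUM = problems / (installed licenses * months in the period)."""
    return total_problems / (licenses * months)

# e.g. 240 problems reported against 4,000 licenses over a 3-month period:
print(pum(240, 4000, 3))  # 0.02 problems per user month
```

Note that the denominator grows with the installed base, which is exactly the property discussed below: a rapidly growing license count lowers PUM even if the absolute number of problems does not fall.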
Basically, this metric relates problems to usage. Approaches to achieve a low PUM include reducing the valid defects in the product, reducing the non-defect-oriented problems by improving all aspects of the product (such as usability, documentation, customer education, and support), and increasing the number of installed licenses. The first two approaches reduce the numerator of the PUM metric, and the third increases the denominator. The result of any of these courses of action is a lower PUM value, and all three make good sense as quality improvement and business goals for any organization.
The PUM metric, therefore, is a good metric. The only minor drawback is that when the business is in excellent condition and the number of software licenses is rapidly increasing, the PUM metric will look extraordinarily good (a low value) and, hence, the need to continue to reduce the number of customers' problems (the numerator of the metric) may be undermined.
Therefore, the total number of customer problems should also be monitored and aggressive year-to-year or release-to-release improvement goals set as the number of installed licenses increases. However, unlike valid code defects, customer problems are not totally under the control of the software development organization. Therefore, it may not be feasible to set a PUM goal that the total customer problems cannot increase from release to release, especially when the sales of the software are increasing.
The key points of the defect rate metric and the customer problems metric are briefly summarized in Table 4. The two metrics represent two perspectives of product quality. For each metric the numerator and denominator match each other well: Defects relate to source instructions or the number of function points, and problems relate to usage of the product.
If the numerator and denominator are mixed up, poor metrics will result. Such metrics could be counterproductive to an organization's quality improvement effort because they will cause confusion and wasted resources. The customer problems metric can be regarded as an intermediate measurement between defects measurement and customer satisfaction.
To reduce customer problems, one has to reduce the functional defects in the products and, in addition, improve other factors (usability, documentation, problem rediscovery, etc.). To improve customer satisfaction, one has to reduce defects and overall problems and, in addition, manage factors of broader scope such as timing and availability of the product, company image, services, total customer solutions, and so forth.
From the software quality standpoint, the relationship of the scopes of the three metrics can be represented by the Venn diagram in Figure 4. Satisfaction with the overall quality of the product and its specific dimensions is usually obtained through various methods of customer surveys. Based on the five-point-scale data, several metrics with slight variations can be constructed and used, depending on the purpose of analysis.
Usually the metric used is percent satisfaction, the proportion of customers rating themselves satisfied or better. Practices that focus on reducing the percentage of nonsatisfaction, much like reducing product defects, instead track the percentage of nonsatisfied customers. In addition to forming percentages for various satisfaction or dissatisfaction categories, a weighted index approach can be used. For instance, some companies use the net satisfaction index (NSI) to facilitate comparisons across products.
The NSI assigns a weighting factor to each satisfaction category and averages them into a single score. This weighting approach, however, may mask the satisfaction profile of one's customer set: two very different profiles can yield the same index value. If satisfaction is a good indicator of product loyalty, then half completely satisfied and half neutral is certainly less positive than all satisfied. Therefore, this example of NSI is not a good metric; it is inferior to the simple approach of calculating the percentage of specific categories.
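The masking effect can be shown numerically. The weights below (100/75/50/25/0) are an assumed linear scheme chosen for illustration, not necessarily the factors any particular company uses:

```python
# Net satisfaction index (NSI) as a weighted average over the five-point
# satisfaction scale. Weights are an assumed scheme for illustration.

WEIGHTS = {
    "completely satisfied": 100,
    "satisfied": 75,
    "neutral": 50,
    "dissatisfied": 25,
    "completely dissatisfied": 0,
}

def nsi(profile):
    """profile maps category -> fraction of customers (fractions sum to 1)."""
    return sum(WEIGHTS[cat] * frac for cat, frac in profile.items())

# Half completely satisfied plus half neutral ...
mixed = nsi({"completely satisfied": 0.5, "neutral": 0.5})
# ... scores exactly the same as a uniformly "satisfied" customer set:
uniform = nsi({"satisfied": 1.0})
print(mixed, uniform)  # 75.0 75.0 -- the index masks the difference
```

Under these weights the two profiles are indistinguishable, which is precisely why reporting the percentage in each category is the more informative choice.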
If the entire satisfaction profile is desired, one can simply show the percent distribution of all categories via a histogram. A weighted index is for data summary when multiple indicators are too cumbersome to be shown. For example, if customers' purchase decisions can be expressed as a function of their satisfaction with specific dimensions of a product, then a purchase decision index could be useful. In contrast, if simple indicators can do the job, then the weighted index approach should be avoided.
Software is more easily maintainable if it has high-quality code that is readable and well-documented, so keep good coding practices in mind while your software is still in development. While performing maintenance, you can make four types of changes to your software: corrective, adaptive, perfective, and preventive. Maintaining software in an agile project is challenging.
It requires maintaining legacy software and fixing its bugs alongside the development of new products. Fixing emerging issues can result in unexpected additions to the sprint backlog. This makes it harder to accurately plan and manage sprints.
Additionally, the limited documentation typical of agile projects can make maintenance more difficult. Both developers and their managers would prefer that more development resources be spent on new functionality that benefits users and increases revenue. In reality, however, a growing portion of developer time is taken up by maintenance and bug fixing.