SOFTWARE QUALITY ANALYSIS


IT shops often maintain key quality metrics, such as system availability, operational problems and failures, and project completion statistics. However, organizations typically lack the resources to dig deeper into these metrics to find the root causes of problems. Instead, fixes are installed, applications are patched, and the immediate problem or emergency is averted. In some cases, procedures may be put in place to prevent the problem from happening again.

 

Instead of this rather reactive approach to managing software, Mapador Inc. offers a proactive solution. On the one hand, Mapador’s technology can target an individual application to improve its performance and quality. On the other hand, our technology provides up-to-date software quality metrics across all applications, allowing organizations to improve the overall quality of their application portfolio. In both cases, Mapador’s technology pinpoints improvement opportunities and thus helps to reduce maintenance costs and avoid system downtime before it occurs.

 


TARGETED APPLICATION QUALITY

Applications written many years ago or built on outdated technologies are often left to operate as ‘black boxes’. In many cases, little or no in-house knowledge remains with which to improve the application’s performance or reliability. Enhancing or replacing these applications is often not even attempted, because organizations are afraid to touch them at all.

 

Mapador Target can intelligently understand and map out these applications, regardless of platform or language, and thus provide a roadmap for improvements or outright replacement. Output from our solution includes:

 

  • Inventory Report – provides a comprehensive inventory of application components, including used and unused code, technology components, and structure, as well as links to outside applications and components.
  • Assessment Report – provides a detailed analysis of software quality and performance, and identifies potential changes and their impact.

 

We recommend assessing an older application before any Application Migration activity occurs. First, the assessment may reveal that the application can be brought up to date and into maintainable condition without costly replacement or migration. Second, the source application of any migration project should be well understood; without such detailed knowledge, many migration projects fail to produce an effective new application. Moreover, unused areas of the application should be removed from the scope of migration, reducing complexity and costs.

 

PORTFOLIO-WIDE APPLICATION QUALITY

The measurement of quality requires the collection and analysis of quantitative information, usually stated in terms of metrics. The value of a metrics program is to measure the quality of the software products being produced, to determine areas where improvements are required, and ultimately to measure whether the improvement programs are having the desired effect.

 

Software quality metrics focus on the quality aspects of the product, process, and project. These metrics can be divided further into end-product quality metrics and in-process quality metrics. Software quality engineering has two main stages:

 

  • Investigate the relationships among in-process metrics, project characteristics, and end-product quality.
  • Engineer improvements in both process and product quality.

 

Quality is viewed from the entire software life-cycle perspective, not simply as a check on the accuracy and performance of the developed code. Software quality metrics also include metrics that measure the quality of the maintenance process.

Utilizing Mapador APM

Mapador APM provides a proactive method of collecting key software quality measures and distributing them throughout the IT organization via a web-based portal. In addition, these measures can be refreshed on a continuous basis as applications are modified and additional issues are encountered.

1.     Problem Analysis

 

Traditionally, quality involves measuring the problems (also called bugs, defects, or failures) encountered when using software. These problems fall into several categories, including:

 

  • Internal user problems
  • External client problems
  • Operations problems

 

Statistics for the problem categories described above are usually recorded using a variety of tools and methods. Client and user problems are typically gathered by help desks or customer service representatives. Operations problems are typically collected by operations staff and relate to system failures and outages. These problems are recorded using both custom in-house tools and commercial tools.

 

Mapador’s technology can parse these problem logs and relate the problems back to the underlying application and system objects.
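As a minimal sketch of this kind of log parsing (the log line format and field names below are hypothetical, not Mapador's actual formats), turning a plain-text operations log into structured problem records might look like:

```python
import re
from datetime import datetime

# Hypothetical problem-log line format; real logs vary by tool.
# Example: "2023-04-01 02:15:00 SEV2 JOB=NIGHTLY_BILLING abend S0C7"
LOG_PATTERN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<severity>SEV\d) "
    r"JOB=(?P<job>\S+) "
    r"(?P<message>.*)"
)

def parse_problem_log(lines):
    """Turn raw log lines into problem records keyed by application object."""
    records = []
    for line in lines:
        match = LOG_PATTERN.match(line)
        if not match:
            continue  # skip lines that are not problem entries
        rec = match.groupdict()
        rec["ts"] = datetime.strptime(rec["ts"], "%Y-%m-%d %H:%M:%S")
        records.append(rec)
    return records
```

Each resulting record carries the application object (here, a batch job name) that the problem can be joined back to.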

2.     Linking to Application Software Objects

 

While problem statistics often provide a basic understanding of the number of problems that have occurred, a deeper understanding of their causes requires that the problems be linked back to other information. Linking the problem logs to:

 

  • Application objects such as jobs, web pages, screens, transactions, etc. can provide information about error-prone application components.
  • Application releases can determine if problems are being introduced by software releases.
  • Application usage statistics can determine the number of problems based on the amount of usage of the application objects.

The better understanding provided by Mapador’s technology ensures that action plans addressing these problems are more focused and effective.
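The linkages above amount to simple joins between problem records and application inventory data. A sketch, using illustrative field names rather than Mapador's actual schema:

```python
from collections import Counter

def defects_per_object(problems):
    """Count problems by the application object they were traced to."""
    return Counter(p["object"] for p in problems)

def defects_per_release(problems, release_of_object):
    """Attribute each problem to the release that last changed its object."""
    return Counter(release_of_object.get(p["object"], "unknown")
                   for p in problems)
```

With these counts in hand, error-prone objects and defect-introducing releases stand out directly.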

3.     Production Metrics

 

End-product quality is usually measured by the number of “bugs” (functional defects) in the software or by how long the software can run before a critical failure (a “crash”). The previous diagram showed application-based software quality metrics being derived from linkages between production problem logs and application object information. Several end-product or production software quality metrics can be derived as follows:

 

  • Defects
    • Number of defects
    • Number of defects for each object, subsystem or system.
  • Defect Rates
    • Number of defects / Size of application (measured by KLOC, use cases or function points).
  • Customer Usage
    • Number of problems reported by customers for a time period / Total usage during the period (measured in number of web page hits, online transactions, by number of users or licenses).
  • Software Releases
    • Number of defects reported for each software release.
  • Software Maintenance Quality
    • Backlog management index: Number of problems closed during the month / number of problems added during the month
    • Fix responsiveness: Mean time, from open to closed, for all problems
    • Fix quality: Percentage of all fixes that were defective during a month
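The metrics listed above reduce to a few simple ratios. A sketch of how they might be computed once the underlying counts are available:

```python
def defect_rate(num_defects, kloc):
    """Defects per thousand lines of code (KLOC)."""
    return num_defects / kloc

def backlog_management_index(closed_in_month, added_in_month):
    """BMI: problems closed / problems added; above 1.0 the backlog shrinks."""
    return closed_in_month / added_in_month

def fix_responsiveness(open_to_close_days):
    """Mean time, in days, from problem open to problem closed."""
    return sum(open_to_close_days) / len(open_to_close_days)

def fix_quality(defective_fixes, total_fixes):
    """Fraction of fixes during the month that were themselves defective."""
    return defective_fixes / total_fixes
```

For example, 30 defects in a 15 KLOC application gives a defect rate of 2.0 defects per KLOC, and closing 12 problems in a month in which 10 arrived gives a backlog management index of 1.2.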

4.     Test Coverage Metrics

 

Mapador’s technology can also link to test cases stored in automated test tools, Excel spreadsheets or Word documents. These test cases can then be linked to application objects to understand the test coverage for objects such as web pages, windows, screens and batch jobs.

5.     In-Process Metrics

 

One of the goals of a metrics program is to understand the development process and to learn to engineer quality into that process. In-process quality metrics play an important role. However, in-process quality metrics are usually less formally defined than end-product metrics, and their use varies greatly among software developers. Some of these metrics are listed below:

 

  • Defect metrics (recorded in a bug-tracking tool)
    • Defect Arrival Rate: Number of defects reported per week
    • Defect Fix Rate: Number of defects remaining open (indicates the quality and efficiency of bug fixing)
  • In-Process Quality during Development (derived by linking bugs to application objects)
    • Defect Density: Number of defects / Size of application
    • Source of Defects: Number of defects originating in each phase
    • Requirements Quality: Number of defects originating in the Requirements phase / Number of use cases

These metrics can be correlated with the production or end-product metrics described above to understand how development practices affect the resulting production software.
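The in-process metrics above can be derived from a list of bug records once each bug is tagged with the phase it originated in. A sketch, with illustrative field names:

```python
from collections import Counter

def in_process_metrics(bugs, kloc, num_use_cases):
    """Derive in-process quality metrics from bug records.

    Each bug is assumed to be a dict with an 'origin' phase field;
    the field name and phase labels are illustrative.
    """
    by_phase = Counter(b["origin"] for b in bugs)
    return {
        "defect_density": len(bugs) / kloc,          # defects per KLOC
        "source_of_defects": dict(by_phase),         # defects per phase
        "requirements_quality": by_phase["requirements"] / num_use_cases,
    }
```

Computing these per release, and setting the results beside the production metrics above, is what makes the correlation between development practices and production quality visible.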

6.     The Process

 

Mapador APM forms the backbone of a proactive software quality metrics and quality improvement program. The following outlines the process for utilizing the product in a quality improvement program:

 

  • Plan the Metrics to Track
    • Determine the areas where improvements are required, then select candidate metrics that will demonstrate the quality characteristics of those areas.
    • Identify the level at which acceptable quality is attained for those areas.
  • Track the Metrics
    • Determine the source data for the metrics and the relationships within the data that will be tracked. Mapador’s collectors and parsers extract the required information and linkages and place them in the Mapador repository for analysis.
  • Check the Results
    • Using Mapador’s web portal, metrics can be viewed and analysed by the appropriate groups within the organization. Progress can then be assessed and compared to acceptable quality levels.
  • Put Improvement Plans in Place
    • Based on the results, action can be taken to eliminate problems or to avoid problems that have not yet occurred. Improvement actions can include changing processes, installing additional tools, or re-factoring error-prone software objects.

 

Mapador’s technology can form the heart of a continuous quality improvement program. Additional metrics can be defined within Mapador APM on an ongoing basis as improvement efforts extend to other aspects of the IT organization.
