
Introduction

Software testing has been an integral part of every software development lifecycle and every software organization. The demand for software testing professionals has increased tremendously in the last 4-5 years, and industry reports suggest that it will continue to increase in the coming years; India alone needs 70,000 testing professionals by the end of 2009. Software testing has become a good career choice for young college graduates who want to pursue a career in the booming software industry. This blog is created with the intention of providing facts, knowledge, and methodologies, and of discussing the various career-related queries that arise in the minds of people who want to know about software testing. People belonging to the software industry, and particularly to the field of testing, are most welcome to share their suggestions, views and inputs.


Wednesday, July 2, 2008

Testing Terminology

1.0 Testing

New standard -
There is a lot of terminology surrounding testing, but not until recently has there been an industry standard. A new standard (first published in August 1998) seeks to provide a standard set of terms: BS 7925-1, Glossary of Testing Terms. Although a British Standard, it is being adopted by the International Organization for Standardization (ISO) and will hopefully become an ISO standard within two or three years. A selection of terms from this standard, plus some others used in this course, are given in the Glossary at the back of these notes.

Error, fault and failure:
Three terms that have specific meanings are error, fault and failure.

Error: a human action that produces an incorrect result.
Fault: a manifestation of an error in software.
Failure: a deviation of the software from its expected delivery or service.

An error is something that a human does: we all make mistakes, and when we make one whilst developing software it is known as an error. The result of an error being made is a fault: something that is wrong in the software (source code or documentation - specifications, manuals, etc.). Faults are also known as defects or bugs, but in this course we will use the term fault.

When a system or piece of software produces an incorrect result or does not perform the correct action, this is known as a failure. Failures are caused by faults in the software. Note that a software system can contain faults and yet never fail (this can occur if the faults are in those parts of the system that are never used).
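
As a minimal sketch (the function and the mistake are hypothetical), the three terms can be seen in a few lines of code:

# A hypothetical example illustrating error, fault and failure.
# The programmer intended to sum the numbers 1..n, but made an
# ERROR (a human mistake) while writing the loop bound.

def sum_first_n(n):
    total = 0
    for i in range(1, n):  # FAULT: should be range(1, n + 1)
        total += i
    return total

# FAILURE: the software deviates from its expected delivery.
print(sum_first_n(3))  # prints 3, but the expected result is 6

# Note that sum_first_n(0) returns 0, which happens to be correct:
# a fault does not cause a failure on every input.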


Reliability:

Another term that should be understood is reliability. A system is said to be reliable when it performs correctly for long periods of time. However, the same system used by two different people may appear reliable to one but not to the other, because the two people use the system in different ways.

Reliability: the probability that the software will not cause the failure of the system for a specified time under specified conditions.
The definition of reliability therefore includes the phrase ‘under specified conditions’. When reporting on the reliability of a system it is important to explain under what conditions the system will achieve the specified level of reliability. For example, a system may achieve a reliability of no more than one failure per month, provided that no more than 10 people use the system simultaneously.
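
As an illustrative sketch of the definition (the figures are invented), reliability can be estimated as the probability of surviving the specified time under the specified conditions:

# Hypothetical data: the system ran for 50 one-month periods with at
# most 10 simultaneous users (the specified conditions); 2 of those
# periods ended in a failure.
periods_observed = 50
periods_with_failure = 2

# Estimated probability of one month of failure-free operation:
reliability = 1 - periods_with_failure / periods_observed
print(f"Estimated one-month reliability: {reliability:.2f}")  # 0.96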

Acceptance criteria -
The criteria that a system or component must satisfy in order to be accepted by a user, customer, or other authorized entity.
Acceptance testing-
Formal tests conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept a system.
Application domain -

A bounded set of related systems (i.e., systems that address a particular type of problem). Development and maintenance in an application domain usually requires special skills and/or resources. Examples include payroll and personnel systems, command and control systems, compilers, and expert systems.
Audit -

An independent examination of a work product or set of work products to assess compliance with specifications, standards, contractual agreements, or other criteria.
Baseline -

A specification or product that has been formally reviewed and agreed upon, that thereafter serves as the basis for further development, and that can be changed only through formal change control procedures.
Black-box testing-
Testing based on functional requirements without knowledge of the internal program structures and data. Also called functional testing.
Boundary Value-
An input value or output value which is on the boundary between equivalence classes, or at an incremental distance either side of the boundary.
Boundary Value Analysis-
A test case design technique for a component in which test cases are designed to include representatives of boundary values.
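
As a sketch, suppose a component accepts ages from 18 to 65 inclusive (a hypothetical requirement). Boundary value analysis selects values on each boundary and an incremental distance either side of it:

# Hypothetical component: accepts ages in the range 18..65 inclusive.
def is_valid_age(age):
    return 18 <= age <= 65

# Boundary values: on each boundary and a distance of 1 either side.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # on the lower boundary
    19: True,   # just above the lower boundary
    64: True,   # just below the upper boundary
    65: True,   # on the upper boundary
    66: False,  # just above the upper boundary
}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"failed for age {age}"
print("All boundary value test cases passed")
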
Branch testing-
A test method satisfying coverage criteria that require that, for each decision point, each possible branch be executed at least once.
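
A minimal sketch (the function is hypothetical): with a single decision point, two test cases are enough to execute each possible branch at least once:

# Hypothetical function with one decision point.
def classify(value):
    if value < 0:
        return "negative"
    return "non-negative"

assert classify(-1) == "negative"     # executes the 'true' branch
assert classify(5) == "non-negative"  # executes the 'false' branch
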
Code Coverage-
An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Component-
A part of a software system smaller than the entire system but larger than an element.
Customer - The individual or organization that is responsible for accepting the product and authorizing payment to the developing organization.
Cyclomatic complexity-
The number of linearly independent paths through a program. The cyclomatic complexity of a program is equivalent to the number of decision statements plus 1.
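
A short worked illustration of that counting rule (the function is hypothetical):

# Hypothetical function containing two decision statements.
def grade(score):
    if score >= 90:   # decision 1
        return "A"
    if score >= 50:   # decision 2
        return "pass"
    return "fail"

# Cyclomatic complexity = 2 decision statements + 1 = 3, matching the
# three linearly independent paths through the function:
#   score >= 90       -> "A"
#   50 <= score < 90  -> "pass"
#   score < 50        -> "fail"
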
Defect-
Nonconformance to requirements.
(Or)
A flaw in a system or system component that causes the system or component to fail to perform its required function. A defect, if encountered during execution, may cause a failure of the system.

Defect density-
Ratio of the number of defects to program length.
(Or)
The number of defects identified in a product divided by the size of the product component (expressed in standard measurement terms for that product).
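
A short worked example using the second form of the definition (the figures are invented):

# Illustrative figures: 46 defects identified in an 11,500-line component.
defects_found = 46
size_kloc = 11_500 / 1000  # size in thousands of lines of code (KLOC)

defect_density = defects_found / size_kloc
print(f"Defect density: {defect_density:.1f} defects per KLOC")  # 4.0
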
Deviation -

A noticeable or marked departure from the appropriate norm, plan, standard, procedure, or variable being reviewed.
Dynamic Analysis-
The process of evaluating a system or component based on its behavior during execution.
Error-
The difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.
End user - The individual or group who will use the system for its intended operational use when it is deployed in its environment.
Equivalence class partitioning-
Partitioning the input domain of a program into a finite number of classes [sets], to identify a minimal set of well-selected test cases to represent these classes. There are two types of input equivalence classes: valid and invalid.
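
Continuing the hypothetical age check used in the boundary value example above, equivalence class partitioning yields one valid class and two invalid classes, each represented by a single well-selected test case:

# Hypothetical input domain: ages 18..65 are valid.
def is_valid_age(age):
    return 18 <= age <= 65

# Three equivalence classes, one representative test case for each:
representatives = {
    10: False,  # represents the invalid class 'age < 18'
    40: True,   # represents the valid class '18 <= age <= 65'
    90: False,  # represents the invalid class 'age > 65'
}

for age, expected in representatives.items():
    assert is_valid_age(age) == expected
print("One representative per equivalence class passed")
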
Integration Testing-
Exposes faults during the integration of software components or software units, and is specifically aimed at exposing faults in their interactions. The integration approach could be either bottom-up (using drivers), top-down (using stubs), or a mixture of the two. Also known as interface testing.
Interface-
The informational boundary between two software systems, software system components, elements, or modules.
Path testing-
A test method satisfying coverage criteria that each logical path through the program be tested. Often paths through the program are grouped into a finite set of classes; one path from each class is tested.
Pseudo code-
A form of software design in which programming actions are described in a program-like structure; not necessarily executable but generally held to be humanly readable.
Quality -

(1) The degree to which a system, component, or process meets specified requirements.
(2) The degree to which a system, component, or process meets customer or user needs or expectations.
Regression Testing-
Re-testing after fixes or modifications of the software or its environment. Automated testing tools can be especially useful for this type of testing.
Required training -

Training designated by an organization to be required to perform a specific role.
Risk -

Possibility of suffering loss.
Risk management -

An approach to problem analysis which weighs risk in a situation by using risk probabilities to give a more accurate understanding of the risks involved. Risk management includes risk identification, analysis, prioritization, and control.
Risk management plan -

The collection of plans that describe the risk management activities to be performed on a project.
Role -

A unit of defined responsibilities that may be assumed by one or more individuals.
Software life cycle -

The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software life cycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and, sometimes, retirement phase.
Statement testing-
Testing designed to execute each statement of a computer program. See also code coverage.
Static Analysis-
Examination of the form and structure of a product without executing the product. It may be applied to requirements, design, or code.
Stress testing-
A test which exercises code up to, including, and beyond all stated limits in order to exercise all aspects of the system (e.g., to include hardware, software, and communications). Its purpose is to ensure that response times and storage capacities meet requirements.
System testing-
It primarily demonstrates that the software system fulfills the requirements specified in the requirement specification during exposure to the anticipated environmental conditions. All testing objectives relevant to specific requirements should be included during software system testing. System testing includes testing of performance, security, configuration sensitivity, stress, startup, and recovery from failure modes.
Testing-
Testing is the execution of a system in a real or simulated environment with the intent of finding faults.
Test case-
A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
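
As a sketch, a test case can be recorded as structured data whose fields mirror the definition; the values below are hypothetical:

# A hypothetical test case with the elements named in the definition:
# inputs, execution conditions, and expected results for an objective.
test_case = {
    "id": "TC-001",
    "objective": "verify that login rejects an unknown user",
    "execution_conditions": "user database contains no accounts",
    "inputs": {"username": "nobody", "password": "secret"},
    "expected_result": "login refused with an 'unknown user' message",
}
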
Test design-
Documentation specifying the details of the test approach for a software feature or combination of software features and identifying the associated tests.
Test plan-
A document describing the scope, approach, resources, and schedule of intended test activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.
Test procedure-
The formal or informal procedure that will be followed to execute the test case. This is usually a written document that will allow others to carry out the test with a minimum of training and confusion.
Unit-
The smallest piece of software that can be independently tested (i.e., compiled or assembled, loaded, and tested).
Unit Testing-
Unit testing is meant to expose faults in each software unit as soon as it becomes available, regardless of its interaction with other units. The unit is exercised against its detailed design, ensuring that the defined logic coverage is achieved. White-box oriented testing, in combination with at least one black-box method, is used.
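
A minimal sketch of a unit test in Python's standard unittest framework (the unit under test is hypothetical), combining white-box cases that cover each branch of the unit's logic with a black-box case taken from its specification:

import unittest

# Hypothetical unit under test.
def absolute(x):
    if x < 0:
        return -x
    return x

class TestAbsolute(unittest.TestCase):
    # White-box oriented: one case per branch of the unit's logic.
    def test_negative_input(self):
        self.assertEqual(absolute(-3), 3)  # covers the x < 0 branch

    def test_non_negative_input(self):
        self.assertEqual(absolute(7), 7)   # covers the other branch

    # Black-box case derived from the specification: |0| = 0.
    def test_zero(self):
        self.assertEqual(absolute(0), 0)

if __name__ == "__main__":
    unittest.main()
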
Validation-
The process of evaluating software at the end of the software development process to ensure compliance with software requirements.
Verification-
The process of evaluating a system or component to determine whether the product of a given development phase satisfies the conditions imposed at the start of that phase.
White-box testing-
A testing method to test the internal behavior and structure of the program. The testing strategy permits one to examine the internal structure of the program. In using this strategy, the tester derives test data from an examination of the program's logic, without neglecting the requirements in the specification. The goal of this test method is to achieve test coverage, that is, examination of as many of the statements, branches, and paths as possible.

2.0 CMM

Activity -

Any step taken or function performed, both mental and physical, toward achieving some objective. Activities include all the work the managers and technical staff do to perform the tasks of the project and organization.
CMM:
Framework representing a path of improvements recommended for software organizations that want to increase their software process capability.
(Or)

A description of the stages through which software organizations evolve as they define, implement, measure, control, and improve their software processes. This model provides a guide for selecting process improvement strategies by facilitating the determination of current process capabilities and the identification of the issues most critical to software quality and process improvement.
Commitment -

A pact that is freely assumed, visible, and expected to be kept by all parties.
Common features-
The key practices are divided among five Common Features sections, namely Commitment to Perform, Ability to Perform, Activities Performed, Measurement and Analysis, and Verifying Implementation. The common features are attributes that indicate whether the implementation and institutionalization of a key process area are effective, repeatable, and lasting.
Commitment to perform -

The actions the organization must take to ensure that the process is established and will endure. Commitment to Perform typically involves establishing organizational policies and senior management sponsorship.
Ability to perform -

The preconditions that must exist in the project or organization to implement the software process competently. Ability to Perform typically involves resources, organizational structures, and training.
Activities performed -

A description of the roles and procedures necessary to implement a key process area. Activities Performed typically involve establishing plans and procedures, performing the work, tracking it, and taking corrective actions as necessary.
Measurement and analysis -

A description of the need to measure the process and analyze the measurements. Measurement and Analysis typically includes examples of the measurements that could be taken to determine the status and effectiveness of the Activities Performed.
Verifying implementation -

The steps to ensure that the activities are performed in compliance with the process that has been established. Verification typically encompasses reviews and audits by management and software quality assurance.
Dependency item -

A product, action, piece of information, etc., that must be provided by one individual or group to a second individual or group so that the second individual or group can perform a planned task.
Effective process -

A process that can be characterized as practiced, documented, enforced, trained, measured, and able to improve.

Goals-
Summarize the key practices of a key process area and can be used to determine whether an organization or project has effectively implemented the key process area. The goals signify the scope, boundaries, and intent of each key process area.
Institutionalization-
Building an infrastructure and a corporate culture that support the methods, practices and procedures of the business so that they endure after those who originally defined them have gone.
Key practices-
Each key process area is described in terms of key practices that, when implemented, help to satisfy the goals of that key process area. The key practices describe the infrastructure and activities that contribute most to the effective implementation and institutionalization of the key process area.
Key process area-
Each maturity level is composed of key process areas. Each key process area identifies a cluster of related activities that, when performed collectively, achieve a set of goals considered important for enhancing process capability. For convenience, the key process areas are organized by common features.
Maturity level-
Well-defined evolutionary plateau toward achieving a mature software process. Each maturity level indicates a level of process capability.

The five maturity levels in the SEI's Capability Maturity Model are:
LEVEL-1 : Initial - The software process is characterized as ad hoc, and occasionally even chaotic. Few processes are defined, and success depends on individual effort.
LEVEL-2 : Repeatable - Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.
LEVEL-3 : Defined - The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization. All projects use an approved, tailored version of the organization's standard software process for developing and maintaining software.
LEVEL-4 : Managed - Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.
LEVEL-5 : Optimizing - Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.

Measure -
A unit of measurement (such as source lines of code or document pages of design).
Measurement - The dimension, capacity, quantity, or amount of something (e.g., 300 source lines of code or 7 document pages of design).
Method - A reasonably complete set of rules and criteria that establish a precise and repeatable way of performing a task and arriving at a desired result.
Methodology - A collection of methods, procedures, and standards that defines an integrated synthesis of engineering approaches to the development of a product.
Milestone - A scheduled event for which some individual is accountable and that is used to measure progress.
Peer review - A review of a software work product, following defined procedures, by peers of the producers of the product for the purpose of identifying defects and improvements.
Peer review leader - An individual specifically trained and qualified to plan, organize, and lead a peer review.
Periodic review/activity - A review or activity that occurs at specified regular time intervals. (See event-driven review/activity for contrast.)
Policy - A guiding principle, typically established by senior management, which is adopted by an organization or project to influence and determine decisions.
Procedure - A written description of a course of action to be taken to perform a given task. [IEEE-STD-610]
Process - A sequence of steps performed for a given purpose; for example, the software development process.

Software capability evaluations:
To identify contractors who are qualified to perform the software work or to monitor the state of the software process used on an existing software effort.
Software process:
A set of activities, methods, practices, and transformations that people use to develop and maintain software and the associated products.
Software process assessments:
To determine the state of an organization's current software process, to determine the high-priority software process-related issues facing an organization, and to obtain the organizational support for software process improvement.
Software process capability:
Describes the range of expected results that can be achieved by following a software process.
Software process maturity:
Is the extent to which a specific process is explicitly defined, managed, measured, controlled, and effective.
Software process performance:
Represents the actual results achieved by following a software process. Thus, software process performance focuses on the results achieved, while software process capability focuses on results expected.
Software product -
The complete set, or any of the individual items of the set, of computer programs, procedures, and associated documentation and data designated for delivery to a customer or end user.
