QA&T – Our Approach Your Edge Blog

General Testing Approach


Below, the key objectives, principles, and concepts of our testing approach are briefly summarized. For every testing engagement, we prepare a customized Test Strategy reflecting the client’s unique situation.

Test Objectives

The primary objectives of software testing are to validate and verify the tested software system. Validation focuses on making sure the correct system was created; verification focuses on making sure the system was implemented correctly. Beyond these two elementary objectives, testing also serves to:

  • find important bugs,
  • assess the quality of the software product,
  • reveal to what degree the software works according to the requirements provided,
  • reveal to what degree the requirements cover the needs,
  • make sure the software is fit for purpose,
  • gain confidence in the quality of the product,
  • give insight into the progress of the project,
  • provide information for release decisions,
  • provide information for predicting and controlling the cost of operations and support,
  • assist in improving the product quality.
Test Principles

Our Test Strategies are built on the following principles:

  • Covering Requirements: Tests refer to a definition of what the desired behavior is. The granularity of the reference may differ, but it is advisable to refer to the most specific description of the desired behavior (prefer a requirement to a functional specification).
  • Test Levels: Tests are conducted at several levels, with progressively increasing maturity of integration. Tests of individual systems precede tests of fully integrated systems with users.
  • Test Types: In the two broad categories of functional and non-functional tests, there are different types of tests, each focused on an aspect of the system.
  • Process Management: Tests are managed by the defined Test Management Process, and defects are managed by the defined Defect Management Process.
  • Data Governance: In data-intensive projects, the Data Governance of the data touched by the project plays a significant role (Data Owners, Stewards, Custodians).
  • Separating Delivery and Test Roles: Development and test roles are staffed by different people whenever possible.
  • Test Tools: Test Management and Defect Management are supported by tools with methodologies for their use.
  • Test Cases: Test cases are available before the test is initiated and the test results for individual test cases are archived and available for further analysis.
  • Test Data: Non-production data should be used for testing purposes when possible, unless there is a specific IT testing need which requires live data in order to complete tests.
  • Test Environment: Testing is performed in a dedicated Test Environment, separated from the Production Environment and ideally of very similar configuration.
  • Test Automation: The execution of tests and preparation of test data is automated whenever it makes sense (e.g., from the viewpoint of economy or duration).
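As a concrete illustration of the Test Automation principle, the sketch below automates both test-data preparation and test execution. The function under test (`discount_price`), the tiers, and the requirement ID are hypothetical, included only to make the sketch runnable:

```python
# Minimal sketch of the Test Automation principle: automated test-data
# preparation plus automated execution of the checks covering one
# requirement. All names (discount_price, REQ-017) are illustrative.

def discount_price(price, customer_tier):
    """Hypothetical function under test: tiered discount rules."""
    rates = {"standard": 0.0, "silver": 0.05, "gold": 0.10}
    return round(price * (1 - rates[customer_tier]), 2)

def make_test_customers():
    """Automated test-data preparation instead of manual data entry."""
    return [("standard", 100.0, 100.0),
            ("silver", 100.0, 95.0),
            ("gold", 100.0, 90.0)]

def run_requirement_req_017():
    """Automated execution of the test cases covering REQ-017."""
    results = []
    for tier, price, expected in make_test_customers():
        actual = discount_price(price, tier)
        results.append((tier, actual == expected))
    return results

print(run_requirement_req_017())
# → [('standard', True), ('silver', True), ('gold', True)]
```

The point of the sketch is the economy argument from the principle: once data preparation and execution are scripted, repeating the run (e.g., for regression testing) costs almost nothing.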
Test Activities

In general, we organize Test Management in the following activities:

  1. Test Management: Test Management comprises planning, organization, staffing, directing, coordination, and reporting across the activities of Test Planning, Test Design, Test Preparation, Test Execution and Test Reporting. Test Management is the responsibility of the Test Manager and Test Leaders.
  2. Test Planning: The High-level Test Plan elaborates the Test Strategy in greater detail. The Detailed Test Plans are focused on specific systems or on some non-functional tests.
  3. Test Design: The purpose of this phase is to plan the tests for a specific system or a non-functional test at a detailed level. The Test Design phase produces Detailed Test Plans and Test Cases.
    1. Detailed Test Plans elaborate the Test Strategy and High-level Test Plan; they specify inputs, outputs, detailed activities, and resources, as well as requirements on the Test Environment and on the scope and quality of the test data.
    2. Test Cases are mapped to Functional Specifications, requirements, or any other documentation describing the desired functional and non-functional behavior; any required behavior that is not tested should be made transparent.
  4. Test Preparation
    1. Test Scenarios: Test Cases, defined in the Test Design phase, are elaborated into Test Scenarios, describing test steps and desired (positive) results. Test Scenarios may be grouped into Test Sets, grouping the scenarios which are preferably executed together. Test Scenarios (Test Sets) are inputs for the Operational Test Plan.
    2. Operational Test Plan: The plan schedules the testing at the level of Test Scenarios (Test Sets); it depicts what Test Scenario will be tested by which Tester and when. The plan reflects material dependencies between the scenarios, required resources (Testers, Test Analysts) and availability of the Test environment and test data.
    3. Test Environment: Preparation of the Test Environment is aligned with the Operational Test Plan. The preparation comprises preparing infrastructure, including any required integration with other systems, configuring the environment (e.g., access rights), and deploying the tested system.
    4. Test Data: The preparation of test data is defined in the respective Detailed Test Plan.
    5. Training: Training the team’s actors in the roles and in the methodology and tools used for testing.
  5. Test Execution: Test Execution refers to executing Test Scenarios and following the Defect Management Process.
    1. Testing – executing steps in Test Scenarios, according to the Operational Test Plan, evaluating their results against desired results,
    2. Registering defects – managed by the Defect Management Process, supported by the Defect Management Tool,
    3. Fixing defects – correcting defects by the responsible development team (of the tested system), with the appropriate priority,
    4. Deployment – deploying corrections of defects and other changes of the tested system, conducted by the deployment team (the system generally evolves during its testing),
    5. Retests – retesting corrected defects in accordance with the Defect Management Process, supported by the Defect Management Tool.
  6. Test Reporting: Test Reporting is provided by a report of the status of testing – Test Report – and a report of defects found from testing – Defect Report.
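How the Test Design and Test Preparation artifacts above relate to each other can be sketched as a small data model: Test Cases trace back to requirements, are elaborated into Test Scenarios with steps and a desired result, and are grouped into Test Sets. All identifiers and field names here are illustrative assumptions, not a prescribed schema:

```python
# Sketch of the artifact chain: requirement -> Test Case -> Test Scenario
# -> Test Set, plus a helper that makes untested-but-required behavior
# transparent (as required in the Test Design phase).

from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str
    requirement_id: str          # traceability to the requirement

@dataclass
class TestScenario:
    scenario_id: str
    case: TestCase
    steps: list                  # ordered test steps
    desired_result: str          # desired (positive) result

@dataclass
class TestSet:
    set_id: str
    scenarios: list = field(default_factory=list)  # preferably executed together

def untested_requirements(all_requirements, scenarios):
    """Report required behavior not covered by any scenario."""
    covered = {s.case.requirement_id for s in scenarios}
    return sorted(set(all_requirements) - covered)

tc = TestCase("TC-01", "REQ-001")
sc = TestScenario("SC-01", tc, ["log in", "open report"], "report displayed")
print(untested_requirements(["REQ-001", "REQ-002"], [sc]))  # → ['REQ-002']
```

A Test Management Tool would hold these artifacts in practice; the sketch only shows the traceability idea, with the Operational Test Plan then scheduling Test Sets to Testers.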
Test Inputs and Outputs

Testing requires external inputs, produces artifacts used internally, and produces outputs consumed externally. For example, the Project Management Plan is an external input used by the Test Team; the Test Strategy is produced by the Test Team and used internally to build the High-level Test Schedule; and the High-level Test Status Report is consumed externally by the Sponsor.

Test Levels

Testing at different levels (e.g., following the “V-model” where appropriate) supports identifying and correcting defects as early as possible (e.g., preventing defects in system integration from being found only during acceptance by business users). Depending on the project, Unit Tests, System Tests, System Integration Tests, and User Acceptance Tests may be performed.

Test Types

Every test type is focused on a different test objective; some test types are focused on functional aspects, other test types are focused on non-functional aspects.

Test Processes

Our Test Management approach defines the following two processes:

  1. Test Management Process – describes the principles of internal and external communication of the Test Team, escalation mechanisms, principles of dealing with change requests, and the characteristics of the Test Management Tool used for tracking Test Execution.
  2. Defect Management Process – defines the process for managing defects found in testing. The defect lifecycle is defined at its activity level, together with the principles of management and communication, escalations and monitored metrics.
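A defect lifecycle of the kind the Defect Management Process defines can be sketched as a small state machine. The states and transitions below are illustrative assumptions; a real process defines them precisely, together with escalation rules and metrics, in the Defect Management Tool:

```python
# Hedged sketch of a defect lifecycle: only transitions defined by the
# process are allowed. State names (New, Open, Fixed, Retest, ...) are
# illustrative, not a prescribed lifecycle.

TRANSITIONS = {
    "New":      {"Open", "Rejected"},
    "Open":     {"Fixed"},       # fixing by the responsible dev team
    "Fixed":    {"Retest"},      # retest per the Defect Management Process
    "Retest":   {"Closed", "Reopened"},
    "Reopened": {"Fixed"},
}

class Defect:
    def __init__(self, defect_id, priority):
        self.defect_id = defect_id
        self.priority = priority
        self.state = "New"

    def move_to(self, new_state):
        """Reject any transition the process does not define."""
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"{self.state} -> {new_state} not allowed")
        self.state = new_state

d = Defect("DEF-042", priority="High")
for step in ("Open", "Fixed", "Retest", "Closed"):
    d.move_to(step)
print(d.state)  # → Closed
```

Encoding the lifecycle this way makes illegal shortcuts (e.g., closing a defect that was never retested) impossible to record, which is exactly what the process-level definition is meant to guarantee.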
Test Organization

The test organization structure defines the roles and responsibilities in testing. The head of testing is the Test Manager, usually reporting directly to the Project Manager. The Test Manager is often supported by several Test Leaders, each responsible for specific applications and managing a team of (shared) Test Analysts and Testers.

Metrics and Reporting

There are usually some mandatory metrics and reports to be monitored and reported from testing. The reports are consumed either internally by the Test Team or externally by the Project Management or higher, at the Sponsor level.

Test Data

In general, anonymized production data are used for testing. Test data are managed by collecting requirements from the test plans, while respecting general requirements on data consistency and anonymization.
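The consistency requirement can be illustrated with a minimal anonymization sketch: masking must be deterministic so that the same production value maps to the same pseudonym across tables, keeping joins intact. Field names, the salt, and the masking rules are assumptions for the sketch, not a prescribed scheme:

```python
# Minimal sketch of anonymizing production-like records for test use.
# Deterministic pseudonyms preserve referential consistency across
# tables; non-identifying fields are kept unchanged.

import hashlib

def pseudonym(value, salt="test-env-salt"):
    """Deterministic pseudonym: same input -> same output everywhere."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:8]

def anonymize(record):
    out = dict(record)
    out["name"] = pseudonym(record["name"])                       # direct identifier
    out["email"] = pseudonym(record["email"]) + "@example.invalid"  # quasi-identifier
    return out

rec = {"name": "Jane Doe", "email": "jane@corp.example", "balance": 120.5}
anon = anonymize(rec)
print(anon["balance"])                          # non-identifying data preserved → 120.5
print(anon["name"] == anonymize(rec)["name"])   # deterministic → True
```

Real anonymization also has to consider re-identification risk from combinations of fields; the sketch covers only the determinism aspect relevant to data consistency.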

Test Environment

It is recommended that testing be performed in a dedicated Test Environment, separated from the Production Environment, whose configuration resembles production as closely as possible.

Test Tools

Efficient Test Management should be supported by a set of tools: a Test Management Tool, a Defect Management Tool, a Test Reporting Tool, and a Test Repository.