
Continuous Improvement with TMMi

Paul Brown
Published: 02.01.2020

Every organisation conducts testing differently, depending on the specific environment in which testing is delivered. However, all enterprises should agree that it is vitally important to continually improve test processes so that they stay aligned with evolving business objectives. The push towards Continuous Integration and DevOps is one example of this.

Improvement activities should be gradual and driven by a clear, objective picture of what a good process looks like. Without understanding what you are aspiring to achieve, it is near impossible to improve processes effectively. Test Maturity Model integration (TMMi) provides exactly such a framework.

What is TMMi?

TMMi can be broadly defined as a framework for objectively measuring the maturity of an organisation's test process. It is a staged model: improvement is measured using maturity levels, which range from 1 (worst) to 5 (best). Each maturity level contains a number of process areas, specific groups of activities that can be evaluated together to determine their overall effectiveness.
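To illustrate the staged idea, the sketch below (a hypothetical illustration in Python, not the formal TMMi assessment method) treats an organisation as sitting at the highest maturity level whose process areas, and those of every level beneath it, are all satisfied. The process area names are those defined for levels 2 and 3 of the model; levels 4 and 5 are omitted for brevity:

```python
# Hypothetical sketch of TMMi's staged logic: an organisation sits at the
# highest maturity level for which all process areas at that level, and at
# every level below it, are satisfied.
LEVELS = {
    2: ["Test Policy and Strategy", "Test Planning", "Test Monitoring and Control",
        "Test Design and Execution", "Test Environment"],
    3: ["Test Organization", "Test Training Program", "Test Lifecycle and Integration",
        "Non-functional Testing", "Peer Reviews"],
    # Levels 4 and 5 omitted for brevity.
}

def maturity_level(satisfied: set[str]) -> int:
    """Return the achieved maturity level given the satisfied process areas."""
    level = 1  # Level 1 has no process areas, so it is the floor.
    for lvl in sorted(LEVELS):
        if all(area in satisfied for area in LEVELS[lvl]):
            level = lvl
        else:
            break  # Staged model: a gap at this level blocks every higher level.
    return level

# Satisfying only some level 2 process areas still leaves you at level 1.
print(maturity_level({"Test Policy and Strategy", "Test Planning"}))  # -> 1
```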


The model recognises that not all organisations work in the same way, so it is specific enough to cover all test activities yet generic enough to apply to all delivery life cycles.

Structure of TMMi

TMMi consists of a number of components, which are grouped into categories. The two most important categories are:

Required Components – What an organisation must achieve to satisfy a process area.

Expected Components – What an organisation will typically implement to achieve a required component.

The different components of the model can be described as follows:

Maturity Level – Degree of process improvement achieved across a pre-defined set of process areas

Process Area – A group of related practices considered important to improving an overall area of delivery

Specific Goal – Characteristics that must be present to satisfy a process area

Generic Goal – Characteristics that must be present to institutionalise the processes that implement a process area, e.g. establishing an organisational policy and training people

Specific Practice – Activities required to achieve specific goals of a process area

Generic Practice – Activities required to achieve generic goals and institutionalise processes associated with a process area
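One way to picture how these components nest is as a simple data model, with goals as required components and practices as expected components. The sketch below is a minimal illustration, not an official schema; its structure follows the descriptions above, and its example content is taken from the Test Environment process area discussed later in this article:

```python
from dataclasses import dataclass, field

# Minimal sketch of the TMMi component hierarchy described above.
# Goals are required components; practices are expected components.

@dataclass
class ProcessArea:
    name: str
    specific_goals: dict[str, list[str]] = field(default_factory=dict)  # goal -> specific practices
    generic_goals: dict[str, list[str]] = field(default_factory=dict)   # goal -> generic practices

@dataclass
class MaturityLevel:
    number: int
    name: str
    process_areas: list[ProcessArea] = field(default_factory=list)

test_environment = ProcessArea(
    name="Test Environment",
    specific_goals={
        "SG 2 Perform Test Environment Implementation": [
            "Implement the Test Environment",
            "Create Generic Test Data",
            "Specify Test Environment Intake Procedure",
            "Perform Test Environment Intake Test",
        ],
    },
    generic_goals={
        "GG 2 Institutionalize a Managed Process": [
            "Establish an Organizational Policy",
            "Train People",
        ],
    },
)

level_2 = MaturityLevel(2, "Managed", [test_environment])
```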

Quickly Achieve Success with TMMi

Adopting TMMi in full can be a long and detailed process, requiring a significant amount of effort to achieve important long-term benefits. However, the model can also be used to achieve success quickly by picking out the particular process areas that are causing the most challenges within the test process. These process areas can then be assessed individually, with improvements identified and implemented in a short space of time.

The specific practices for each process area can be used as a starting point for reviewing maturity in that particular area. The sub-practices then define further actions that can be taken to assess and improve maturity. Once that assessment has been completed, it is simple to see which areas you can start to improve and which activities you can implement to achieve this.
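As a simple illustration of such a quick scan, and ahead of the worked example below, the following sketch rates each specific practice of one process area on a basic scale and flags the weakest ones as short-term improvement candidates. The scale and the scoring rule are illustrative assumptions, not the formal TMMi rating rules:

```python
# Hypothetical quick scan of one process area: rate each specific practice
# on a simple scale and flag the weakest practices as improvement candidates.
RATING = {"fully": 1.0, "largely": 0.75, "partially": 0.25, "not": 0.0}

practices = {
    "Implement the Test Environment": "largely",
    "Create Generic Test Data": "partially",
    "Specify Test Environment Intake Procedure": "not",
    "Perform Test Environment Intake Test": "not",
}

score = sum(RATING[r] for r in practices.values()) / len(practices)
print(f"Test Environment coverage: {score:.0%}")

# Practices rated below 'largely' become the short-term improvement backlog.
backlog = [p for p, r in practices.items() if RATING[r] < RATING["largely"]]
print("Focus next on:", ", ".join(backlog))
```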

Process Area – Test Environment

Specific Goal 2 – Perform Test Environment Implementation

Specific Practice – Implement the Test Environment
Sub-practices:
- Implement the test environment as specified and according to the defined plan
- Adhere to applicable standards and criteria
- Perform testing on test environment components as appropriate
- Develop supporting documentation, e.g. installation, operation and maintenance documentation
- Revise the test environment components as necessary

Specific Practice – Create Generic Test Data
Sub-practices:
- Create generic test data required to support the execution of the tests
- Anonymise sensitive data in line with the policy when ‘real-life’ data is used as a source
- Archive the set of generic test data

Specific Practice – Specify Test Environment Intake Procedure
Sub-practices:
- Define a list of checks to be carried out during the intake test of the test environment
- Develop the test environment intake test procedure
- Document the test environment intake test procedure in a test procedure specification, based on the test procedure specification standard
- Review the test environment intake test procedure specification with stakeholders
- Revise the test environment intake test procedure as appropriate

Specific Practice – Perform Test Environment Intake Test
Sub-practices:
- Perform the intake test (confidence test) using the documented intake test procedure to decide if the test environment is ready to be used for testing
- Document the results of the test environment intake test by means of a test log, based on the test log standard
- Log incidents if a discrepancy is observed
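To make the intake test concrete, here is a sketch of what an automated version of such a documented checklist could look like: it runs the defined checks, writes the results to a test log, and records an incident for every discrepancy. The specific checks, hostnames and log format are invented for illustration:

```python
import datetime
import socket

# Hypothetical intake (confidence) test for a test environment.
# The checks and hostnames below are invented for illustration.
CHECKS = {
    "application server reachable": lambda: socket.create_connection(
        ("test-app.example.com", 443), timeout=5).close(),
    "database reachable": lambda: socket.create_connection(
        ("test-db.example.com", 5432), timeout=5).close(),
}

def run_intake_test(log_path: str = "intake_test.log") -> bool:
    """Run all checks, write a test log, and report whether the environment is ready."""
    incidents = []
    with open(log_path, "a") as log:
        for name, check in CHECKS.items():
            try:
                check()
                outcome = "PASS"
            except OSError as exc:
                outcome = f"FAIL ({exc})"
                incidents.append(name)  # a discrepancy was observed: raise an incident
            log.write(f"{datetime.datetime.now().isoformat()} {name}: {outcome}\n")
    return not incidents  # ready only if every check passed

if __name__ == "__main__":
    ready = run_intake_test()
    print("Environment ready for testing" if ready else "Intake test failed; see the test log")
```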


Full TMMi Assessments

Certain organisations are accredited by the TMMi Foundation as assessment service providers. These firms can provide a full, independent assessment of your company’s test maturity using TMMi. Following a review, they can also advise on what is required to reach the next level of maturity.

Conclusion

IT teams should always be looking to improve the way they deliver value to business stakeholders. Within the testing space, TMMi offers an objective framework that can serve as the basis for that improvement.

TMMi provides a huge amount of information, but the model can be used in a flexible way to suit your needs. One option is to review the full test process and identify solutions to overhaul/improve the whole process. Another would be to use components of the model to review specific areas that are causing the most challenges, for example, the test environments or test strategy. From there it is then possible to achieve quick improvements by focusing effort on maturing/developing processes within that area.

You could be doing things better, and TMMi can help guide that improvement. With a consistently improving and effective test process, your projects have a much higher chance of delivering success.

This article was originally published on Software Testing News North America on 5 December 2019.