Optimization model for an automatic system testing framework

Title: Behavior based coverage metrics for system testing
Status: finished
Master project within: Software Engineering
Student name: Anca Gentiana Coman
Dates
Start date: 2012/02/01
End date: 2012/07/01
Supervision
Supervisor: Natalia Silvis-Cividjian
Second supervisor: Eric Raijmakers
Second reader: Hans van Vliet
Company: Océ
Thesis: Media:Thesis.pdf
Poster: Media:Posternaam.pdf


Abstract

Océ is one of the leading providers of document management and printing systems for professionals. The Océ offering includes office printing and copying systems, high-speed digital production printers and wide-format printing systems for both technical documentation and color display graphics. Part of the Canon group, the company was founded in 1877, is active in over 100 countries and employs more than 20,000 people worldwide. At the company headquarters in Venlo, The Netherlands, the Research & Development department is responsible for developing Océ's own basic technologies and the majority of its product concepts.

Océ's line of digital document systems comprises several models, of which a select few use the PRISMAsync controller, an Océ product that enables a single point of control, efficient task splitting, media synchronization, intelligent color management and advanced editing capabilities. The controller is a software component that runs on dedicated hardware and forms the heart of the printer.

The PRISMAsync controller is developed and tested within the Océ R&D facility in Venlo. Testing of the product involves both an automated framework and manual tests. The FAT (framework for automated testing) is responsible for testing controller capabilities such as printing, connectivity, copying, device management, error handling and workflow management. Individual test scripts are run by the framework every night, and the test run results are presented the following day.

The problems the testing framework faces stem from a lack of knowledge about what is being tested: how much is tested, which modules are executed during a test run and which capabilities are not tested enough. There is also no record of which modules have been modified since the last test run, so the appropriate tests cannot be selected; the framework therefore lacks the ability to perform regression test selection. Without a clear connection between what is tested and how much, duplicate tests may exist and no coverage measures can be obtained.

Because we are dealing with the controller at the system level, functionality is a key aspect for the framework. Unfortunately, this information is currently nonexistent: requirements reside in SBD documents, code modules are developed by several teams, and the FAT is maintained by yet another team. There is no direct connection between test cases, requirements and modules, and no knowledge of what is not tested or which modules are covered by test cases. This leads to blind testing, which is lengthy because the whole test run has to finish before any knowledge of the faults in the controller is obtained. The requirements are also volatile and change frequently as the implementation progresses, so achieving requirements traceability has proven to be a challenge.
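To make the missing traceability information more concrete, the sketch below illustrates how a requirement-to-test-case mapping could yield a simple coverage measure and a list of untested requirements. It is only an illustrative Python sketch under assumed data: the requirement identifiers, test names and mapping are hypothetical placeholders and do not correspond to actual SBD requirements or FAT test scripts.

# Minimal sketch of a requirement-to-test-case traceability mapping and the
# coverage measure it would enable. All identifiers below are hypothetical
# placeholders, not actual FAT or SBD data.

# Hypothetical mapping: requirement ID -> test cases that exercise it.
traceability = {
    "REQ-PRINT-001": ["test_print_simplex", "test_print_duplex"],
    "REQ-COPY-004":  ["test_copy_basic"],
    "REQ-ERR-012":   [],  # not exercised by any automated test
}

def requirements_coverage(trace: dict[str, list[str]]) -> float:
    """Fraction of requirements touched by at least one test case."""
    covered = sum(1 for tests in trace.values() if tests)
    return covered / len(trace) if trace else 0.0

def untested_requirements(trace: dict[str, list[str]]) -> list[str]:
    """Requirements with no associated test case (candidates for new tests)."""
    return [req for req, tests in trace.items() if not tests]

if __name__ == "__main__":
    print(f"Requirements coverage: {requirements_coverage(traceability):.0%}")
    print("Untested requirements:", untested_requirements(traceability))

Such a mapping would also support regression test selection: given the modules or requirements changed since the last run, only the test cases linked to them would need to be re-executed.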


Research question

In order to achieve the research objectives, the following question must be answered:

What type of information could be obtained from existing test cases and requirements in order to improve the system testing process?

The main research question is accompanied by a series of sub-questions related to the testing framework:

What coverage techniques are available in the literature, and can any of them be applied in our current context?

What type of coverage metric could be extracted from the system and test cases?

How could this metric be used to steer the system testing process?

What other usages could be derived for the new approach?

How can this optimization be implemented in an automated manner?