1 Introduction
- RQ1: What are current challenges in practice concerning test case specifications in automotive testing?

  This research question focuses on the typical life cycle of a test case specification. Usually, a test case specification is written by a test designer; afterwards, concrete test cases are implemented and executed by a tester. Test designers and testers are usually different people, as illustrated in Fig. 1, but they do not have to be. Between the creation and the processing of a test case specification, quality assurance activities can take place to improve it. For example, reviews can be used to detect incomprehensible or faulty test cases or to determine requirements coverage. Given a high-quality test case specification, the tester, as the “consumer” of the specification, should have few queries about the contained test cases, and the implemented test cases should test what the test designer intended to test. Hence, we suspect that challenges occur particularly in three areas: (C) creation, (P) processing, and (Q) quality assessment of test case specifications. For (C), we considered the availability and quality of the input artifacts used as well as the phrasing of test cases. For (P), we focus on identifying challenges related to negative effects in downstream development activities caused by decisions and faults made during the creation of test case specifications. For (Q), we investigate challenges related to the understanding of high-quality test case specifications and which quality criteria and mechanisms are already in use or could be useful to improve quality.
- RQ2: Which causes and consequences of the challenges are practitioners aware of, and which solutions exist?

  This research question supplements RQ1. It is valuable to know which causes and consequences practitioners are aware of: this makes it possible to estimate the extent of a problem and, if the causes are known, to develop suitable solutions. In addition, we considered solutions proposed by the practitioners. Such insights are interesting because existing solutions may also be applicable by other practitioners or transferable to other situations.

To assess the challenges identified in the first study, we conducted a second, complementary study that answers the following two research questions:
- RQ3: How do practitioners assess the frequency of occurrence and the criticality of the identified challenges?

  This question focuses on the assessment of the challenges identified in the exploratory case study. We assume that these challenges are also known to other practitioners with test responsibilities similar to those of the interviewees. Therefore, this research question examines the frequency of occurrence and the criticality of the challenges. The results of the assessment are interesting because they allow improvement activities to be prioritized.
- RQ4: How do the identified challenges differ between external and internal employees?

  We suspect differences in occurrence between the assessments of internal employees (OEM) and external employees (engineering partner). Some challenges may be more likely to occur for the group of external employees and less for the group of internal employees, and vice versa. In addition, we investigate whether internal and external employees assess the criticality of a challenge differently.
2 Background
3 Related work
3.1 Challenges in software testing in general
3.2 Challenges in software testing related to the automotive domain
3.2.1 Challenges related to the automotive testing process
3.2.2 Challenges explicitly related to test case specifications
ID | Challenges | References |
---|---|---|
CRW01 | Increasing complexity of software-based systems influences testing | |
CRW02 | Huge number of variants and configurations (test case explosion) | |
CRW03 | Enormous test effort (especially due to safety requirements) | Lachmann and Schaefer (2013) |
CRW04 | Heterogeneous nature of software depending on the different domains | Pretschner et al. (2007) |
CRW05 | Quality of requirements impact the quality of test-case design (e.g., due to insufficient or incomprehensible requirements) | |
CRW06 | Missing test plan (and thus undocumented test strategy, test coverage criteria, etc.) | Lachmann and Schaefer (2013) |
CRW07 | Correlation between domain knowledge and tester skills (when developing test cases based on human expertise, the quality of the test cases depends on the knowledge of the tester) | Garousi et al. (2017) |
CRW08 | Lack of domain and system knowledge transfer (e.g., if testers personnel changes frequently) | Kasoju et al. (2013) |
CRW09 | Lack of basic testing knowledge | Kasoju et al. (2013) |
CRW10 | Need for training for various test activities | Garousi et al. (2017) |
CRW11 | Lack of traceability between requirements and test cases makes it hard to determine test coverage | Kasoju et al. (2013) |
CRW12 | Lack of a meaningful metric regarding traceability | Garousi et al. (2017) |
CRW13 | Poor change management (e.g., test artifacts are not updated continuously) | |
CRW14 | Lack of a structured test process and lack of a seamless chain of methods | |
CRW15 | Difficulties in defining the exact responsibilities of different test levels | Sundmark et al. (2011) |
CRW16 | Lack of dedicated testers or unavailability of personnel for testing | Kasoju et al. (2013) |
CRW17 | Distributed development of a system (e.g., involvement of different suppliers in the development and test process) | |
CRW18 | Lack of regular face-to-face meetings (to avoid miscommunication) | Kasoju et al. (2013) |
CRW19 | Differences of opinion regarding test effort distribution | Sundmark et al. (2011) |
CRW20 | No unified tool for entire testing activities | |
CRW21 | Lack of documentation on how tools work | Kasoju et al. (2013) |
CRW22 | Need of better tool support | |
CRW23 | Natural language-based test cases influence comprehensibility | Lachmann and Schaefer (2014) |
CRW24 | Manual test cases are often too long or do not contain necessary details | Garousi et al. (2017) |
CRW25 | Documentation issues on explorative testing | Garousi et al. (2017) |
4 Research methodology
4.1 First study: exploratory case study
Role¹ | Company affiliation | Testing expertise (years) | Testing expertise (level²) | Responsibilities³: C | D | I | R | Label
---|---|---|---|---|---|---|---|---
System manager | < 3 | 3 | ✓ | ✓ | Int01 | ||
Test manager | 3 – 5 | 4 | ✓ | ✓ | ✓ | Int02 | |
Test house manager | 11 – 25 | 5 | ✓ | Int03 | |||
Test manager | 3 – 5 | 4 | ✓ | ✓ | ✓ | Int04 | |
Function developer | 3 – 5 | 2 | ✓ | ✓ | ✓ | ✓ | Int05 |
System manager | < 3 | 4 | ✓ | ✓ | Int06 | ||
Function manager | 3 – 5 | 3 | ✓ | ✓ | Int07 | ||
Test manager | 11 – 25 | 4 | ✓ | ✓ | Int08 | ||
Function developer | 11 – 25 | 4 | ✓ | ✓ | Int09 | ||
Project manager (S) | 6 – 10 | 4 | ✓ | ✓ | ✓ | Int10 | |
Test manager (S) | < 3 | 4 | ✓ | ✓ | ✓ | Int11 | |
Test manager | < 3 | 2 | ✓ | ✓ | Int12 | ||
System manager | 6 – 10 | 3 | ✓ | ✓ | Int13 | ||
System manager | < 3 | 3 | ✓ | ✓ | ✓ | Int14 | |
Test manager | 11 – 25 | 3 | ✓ | ✓ | ✓ | Int15 | |
Tester (S) | 6 – 10 | 4 | ✓ | ✓ | ✓ | Int16 | |
Tester | ≥ 26 | 3 | ✓ | ✓ | ✓ | Int17 |
4.2 Second study: descriptive survey
Groups of participants | Sample size (N) | Company affiliation (years) | Testing expertise (level¹) | Responsibilities²: C | D | I | R
---|---|---|---|---|---|---|---
Internal employees | 26 | 12.5 years | 3 | 13 | 17 | 14 | 13 |
External employees | 10 | 3.0 years | 3 | 5 | 0 | 12 | 6 |
All participants | 36 | 5.6 years | 3 | 18 | 17 | 26 | 19 |
5 Identified challenges
ID | Main categories (M) / Types of challenges (ToC) | Challenge numbers
---|---|---
M1 | Availability problems with input artifacts | C1 – C5 |
ToC-1.1 | Non-existing input artifacts | C1 – C3 |
ToC-1.2 | Distributed input artifacts | C4 |
ToC-1.3 | No access to or provision of input artifacts | C5 |
M2 | Content-related problems with input artifacts | C6 – C17 |
ToC-2.1 | Content-related problems with requirement specifications | C6 – C11 |
ToC-2.2 | Content-related problems with test plan | C12 – C15 |
ToC-2.3 | Conformity of test case specification template with user needs | C16 |
ToC-2.4 | Content-related problems with other input artifacts | C17 |
M3 | Knowledge-related problems | C18 – C24 |
ToC-3.1 | Lack of knowledge about the SUT | C18 |
ToC-3.2 | Lack of knowledge about test platforms | C19, C20 |
ToC-3.3 | Lack of knowledge about testing policies | C21 – C24 |
M4 | Test case description related problems | C25 – C55 |
ToC-4.1 | Language-based problems in test cases | C25, C26, C33 |
ToC-4.2 | Phrasing-based problems in test cases | C27 – C32 |
ToC-4.3 | Quality-based problems in test cases | C34 – C55 |
M5 | Test case specification content-related problems | C56 – C58 |
ToC-5.1 | Handling of variants and model series | C56, C57 |
ToC-5.2 | Focusing on specific test platforms | C58 |
M6 | Process-related problems | C59 – C70 |
ToC-6.1 | Lack of standards or guidelines | C59 – C63 |
ToC-6.2 | Change management related problems | C64 |
ToC-6.3 | Organizational decisions | C65 – C70 |
M7 | Communication-related problems | C71 – C77 |
M8 | Quality assurance related problems | C78 – C86 |
ToC-8.1 | Inadequate definition of the term quality | C78, C82
ToC-8.2 | Lack of useful metrics | C79, C80, C83 |
ToC-8.3 | Lack of an (established) review process | C81, C84 – C86 |
M9 | Tool-related problems | C87 – C92 |
ToC-9.1 | Usability and function-related problems | C88, C89, C91 |
ToC-9.2 | Heterogeneous tool chains | C87, C90 |
ToC-9.3 | Deficiencies with the support | C92 |
5.1 Availability problems with input artifacts (M1)
5.1.1 Non-existing input artifacts (ToC-1.1)
5.1.2 Distributed input artifacts (ToC-1.2)
5.1.3 No access to or provision of input artifacts (ToC-1.3)
5.2 Content-related problems with input artifacts (M2)
5.2.1 Content-related problems with requirement specifications (ToC-2.1)
5.2.2 Content-related problems with test plans (ToC-2.2)
5.2.3 Conformity of test case specification template with user needs (ToC-2.3)
5.2.4 Content-related problems with other input artifacts (ToC-2.4)
5.3 Knowledge-related problems (M3)
5.3.1 Lack of knowledge about the system under test (ToC-3.1)
5.3.2 Lack of knowledge about test platforms (ToC-3.2)
5.3.3 Lack of knowledge about testing policies (ToC-3.3)
5.4 Test case description related problems (M4)
5.4.1 Language-based problems in test cases (ToC-4.1)
5.4.2 Phrasing-based problems in test cases (ToC-4.2)
5.4.3 Quality-based problems in test cases (ToC-4.3)
5.5 Test case specification content-related problems (M5)
5.5.1 Handling of variants and model series (ToC-5.1)
5.5.2 Focusing on specific test platforms (ToC-5.2)
5.6 Process-related problems (M6)
5.6.1 Lack of standards or guidelines (ToC-6.1)
5.6.2 Change management related problems (ToC-6.2)
5.6.3 Organizational decisions (ToC-6.3)
5.7 Communication-related problems (M7)
5.8 Quality assurance related problems (M8)
5.8.1 Inadequate definition of the term quality (ToC-8.1)
5.8.2 Lack of useful metrics (ToC-8.2)
5.8.3 Lack of an (established) review process (ToC-8.3)
5.9 Tool-related problems (M9)
5.9.1 Usability and function-related problems (ToC-9.1)
5.9.2 Heterogeneous tool chains (ToC-9.2)
5.9.3 Deficiencies with the support (ToC-9.3)
5.10 Discussion of the identified challenges
Area | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 | M9
---|---|---|---|---|---|---|---|---|---
(C) Creation | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
(P) Processing | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
(Q) Quality | ✓ | ✓ | ✓ | ✓ | ✓ | | | |
Related work challenges | CRW06 | CRW01, CRW03, CRW04, CRW05 | CRW07, CRW08, CRW09, CRW10 | CRW11, CRW23, CRW24, CRW25 | CRW02 | CRW13, CRW14, CRW15, CRW16, CRW17, CRW18 | CRW19 | CRW12 | CRW20, CRW21, CRW22
References | Lachmann and Schaefer (2013) | Grimm (2003), Pretschner et al. (2007), Kasoju et al. (2013), Garousi et al. (2017), Sundmark et al. (2011), and Lachmann and Schaefer (2013) | Kasoju et al. (2013), Petrenko et al. (2015), and Garousi et al. (2017) | Kasoju et al. (2013), Lachmann and Schaefer (2014), and Garousi et al. (2017) | Grimm (2003), Pretschner et al. (2007), and Sundmark et al. (2011) | Grimm (2003), Pretschner et al. (2007), Kasoju et al. (2013), Sundmark et al. (2011), and Lachmann and Schaefer (2013) | Kasoju et al. (2013), and Sundmark et al. (2011) | Kasoju et al. (2013), and Garousi et al. (2017) | Grimm (2003), Kasoju et al. (2013), Broy (2006), Garousi et al. (2017), and Sundmark et al. (2011)
ID | Challenge | Consequences | Named solutions |
---|---|---|---|
M1: Availability problems with input artifacts | |||
C1,C2 | Requirements specification does not exist/does not exist in time | Delays the creation of test cases | Test cases are created based on previous test cases, using placeholders |
C3 | Test plan does not exist | Leads to inefficient, highly redundant testing across multiple test platforms, insufficient utilization of test platforms, and increasing test effort and costs | Using a rudimentary standard test plan
C4 | Information and documents are distributed | Gathering information takes time | Enriching specifications with necessary additional information |
C5 | No access to relevant documents | Information gaps, poorer system understanding, incomplete or incorrect test cases | Communication is necessary, data is provided manually |
M2: Content-related problems with input artifacts | |||
C6 | Requirement specification is outdated | Necessary information for creating test cases is missing | Not indicated |
C7 – C11 | Incorrect, conflicting, obsolete, incomplete, or unintelligible requirements exist | Influences the creation of high-quality test case specifications, increasing test effort | Communication with those responsible for the requirement specification is necessary, provision of additional information to encourage understanding of the requirements |
C12 – C15 | Test plan is too general/extensive, insufficient description of the test object, undefined test end criteria | Increasing test effort, inefficient or incorrect usage of test platforms, missing test cases for specific test platforms | Conducting test plan reviews |
C16 | Template does not fit | Test cases cannot be adequately documented, test case duplicates exist (e.g., for different variants) | Modification of the template, parameterizing test cases |
C17 | Faulty test cases of previous versions | Test cases contain incorrect or obsolete information | Parameterizing test cases, using placeholders |
M3: Knowledge-related problems | |||
C18 | Insufficient knowledge about SUT | Derivation of incorrect test cases, errors in test case specifications | Workshops with all relevant participants to fill knowledge gaps regarding the SUT |
C19,C20 | Insufficient knowledge about available test platforms/test platform functionalities | Lack of meaningful test cases for specific test platforms, suboptimal assignment of test cases to test platforms and late error detection, test cases do not fit the test platform | Testers provide sample test cases for specific test platforms |
C21 – C24 | Insufficient knowledge about testing guidelines, lack of training, consulting and contact persons, insufficient documentation of the template | Test cases are not documented in a template-compliant way, missing information in test cases | Suggestions: One-page instructions, continuous and short refresher courses, newsletter, expert hotline
M4: Test case description related problems | |||
C25,C26 | Translation and spelling errors in test cases exist | Lead to faulty test cases | Using formal approaches (e.g., description of test cases at signal level)
C27 | Several authors are working on a test case specification | Ambiguities and misunderstandings during test implementation, non-uniform test case descriptions | Subsequent revision and unification of phrasing in test cases, using previously defined phrase blocks |
C28 | Phrasing of test cases in prose | Ambiguities and misunderstandings during test implementation | Subsequent revision and unification of phrasing in test cases, using previously defined phrase blocks |
C29,C30 | Use of (undefined) abbreviations | Impair comprehensibility/readability of a test case | Using glossaries |
C31 | Abstract test case description leads to inaccuracies in test cases | Interpretation during test case implementation | Not indicated |
C32 | Phrasing of a test case is not specific to the respective target test platform | Necessary information is missing, test cases cannot be performed on the assigned test platform | Suggestion: Phrasing should be adapted to target test platform |
C33 | Typing errors in test cases | Usually no effects, may affect the content of a test case (e.g., fundamentally different meaning) | Communication with responsible persons necessary if deficiencies are noticed |
C34 – C43 | Incomplete test cases exist (missing documentation of: preconditions, actions, expected results, test purpose, prioritization, model series, test platforms, origin, test case derivation procedure) | Increasing communication effort to clarify missing information, testers are not able to identify relevant test cases for their test platform, filtering of test cases not possible (e.g., by model series), lack of traceability of the test case | Communication with responsible persons necessary if deficiencies are noticed |
C44, C46, C49 | Test case description is incomprehensible, ambiguities in test cases exist, poor readability/comprehensibility of a test case | Interpretation during test case implementation | Communication with responsible persons necessary |
C45 | Sequence of test case actions to be executed is unclear | Interpretation during test case implementation, leads to incorrect test case implementation | Communication with responsible persons necessary
C47, C48, C50 | Quality parameters of a test case: too many functions, linked requirements, or parameters | Impair the verification of the (content) completeness of a test case | Not indicated |
C51 | Test case already contains implementation details | Restriction of the tester/implementer | Not indicated |
C52, C53 | (Experience-based) Test cases are not reused | Test cases were often redesigned instead of using existing ones, experience-based test cases are lost | Manual copying and adaptation of test cases, documentation of experience-based errors in requirement specification |
C54 | Changes to a test case are time-consuming | Changes are often not made immediately | Tedious manual searching and replacement of affected parts, usage of placeholders, extraction of reusable sequences into a base scenario |
C55 | Test case is not consistent with requirements | Incorrect test case is executed and does not contribute to the detection of errors | Not indicated |
M5: Test case specification content-related problems | |||
C56 | Insufficient handling of variants in the test case specification | Lack of documentation of the variants, lack of clarity of the assigned variant per test case | Various project-specific methods exist (e.g., parameterizing test cases) |
C57 | Many different model series in one test case specification | Lack of documentation of the model series | Various project-specific methods exist (e.g., one test case specification per model series) |
C58 | Only test cases for a specific test platform are contained | Redundant execution of test cases on different test platforms, difficulties in verifying the completeness of test scopes | Suggestion: Test specification contains test cases for all relevant test platforms |
M6: Process-related problems | |||
C59, C61 | Guidelines are missing | Testing process differs (e.g., depending on the department) | Developing project-specific guidelines; Suggestion: Mandatory introduction of a test manager for each project
C60 | Templates for test reports are missing | Difficult to draw conclusions about the status of a system’s test progress | Suggestion: Uniform guidelines/templates for reporting test progress
C62, C63 | A defined reference process for testing is missing, unclear interface definition to the overall process | Unclear responsibilities | Suggestion: A company-wide reference process |
C64 | Lack of change management | Leads to ignorance of changes | |
C65 | Test process does not fit the project/system | Own processes are developed | |
C66 | Test manager is vacant | Required test activities are not executed (e.g., test case specification is not available in time) | Suggestion: There should be a test manager
C67 | Responsibilities and roles are unclear | Test-relevant tasks (e.g., reviews) are not processed | Suggestion: Revision of responsibility descriptions (e.g., for system managers) |
C68 – C70 | Increasing outsourcing | Leads to knowledge gaps, increased effort and test execution takes longer than in-house | Not indicated |
M7: Communication-related problems | |||
C71, C72 | Communication problems with the supplier, communication via representatives | Lead to misunderstandings (“Chinese Whispers”-problem) | Workshops with all relevant participants |
C73, C74 | Communication problems due to distributed teams/spatial distance | Reduced communication | Not indicated |
C75 | Communication problems due to cultural differences | Lead to misunderstandings (e.g., due to shyness to ask in case of ambiguities) | Not indicated |
C76 | Communication problems due to language barriers | Lead to misunderstandings (e.g., due to translation errors) | Using previously defined phrase blocks |
C77 | Different expectations | Lead to misunderstandings (e.g., due to different assumptions about the common knowledge base) | Not indicated |
M8: Quality assurance related problems | |||
C78 | Quality characteristics for a test case specification are unknown | Unclear which criteria have to be checked, e.g., in a review | Not indicated |
C79, C80 | Established metrics for quality assessment are missing (only requirement coverage is well-known) | Complicates the quality assessment of a test case specification | Suggestions: Some ideas for metrics (e.g., number of test steps or number of used words in a test case, etc.) |
C81 | Tools used do not generate quality reports | Only requirement coverage is calculated by the tools and therefore the only well-known metric | Not indicated |
C82 | Company guidelines for quality measurement are missing | Review process varies (e.g., depending on the department) | Suggestions: Providing review checklists and guidelines |
C83 – C86 | Lack of established review processes due to shortage of manpower, lack of time, or test case specifications that are too large to conduct (full) reviews | Authors of a test case specification perform a review themselves (no independent review), reviews are only performed on a random sample | Commissioning of reviews; Suggestions: Performing internal walkthroughs with colleagues with similar specialist knowledge, providing review checklists and guidelines
M9: Tool-related problems | |||
C87, C90 | Heterogeneous tools are used, non-continuous tool chains (interface problems) exist | Carrying out tasks with the available tools is time-consuming and causes project delays, difficult to react flexibly to changes, consistent traceability cannot always be established | If export and import do not work, data are exchanged manually (e.g., via Excel)
C88 | Tools used are complex and functions are unknown | Considerable time effort necessary to complete tasks | Suggestion: Using another tool |
C89 | Usability deficiencies | Tasks are difficult to perform | Suggestion: Using another tool |
C91 | Project-specific adaptations | Lead to problems if adaptation does not fit into existing tool chain | Not indicated |
C92 | Deficiencies with the tool support | Support cannot help, changes to tools cannot be implemented on time | Developing own tools, which often do not fit seamlessly into existing tool chains |
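Several of the named solutions above (for C16, C54, and C56) amount to parameterizing test cases: the test case is specified once as a generic template, and variant-specific values are kept in a parameter table instead of duplicating the whole test case per variant. A minimal sketch of this idea follows; the signal names, variants, and limit values are hypothetical illustrations, not data from the study.

```python
from dataclasses import dataclass

@dataclass
class TestCaseInstance:
    """One concrete, executable test case derived from the generic template."""
    variant: str          # vehicle variant / model series
    precondition: str     # documented precondition (cf. C35)
    action: str           # documented action (cf. C36)
    expected: str         # documented expected result (cf. C37)

# Generic template with placeholders; editing it once updates all
# derived test cases, addressing the time-consuming changes of C54.
TEMPLATE = {
    "precondition": "Ignition ON, variant = {variant}",
    "action": "Set speed to {speed} km/h",
    "expected": "Warning chime active above {limit} km/h",
}

# Variant-specific parameters (cf. C56): one row per variant,
# no duplicated test case prose.
PARAMETERS = [
    {"variant": "EU", "speed": 130, "limit": 120},
    {"variant": "US", "speed": 90,  "limit": 85},
]

def instantiate(template: dict, params: dict) -> TestCaseInstance:
    """Expand the placeholders of the generic template for one variant."""
    return TestCaseInstance(
        variant=params["variant"],
        precondition=template["precondition"].format(**params),
        action=template["action"].format(**params),
        expected=template["expected"].format(**params),
    )

concrete = [instantiate(TEMPLATE, p) for p in PARAMETERS]
```

In this sketch, a change to the template propagates to every variant on the next instantiation, and the parameter table makes the assigned variant per test case explicit, which is what the interviewees' parameterization and placeholder solutions aim at.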
6 Results of the descriptive survey
6.1 RQ3: Assessment of the identified challenges
6.1.1 M1: Availability problems with input artifacts
6.1.2 M2: Content-related problems with input artifacts
6.1.3 M3: Knowledge-related problems
6.1.4 M4: Test case description related problems
6.1.5 M5: Test case specification content-related problems
6.1.6 M6: Process-related problems
6.1.7 M7: Communication-related problems
6.1.8 M8: Quality assurance related problems
6.1.9 M9: Tool-related problems
6.1.10 Concluding remarks on the assessment of the identified challenges
# | Challenge | Freq. | Crit. |
---|---|---|---|
M1 | Availability problems with input artifacts | ||
C01 | Requirement specification does not exist | 0.848 | 3.500 |
C02 | Requirement specification does not exist in time | 1.303 | 3.250 |
C03 | Test plan does not exist | 2.148 | 3.250 |
C04 | Information and documents are distributed | 3.118 | 2.867 |
C05 | No access to relevant documents | 1.829 | 3.000 |
M2 | Content-related problems with input artifacts | ||
C06 | Requirement specification is outdated | 2.147 | 3.292 |
C07 | Incorrect requirements exist | 1.900 | 3.273 |
C08 | Conflicting requirements exist | 1.464 | 3.000 |
C09 | Obsolete requirements exist | 2.233 | 3.130 |
C10 | Incomplete requirements exist | 2.300 | 3.130 |
C11 | Unintelligible requirements exist | 2.281 | 2.852 |
C12 | Test plan is too general | 1.966 | 2.650 |
C13 | Test plan is too extensive | 1.355 | 3.000 |
C14 | Insufficient description of the test object | 1.310 | 3.071 |
C15 | Undefined test end criteria | 1.548 | 2.765 |
C16 | Template does not fit project-specific requirements | 1.935 | 2.667 |
C17 | Influences due to faulty previous test cases | 2.033 | 2.545 |
M3 | Knowledge-related problems | ||
C18 | Insufficient knowledge about the SUT | 1.972 | 3.417 |
C19 | Insufficient knowledge about available test platforms | 1.031 | 2.727 |
C20 | Insufficient knowledge about test platform functionalities | 1.563 | 2.588 |
C21 | Insufficient knowledge about testing guidelines | 1.758 | 3.150 |
C22 | Lack of training on test case specifications | 2.516 | 3.333 |
C23 | Lack of consulting/contact person | 1.813 | 3.294 |
C24 | Insufficient documentation of the template | 1.321 | 3.083 |
M4 | Test case description related problems | ||
C25 | Translation errors in test cases exist | 1.321 | 2.429 |
C26 | Spelling errors in test cases exist | 2.367 | 1.542 |
C27 | Several authors are working on a test case specification | 1.844 | 2.579 |
C28 | Phrasing of test cases in prose leads to misunderstandings | 1.242 | 2.714 |
C29 | Use of abbreviations impairs comprehensibility | 1.688 | 2.421 |
C30 | Use of undefined abbreviations impair comprehensibility | 1.813 | 2.714 |
C31 | Abstract test case description leads to inaccuracies in test cases | 1.909 | 3.174 |
C32 | Phrasing of a test case is not specific to the respective target test platform | 2.121 | 3.182 |
C33 | Typing errors in test cases | 2.313 | 1.704 |
C34 | Incomplete test cases exist | 2.171 | 3.269 |
C35 | Preconditions of a test case are not documented | 1.455 | 3.688 |
C36 | Actions of a test case are not documented | 0.909 | 3.417 |
C37 | Expected results of a test case are not documented | 1.091 | 3.462 |
C38 | Test purpose of a test case is not documented | 1.926 | 2.000 |
C39 | Prioritization for a test case is not documented | 2.393 | 1.850 |
C40 | Model series is not documented for a test case | 1.259 | 2.750 |
C41 | Test platform is not documented for a test case | 1.367 | 2.571 |
C42 | Origin of a test case cannot be traced | 2.069 | 2.333 |
C43 | Procedure used to determine the test case is not documented | 2.263 | 2.000 |
C44 | Test case description is incomprehensible | 2.242 | 3.423 |
C45 | Sequence of test case actions to be executed is unclear | 1.117 | 2.786 |
C46 | Ambiguities in test cases exist | 1.735 | 2.727 |
C47 | A test case tests too many functions | 1.625 | 2.556 |
C48 | A test case has too many linked requirements | 1.720 | 2.500 |
C49 | Poor readability/comprehensibility of a test case | 2.118 | 2.885 |
C50 | Test case contains too many parameters | 1.438 | 2.438 |
C51 | Test case already contains implementation details | 1.034 | 1.909 |
C52 | Test cases are not reused | 1.269 | 2.455 |
C53 | Experience-based test cases are not reused | 1.577 | 3.077 |
C54 | Changes to a test case are time-consuming | 2.250 | 2.950 |
C55 | Test case is not consistent with requirements | 1.750 | 3.000 |
M5 | Test case specification content-related problems | ||
C56 | Insufficient handling of variants in the test case specification | 2.269 | 3.059 |
C57 | Many different model series in one test case specification | 3.000 | 2.500 |
C58 | Only test cases for a specific test platform are contained | 1.719 | 2.353 |
M6 | Process-related problems | ||
C59 | Guidelines are missing | 2.000 | 3.368 |
C60 | Templates for test reports are missing | 2.067 | 3.118 |
C61 | Guidelines for handling the tools/tool chains are missing | 1.931 | 3.588 |
C62 | A defined reference process for testing is missing | 1.966 | 3.467 |
C63 | Unclear interface definition to the overall process | 2.933 | 3.500 |
C64 | Lack of change management leads to ignorance of changes | 2.879 | 3.414 |
C65 | Test process does not fit the project/system | 1.742 | 3.353 |
C66 | Test manager is vacant | 2.385 | 3.625 |
C67 | Responsibilities and roles are unclear | 2.061 | 3.556 |
C68 | Increasing outsourcing leads to knowledge gaps | 2.742 | 4.045 |
C69 | Increasing outsourcing leads to increased effort | 3.067 | 3.870 |
C70 | Increasing outsourcing means that test execution takes longer than in-house | 2.962 | 4.000 |
M7 | Communication-related problems | ||
C71 | Communication problems with the supplier | 2.655 | 3.640 |
C72 | Difficult communication due to communication via representatives | 2.765 | 3.400 |
C73 | Communication problems due to distributed teams | 2.333 | 3.000 |
C74 | Reduced communication due to spatial distance | 2.676 | 3.192 |
C75 | Communication problems due to cultural differences | 1.839 | 3.235 |
C76 | Communication problems due to language barriers | 1.794 | 3.263 |
C77 | Different expectations lead to misunderstandings | 2.471 | 3.154 |
M8 | Quality assurance related problems | ||
C78 | Quality characteristics for a test case specification are unknown | 2.655 | 3.300 |
C79 | Established metrics for quality assessment are missing | 2.609 | 3.533 |
C80 | Requirement coverage is the only known quality metric | 3.150 | 3.267 |
C81 | Tools used do not generate quality reports | 2.652 | 2.625 |
C82 | Company guidelines for quality measurement are missing | 2.391 | 3.231 |
C83 | Lack of established review processes | 2.800 | 3.389 |
C84 | Shortage of manpower to conduct reviews | 3.406 | 3.370 |
C85 | Test case specification is too large for conducting a full review | 2.556 | 3.400 |
C86 | Lack of time to conduct reviews | 3.343 | 3.567 |
M9 | Tool-related problems | ||
C87 | Heterogeneous tools are used | 3.188 | 2.815 |
C88 | Tools used are complex and functions are unknown | 2.333 | 3.045 |
C89 | Usability deficiencies | 2.125 | 3.150 |
C90 | Non-continuous tool chains (interface problems) exist | 2.844 | 3.240 |
C91 | Project-specific adaptations of the template lead to problems | 2.179 | 3.222 |
C92 | Deficiencies with the tool support | 1.586 | 3.769 |