A test plan is a document detailing the objectives, resources, and processes for a specific test session for a software or hardware product. The plan typically contains a detailed understanding of the eventual workflow.
Test plans
A test plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements. A test plan is usually prepared by or with significant input from test engineers.
Depending on the product and the responsibility of the organization to which the test plan applies, a test plan may include a strategy for one or more of the following:
- Design verification or compliance test – to be performed during the development or approval stages of the product, typically on a small sample of units.
- Manufacturing test or production test – to be performed during preparation or assembly of the product in an ongoing manner for purposes of performance verification and quality control.
- Acceptance test or commissioning test – to be performed at the time of delivery or installation of the product.
- Service and repair test – to be performed as required over the service life of the product.
- Regression test – to be performed on an existing operational product, to verify that existing functionality was not negatively affected when other aspects of the environment were changed (e.g., upgrading the platform on which an existing application runs).
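In software, regression checks of this kind are usually automated so they can be re-run after every platform or dependency change. Below is a minimal sketch using Python's built-in unittest module; the calculate_total function is a hypothetical stand-in for existing application code, not part of any real library.

```python
import unittest


def calculate_total(net: float, tax_rate: float) -> float:
    """Stand-in for existing application code under regression test."""
    return round(net * (1 + tax_rate), 2)


class RegressionTotals(unittest.TestCase):
    """Guards previously verified behaviour so that platform or
    dependency upgrades cannot change it unnoticed."""

    def test_total_with_tax_unchanged(self):
        # Expected value captured from the last verified release.
        self.assertEqual(calculate_total(net=100.0, tax_rate=0.2), 120.0)

    def test_empty_order_still_returns_zero(self):
        self.assertEqual(calculate_total(net=0.0, tax_rate=0.2), 0.0)


if __name__ == "__main__":
    unittest.main()
```

Re-running such a suite after an environment change gives a quick, repeatable signal that existing functionality still behaves as before.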
A complex system may have a high-level test plan to address the overall requirements and supporting test plans to address the design details of subsystems and components.
Test plan document formats can be as varied as the products and organizations to which they apply. There are three major elements that should be described in the test plan: test coverage, test methods, and test responsibilities. These are also used in a formal test strategy.
Test coverage
Test coverage in the test plan states which requirements will be verified during which stages of the product's life. Test coverage is derived from design specifications and other requirements, such as safety standards or regulatory codes; ideally, each requirement or specification of the design has one or more corresponding means of verification. Test coverage for different product life stages may overlap but will not necessarily be exactly the same for all stages. For example, some requirements may be verified during design verification test but not repeated during acceptance test. Test coverage also feeds back into the design process, since the product may have to be designed to allow test access.
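One way to make this mapping explicit is a small traceability structure that links each requirement to its means of verification per stage and flags anything left uncovered. The following is a rough sketch in Python; the requirement IDs, stage names, and test names are illustrative assumptions.

```python
# Minimal requirements-to-verification traceability sketch.
# Requirement IDs, stages, and test names are hypothetical examples.
coverage_matrix = {
    "REQ-001 (max operating temperature)": {
        "design verification": ["thermal_chamber_soak_test"],
        "acceptance": [],  # verified during design verification only
    },
    "REQ-002 (login rejects bad password)": {
        "design verification": ["test_login_rejects_bad_password"],
        "acceptance": ["test_login_rejects_bad_password"],
    },
}

# Flag requirements that have no means of verification at any stage.
uncovered = [
    req for req, stages in coverage_matrix.items()
    if not any(tests for tests in stages.values())
]
print("Requirements without verification:", uncovered)
```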
Test methods
Test methods in the test plan state how test coverage will be implemented. Test methods may be determined by standards, regulatory agencies, or contractual agreement, or may have to be created new. Test methods also specify test equipment to be used in the performance of the tests and establish pass/fail criteria. Test methods used to verify hardware design requirements can range from very simple steps, such as visual inspection, to elaborate test procedures that are documented separately.
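When a test method is automated, the pass/fail criterion can be written directly into the check. The sketch below illustrates the idea; the measured quantity, the reading function, and the limits are assumptions for illustration, not taken from any particular specification.

```python
# Sketch of an automated test method with an explicit pass/fail criterion.

def read_output_voltage() -> float:
    """Stand-in for a real instrument reading (e.g. via a meter driver)."""
    return 5.03


def test_output_voltage_within_tolerance():
    nominal, tolerance = 5.0, 0.1  # pass/fail criterion: 5.0 V +/- 0.1 V
    measured = read_output_voltage()
    assert abs(measured - nominal) <= tolerance, (
        f"Output voltage {measured} V outside {nominal} +/- {tolerance} V"
    )


if __name__ == "__main__":
    test_output_voltage_within_tolerance()
    print("PASS")
```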
Test responsibilities
Test responsibilities specify which organizations will perform the test methods at each stage of the product's life. This allows test organizations to plan, acquire, or develop the test equipment and other resources necessary to implement the test methods for which they are responsible. Test responsibilities also include what data will be collected and how that data will be stored and reported (often referred to as "deliverables"). One outcome of a successful test plan should be a record or report of the verification of all design specifications and requirements as agreed upon by all parties.
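As a rough illustration of the data-collection side, each executed test can be captured as a small structured record that feeds the final verification report. The field names below are illustrative assumptions, not part of any standard.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class TestRecord:
    """One row of the verification report delivered at the end of testing."""
    requirement_id: str   # requirement or specification being verified
    test_method: str      # how it was verified (inspection, test, analysis)
    responsible_org: str  # organization that performed the test
    stage: str            # e.g. design verification, production, acceptance
    result: str           # "pass" or "fail"
    executed_on: date = field(default_factory=date.today)


record = TestRecord(
    requirement_id="REQ-002",
    test_method="automated functional test",
    responsible_org="QA group",
    stage="acceptance",
    result="pass",
)
print(record)
```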
IEEE 829 test plan structure
IEEE 829-2008, also known as the 829 Standard for Software Test Documentation, is an IEEE standard that specifies the form of a set of documents for use in defined stages of software testing, each stage potentially producing its own separate type of document. For the test plan itself, the standard prescribes the following sections:
- Test plan identifier
- Introduction
- Test items
- Features to be tested
- Features not to be tested
- Approach
- Item pass/fail criteria
- Suspension criteria and resumption requirements
- Test deliverables
- Testing tasks
- Environmental needs
- Responsibilities
- Staffing and training needs
- Schedule
- Risks and contingencies
- Approvals
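As a sketch only, this outline can be captured as a simple data structure and used to scaffold an empty plan document. The Markdown-style headings produced here are an arbitrary choice for illustration, not part of the standard.

```python
# Sketch: scaffold an empty IEEE 829-style test plan from the outline above.
IEEE_829_SECTIONS = [
    "Test plan identifier", "Introduction", "Test items",
    "Features to be tested", "Features not to be tested", "Approach",
    "Item pass/fail criteria",
    "Suspension criteria and resumption requirements",
    "Test deliverables", "Testing tasks", "Environmental needs",
    "Responsibilities", "Staffing and training needs", "Schedule",
    "Risks and contingencies", "Approvals",
]


def scaffold_plan(title: str) -> str:
    """Return a skeleton document with one empty section per heading."""
    lines = [f"# {title}", ""]
    for section in IEEE_829_SECTIONS:
        lines += [f"## {section}", "", "TBD", ""]
    return "\n".join(lines)


print(scaffold_plan("Example test plan"))
```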
The IEEE documents that suggest what should be contained in a test plan are:
- 829-2008 IEEE Standard for Software and System Test Documentation
- 829-1998 IEEE Standard for Software Test Documentation (superseded by 829-2008)
- 829-1983 IEEE Standard for Software Test Documentation (superseded by 829-1998)
- 1008-1987 IEEE Standard for Software Unit Testing
- 1012-2004 IEEE Standard for Software Verification and Validation
- 1012-1998 IEEE Standard for Software Verification and Validation (superseded by 1012-2004)
- 1012-1986 IEEE Standard for Software Verification and Validation Plans (superseded by 1012-1998)
- 1059-1993 IEEE Guide for Software Verification & Validation Plans (withdrawn)
See also
- Software testing
- Test suite
- Test case
- Test script
- Scenario testing
- Session-based testing
- IEEE 829
- Ad hoc testing
References
- Dale, Nell; Weems, Chip; Richards, Tim (2022-07-15). Programming and Problem Solving with C++. Jones & Bartlett Learning. ISBN 978-1-284-15732-1.
- Laganà, Antonio; Gavrilova, Marina L.; Kumar, Vipin; Mun, Youngsong; Gervasi, Osvaldo; Tan, C. J. Kenneth (2004-05-07). Computational Science and Its Applications -- ICCSA 2004: International Conference, Assisi, Italy, May 14-17, 2004, Proceedings. Springer Science & Business Media. ISBN 978-3-540-22054-1.
- 829-2008 — IEEE Standard for Software and System Test Documentation. 2008. doi:10.1109/IEEESTD.2008.4578383. ISBN 978-0-7381-5747-4.
- 829-1998 — IEEE Standard for Software Test Documentation. 1998. doi:10.1109/IEEESTD.1998.88820. ISBN 0-7381-1443-X.
- 829-1983 — IEEE Standard for Software Test Documentation. 1983. doi:10.1109/IEEESTD.1983.81615. ISBN 0-7381-1444-8.
- 1008-1987 - IEEE Standard for Software Unit Testing. 1986. doi:10.1109/IEEESTD.1986.81001. ISBN 0-7381-0400-0.
- 1012-2004 - IEEE Standard for Software Verification and Validation. 2005. doi:10.1109/IEEESTD.2005.96278. ISBN 978-0-7381-4642-3.
- 1012-1998 - IEEE Standard for Software Verification and Validation. 1998. doi:10.1109/IEEESTD.1998.87820. ISBN 0-7381-0196-6.
- 1012-1986 - IEEE Standard for Software Verification and Validation Plans. 1986. doi:10.1109/IEEESTD.1986.79647. ISBN 0-7381-0401-9.
- 1059-1993 - IEEE Guide for Software Verification and Validation Plans. 1994. doi:10.1109/IEEESTD.1994.121430. ISBN 0-7381-2379-X.
External links
- Public domain RUP test plan template at Sourceforge (templates are currently inaccessible but sample documents can be seen here: DBV Samples)