A Brief Essay on Software Testing
Antonia Bertolino, Eda Marchetti
Abstract— Testing is an important and critical part of the software development process, on which the quality and reliability of the
delivered product strictly depend. Testing is not limited to the detection of “bugs” in the software, but also increases confidence in its
proper functioning and assists with the evaluation of functional and nonfunctional properties. Testing related activities encompass
the entire development process and may consume a large part of the effort required for producing software. In this chapter we
provide a comprehensive overview of software testing, from its definition to its organization, from test levels to test techniques, from
test execution to the analysis of test cases effectiveness. Emphasis is more on breadth than depth: due to the vastness of the topic,
in the attempt to be all-embracing, for each covered subject we can only provide a brief description and references useful for further
reading.
Index Terms — D.2.4 Software/Program Verification, D.2.5 Testing and Debugging.
——————————
——————————
1. INTRODUCTION
Testing is a crucial part of the software life cycle, and recent trends in software engineering evidence the importance of this activity all along the development process. Testing activities must start as early as the requirements specification stage, with the advance planning of test strategies and procedures, and propagate down, with the derivation and refinement of test cases, through the various development steps up to the code-level stage, at which the test cases are eventually executed, and even after deployment, with the logging and analysis of operational usage data and customer-reported failures.
Testing is a challenging activity that involves several demanding tasks. At the forefront is the task of deriving an adequate suite of test cases, according to a feasible and cost-effective test selection technique. However, test selection is just a starting point, and many other critical tasks confront test practitioners with technical and conceptual difficulties (which are certainly under-represented in the literature): the ability to launch the selected tests (in a controlled host environment, or worse, in the tight target environment of an embedded system); deciding whether the test outcome is acceptable or not (which is referred to as the test oracle problem); if it is not, evaluating the impact of the failure and finding its direct cause (the fault) and its indirect one (via Root Cause Analysis); and judging whether testing is sufficient and can be stopped, which in turn requires having at hand measures of the effectiveness of the tests. One by one, each of these tasks presents tough challenges to testers, for whom skill and expertise always remain of topmost importance.
We provide here a short, yet comprehensive overview of
the testing discipline, spanning test levels, test techniques and test activities. In an attempt to cover all testing-related issues, we can only briefly expand on each topic; however, plenty of references are provided throughout for further reading. The remainder of the chapter is organized as follows: we present some basic concepts
in Section 2, and the different types of test (static and dy-
namic) with the objectives characterizing the testing activity
in Section 3. In Section 4 we focus on the test levels (unit,
integration and system test) and in Section 5 we present the
techniques used for test selection. Test design, execution, documentation, and management are then described in Sections 6, 7, 8 and 9, respectively. Test measurement issues
are discussed in Section 10 and finally the chapter conclu-
sions are drawn in Section 11.
2. TERMINOLOGY AND BASIC CONCEPTS
Before delving into testing techniques, we provide here some introductory notions of testing terminology and basic concepts.
2.1 On the nature of the testing discipline
As we will see in the remainder of this chapter, there exist many types of testing and many test strategies; however, all of them share the same ultimate purpose: increasing the software engineer's confidence in the proper functioning of the software.
Towards this general goal, a piece of software can be tested
to achieve various more direct objectives, all meant in fact
to increase confidence, such as exposing potential design
flaws or deviations from user’s requirements, measuring
the operational reliability, evaluating the performance
characteristics, and so on (we further expand on test objec-
tives in Section 3.3); to serve each specific objective, differ-
ent techniques can be adopted.
Generally speaking, test techniques can be divided into two
classes:
• Static analysis techniques (expanded in Section 3.1),
where the term “static” does not refer to the techniques
themselves (they can use automated analysis tools), but
————————————————
• Antonia Bertolino is with the Istituto di Scienza e Tecnologie “A. Faedo”
Area della ricerca CRD di Pisa, Via Moruzzi 1, 56124 Pisa Italy.
E-mail: antonia.bertolino@isti.cnr.it.
• Eda Marchetti is with the Istituto di Scienza e Tecnologie “A. Faedo” Area
della ricerca CRD di Pisa, Via Moruzzi 1, 56124 Pisa Italy.
E-mail: eda.marchetti@isti.cnr.it.
is used to mean that they do not involve the execution
of the tested system. Static techniques are applicable
throughout the lifecycle to the various developed arti-
facts for different purposes, such as to check the adher-
ence of the implementation to the specifications or to
detect flaws in the code via inspection or review.
• Dynamic analysis techniques (further discussed in Sec-
tion 3.2), which exercise the software in order to expose
possible failures. The behavioral and performance
properties of the program are also observed.
Static and dynamic analyses are complementary techniques
[1]: the former yield generally valid results, but they may
be weak in precision; the latter are efficient and provide more precise results, which however hold only for the examined executions. The focus of this chapter is mainly on dynamic test techniques, and where not otherwise specified, testing is used as a synonym for "dynamic testing".
Unfortunately, there are few mathematical certainties on which the foundations of software testing can rest. The firmest one, as everybody now recognizes, is that even after the successful completion of an extensive testing campaign, the software can still contain faults. As first stated by Dijkstra some thirty years ago [22], testing can never prove the absence of defects; it can only reveal the presence of faults by provoking malfunctions. In the intervening decades, much progress has been made both in our knowledge
of how to scrutinize a program’s executions in rigorous and
systematic ways, and in the development of tools and proc-
esses that can support the tester’s tasks.
Yet, the more the discipline progresses, the clearer it be-
comes that it is only by means of rigorous empirical studies
that software testing can increase its maturity level [35].
Testing is in fact an engineering discipline, and as such it calls for evidence and proven facts, collected either from experience or from controlled experiments and still largely lacking today, on which testers can base their predictions and decisions.
2.2 A general definition
Testing can refer to many different activities used to check
a piece of software. As said, we focus primarily on "dynamic" software testing, which presupposes code execution, and for which we restate the following general definition introduced in [9]:
Software testing consists of the dynamic verification of the behav-
ior of a program on a finite set of test cases, suitably selected from
the usually infinite executions domain, against the specified ex-
pected behavior.
This short definition attempts to include all essential testing
concerns: the term dynamic means, as said, that testing im-
plies executing the program on (valued) inputs; finite indi-
cates that only a limited number of test cases can be exe-
cuted during the testing phase, chosen from the whole test
set, that can generally be considered infinite; selected refers
to the test techniques adopted for selecting the test cases
(and testers must be aware that different selection criteria
may yield vastly different effectiveness); expected points to the decision process adopted for establishing whether the observed outcomes of program execution are acceptable or not.
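This definition can be made concrete with a minimal sketch (the program under test, the selected inputs and the expected outcomes are all hypothetical, chosen only to illustrate the terms dynamic, finite, selected and expected):

```python
def program_under_test(x):
    """Hypothetical program under test: integer square root by linear search."""
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r

# A finite set of test cases, suitably selected from the (practically
# infinite) input domain, each paired with the specified expected behavior.
test_cases = [(0, 0), (1, 1), (8, 2), (9, 3), (10000, 100)]

def run_tests(program, cases):
    """Dynamic verification: execute the program on each valued input and
    compare the observed outcome against the specified expected one."""
    return [(x, program(x), expected)
            for x, expected in cases
            if program(x) != expected]

failures = run_tests(program_under_test, test_cases)
print(failures)  # → [] : no failure observed on the selected test cases
```

Passing this finite sample, of course, says nothing conclusive about the infinitely many inputs left untried.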
2.3 Fault vs. Failure
To fully understand the facets of software testing, it is important to clarify the terms "fault", "error"¹ and "failure": indeed, although their meanings are strictly related, there are important distinctions between these three concepts.
A failure is the manifested inability of the program to per-
form the function required, i.e., a system malfunction evi-
denced by incorrect output, abnormal termination or unmet
time and space constraints. The cause of a failure, e.g., a
missing or incorrect piece of code, is a fault. A fault may remain undetected for a long time, until some event activates it. When this happens, it first brings the program into an intermediate unstable state, called an error, which, if and when it propagates to the output, eventually causes the failure. The
process of failure manifestation can be therefore summed
up into a chain [42]:
Fault→Error→Failure
which can recursively iterate: a fault in turn can be caused
by the failure of some other interacting system.
In any case, what testing reveals are failures, and a subsequent analysis stage is needed to identify the faults that caused them.
The notion of a fault however is ambiguous and difficult to
grasp, because no precise criteria exist to definitively de-
termine the cause of an observed failure. It would be preferable to speak of failure-causing inputs, that is, those sets of inputs that, when exercised, can result in a failure.
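The chain, and the notion of failure-causing inputs, can be made concrete with a deliberately faulty, hypothetical function:

```python
def absolute_value(x):
    """Hypothetical faulty implementation of |x|."""
    # FAULT: the boundary condition is wrong; it should read  x < 0
    if x < -1:
        return -x
    return x

# The fault lies dormant on most inputs: these executions do not activate it.
assert absolute_value(5) == 5
assert absolute_value(-7) == 7

# The input -1 activates the fault: the wrong branch is taken (the error,
# an unstable internal state), which propagates to the output as a failure.
print(absolute_value(-1))  # → -1, whereas the expected output is 1
```

For this program the failure-causing inputs form exactly the set {-1}: testing reveals the failure only if a test case happens to exercise that set.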
2.4 The notion of software reliability
Indeed, whether few or many, some faults will inevitably
escape testing and debugging. However, a fault can be
more or less disturbing depending on whether, and how
frequently, it will eventually show up to the final user (and
depending of course on the seriousness of its conse-
quences).
So, in the end, one measure which is important in deciding
whether a software product is ready for release is its reli-
ability. Strictly speaking, software reliability is a probabilistic
estimate, and measures the probability that the software
will execute without failure in a given environment for a
given period of time [44]. Thus, the value of software reliabil-
ity depends on how frequently those inputs that cause a
failure will be exercised by the final users.
Estimates of software reliability can be produced via test-
ing. To this purpose, since the notion of reliability is specific
to “a given environment”, the tests must be drawn from an
input distribution that approximates as closely as possible
the future usage in operation, which is called the operational
distribution.
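As a sketch of the idea (the system's failure region and the operational profile are invented purely for illustration), a reliability estimate can be obtained by executing the program on inputs sampled from an approximation of the operational distribution:

```python
import random

random.seed(42)  # fixed seed for a reproducible estimate

# Hypothetical system: it fails exactly on inputs 990..999.
def runs_without_failure(x):
    return not (990 <= x <= 999)

# Approximation of the operational distribution: in this invented profile,
# final users draw inputs uniformly from 0..999.
def operational_input():
    return random.randint(0, 999)

# Reliability estimate: fraction of operationally drawn runs with no failure.
n = 100_000
successes = sum(runs_without_failure(operational_input()) for _ in range(n))
reliability = successes / n
print(round(reliability, 2))  # ≈ 0.99: 1% of operational inputs cause a failure
```

Note that the same faults would yield a very different reliability under a different usage profile, which is precisely why the estimate is tied to "a given environment".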
¹ Note that we are using the term "error" with the commonly used meaning within the Software Dependability community [42], which is stricter than its general definition in [28].
3. TYPES OF TESTS
The single term testing actually refers to a full range of test techniques, some quite different from one another, and embraces a variety of aims.
3.1 Static Techniques
As said, a coarse distinction can be made between dynamic
and static techniques, depending on whether the software
is executed or not. Static techniques are based solely on the
(manual or automated) examination of project documenta-
tion, of software models and code, and of other related in-
formation about requirements and design. Thus static tech-
niques can be employed all along development, and their
earlier usage is of course highly desirable. Considering a
generic development process, they can be applied [49]:
• at the requirements stage for checking language syntax,
consistency and completeness as well as the adherence
to established conventions;
• at the design phase for evaluating the implementation
of requirements, and detecting inconsistencies (for in-
stance between the inputs and outputs used by high-level modules and those adopted by sub-modules);
• during the implementation phase for checking that the
form adopted for the implemented products (e.g., code
and related documentation) adheres to the established
standards or conventions, and that interfaces and data
types are correct.
Traditional static techniques include [7], [50]:
• Software inspection: the step-by-step analysis of the
documents (deliverables) produced, against a compiled
checklist of common and historical defects.
• Software reviews: the process by which different aspects
of the work product are presented to project personnel
(managers, users, customer etc) and other interested
stakeholders for comment or approval.
• Code reading: the desktop analysis of the produced code
for discovering typing errors that do not violate style or
syntax.
• Algorithm analysis and tracing: the process by which the complexity of the employed algorithms is analyzed and worst-case, average-case and probabilistic evaluations can be derived.
The processes implied by the above techniques are heavily
manual, error-prone, and time consuming. To overcome
these problems, researchers have proposed static analysis
techniques relying on the use of formal methods [19]. The
goal is to automate as much as possible the verification of
the properties of the requirements and the design. Towards
this goal, it is necessary to adopt a rigorous and unambiguous formal language for specifying the requirements and
the software architecture. In fact, if the language used for
specification has a well-defined semantics, algorithms and
tools can be developed to analyze the statements written in
that language.
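As a small flavor of such tool-supported analysis (applied here to ordinary Python code rather than a formal specification language, and checking an invented checklist rule), a static check can be written with Python's standard ast module, which examines source text without ever executing it:

```python
import ast

def find_bare_excepts(source):
    """Statically flag 'except:' clauses that name no exception type,
    a common code-inspection checklist item. The source text is parsed
    and analyzed; it is never executed."""
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

code = """\
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(code))  # → [3]: a bare except on line 3
```

Because the analysis works on the syntax tree alone, it can run on code that is incomplete or not yet executable, exactly the property that makes static techniques applicable so early in development.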
The basic idea of using a formal language for modeling re-
quirements or design is now universally recognized as a
foundation for software verification. Formal verification tech-
niques are today attracting a great deal of attention from both research institutions and industry, and it is foreseeable that proofs of correctness will be increasingly applied, especially for the verification of critical systems.
One of the most promising approaches for formal verifica-
tion is model checking [18]. Essentially, a model checking tool takes as input a model (a description of the system's functional requirements or design) and a property that the system is expected to satisfy, and systematically explores the model's state space to check whether the property holds.
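The idea can be sketched in miniature (the model, a modulo-4 counter, and the properties checked are invented): enumerate the states reachable from the initial one and verify the property in each.

```python
from collections import deque

# Hypothetical model: a modulo-4 counter with "increment" and "reset" actions.
def transitions(state):
    yield (state + 1) % 4  # increment
    yield 0                # reset

def check_invariant(initial, holds):
    """Breadth-first exploration of the reachable state space; returns a
    counterexample state if the property fails somewhere, else None."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not holds(state):
            return state  # property violated: counterexample found
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None  # property holds in every reachable state

print(check_invariant(0, lambda s: s < 4))   # → None: the invariant holds
print(check_invariant(0, lambda s: s != 3))  # → 3: a violating state
```

Real model checkers face state spaces astronomically larger than this one, which is why the state-explosion problem dominates research in the field.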
Midway between static and dynamic analysis techniques is symbolic execution [38], which executes a program
by replacing variables with symbolic values.
Quite recently, the automated generation of test data for coverage testing has again been attracting a lot of interest, and advanced tools are being developed based on an approach similar to symbolic execution, exploiting constraint solving techniques [3]. A flowgraph path to be covered is translated
into a path constraint, whose solution provides the desired
input data.
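The scheme can be illustrated with a deliberately simple sketch (the program path and its constraint are invented, and exhaustive search over a small domain stands in for a real constraint solver):

```python
def path_constraint(x, y):
    """Constraint for one path of a hypothetical program: the path taken
    when the branch conditions (x > 5) and (x + y == 10) are both true."""
    return x > 5 and x + y == 10

def solve(constraint, domain):
    """Stand-in for a constraint solver: exhaustive search over a finite
    input domain for a pair satisfying the path constraint."""
    for x in domain:
        for y in domain:
            if constraint(x, y):
                return x, y  # test data driving execution down the path
    return None  # the path is infeasible over this domain

print(solve(path_constraint, range(-10, 11)))  # → (6, 4)
```

A None result over the whole input domain would signal an infeasible path, another classical difficulty of coverage-directed test generation.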
We conclude this section by considering an alternative application of static techniques: producing values of interest for controlling and managing the testing process. Different estimates can be obtained by observing specific properties of present or past products, and/or parameters of the development process.
3.2 Dynamic Techniques
Dynamic techniques [1] obtain information of interest about
a program by observing some executions. Standard dy-
namic analyses include testing (on which we focus in the
rest of the chapter) and profiling. Essentially a program pro-
file records the number of times some entities of interest
occur during a set of controlled executions. Profiling tools
are increasingly used today to derive measures of coverage,
for instance in order to dynamically identify control flow
invariants, as well as measures of frequency, called spectra,
which are diagrams providing the relative execution fre-
quencies of the monitored entities. In particular, path spectra
refer to the distribution of (loop-free) paths traversed dur-
ing program profiling. Specific dynamic techniques also
include simulation, sizing and timing analysis, and proto-
typing [49].
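A basic profile of the kind just described can be sketched with Python's sys.settrace hook (the profiled function is an arbitrary example; the entities of interest here are source lines, counted over a set of controlled executions):

```python
import sys
from collections import Counter

def collatz_steps(n):
    """An arbitrary function to be profiled."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def profile(func, inputs):
    """Record how many times each line of func executes over a set of
    controlled executions: a simple program profile."""
    counts = Counter()
    def tracer(frame, event, arg):
        if event == 'line' and frame.f_code is func.__code__:
            counts[frame.f_lineno] += 1
        return tracer
    sys.settrace(tracer)
    try:
        for x in inputs:
            func(x)
    finally:
        sys.settrace(None)  # always restore normal (untraced) execution
    return counts

hits = profile(collatz_steps, [1, 6, 7])
print(hits)  # line number → execution count: the loop body dominates
```

Lines with a count of zero in such a profile are exactly the ones a coverage measure would report as unexercised.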
Testing proper is based on the execution of the code on valued inputs. Of course, although the set of input values can be considered infinite, those that can actually be run during testing are finite. In practice it is impossible, given the limitations of the available budget and time, to exhaustively exercise every input of a specific set, even when that set is not infinite. In other words, by testing we observe
some samples of the program’s behavior.
A test strategy therefore must be adopted to find a trade-off
between the number of chosen inputs and overall time and
effort dedicated to testing purposes. Different techniques
can be applied depending on the target and the effect that
should be reached. We will describe test selection strategies
in Section 5.
In the case of concurrent, non-deterministic systems, the
results obtained by testing depend not only on the input
provided but also on the state of the system. Therefore,
when speaking about test input values, it is implied that the