Static vs Dynamic Anomaly Detection
Static analysis is analysis done on source code without actually executing it.
Example: Detecting a syntax error in source code is a static analysis result.
Dynamic analysis is done on the fly, as the program is being executed, and is based on intermediate values that result from the program's execution.
Example: A division-by-zero warning is a dynamic analysis result.
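The distinction can be sketched in Python (an illustrative choice; the article names no language): the parser catches a syntax error without running any code, while a division by zero surfaces only when the offending line actually executes.

```python
# Static: compile() parses the source without executing it,
# so the syntax error is reported before any code runs.
try:
    compile("x = (1 +", "<example>", "exec")
except SyntaxError as e:
    static_finding = f"static: {e.msg}"

# Dynamic: the parser accepts this code without complaint;
# the fault appears only when the statement is executed.
code = compile("y = 1 / 0", "<example>", "exec")
try:
    exec(code)
except ZeroDivisionError:
    dynamic_finding = "dynamic: division by zero"

print(static_finding)
print(dynamic_finding)
```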
If a problem, such as a data flow anomaly, can be detected by static analysis methods, then it does not belong in testing; it belongs in the language processor.
Current language processors actually perform a good deal of static analysis for data flow and for data flow anomalies.
Example: Language processors that enforce variable declarations can detect (-u) and (ku) anomalies.
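The kind of static check such a processor performs can be sketched as a toy use-before-definition scanner. This is a minimal sketch, not a real language processor: it handles only straight-line assignments (no control flow, no builtins), which is just enough to show a (-u) anomaly being found without running the code.

```python
import ast

def find_use_before_def(source):
    """Flag names loaded before any assignment in straight-line code.

    A toy static -u anomaly detector: it never executes `source`,
    it only inspects the parse tree statement by statement.
    """
    defined, anomalies = set(), []
    for stmt in ast.parse(source).body:
        # First, check every name this statement reads...
        for node in ast.walk(stmt):
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load):
                if node.id not in defined:
                    anomalies.append(node.id)
        # ...then record the names it defines.
        for node in ast.walk(stmt):
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
                defined.add(node.id)
    return anomalies

# 'c' is used on line 2 but not defined until line 3: a static -u finding.
print(find_use_before_def("a = 1\nb = a + c\nc = 2"))  # ['c']
```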
Why is static analysis not enough? There are many things for which current notions of static analysis are inadequate. They are:
Dead Variables: Although it is often possible to
prove that a variable is dead or alive at a given point in the program, the
general problem is unsolvable
Arrays:
Arrays are problematic. The array is defined or killed as a single object, but references are to specific locations within the array.
Usually array pointers are dynamically calculated, so there is no way for a static analysis to validate the pointer value.
In many languages, dynamically allocated arrays contain garbage unless explicitly initialized, and therefore -u anomalies are possible.
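The array difficulty can be sketched by pairing an array with a shadow "defined" map, an assumed technique used here purely for illustration: because the index is computed at run time, only a dynamic check can tell whether the referenced element was ever defined.

```python
def make_array(n):
    values = [None] * n      # contents start as "garbage"
    defined = [False] * n    # shadow map: which slots have been assigned
    return values, defined

def store(values, defined, i, v):
    values[i] = v
    defined[i] = True

def load(values, defined, i):
    # A dynamic -u anomaly check: impossible to do statically,
    # because i is only known once the program runs.
    if not defined[i]:
        raise RuntimeError(f"-u anomaly: element {i} used before definition")
    return values[i]

values, defined = make_array(4)
store(values, defined, 2, 99)
print(load(values, defined, 2))   # fine: slot 2 was defined

i = (7 * 3) % 4                   # index computed at run time (= 1)
try:
    load(values, defined, i)      # slot 1 was never defined
except RuntimeError as e:
    print(e)
```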
Records and pointers:
The array problem and the difficulty with pointers are special cases of multi-part data structures.
We have the same problems with records and the pointers to them.
Also, in many applications we create files and their names dynamically, and there is no way to determine, without execution, whether such objects are in the proper state on a given path or, for that matter, whether they exist at all.
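The dynamic-file-name problem can be sketched as follows; the naming scheme and the `report_path` helper are hypothetical, invented for this example. The name is assembled at run time, so only execution can reveal whether the object exists on this path.

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()   # fresh directory, so no leftover files

def report_path(region, day):
    # Hypothetical naming scheme: the file name is constructed at run time,
    # so no static analysis can know which file this path refers to.
    return os.path.join(tmpdir, f"report_{region}_{day}.txt")

path = report_path("east", 3)
exists_before = os.path.exists(path)   # only a runtime check can answer this
with open(path, "w") as f:             # "define" the object on this path
    f.write("ok")
exists_after = os.path.exists(path)
print(exists_before, exists_after)     # False True
```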
Dynamic Subroutines and Function Names in a Call:
The subroutine or function name is a dynamic variable in a call.
What is passed, or a combination of subroutine names and data objects, is constructed on a specific path.
There is no way, without executing the path, to determine whether the call is correct.
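A dynamic call can be sketched with a dispatch table; the handler names here are illustrative assumptions. The function name is itself data, computed on a particular path, so a bad name only shows up when that path executes and reaches the call site.

```python
# Hypothetical handlers for the sketch.
handlers = {
    "open": lambda obj: f"opened {obj}",
    "close": lambda obj: f"closed {obj}",
}

def dispatch(command, obj):
    name = command.strip().lower()   # the callee's name is computed at run time
    if name not in handlers:         # a dynamic check; static analysis can't
        raise ValueError(f"no handler named {name!r}")  # see which name arrives
    return handlers[name](obj)

print(dispatch("OPEN", "valve"))     # opened valve
try:
    dispatch("opne", "valve")        # the typo is only caught on this path
except ValueError as e:
    print(e)
```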
False Anomalies:
Anomalies are specific to paths
Even a "clear bug" such as ku may not be a bug if the path along which the anomaly exists is unachievable.
Such "anomalies" are false anomalies.
Unfortunately, the problem of determining whether a path is or is not achievable is unsolvable.
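A false anomaly can be sketched directly: along one textual path the variable is killed and then used, a ku anomaly on paper, but a contradictory guard makes that path unachievable, so the anomaly can never occur at run time.

```python
def f(n):
    x = n * 2
    if n > 0 and n < 0:   # contradictory guard: this branch is unreachable
        del x             # kill x...
        return x          # ...then use it: a ku anomaly, but a false one
    return x

# The anomalous path is never taken; a static tool reporting the ku
# anomaly here would be raising a false alarm.
print(f(5))   # 10
```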
Recoverable Anomalies and Alternate State Graphs:
What constitutes an anomaly depends on context, application, and semantics.
How does the compiler know which model I have in mind? It can't, because the definition of "anomaly" is not fundamental.
The language processor must have a built-in anomaly definition with which you may or may not (with good reason) agree.
Concurrency, Interrupts, and System Issues:
As soon as we get away from the simple single-task uniprocessor environment and start thinking in terms of systems, most anomaly issues become vastly more complicated.
Often, data objects are defined or created at an interrupt level and then processed by a lower-priority routine.
Interrupts can make the "correct" anomalous and the "anomalous" correct.
Much of integration and system testing is aimed
at detecting data flow anomalies that cannot be detected in the context of a
single routine.
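The system-level case can be sketched with two threads, where a producer thread stands in for an interrupt handler and the main thread plays the lower-priority routine; the item counts are arbitrary choices for the example. Whether every object is defined before it is used now depends on scheduling, which no single-routine static analysis can see; here a queue imposes the define-before-use ordering dynamically.

```python
import queue
import threading

q = queue.Queue()

def interrupt_handler():
    # Stand-in for an interrupt-level routine: it "defines" data
    # objects asynchronously, at times the consumer cannot predict.
    for i in range(5):
        q.put(i * i)
    q.put(None)            # sentinel: no more data

def background_routine(results):
    # Lower-priority routine: blocks until each definition arrives,
    # so every use is guaranteed to follow its definition.
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item)

results = []
t = threading.Thread(target=interrupt_handler)
t.start()
background_routine(results)
t.join()
print(results)   # [0, 1, 4, 9, 16]
```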