1.
What factors should be considered in evaluating a risk? (select all that apply) (8-26)
Correct Answer(s)
A. Profitability
B. Monetary Value
C. Schedules
D. Costs
Explanation
• Monetary Value
Monetary value is the best common denominator for quantifying the impact of an adverse
circumstance – whether the damage is actual or abstract, whether the victim is a person, a piece
of equipment, or a function. It is the recompense used by the courts to redress both physical
damage and mental anguish.
• Schedules
Schedules are examined to determine any slips in the completion date. One method of
analyzing a schedule is to look at each task independently and multiply the tasks' on-time
probabilities together. For example, if a project contains 3 independent tasks and each task has
a 50% chance of finishing on time, the project has a 12.5% chance of finishing on time (50% *
50% * 50%), as shown in the sketch at the end of this explanation.
• Costs
Costs are calculated over the product's life cycle. The costs for each phase are added together
for a total life cycle cost. For example, when producing a software product the cost should
reflect not only what it takes to develop the product, but also to fix and maintain it.
• Profitability
Profitability is typically calculated using:
• Return-on-sales, which is profit, or return, as a percentage of a project's total cost. It
does not depend on time. A positive value indicates a profit and a negative value
indicates a loss.
• Return-on-investment, which is an organization-wide measure that assesses performance
against invested assets (organizations may use different formulas). It measures
efficiency, and balances the asset use and the profit margin.
• Economic-value added, which evaluates the cost of capital percent vs. the return of
capital percent. The cost of capital is the cost of financing the organization's operations.
It takes into account the minimum rate of return that the investors (such as
debt holders and shareholders) require.
• Internal rate-of-return, which is a relative measure based on the timing of cash
inflow and outflow. It is the rate at which the net present values of cash inflow and
outflow become equal.
Using a structured method, risk is calculated using the formula:
Expected value = Probability * Impact
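A minimal Python sketch of both calculations above, using hypothetical figures for the task
probabilities and the risk's probability and impact:

```python
# Schedule risk: the chance that independent tasks all finish on time
# is the product of their individual on-time probabilities.
task_probabilities = [0.5, 0.5, 0.5]   # hypothetical: 3 tasks, each 50% on time

on_time = 1.0
for p in task_probabilities:
    on_time *= p
print(f"Chance of finishing on time: {on_time:.1%}")   # 12.5%

# Structured risk calculation: Expected value = Probability * Impact
probability = 0.10     # hypothetical: 10% chance the adverse event occurs
impact = 50_000        # hypothetical: $50,000 loss if it does
expected_value = probability * impact
print(f"Expected value of the risk: ${expected_value:,.0f}")   # $5,000
```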
2.
Regardless of the method used, two elements must always be considered in a risk analysis. What are they? (8-25)
Correct Answer(s)
A. The probability of the event occurring
B. The impact if the event occurs
Explanation
As it is impossible to be completely certain of the impact or likelihood of many events, these events
are estimated using a combination of historical data, knowledge of the event, and experience and
judgment. Tools such as simulation, decision trees, or calculating a monetary value are used. Risk
can be calculated by structured methods (based on data) or unstructured methods (based on
judgment and experience). Regardless of the method used, two elements must always be considered:
• The probability of such an event occurring
• The resulting impact if the event occurs
3.
What are some key measurements that can be used to measure a process? (select all that apply) (8-12)
Correct Answer(s)
A. Lines of code
B. Products per person
C. Estimated vs. actual cost
D. Time spent fixing bugs
E. Deliverables completed on time
Explanation
A process can be measured by either of the following:
• Attributes of the process, such as overall development time, type of methodology used, or
the average level of experience of the development staff.
• Accumulating product measures into a metric so that meaningful information about the
process can be provided. For example, function points per person-month or LOC per person-
month can measure productivity (which is product per resources), the number of failures
per month can indicate the effectiveness of computer operations, and the number of
help desk calls per LOC can indicate the effectiveness of a system design methodology.
There is no standardized list of software process metrics currently available. However, in addition
to the ones listed above, some others to consider include:
• Number of deliverables completed on time
• Estimated costs vs. actual costs
• Budgeted costs vs. actual costs
• Time spent fixing errors
• Wait time
• Number of contract modifications
• Number of proposals submitted vs. proposals won
• Percentage of time spent performing value-added tasks
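As a minimal sketch, several of the metrics above can be computed directly from accumulated
measures; all figures below are hypothetical:

```python
# Hypothetical accumulated measures for one reporting period
function_points = 120
person_months = 8
deliverables_total = 10
deliverables_on_time = 7
estimated_cost = 200_000
actual_cost = 230_000

# Metrics derived from the measures
productivity = function_points / person_months           # product per resources
on_time_rate = deliverables_on_time / deliverables_total
cost_variance = (actual_cost - estimated_cost) / estimated_cost

print(f"Productivity: {productivity:.1f} FP per person-month")       # 15.0
print(f"Deliverables completed on time: {on_time_rate:.0%}")         # 70%
print(f"Estimated vs. actual cost variance: {cost_variance:+.0%}")   # +15%
```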
4.
What is ordinal data? (8-4)
Correct Answer
B. Data that can be ranked but differences between values are not meaningful.
Explanation
This data can be ranked, but differences or ratios between values are not meaningful. For example,
programmer experience level may be measured as low, medium, or high. For ordinal data to be
used in an objective measurement the criteria for placement in the various categories must be well
defined; otherwise, it is subjective.
5.
What are the major uses of quantitative data? (select all that apply) (8-13)
Correct Answer(s)
A. Manage and control the process
B. Improve the process
C. Manage the risks
D. Manage and control the product
Explanation
There are four major uses of quantitative data (i.e., measurement):
1. Manage and control the process.
A process is a series of tasks performed to produce deliverables or products. IT processes
usually combine a skilled analyst with the tasks defined in the process. In addition, each time a
process is executed it normally produces a different product or service from what was built by
the same process at another time. For example, the same software development process may be
followed to produce two different applications. Management may need to adapt the process for
each product or service built, and needs to know that when performed, the process will produce
the desired product or service.
2. Manage and control the product
Quality is an attribute of a product. Quality level must be controlled from the start of the
process through the conclusion of the process. Control requires assuring that the specified
requirements are implemented, and that the delivered product is what the customer expects and
needs.
3. Improve the process
The most effective method for improving quality and productivity is to improve the processes.
Improved processes have a multiplier effect because everyone that uses the improved process
gains from the improvement. Quantitative data gathered during process execution can identify
process weaknesses, and, therefore, opportunities for improvement.
4. Manage the risks
Risk is the possibility that something will go wrong - for example, newly purchased software
may not work as stated, a project may be delivered late, or workers assigned to a project may not
possess the skills needed to complete it successfully. Management needs to understand each
risk, know the probability of the risk occurring, know the potential consequences if the risk
occurs, and understand the probability of success based upon different management actions.
6.
What are the most common ways of quantifying software size? (select all that apply) (8-9)
Correct Answer(s)
A. Lines of code
C. Function points
Explanation
LOC is the most common way of quantifying software size; however, this cannot be done until the
coding process is complete. Function points have the advantage of being measurable during the
design phase of the development process or possibly earlier.
Lines of Code
This is probably the most widely used measure for program size, although there are many different
definitions. The differences involve treatment of blank lines, comment lines, non-executable
statements, multiple statements per line, multiple lines per statement, and the question of how to
count reused lines of code. The most common definition counts any line that is not a blank or a
comment, regardless of the number of statements per line. In theory, LOC is a useful predictor of
program complexity, total development effort, and programmer performance (debugging,
productivity). Numerous studies have attempted to validate these relationships.
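A minimal sketch of the most common counting rule described above (any line that is not blank
or a comment counts once, regardless of the number of statements on it); the "#" comment prefix
is an assumption for illustration:

```python
def count_loc(source: str, comment_prefix: str = "#") -> int:
    """Count lines of code: skip blank lines and full-line comments,
    counting each remaining line once regardless of statements per line."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            count += 1
    return count

sample = """
# a full-line comment (not counted)

x = 1; y = 2   # one line, two statements: counts once
print(x + y)
"""
print(count_loc(sample))   # 2
```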
Function Points
A. J. Albrecht proposed a metric for software size and the effort required for development that can
be determined early in the development process. This approach computes the total function points
(FP) value for the project, by totaling the number of external user inputs, inquiries, outputs, and master files, and then applying the following weights: inputs (4), outputs (5), inquiries (4), and
master files (10). Each FP contributor can be adjusted within a range of ±35% for a specific project
complexity.
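A minimal sketch of the unadjusted function point count using the weights named above; the
contributor counts themselves are hypothetical:

```python
# Albrecht's weights as listed above; the counts are hypothetical
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "master_files": 10}
counts = {"inputs": 20, "outputs": 15, "inquiries": 10, "master_files": 6}

fp = sum(WEIGHTS[k] * counts[k] for k in WEIGHTS)
print(f"Unadjusted FP: {fp}")   # 4*20 + 5*15 + 4*10 + 10*6 = 255

# Each contributor may be adjusted within +/-35% for project complexity,
# e.g., an unusually complex set of master files:
adjusted = WEIGHTS["master_files"] * counts["master_files"] * 1.35
print(f"Master files at +35% complexity: {adjusted:.0f}")   # 81
```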
7.
What are the attributes of good measurements? (select all that apply) (8-5)
Correct Answer(s)
A. Reliable
B. Ease of use
C. Valid
D. Timeliness
Explanation
Reliability
This test refers to the consistency of measurement. If taken by two people, the same results should
be obtained. Sometimes measures are unreliable because of the measurement technique. For
example, human error could make counting LOC unreliable, but the use of an automated code
analyzer would result in the same answer each time it is run against an unchanged program.
Validity
This test indicates the degree to which a measure actually measures what it was intended to
measure. If actual project work effort is intended to quantify the total time spent on a software
development project, but overtime or time spent on the project by those outside the project team is
not included, the measure is invalid for its intended purpose. A measure can be reliable, but invalid.
An unreliable measure cannot be valid.
Ease of Use and Simplicity
These two tests are functions of how easy it is to capture and use the measurement data.
Timeliness
This test refers to whether the information can be reported in sufficient time to impact the decisions
needed to manage effectively.
8.
There are inherent risks in integrating new technology. What is the QA Analyst role in integrating new technology? (select all that apply) (8-31)
Correct Answer(s)
A. Determining the risks
B. Assure controls are adequate to reduce the risk
C. Modify existing processes for new technology
Explanation
One of the major challenges facing an IT organization is to effectively integrate new technology.
This integration needs to be done without compromising quality.
The QA analyst has three roles in integrating new technology:
• Determining the Risks
Each new technology poses new risks. These risks need to be identified and
prioritized and, if possible, quantified. Although the QA analyst will probably not
perform the actual task, the QA analyst needs to ensure that a risk analysis for the
new technology is undertaken and effectively performed.
• Assuring that the Controls are Adequate to Reduce the Risk
The QA analyst needs to assess whether the controls proposed for the new
technology are adequate to reduce the risk to an acceptable level. This may be done
by line management and reviewed by the QA analyst.
• Assuring that Existing Processes are Appropriately Modified to Incorporate the Use
of the New Technology
Work processes that will utilize new technologies normally need to be modified to
incorporate those technologies into the step-by-step work procedures. This may be
done by the workers responsible for the work processes, but at least needs to be
assessed or reviewed by the QA analyst.
9.
A process is defined as stable when... (8-18)
Correct Answer
B. Its mean and standard deviation remain constant over time
Explanation
A process is defined as stable when its mean and standard deviation remain constant over time.
Processes containing only common causes of variation are considered stable. Because a stable
process is predictable, future process values can be predicted within the control limits with a
certain degree of confidence. A stable process is said to be in a state of statistical control. The
control chart in Skill Category 4 depicts a stable process.
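As a sketch, the conventional 3-sigma control limits for such a process can be computed from its
mean and standard deviation; the measurement values below are hypothetical:

```python
import statistics

# Hypothetical process measurements (e.g., defects found per build)
values = [12, 14, 11, 13, 12, 15, 13, 12, 14, 13]

mean = statistics.mean(values)
stdev = statistics.pstdev(values)   # population standard deviation

ucl = mean + 3 * stdev   # upper control limit
lcl = mean - 3 * stdev   # lower control limit

print(f"Mean: {mean:.2f}, standard deviation: {stdev:.2f}")
print(f"Control limits: [{lcl:.2f}, {ucl:.2f}]")

# Points outside the limits would suggest special causes of variation
outliers = [v for v in values if not lcl <= v <= ucl]
print("Out-of-control points:", outliers or "none")
```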
10.
What are the distinguishing characteristics of risk? (select all that apply) (8-22)
Correct Answer(s)
A. Situational
B. Time-based
C. Interdependent
D. Magnitude dependent
E. Value-based
Explanation
Risk has five distinguishing characteristics:
Situational
Changes in a situation can result in new risks. Examples include replacing a team member,
undergoing a reorganization, or changing a project's scope.
Time-Based
Considering a software development life cycle, the probability of risk occurring at the beginning of
the project is very high (due to the unknowns), whereas at the end of the project the probability is
very low. In contrast, during the life cycle, the impact (cost) from a risky event occurring is low at
the beginning (since not much time and effort have been invested) and higher at the end (as there is
more to lose).
Interdependent
Within a project, many tasks and deliverables are intertwined. If one deliverable takes longer to
create than expected, other items depending on that deliverable may be affected, and the result
could be a domino effect.
Magnitude Dependent
The relationship between probability and impact is not linear, and the magnitude of the risk
typically makes a difference. For example, consider the risk of spending $1 for a 50/50 chance to
win $5, vs. the risk of spending $1,000 for a 50/50 chance of winning $5,000, vs. the risk of
spending $100,000 for a 50/50 chance of winning $500,000. In this example, the probability of
loss is the same in each case (50%), yet the magnitude of the potential loss is much greater; see
the short calculation at the end of this explanation.
Value-Based
Risk may be affected by personal, corporate or cultural values. For example, completing a project
on schedule may be dependent on the time of year and nationalities or religious beliefs of the work
team. Projects being developed in international locations where multiple cultures are involved may
have a higher risk than those done in one location with a similar work force.
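As a concrete illustration of the magnitude effect described above, a Python sketch of the three
bets from the example:

```python
# Each bet: (stake, prize), all at 50/50 odds
bets = [(1, 5), (1_000, 5_000), (100_000, 500_000)]

for stake, prize in bets:
    expected_net = 0.5 * prize - stake   # average net outcome of taking the bet
    print(f"Stake ${stake:>7,}: expected net ${expected_net:>9,.0f}, "
          f"potential loss ${stake:,} at the same 50% probability")
```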
11.
Risks can be categorized as...(select all that apply) (8-21)
Correct Answer(s)
A. Technical
B. Programmatic
C. Supportability
D. Cost
E. Schedule
Explanation
Risks can be categorized as one of the following:
• Technical, such as complexity, requirement changes, unproven technology, etc.
• Programmatic or Performance, such as safety, skills, regulatory changes, material
availability, etc.
• Supportability or Environment, such as people, equipment, reliability, maintainability, etc.
• Cost, such as sensitivity to technical risk, overhead, estimating errors, etc.
• Schedule, such as degree of concurrency, number of critical path items, sensitivity to cost, etc.
12.
What are the components that must be considered separately when determining how to manage a risk? (Select all that apply) (8-21)
Correct Answer(s)
A. The probability that the event will occur
B. The event that could occur
C. The impact or consequence of the event if it occurs
Explanation
Risk is the possibility that an unfavorable event will occur. It may be predictable or unpredictable.
Risk has three components, each of which must be considered separately when determining how to
manage the risk.
• The event that could occur – the risk
• The probability that the event will occur – the likelihood
• The impact or consequence of the event if it occurs – the penalty
13.
A _________ is a derived (calculated or composite) unit of measurement that cannot be directly observed, but is created by combining or relating two or more measures. (8-1)
Correct Answer(s)
metric
metrics
Explanation
A metric is a derived (calculated or composite) unit of measurement that cannot be directly
observed, but is created by combining or relating two or more measures. A metric normalizes data
so that comparison is possible. Since metrics are combinations of measures they can add more
value in understanding or evaluating a process than plain measures. Examples of metrics are mean
time to failure and actual effort compared to estimated effort.
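A minimal sketch of the two example metrics named above, derived from hypothetical measures:

```python
# Measures (hypothetical): observed hours of operation between failures,
# plus estimated and actual project effort in person-hours
hours_between_failures = [120, 180, 90, 210]
estimated_effort = 400
actual_effort = 460

# Metrics combine measures so that comparison is possible
mttf = sum(hours_between_failures) / len(hours_between_failures)
effort_ratio = actual_effort / estimated_effort

print(f"Mean time to failure: {mttf:.0f} hours")            # 150 hours
print(f"Actual vs. estimated effort: {effort_ratio:.2f}")   # 1.15
```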
14.
_________data has an absolute zero and meaningful ratios can be calculated. (8-3)
Correct Answer(s)
Ratio
ratio
Explanation
Ratio Data
This data has an absolute zero and meaningful ratios can be calculated. Measuring program size by
LOC is an example. A program of 2,000 lines can be considered twice as large as a program of
1,000 lines.
It is important to understand the measurement scale associated with a given measure or metric.
Many proposed measurements use values from an interval, ordinal, or nominal scale. If the values
are to be used in mathematical equations designed to represent a model of the software process,
measurements associated with a ratio scale are preferred, since the ratio scale allows mathematical
operations to be meaningfully applied.
15.
The measures of central tendency are the ________, __________, and _________. (list all three) (8-3)
Correct Answer(s)
mean
median
mode
Explanation
Measures of Central Tendency
The measures of central tendency are the mean, median, and mode. The mean is the average of the
items in the population; the median is the item at which half the items in the population fall below
it and half fall above it; and the mode is the item that is repeated most frequently.
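A minimal sketch of the three measures using Python's statistics module on a hypothetical
population:

```python
import statistics

# Hypothetical population: defect counts per module
data = [2, 3, 3, 4, 5, 7, 9]

print(statistics.mean(data))     # 4.71... - the average of the items
print(statistics.median(data))   # 4 - half the items fall below, half above
print(statistics.mode(data))     # 3 - the item repeated most frequently
```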
16.
The ___________ is the item at which half the items in the population are below this item and half the items are above this item.
Correct Answer(s)
median
Explanation
The median is the item at which half the items in the population are below it and half the items
are above it.
17.
Which of the below represents the item that is repeated most frequently? (8-4)
Correct Answer
C. Mode
Explanation
The measures of central tendency are the mean, median, and mode. The mean is the average of the
items in the population; the median is the item at which half the items in the population fall below
it and half fall above it; and the mode is the item that is repeated most frequently.
18.
This data can be ranked and can exhibit meaningful differences between values.
Correct Answer
C. Interval data
Explanation
Interval Data
This data can be ranked and can exhibit meaningful differences between values. Interval data has
no absolute zero, and ratios of values are not necessarily meaningful. For example, a program with
a complexity value of 6 is four units more complex than a program with a complexity of 2, but it is
probably not meaningful to say that the first program is three times as complex as the second. T. J.
McCabe’s complexity metric is an example of an interval scale.
19.
Who was a strong proponent of the use of statistics that took into account common and special causes of variation?
Correct Answer
A. Dr. W. Edwards Deming
Explanation
Dr. W. Edwards Deming was a strong proponent of the use of statistics that took into account common and special causes of variation. A renowned statistician and quality management expert, he emphasized understanding and managing both common causes (inherent in the system) and special causes (resulting from specific events or circumstances) of variation in order to improve quality and productivity. Deming's teachings and principles have had a significant impact on the field of quality management and have been widely adopted by organizations around the world.