Phillip Stanley-Marbell
Foundations of
Embedded Systems
Physical Constraints, Sensor Uncertainty, Error Propagation,
Low-Level C on RISC-V, and Open-Source FPGA Tools
Draft Version of Michaelmas 2020
2  Precision, Accuracy, and Measurement Uncertainty
Version:
git changeset: 170:373da60b63aa0705868d6a651c2d72842dcea0bc,
Fri Apr 10 14:52:08 2020 +0100
Figure 2.1: All measurements have an associated measurement uncertainty. Two fundamental properties of a measurement are its precision and accuracy.
In most problems of applied mathematics and engineering the data are no better than 1 : 10³ or 1 : 10⁴ . . . and the answers are not required or meaningful with higher precisions either.

John von Neumann, The Computer and the Brain.

. . . even precision levels like 1 : 10⁵ are inadequate for a large part of important problems . . . The reasons for this surprising phenomenon are . . . that when they are broken down into their constituent elements, [the procedures] turn out to be very long . . . Now if there are large numbers of arithmetical operations, the errors occurring in each operation are superposed.

John von Neumann, The Computer and the Brain.
Table 2.1: Concepts.
Concept Section
Measurand § 2.2
Measurement value § 2.2
Environment § 2.2
Nominal § 2.2
Ground truth § 2.2
Error § 2.2.2
Systematic error § 2.2.2
Random error § 2.2.2
Uncertainty § 2.2.2
Type A uncertainty § 2.2.2
Type B uncertainty § 2.2.2
Epistemic uncertainty § 2.2.2
Aleatoric uncertainty § 2.2.2
Accuracy § 2.2.4
Precision § 2.2.4
Uncertainty distributions § 2.5
All measurements have an associated measurement uncertainty. This
chapter will investigate the principles of measurements, sources and types
of measurement uncertainty, examples of measurement uncertainty for state-of-the-art sensors and the effect of these on applications, and methods for
propagating measurement uncertainty through the steps of a computation.
2.1 Intended Learning Outcomes
At the end of this chapter, you should be able to:
1. Define the three components of a measurement and imagine new examples of measurements (stating their three components).
2. Define precision, accuracy, reliability, and measurement uncertainty.
3. Analyze a system design and quantify precision, accuracy, and noise for a
design’s components.
4. Enumerate the sources of noise and measurement uncertainty in analog
and digital systems.
5. Propose design changes to improve the robustness of systems to noise.
6. Derive the uncertainty propagation equation and postulate alternative
derivations when the assumptions employed in this chapter do not hold.
2.1.1 Learning outcomes pre-assessment
Complete the following
quiz to evaluate your prior knowledge of the material
for this chapter.
2.1.2 Things to think about
Complete the following
thinking exercise to stimulate your thoughts about
the contents of this chapter before proceeding with the material.
2.1.3 Things to look out for
Concepts people sometimes get confused by in this chapter include:
1. What the maximum activity axis selection step does, in the example of the pedometer (Section 2.4.1).
2. Why the number of steps is the number of zero crossings (either high-to-low crossings, or low-to-high crossings, not both), in the example of the pedometer (Section 2.4.1).
3. What Type A and Type B uncertainty mean.
4. How the supply voltages of sensors affect their noise distributions and how to interpret noise distribution histograms such as Figure 2.4.
5. What the assumptions of the uncertainty propagation equation (UPE, Equation 2.6) are and when those assumptions break down.
6. What distributions satisfy the assumptions of the uncertainty propagation equation derivation.
7. How to derive the uncertainty propagation equation (Equation 2.6).
8. Where the variance, σ²_y, in Equation 2.6 comes from.
9. Where the function f(x₁, . . . , xₙ) in Equation 2.9 comes from.
10. How we apply Taylor series expansion in Equation 2.9.
11. What y_j represents in Equation 2.11.
12. How you can use Monte Carlo simulations to analyze error propagation.
13. What some of the noise sources in measurements are.
14. Grounds for assuming that if y = f(x₁, . . . , xₙ) then ȳ = f(x̄₁, . . . , x̄ₙ).
2.1.4 The muddiest point
As you go through the material in this chapter, think about the following two
questions and note your responses for yourself or using the annotation tools
of the online version of the chapter. You will have the opportunity to submit
your responses to these questions at the end of the chapter:
1. What is least clear to you in this chapter? (You can simply list the section
numbers or write a few words.)
2. What is most clear to you in this chapter? (You can simply list the section
numbers or write a few words.)
2.2 Principles of Measurements
Every measurement consists of three parts: the measurement instrument (sensor), the quantity being measured (the measurand), and the environment in which the measurement is occurring (Figure 2.2). The objective of a measurement is typically to use a sensor to determine the true value, nominal value, or ground truth value of the measurand independent of variations in the environment.

Figure 2.2: The three essential components of a measurement: the environment, the measurand, and the measurement instrument.

Figure 2.3: Manufacturer-specified measurement noise as a function of operating voltage for the Analog Devices ADXL362.

The environment might affect the behavior of the measurement instrument or sensor. Figure 2.3 shows how measurement noise for one state-of-the-art accelerometer varies as a function of the supply voltage at which it is operated (Analog Devices 2014). The environment might also affect the signal being measured itself, as is the case when ambient vibrations affect the measurements performed by an accelerometer (Figure 2.4). We will revisit such empirical sensor data in more detail in Section 2.4.
--    




-  ()
 
percent level based on the Cramér-von Mises test.
     






 
 
Figure 2.4: 100 consecutive
x-axis readings from an
ADXL362 accelerometer
operating at 2.5 V and
mounted in a stationary
laboratory harness. The
distribution of values is a
result of both noise in the
sensor as well as vibrations
in its environment.
2.2.1 Self-Assessment Quiz
Complete the following quiz to evaluate your understanding of the material for Section 2.2.
2.2.2 Errors and the types of uncertainty and errors
Because of the environment, its variations, and the effect of those variations on the sensor or measurand, the data obtained from a sensor will invariably differ from the true value of the measurand. We will refer to this difference as the measurement error. Every measurement has some amount of error, although we can take precautions to make this error small.
Errors are typically classified into two kinds. When the errors vary over time, we will refer to them as random errors. As we will see in Section 2.5, the nature of this random variation could itself take on many forms, as the variation might be uniformly distributed over some range, distributed with, say, a Gaussian distribution, or distributed in some other way. Regardless of the distribution of errors, the larger the spread or variation in measurements, the larger the measurement uncertainty. One example of random errors is the variation in x-axis acceleration data from the accelerometer in Figure 2.4 due to vibrations in the environment or noise in the sensor.
Errors may also be fixed over time, such as an error which is a constant offset. We will refer to such errors as systematic errors. Purely-systematic errors therefore do not have any spread or uncertainty, but in practice, measurements with a systematic error component will also have a random error component.
Metrologists sometimes classify measurement uncertainty into two other designations, based on how the presence of the measurement uncertainty is determined. When uncertainty in measured values is computed, for example by statistical analysis of a set of measurement values to compute their variance, the uncertainty so determined is referred to as Type A Uncertainty. The uncertainty in a measurement may also be specified independent of the properties observed of the measurement results, such as by specifying the uncertainty of measurements provided by a given measurement instrument or sensor, as was the case in Figure 2.3. This form of measurement uncertainty is referred to as Type B Uncertainty.
Uncertainty in the value of a measurand can also be due to simply not having any measurement of the measurand. This form of uncertainty is referred to as epistemic uncertainty. In contrast to epistemic uncertainty, uncertainty due to observed random errors, that is, Type A uncertainty, is also referred to as aleatoric uncertainty.
2.2.3 Self-Assessment Quiz
Complete the following quiz to evaluate your understanding of the material
for Section 2.2.2.
2.2.4 Accuracy, precision, and reliability
In typical science and engineering usage, the term accuracy refers to the distance of the measured value from the true value or nominal value of the measurand. A lack of accuracy may be due to either systematic or random errors (see Section 2.2.2).
Precision, on the other hand, refers to repeatability or spread around a
mean. Larger spreads of measured values mean that only measurand values
that are further away from each other can be distinguished. Higher precision
therefore implies finer resolution or finer spacing between values that can be
distinguished by a measurement system. Precision is typically a property of a measurement instrument or computing system. In the context of a measuring device, precision can also be thought of as the repeatability or spread between values obtained in measuring an unchanging quantity.
Accuracy and precision both imply that when things go wrong, the system
still obtains an output and that this output differs from the correct output to
a quantifiable degree. In contrast, reliability typically refers to the likelihood
that a system component will fail, that is, the relative frequency with which
a device fails or is otherwise unavailable for use, regardless of whether it is
precise or accurate.
Examples:
1. A ruler with markings at every millimeter is more precise than a ruler with markings at each centimeter. Similarly, real-valued numbers can be represented in the C programming language type double with finer spacing (precision) than they can be represented with type float, therefore we say double enables higher precision than float.
2. A measurement that reports the speed of light as 299792458 m·s⁻¹ is accurate, while one that reports the speed of light as 299792459 m·s⁻¹ is less accurate (but expressed in a representation that is as precise as the previously-stated value).
2.2.5 Self-Assessment Quiz
Complete the following quiz to evaluate your understanding of the material for Section 2.2.4.
Figure 2.5: Some sources of noise in hardware: thermal (Johnson-Nyquist) noise, shot noise, "flicker" (1/f) noise, random telegraph noise, and temperature fluctuations, with possible interaction paths by which each can induce circuit state disturbances in a microprocessor running a program.
2.3 Could We Get Rid of All Noise and Errors?
Noise is pervasive in all computing and sensing environments (Figure 2.5 and Figure 2.6). As a result, semiconductor devices, by design, expend energy to guard against the presence of noise. Operating at higher supply voltages typically allows circuits to operate in the presence of more noise; however, since lower supply voltages lead to lower dynamic power dissipation (Equation 1.1 in Chapter 1), lower voltage is preferable from the perspective of dynamic power dissipation.
Theis and Solomon (Theis and Solomon 2010) give a cogent explanation of the lower limits on supply voltage necessary to counteract the effects of thermal noise. They start from the Johnson-Nyquist voltage noise, which follows a Gaussian distribution with standard deviation of voltage noise

V_n = √(k·T/C),  (2.1)

where k is Boltzmann's constant, T is temperature in Kelvin, and C is the load capacitance of a typical gate. They then take V_n as the minimum voltage needed to distinguish between two logic states. Thus, a logic voltage of m standard deviations will give a probability of reliable operation (logic value being greater than noise), given by the complementary error function, of

(1/2) · Erfc(m/√2).  (2.2)

Figure 2.6: Some sources of soft errors in hardware: electrical noise and high-energy particles. Radioactive decay of ²³⁸U and ²³²Th from device packaging mold resin and of ²¹⁰Po from PbSn solder (and Al wire) produces α-particles and γ-rays; cosmic rays produce thermal neutrons and high-energy neutrons (which can penetrate up to 5 ft of concrete); neutron capture within Si and B in integrated circuits produces unstable isotopes such as lithium and magnesium. Each of these can induce circuit state disturbances in a microprocessor.

Figure 2.7: 100 consecutive x-axis readings from a BMX055 accelerometer operating at 2.5 V and mounted in a stationary laboratory harness. We see a distribution of values as a result of both noise in the sensor as well as noise in the environment.
2.4 Examples of Sensor Data Uncertainty: Accelerometer Data
Figure 2.4 showed an example of the distribution of values obtained from a sensor measuring a phenomenon that is nominally unchanging. Figure 2.7 shows a similar histogram for x-axis accelerometer data, this time taken from a Bosch BMX055 accelerometer. The Bosch BMX055 is a single miniature integrated package that contains three sensors: an accelerometer, a gyroscope, and a magnetometer. Its accelerometer subsystem, like the ADXL362, is also a MEMS accelerometer.
Table 2.2: An excerpt from the sequence of values plotted in Figure 2.7.
. . .
134.308
136.261
125.519
136.749
138.214
137.726
. . .
In practice, many sensors provide raw data readings which must be converted into actual physical phenomena value readings (e.g., acceleration in gs) by applying a set of transformations. These transformations are typically provided by the sensor's manufacturer and specified in the sensor's datasheet and they often combine the raw sensor readings with sensor-instance-specific calibration data stored in the sensor's read-only memory. Table 2.2 shows excerpts of the 100 data points plotted in Figure 2.7 and Figure 2.4, after converting the raw sequences of bytes obtained from the sensors into acceleration readings.
The sensors that produced the readings in Figure 2.4, Figure 2.7, and Table 2.2 were mounted on the same system but in different orientations, so the acceleration values (the resolved component of the acceleration due to gravity in the corresponding sensor axes) are expected to be different. In both cases, since our measurement objective is to measure the component of the acceleration due to gravity in the x-axis of the accelerometer, which is mounted to a fixed laboratory harness, we interpret any measured variation in the sequence of measurements as random errors. For the BMX055, the number of significant digits in the measurement result (the number of leading digits that remain fixed during the 100 measurements) is 1, and for the ADXL362 it is 2. Figure 2.4 and Figure 2.7 also show different characteristics of the spread of the measured values. Figure 2.8 shows distributions of sensor data from six sensors across eight supply voltages from 1.8 V to 2.5 V: the Bosch BMX055 (Bosch Sensortec 2014), the Analog Devices ADXL362 (Analog Devices 2014), the ST Micro L3GD20H (ST Microelectronics 2013), and the Freescale MAG3110 (Freescale Semiconductor 2013).

The spread of values in the figures is the result of random measurement error resulting from the environment (ambient vibrations) and noise in the signal conditioning circuits in the accelerometers prior to analog-to-digital conversion. The uncertainty in the measurements shown in the figures is Type A uncertainty. We can perform further statistical analysis on the measurement values to determine the probability distribution that best fits the observed variations (a common mistake is to assume the noise follows a Gaussian distribution).
2.4.1 Example: End-to-end effect of noise in a pedometer application
Figure 2.9 shows the block diagram of one implementation of a pedometer (step-counting) application (Zhao 2010). The application takes as its input 3-axis accelerometer data and generates as its output a count of steps for time windows of 500 ms.
The first stage of the pedometer algorithm is to select which of the three accelerometer axes has the maximum peak-to-peak variation: this is the maximum activity axis selection block in Figure 2.9. The algorithm selects one of the x-, y-, or z-axis data to use for each 500 ms window. It uses samples aggregated across these 500 ms windows (with samples in each window taken from a single axis) to create a new composite sequence of accelerometer samples. Next, the pedometer algorithm performs low-pass filtering. Then, for each 500 ms window, the algorithm computes the maximum acceleration and minimum acceleration values, and the midpoint of this range for the window (this is the extremal value marking block in Figure 2.9). Finally, for each 500 ms window, the algorithm counts how many times the low-pass filtered signal crosses the per-window midpoints (either high-to-low crossings, or low-to-high crossings, not both), and it reports this count of midpoint crossings as the number of steps taken by the wearer of the accelerometer.
Figure 2.8: Experimental characterization of measurement uncertainty for six different state-of-the-art sensors, at supply voltages from 1800 mV to 2500 mV in 100 mV steps: (a)–(c) BMX055 accelerometer x-, y-, and z-axes; (d)–(f) ADXL362 accelerometer x-, y-, and z-axes; (g)–(i) BMX055 gyroscope x-, y-, and z-axes; (j)–(l) L3GD20H gyroscope x-, y-, and z-axes; (m)–(o) MAG3110 magnetometer x-, y-, and z-axes; (p)–(r) BMX055 magnetometer x-, y-, and z-axes.
Figure 2.9: The block diagram of one canonical pedometer application implementation: x-, y-, and z-component analysis of accelerometer data feeds maximum-activity axis selection, a low-pass filter, and extremal value marking, running on a processor to produce a step count.

Figure 2.10: Intermediate stages of data from a pedometer application. (a) All three axes of data (shown low-pass filtered). (b) Maximum activity axes combined across the 500 ms windows. (c) Extremal value marking of the maximum-activity axis data; step count: 19. (d) All three axes of data with added noise (shown low-pass filtered). (e) Maximum activity axes combined, for data with added noise. (f) Extremal value marking for the data with added noise; step count: 16.
Figure 2.10(a–c) shows the progression of a sequence of accelerometer samples through the stages of the pedometer algorithm. The final output of the algorithm for this sequence of accelerometer samples is 19 steps. Figure 2.10(d–f) shows a modified version of the data from Figure 2.10(a–c) where we have replaced 5% of the samples with zeros to simulate one form of noise. Even though the data in the final stage of the algorithm (Figure 2.10(c) and Figure 2.10(f)) look quantitatively different, the final output of the algorithm changes by only 16%.
2.5 What Statistical Distributions of Measurement Errors to Expect

What causes noise in measurements to have one distribution (e.g., bi-exponential) versus another (e.g., Gaussian)? The distributions of errors such as those we saw in Figure 2.4 and Figure 2.7 depend on how the underlying noise phenomena (e.g., the phenomena in Figure 2.5 and Figure 2.6) combine to lead to the final measurement error.

When measurement errors are the result of the additive combination of a large number of small underlying errors, the resulting distribution of measurement error will follow a Gaussian distribution. Let xᵢ be a sample (i.e., a random variate) chosen at random from any distribution and let y be a sum of n such random variates, that is,

y = Σ_{i=1}^{n} xᵢ.  (2.3)

Then, if the xᵢ are independent,

lim_{n→∞} y ∼ N[µ, σ],  (2.4)

where N[µ, σ] is the Gaussian distribution with mean µ and variance σ².

Similarly, when measurement errors are the result of the multiplicative combination of a large number of small underlying errors, the resulting distribution of measurement error will typically follow a lognormal distribution.
2.6 Exercises

Complete the following exercise (also available online) to evaluate your understanding of the material for Section 2.5.
What are some of the sources of noise in integrated circuits?
What is the difference between Type A uncertainty and aleatoric uncer-
tainty?
Why would we ever have epistemic uncertainty about a measurand?
What is the difference between precision and accuracy?
What is the difference between precision and resolution?
Why might we observe different distributions of noise in different sys-
tems?
How could we determine the mean µ and variance σ² for a set of readings from sensor data?
What are the moments of a distribution?
2.7 Propagating Uncertainty
Because all measurements have an associated uncertainty, whenever we obtain a measurement value from a sensor, the true value of the quantity we are measuring (the measurand) could be anywhere in a range of values around the value we obtained from the sensor. The less precise the sensor is, the greater will be the spread, and the less accurate the sensor is, the further the measurement value is likely to be from the true value of the measurand.
Almost all sensor data serves as input to one or more algorithms which perform operations on the individual samples from a sensor, perform operations across multiple samples (e.g., low-pass filtering), or may combine values from different sensors. As a result, the uncertainty in each measurement value causes the results of each arithmetic or logic operation on that measurement value or collection of values to likewise have an associated uncertainty. When sensor data feeds into algorithms that control critical infrastructure, we need to have methods to estimate the uncertainty in the result of a sensor-driven computation from the uncertainty in the sensor measurement values that are the inputs of the computation. To estimate output uncertainty exactly, we will need to treat each value in the signal processing algorithm as a random variable and then determine the distributions of the new random variables that result from each operation (e.g., addition).
A random variable, X (uppercase), is a function on the elements of the sample space of possible values, Ω. A random variable X on a sample space Ω is a real-valued function on Ω; i.e., X : Ω → R. Events correspond to the random variable X (uppercase) taking on a specific instance value, say, x (lowercase). The probability of a random variable X taking on the specific value x is written as Pr{X = x} or f_X(x).
Algebra on random variables is challenging (Glen, Evans, and Leemis 2001). In practice, physicists, metrologists, and other practitioners need an efficient method for estimating the uncertainty of the results of arithmetic operations on values which have uncertainty. A popular approach, which simplifies the task of uncertainty propagation by making several simplifying assumptions, is what is sometimes referred to as the uncertainty propagation equation (UPE) and is attributed to Carl Friedrich Gauss (Gauss 1823). The uncertainty propagation equation makes simplifying assumptions about the behavior of the mean of a function of random variables (i.e., the mean of the output of a function of measured values with measurement uncertainty) in terms of the means of the random variables themselves (i.e., the means of the signals with measurement uncertainty) and also makes simplifying assumptions about the higher-order partial derivatives of the function applied to the measurement values, with respect to the measurement values. We will derive the uncertainty propagation equation and will state its assumptions more formally next.
2.7.1 Deriving the uncertainty propagation equation

Let y be the value of a function f : Rⁿ → R, that is,

y = f(x₁, . . . , xₙ).  (2.5)

The function f(x₁, . . . , xₙ) defines the relation between the sensor data or measurement values and other constants, given by x₁, . . . , xₙ, and the result, y, of a sensor signal processing algorithm. In other words, x₁, . . . , xₙ are signals from one or more sensors (perhaps including constants) and we perform some operation f(x₁, . . . , xₙ) on them to get the value y. Since the x₁, . . . , xₙ are our input signals, we can look at measurements of them to determine their mean, variance, and so on. The uncertainty propagation equation specifies how to estimate the variance, σ²_y, of y, given: the standard deviations σ_{x₁}, . . . , σ_{xₙ} of the arguments x₁, . . . , xₙ of the function f; the sensitivity of y to changes in these arguments, expressed as the partial derivatives of y with respect to x₁, . . . , xₙ; and the correlations between the x₁, . . . , xₙ.

Let σ_x denote the standard deviation of x. Then, the variance of y, σ²_y, can be approximated (Bevington and Robinson 1969) as

σ²_y ≈ (∂y/∂x₁)² σ²_{x₁} + (∂y/∂x₂)² σ²_{x₂} + . . . + 2 σ²_{x₁x₂} (∂y/∂x₁)(∂y/∂x₂) + . . . ,  (2.6)

where σ²_{x₁x₂} denotes the covariance of x₁ and x₂.
2.7.2 Derivation of the uncertainty propagation equation

Equation 2.6, the uncertainty propagation equation, is an approximation derived based on several assumptions (Kirkup and Frenkel 2006; Bevington and Robinson 1969). First, let ȳ be the mean of y and let x̄ₙ be the mean of xₙ. Then, we assume that if

y = f(x₁, . . . , xₙ),  (2.7)

then

ȳ = f(x̄₁, . . . , x̄ₙ),  (2.8)

that is, the mean of the dependent variable ȳ is the same function of the means of the independent variables x̄₁, . . . , x̄ₙ as the dependent variable y is of x₁, . . . , xₙ. We make this assumption because it holds for several scenarios of interest and because making this assumption will allow us to significantly simplify the expressions we derive next. With the assumption in hand, we take the Taylor series expansion of f(x₁, . . . , xₙ) about the point (x̄₁, . . . , x̄ₙ) in the function f's domain:

y = f(x₁, . . . , xₙ) = f(x̄₁, . . . , x̄ₙ) + (∂f(x̄₁, . . . , x̄ₙ)/∂x₁)(x₁ − x̄₁) + . . . + (∂f(x̄₁, . . . , x̄ₙ)/∂xₙ)(xₙ − x̄ₙ) + . . . .  (2.9)
From our assumption that ȳ = f(x̄₁, . . . , x̄ₙ) in Equation 2.8,

(y − ȳ) = (∂f(x̄₁, . . . , x̄ₙ)/∂x₁)(x₁ − x̄₁) + . . . + (∂f(x̄₁, . . . , x̄ₙ)/∂xₙ)(xₙ − x̄ₙ) + . . . .  (2.10)

Now, let Y be a random variable that takes on instance values y, with mean ȳ. Let N be the number of instances of the random variable Y and let these instances be indexed by j, so that one such instance is y_j. Then, the variance σ²_y is given by (Grimmett and Stirzaker 2001)

σ²_y = lim_{N→∞} [ (1/N) Σ_{j=1}^{N} (y_j − ȳ)² ].  (2.11)

Keeping only the first term of the Taylor series expansion in Equation 2.10, ignoring all its higher-order terms and cross partial derivatives, and substituting that simplified expression into Equation 2.11, we get

σ²_y ≈ lim_{N→∞} (1/N) Σ_{j=1}^{N} ( Σ_{i=1}^{n} (∂f(x̄₁, . . . , x̄ₙ)/∂xᵢ)(xᵢ − x̄ᵢ) )².  (2.12)

Applying Equation 2.11 again to obtain σ²_{xᵢ} in terms of xᵢ and x̄ᵢ and substituting these into Equation 2.12, we obtain

σ²_y ≈ (∂f(x̄₁, . . . , x̄ₙ)/∂x₁)² σ²_{x₁} + (∂f(x̄₁, . . . , x̄ₙ)/∂x₂)² σ²_{x₂} + . . . + 2 σ²_{x₁x₂} (∂f(x̄₁, . . . , x̄ₙ)/∂x₁)(∂f(x̄₁, . . . , x̄ₙ)/∂x₂) + . . . .  (2.13)

When the parameters x₁, . . . , xₙ are uncorrelated, we can simplify Equation 2.13 to

σ²_y ≈ Σ_{i=1}^{n} (∂f(x̄₁, . . . , x̄ₙ)/∂xᵢ)² σ²_{xᵢ}.  (2.14)
2.7.3 Where do the σ_x values come from?

The inputs to Equation 2.14 are the variances of the parameters to the function y = f(x₁, . . . , xₙ) and the sensitivity of the function f(x₁, . . . , xₙ) to its parameters x₁, . . . , xₙ. These parameters x₁, . . . , xₙ are either constants (with zero variance) or sensor values. In the latter case, we can estimate the population variance by sampling and computing the sample variance as we did in Figure 2.7. In Figure 2.7, the mean is 135.42 and the standard deviation is 2.40 (the variance is thus 5.76). For the distribution in Figure 2.7, the kurtosis is 6.92 (a Gaussian would have a kurtosis of 3). Alternatively, we might know beforehand what distribution the sensor samples follow, allowing us to compute the variances from the definition of the distribution.
2.7.4 Special cases of uncertainty propagation from variances

The following are several useful special cases of Equation 2.13 and Equation 2.14 (Bevington and Robinson 1969). If y = a·x₁ + b·x₂, then

σ²_y ≈ a² σ²_{x₁} + b² σ²_{x₂} + 2ab σ²_{x₁x₂}.  (2.15)

If y = a·x₁·x₂, then

σ²_y / y² ≈ σ²_{x₁}/x₁² + σ²_{x₂}/x₂² + 2 σ²_{x₁x₂}/(x₁x₂).  (2.16)

If y = a·x₁/x₂, then

σ²_y / y² ≈ σ²_{x₁}/x₁² + σ²_{x₂}/x₂² − 2 σ²_{x₁x₂}/(x₁x₂).  (2.17)

If y = a·x₁ᵇ, then

σ_y / y ≈ b σ_{x₁} / x₁.  (2.18)

If y = a·e^(b·x₁), then

σ_y / y ≈ b σ_{x₁}.  (2.19)

If y = a^(b·x₁), then

σ_y / y ≈ (b ln a) σ_{x₁}.  (2.20)
2.7.5 Uncertainty propagation in practice
Propagating uncertainty through even the most basic arithmetic operations using the special cases listed in Section 2.7.4 can be tedious. One of the course project ideas (Section 1.7) is to add support for propagating information about measurement uncertainty throughout the arithmetic operations in a microprocessor. This is an exciting challenge with many applications outside embedded systems, including applications to probabilistic graphical models in machine learning.
2.8 Exercises

Complete the following exercise (also available online) to evaluate your understanding of the material for Section 2.7.1 and Section 2.7.2.

Where do the parameters σ in the uncertainty propagation approximation come from?

What are the moments of a distribution?

Why did we have to make assumptions in formulating the uncertainty propagation approximation?

What assumptions did we make in formulating the uncertainty propagation approximation?

When is the assumption that if y = f(x₁, . . . , xₙ) then ȳ = f(x̄₁, . . . , x̄ₙ) valid?

What are the implications of ignoring the higher-order terms in the Taylor series expansion of Equation 2.10?
2.9 Accuracy of Models Versus Precision of Computations
Accuracy is important when obtaining measurements of signals from the physical world. Once a measurement system has provided accurate measurements, higher precision in the data representation when storing or computing on the measured values may allow accurately-measured data to be used to obtain accurate results in data analyses. For example, a bar code scanner at a retail store must accurately determine the item being purchased. Once the scanning subsystem has accurately identified an item, subsequent computations such as charging a customer's payment card must also occur accurately.
Not all measurement and computing systems require perfect accuracy, however. There are many applications of data processing where the computing process into which measured data is fed is a model or algorithm that is itself an approximation of a poorly-understood physical process. Measurement data may serve as input to algorithms that are tolerant to noise in their input data, as the example in Section 2.4.1 showed, or the results of computing on measurement data may be used only to control images displayed for human observation (see Topic 12).
2.10 Optional Reading
The textbook An Introduction to Uncertainty in Measurement by Kirkup and Frenkel (available in the CUED Engineering Library) provides further background on concepts related to measurement uncertainty.
2.11 Relevant Books Available in the Engineering Library
1. An Introduction to Uncertainty in Measurement, ISBN: 978-0521605793.
2.12 The Muddiest Point
Think about the following two questions and submit your responses through this link.
1. What was least clear to you in this chapter? (You can simply list the section
numbers or write a few words.)
2. What was most clear to you in this chapter? (You can simply list the
section numbers or write a few words.)