This article is geared towards FPV drones - both quadcopters and fixed-wing aircraft - but its concepts apply more broadly (e.g. to any aircraft, spacecraft, motion-tracking device, etc). To use acro (angular-rate-based) controls, the flight control system doesn't need to compute attitude; it needs gyroscope measurements only. For systems beyond this, and for modes that augment acro, computing attitude is critical. Some examples:
This article provides general information: it's applicable to anyone writing or modifying drone firmware, and to anyone looking to understand more about their drones. Some firmware, like Ardupilot, offers multiple implementations; this article may help you choose one.
Accurately measuring attitude is a surprisingly subtle problem. To build intuition about why, consider what attitude is measured in relation to: the earth's surface. What if the terrain slopes? What if the craft is really, really high up? What if it's in orbit? Most attitude systems use the direction of earth's gravity as the reference for down. To find it, we need to measure acceleration. We'll define level as when the earth's center of gravity is directly below the aircraft, using whatever definition of "below" makes sense for a given aircraft. For a quad, that's the bottom of the frame. (This is also a bit subtle, but usually has an intuitive answer. We won't go into why here, but think about what makes the "bottom" of an aircraft the bottom. A quadcopter? An airliner? A fixed-wing drone? You may find yourself including both biology and aircraft parts in your answer!)
To appreciate the attitude problem, consider how you know your own orientation. Your body uses 2 approaches: visual perception of the horizon, and the vestibular system in your inner ear. The former works well when you're outside with a clear view of the horizon. If you're indoors, or the horizon is obscured by terrain, buildings etc, you can still determine 'up' visually from cues in the scene: straight lines on the ground and walls, or the orientation of people, animals, and objects you see. This can be deceived by optical illusions, like false horizons, sloping cloud decks, lights on the ground or sea that mimic stars etc.
The vestibular system lets you know your orientation even if your eyes are closed. It uses moving fluids in your inner ear to measure the direction of gravity, and rotations. It works very well when you're stationary, or moving with little acceleration (e.g. walking or running). As every pilot knows, the vestibular system is misleading under the accelerations and rotations aircraft experience - it didn't evolve for those conditions.
These two biological systems provide a good analogy for approaches we could use in aircraft instruments. In practice, aircraft systems don't use a visual system - this would be tough to implement, but is possible. The inner ear's approach of measuring acceleration and rotation is the standard one.
Thankfully, the Inertial Measurement Units (IMUs) in most drone flight controllers measure both 3 axes of angular rate (via a gyroscope) and 3 axes of acceleration (via an accelerometer). Some also measure magnetic heading (via a magnetometer). A system that determines attitude and heading from these measurements is called an Attitude and Heading Reference System (AHRS).
The challenge: determine attitude from noisy accelerometer and gyro measurements. Military and commercial aircraft often use Ring Laser Gyroscopes for this - these are comparatively stable, and have little drift. Unfortunately, they're too large and expensive for quads, so we settle for the ubiquitous, cheap, and good-enough MEMS-based IMUs. We mitigate their noisy measurements using digital filtering - either built into the IMU, or in the FC's firmware. For example, the CMSIS-DSP library works on all Cortex-M MCUs, which are common on FCs.
We need to consider 2 coordinate systems: The aircraft's, and the earth's. For the purpose of this article, we'll define the X axis as left and right, Y as forward and aft, and Z as up and down, for both systems. This is arbitrary, but you may find it intuitive. The important part is consistency. The IMU measures angular rates and accelerations in the aircraft's coordinate system. Attitude and heading are in relation to the earth's system.
Let's say you know the starting attitude of your aircraft. For example, consider a level aircraft: 0 pitch, 0 roll, 0 yaw. The gyroscope measures 1 radian/second of pitch-up for 1 second. Sweet: you know your aircraft is now at +1 rad pitch, 0 roll, 0 yaw. Now let's move forward 1 minute; you've tracked and timed every rotation in this period, and you add them all up to find your attitude. Is this reading likely to be accurate or useful? No! Gyroscopes are noisy, and even the best have errors that accumulate over time. After a few seconds, the attitude you get by adding up (integrating) gyro measurements will have drifted noticeably from the true one.
Gyroscopes can be used to estimate attitude (in conjunction with a known starting attitude) over short durations, but their errors add up quickly.
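To make that concrete, here's a minimal sketch of pure gyro dead-reckoning on a single axis, with an assumed constant bias added to make the drift visible. The names and numbers are illustrative, not from any particular flight controller.

```rust
// Gyro dead-reckoning on one axis (pitch), with a small constant bias to show drift.
fn main() {
    let dt: f32 = 0.001; // 1 kHz update loop, seconds per step
    let true_rate: f32 = 0.0; // the craft is actually holding attitude (rad/s)
    let gyro_bias: f32 = 0.002; // rad/s of bias: a plausible-looking MEMS error

    let mut estimated_pitch: f32 = 0.0; // starts from a known attitude (level)

    for step in 1..=60_000 {
        // What the gyro reports: the true rate plus its bias.
        let measured_rate = true_rate + gyro_bias;
        // Integrate: attitude += rate * dt.
        estimated_pitch += measured_rate * dt;

        if step % 10_000 == 0 {
            println!(
                "after {:>2} s: estimated pitch error = {:.3} rad",
                step / 1_000,
                estimated_pitch
            );
        }
    }
    // After a minute, the estimate has drifted by bias * 60 s = 0.12 rad (~7°),
    // even though the craft never moved.
}
```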
Given we're using earth's gravity as our reference for attitude, it makes sense to measure acceleration to determine it. If your aircraft (more specifically, its IMU) is stationary (or in steady flight) and level, we expect to read 0 acceleration on the X and Y axes, and 9.8 m/s² (1G) acceleration upwards. (This is the approximate acceleration due to gravity at the earth's surface.) This reading appears because, in order not to be falling at this acceleration, the aircraft must resist it with an equal and opposite acceleration. Again, subtle. If the aircraft were in free-fall, e.g. dropping like a rock (you had a very bad flight), or in orbit (you had a very good flight!), your accelerometer would read near 0 on all axes.
Consider how you might use 3-axis acceleration measurements to determine attitude: if you measure +1G of acceleration upwards on the Z axis, the craft is level. If you measure -1G on the same axis, we can reason that the aircraft is upside-down. If you measure +1G on the X axis, the aircraft is banked 90° - it's on its side.
We can assume, for a stationary aircraft or one in steady flight, that total acceleration is exactly 1G. (You'll notice deviations from this due to measurement error and non-gravitational acceleration, which we discuss below.) It's unlikely this 1G will lie exactly along an axis. You can think of it as a unit vector pointing towards the (gravitational) center of the earth. With this in mind, calculating attitude from accelerometer readings is a matter of trigonometry or linear algebra. Let's consider another intuitive example: if you read +0.71G on the X axis, 0 on the Y axis, and +0.71G on the Z axis, we can reason that the craft is in a 45° bank to the left. We also know it has 0° of pitch, because the Y axis reads 0.
We can confirm that the gravitational force we measured is indeed (approximately) 1G, using the Pythagorean theorem: $$ \sqrt{0.71^{2} + 0.71^{2}} \approx 1 $$
With these axis conventions, pitch and roll follow directly from a single accelerometer vector \((a_x, a_y, a_z)\); the exact signs depend on how you've oriented your axes, but one common formulation is: $$ \text{roll} = \operatorname{atan2}(a_x, a_z), \qquad \text{pitch} = \operatorname{atan2}\!\left(a_y, \sqrt{a_x^2 + a_z^2}\right) $$ Plugging in the example above: roll = atan2(0.71, 0.71) = 45°, and pitch = 0.
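Here's a small sketch of that calculation in code, using this article's axis conventions; the sign choices are assumptions you'd adjust to match your own frame definitions.

```rust
// Pitch and roll from a single accelerometer sample, with X: left/right,
// Y: fore/aft, Z: up/down (this article's conventions).
fn pitch_roll_from_accel(ax: f32, ay: f32, az: f32) -> (f32, f32) {
    // Roll: how far the gravity vector has tipped toward the X (left/right) axis.
    let roll = ax.atan2(az);
    // Pitch: how far it has tipped toward the Y (fore/aft) axis.
    let pitch = ay.atan2((ax * ax + az * az).sqrt());
    (pitch, roll)
}

fn main() {
    // The 45° bank example from the text: +0.71 G on X, 0 on Y, +0.71 G on Z.
    let (pitch, roll) = pitch_roll_from_accel(0.71, 0.0, 0.71);
    println!(
        "pitch: {:.1}°, roll: {:.1}°",
        pitch.to_degrees(),
        roll.to_degrees()
    );
    // Prints pitch: 0.0°, roll: 45.0°
}
```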
We can't rely on accelerometers alone to estimate attitude: any aircraft maneuver imparts acceleration on the aircraft. (We call this linear acceleration, to distinguish it from gravitational acceleration.) For example, while executing a turn by pitching up in a bank, the IMU will measure additional acceleration along the aircraft's own vertical axis. This is the G-force fighter pilots feel, and how vomit-comet maneuvers swing between 0 and 2G. A surprising and fundamental part of physics: acceleration from maneuvers is indistinguishable from acceleration due to gravity. Here's your cue to drop down the relativity rabbit-hole, but this article won't take you there, Alice.
Accelerometers can be used to estimate attitude, but aircraft maneuvers confound these results.
Note that we have no notion of yaw using an accelerometer. For that we need to use the gyroscope, or better, gyroscope + magnetometer:
A magnetometer is essentially a digital compass; the ones on flight controllers generally read the magnetic field along 3 axes. In its most basic use, we can compute heading (ie yaw angle) from the horizontal (X- and Y-axis) components of its reading while level. Note that the earth's magnetic north and true north aren't the same, and their difference (the magnetic declination) changes depending on where on the earth you are! This is due to the earth's non-uniform magnetic field; this article goes into details. You can find charts showing the difference in various parts of the earth, and use them to calculate true north from magnetic measurements. For now, we'll ignore this difference.
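As a sketch, heading from the horizontal field components might look like the code below. It assumes the craft is roughly level (tilt compensation is omitted for brevity), the signs depend on your axis conventions and sensor orientation, and the declination value is a placeholder you'd look up for your location.

```rust
// Heading from a 3-axis magnetometer, assuming the craft is roughly level.
fn heading_from_mag(mx: f32, my: f32, declination_deg: f32) -> f32 {
    // With Z up/down, the horizontal components of the field give magnetic heading.
    let magnetic_heading = my.atan2(mx).to_degrees();
    // Correct magnetic north to true north with the local declination.
    let mut true_heading = magnetic_heading + declination_deg;
    // Wrap into [0, 360).
    if true_heading < 0.0 {
        true_heading += 360.0;
    }
    true_heading % 360.0
}

fn main() {
    // Illustrative reading only.
    let heading = heading_from_mag(0.3, 0.2, -2.5);
    println!("heading: {:.1}°", heading);
}
```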
Note that, in the same way we integrate the IMU's gyroscope to estimate pitch and roll over short durations, we can use it with yaw too, and fuse this data with the magnetometer for an even more accurate measurement.
In addition to measuring heading, we can also use the magnetometer's X and Y axes to improve our pitch and roll estimations, with further sensor fusion.
Given 3-axis measurements of acceleration, angular rate, and perhaps magnetic field, how do we estimate attitude? We described above why we have enough information to do so, but glossed over implementation details - especially how we balance the 2 or 3 types of measurement. There's no single correct answer, and different algorithms may be more accurate in different cases. When choosing an algorithm, we need to weigh trade-offs like accuracy, implementation complexity, and how much of the FC's processing power it consumes.
There are a number of academic papers available (search "AHRS algorithms" on Google Scholar, for example) explaining individual algorithms and comparing them to each other. Here's an example, comparing several of the types described below. The more popular ones (suitable for drone use) are summarized here. All of these approaches tackle the same question: in what proportions, and under which circumstances, do we blend our 3 sets of measurements?
They take approaches such as:
The Kalman filter is the best-known algorithm for sensor fusion in general. Its basic form is suitable for linear problems; to use it for an AHRS, we need its non-linear variant, the Extended Kalman Filter (EKF). Compared to other algorithms, the EKF has the potential to be the most flexible and accurate. This comes at the cost of more complex implementation code.
Kalman filters are rooted in Bayesian inference; they maintain a model of the values they track, and confidence level in the values. As more information becomes available, they update the value and confidence, taking into account the confidence of the new (ie measured) data. More information provided to a Kalman filter always helps it - as long as it knows how reliable the information is.
Kalman filters are the most general of those listed here. While the others are specific to fusing gyro, accelerometer, and magnetometer readings, Kalman filters can be modified to use any info available. For example, pitot airspeed measurements, altimeters, and GPS. As described later in this article, we can also use control inputs directly to improve accuracy of the Kalman filter. A properly configured filter can take any information we have about the system, and use it to improve the attitude estimate.
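The "value plus confidence" bookkeeping is easiest to see in one dimension. Below is a toy scalar Kalman predict/update cycle - not the EKF a real AHRS needs, just an illustration of the mechanics; all names and numbers are made up.

```rust
// A toy one-dimensional Kalman filter: an estimate plus a variance (confidence).
struct ScalarKalman {
    x: f32, // current estimate (e.g. an angle)
    p: f32, // variance of that estimate: our confidence (lower = more confident)
}

impl ScalarKalman {
    /// Predict step: push the estimate forward with a rate measurement (e.g. gyro),
    /// and grow the uncertainty to reflect process noise accumulating over `dt`.
    fn predict(&mut self, rate: f32, dt: f32, process_noise: f32) {
        self.x += rate * dt;
        self.p += process_noise * dt;
    }

    /// Update step: blend in a direct measurement `z` (e.g. an accelerometer-derived
    /// angle) with variance `r`. The Kalman gain weighs the two by their confidence.
    fn update(&mut self, z: f32, r: f32) {
        let k = self.p / (self.p + r); // gain: 0 = ignore measurement, 1 = trust it fully
        self.x += k * (z - self.x);
        self.p *= 1.0 - k;
    }
}

fn main() {
    let mut kf = ScalarKalman { x: 0.0, p: 1.0 };
    // One cycle: integrate a gyro rate, then correct with a noisy accel-derived angle.
    kf.predict(0.1, 0.01, 0.001);
    kf.update(0.02, 0.5);
    println!("estimate: {:.4} rad, variance: {:.4}", kf.x, kf.p);
}
```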
The complementary filter is simple to implement and to reason about. Compared to the others we describe, it's the least accurate, but it may be good enough; it's a good starting point if you're coding an AHRS yourself. It estimates attitude two ways - by integrating angular rates, and independently from acceleration (and magnetometer) measurements - then uses linear interpolation to fuse the two into a best estimate. It uses quaternions internally, and the fusion step can be described as:
$$ q = (1 - \alpha) q_{\omega} + \alpha q_{am} $$
Where \(q_{\omega}\) is the attitude from angular rates, and \(q_{am}\) is the attitude from accelerometer and magnetometer readings. \(\alpha\) is the filter's gain; it's a user-adjustable value that balances the two attitude sources. Note that this balance value is constant: this is responsible for the complementary filter's simplicity, and for its naivety.
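As an illustration, here's the same idea reduced to a single angle (e.g. pitch) rather than a full quaternion, which keeps the sketch short; the gain and loop rate are arbitrary example values.

```rust
// A per-axis complementary filter: the article's quaternion formula, in scalar form.
struct ComplementaryFilter {
    /// Filter gain α: how much weight the accelerometer-derived angle gets each update.
    alpha: f32,
    /// Current fused angle estimate, in radians.
    angle: f32,
}

impl ComplementaryFilter {
    fn new(alpha: f32) -> Self {
        Self { alpha, angle: 0.0 }
    }

    /// `gyro_rate`: angular rate about this axis, rad/s (from the gyroscope).
    /// `accel_angle`: angle inferred from the accelerometer's gravity vector, radians.
    /// `dt`: time since the last update, seconds.
    fn update(&mut self, gyro_rate: f32, accel_angle: f32, dt: f32) -> f32 {
        // Short-term estimate: integrate the gyro from the previous fused angle.
        let angle_gyro = self.angle + gyro_rate * dt;
        // Fuse: mostly trust the gyro, but pull slowly toward the accelerometer
        // to cancel gyro drift. This is q = (1 - α) q_ω + α q_am, in scalar form.
        self.angle = (1.0 - self.alpha) * angle_gyro + self.alpha * accel_angle;
        self.angle
    }
}

fn main() {
    let mut pitch_filter = ComplementaryFilter::new(0.02);
    // Example: a 200 Hz loop, with the craft pitching up at a steady 0.1 rad/s
    // while the accelerometer-derived angle tracks the true pitch.
    let dt = 0.005;
    for step in 0..1000 {
        let gyro_rate = 0.1; // rad/s, pitching up
        let accel_angle = 0.1 * (step as f32) * dt; // accel-derived pitch, rad
        pitch_filter.update(gyro_rate, accel_angle, dt);
    }
    println!("fused pitch: {:.3} rad", pitch_filter.angle);
}
```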
The Madgwick filter provides accurate results, and has a straightforward implementation. This paper by Sebastian Madgwick describes the algorithm in detail. It uses a quaternion representation of attitude, and supports fusion between 3-axis accelerometers, gyroscopes, and magnetometers.
The Mahony filter is an extension of the complementary filter. It dynamically chooses weights for accelerometer, gyroscope, and magnetometer measurements. This AHRS document describes it in a detailed, approachable way.
An intuitive way to represent attitude is as pitch, roll, and yaw. These are called Euler angles, and measure rotation around 3 linearly-independent axes - ideally orthogonal ones. They can be represented as a vector of 3 values, and rotated by multiplying by a rotation matrix. Euler angles aren't a great way to represent and manipulate attitude, because they're subject to singularities - ie gimbal lock, wherein they lose a degree of freedom. They also run into ambiguities about how to combine rotations on the 3 axes, since there's no single answer for the order in which to apply the 3 rotation matrices (one for each axis).
Quaternions provide an elegant alternative. A quaternion can be thought of as a collection of 4 numbers that describe an orientation. Attitude can be represented by a single quaternion, and manipulated using operations between quaternions and vectors. Quaternions have a reputation for being unintuitive, but when thought of as mapping directly to an orientation or rotation, they're straightforward to use. Our article on them summarizes how to use them for practical purposes, like representing attitude.
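Here's a minimal sketch of using a quaternion as an attitude: just enough operations to build one from an axis and angle, and to rotate a vector with it. The component ordering and handedness conventions here are assumptions that vary between libraries.

```rust
// A minimal quaternion: Hamilton convention, (w, x, y, z) component order.
#[derive(Clone, Copy)]
struct Quaternion {
    w: f32,
    x: f32,
    y: f32,
    z: f32,
}

impl Quaternion {
    /// Build a rotation of `angle` radians about a (unit) axis.
    fn from_axis_angle(axis: [f32; 3], angle: f32) -> Self {
        let half = angle / 2.0;
        let s = half.sin();
        Self { w: half.cos(), x: axis[0] * s, y: axis[1] * s, z: axis[2] * s }
    }

    /// Hamilton product: compose two rotations.
    fn mul(self, o: Self) -> Self {
        Self {
            w: self.w * o.w - self.x * o.x - self.y * o.y - self.z * o.z,
            x: self.w * o.x + self.x * o.w + self.y * o.z - self.z * o.y,
            y: self.w * o.y - self.x * o.z + self.y * o.w + self.z * o.x,
            z: self.w * o.z + self.x * o.y - self.y * o.x + self.z * o.w,
        }
    }

    fn conjugate(self) -> Self {
        Self { w: self.w, x: -self.x, y: -self.y, z: -self.z }
    }

    /// Rotate a vector: v' = q * v * q⁻¹ (with v treated as a pure quaternion).
    fn rotate(self, v: [f32; 3]) -> [f32; 3] {
        let p = Self { w: 0.0, x: v[0], y: v[1], z: v[2] };
        let r = self.mul(p).mul(self.conjugate());
        [r.x, r.y, r.z]
    }
}

fn main() {
    // Attitude: rolled 90° about the forward (Y) axis, using this article's axes.
    let attitude = Quaternion::from_axis_angle([0.0, 1.0, 0.0], std::f32::consts::FRAC_PI_2);
    // Where does the craft's "up" axis point, expressed in the earth frame?
    let up_in_earth = attitude.rotate([0.0, 0.0, 1.0]);
    println!("body up, in earth frame: {:?}", up_in_earth);
    // Prints approximately [1.0, 0.0, 0.0]: the craft's top now points sideways.
}
```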
We can combine IMU and magnetometer data with GPS for an even more accurate estimate. GPS readings have a comparatively slow update rate, and GPS units use a lot of power, but they provide very accurate position and velocity data. By fusing this with the other readings, we can refine our AHRS solution further, in addition to providing accurate location and altitude information.
Given we have control over drones - either directly through manual control inputs, or through an autopilot system - we have more information available than sensor readings alone. Pause for a second, and consider how you might use this information to improve the above filters. (EKF implementations may include control inputs already - an advantage we glossed over above.)
Here's an example: if the throttle setting is high, and the pitch, roll, and yaw commands are neutral, we can guess that the aircraft is accelerating along its own thrust (up) axis. This means it probably has linear acceleration upwards in the aircraft's own coordinate system. That helps us separate linear acceleration from gravitational acceleration, improving our accelerometer-based readings.
We could also attempt to guess when the aircraft is accelerating forward/back/left/right, based on a high throttle setting in conjunction with pitch and roll inputs. In addition to separating acceleration components, we can weight gyro measurements more, and the accelerometer less, when an aggressive maneuver is commanded - as in the sketch below.
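One hedged way to act on this - reducing the accelerometer's weight (the complementary filter's \(\alpha\) from earlier) when the commanded maneuver is aggressive - might look like the following; the thresholds and scaling are placeholders, not tuned values.

```rust
// Scale down the accelerometer's fusion weight when aggressive maneuvers are commanded.
fn accel_weight(base_alpha: f32, commanded_rate_rad_s: f32, throttle: f32, hover_throttle: f32) -> f32 {
    // "Aggressiveness": how hard we're commanding the craft to rotate and accelerate.
    let rotation_term = commanded_rate_rad_s.abs() / 5.0; // 5 rad/s ~ fully aggressive
    let thrust_term = (throttle - hover_throttle).abs() / hover_throttle;
    let aggressiveness = (rotation_term + thrust_term).min(1.0);

    // Smoothly fade the accelerometer's influence out as aggressiveness rises.
    base_alpha * (1.0 - aggressiveness)
}

fn main() {
    let base_alpha = 0.02;
    // Gentle hover: the accelerometer keeps most of its weight.
    println!("hover:     α = {:.4}", accel_weight(base_alpha, 0.2, 0.45, 0.45));
    // Hard flip at high throttle: trust the gyro almost exclusively.
    println!("hard flip: α = {:.4}", accel_weight(base_alpha, 6.0, 0.9, 0.45));
}
```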
For fixed-wing aircraft, it's easier to take advantage of this. For example, we can estimate how much linear acceleration the aircraft pulls in a turn or pull-up, based on airspeed, the commanded pitch rate, and whether we're losing or gaining altitude - then subtract this from the measured acceleration.
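A sketch of that correction, assuming a pull-up where the centripetal acceleration is roughly airspeed × pitch rate; the variable names and sample numbers are illustrative.

```rust
// Predict maneuvering acceleration for a fixed wing, then remove it from the
// accelerometer's Z reading before using that reading for attitude.
const G: f32 = 9.81; // m/s²

/// Centripetal acceleration of a pull-up/turn in the pitch plane: a = V * q.
fn predicted_maneuver_accel(airspeed_m_s: f32, pitch_rate_rad_s: f32) -> f32 {
    airspeed_m_s * pitch_rate_rad_s
}

fn main() {
    let airspeed = 25.0; // m/s, e.g. from a pitot sensor
    let commanded_pitch_rate = 0.8; // rad/s, from the controller

    // Raw accelerometer reading along body Z during the pull, in m/s².
    let measured_accel_z = 29.4; // ≈ 3 G

    let maneuver = predicted_maneuver_accel(airspeed, commanded_pitch_rate);
    let gravity_estimate = measured_accel_z - maneuver;

    println!(
        "predicted maneuver accel: {:.1} m/s² ({:.2} G); corrected Z reading: {:.1} m/s²",
        maneuver,
        maneuver / G,
        gravity_estimate
    );
    // 25 m/s * 0.8 rad/s = 20 m/s² of maneuvering acceleration; what remains
    // (~9.4 m/s²) is close to the 1 G gravity reaction the attitude estimate needs.
}
```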