Back in January, we presented our experimental feature for estimating the aerobic threshold using HRV. Since that first version, we have refined the method considerably and can now put to rest the original skepticism about how useful the determined values actually are.
The basic principle: from the exact RR intervals between individual heartbeats, the so-called DFA-alpha1 values are determined using non-linear mathematical methods. DFA stands for “Detrended Fluctuation Analysis” and alpha1 is the short-term scaling exponent. It is known that this value changes with increasing intensity: from initial values above 1.0, it decreases to about 0.75 at the aerobic threshold (not to be confused with the anaerobic threshold or lactate threshold!) and to around 0.5 at high intensity.
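To make the determination of alpha1 concrete, here is a minimal sketch of the short-term DFA computation in Python. Runalyze itself works in R; the function name, parameter choices, and box sizes (4–16 beats is the commonly used short-term range) are illustrative, not our production code:

```python
import numpy as np

def dfa_alpha1(rr_ms, scales=range(4, 17)):
    """Short-term scaling exponent (alpha1) via detrended fluctuation analysis.

    rr_ms: sequence of RR intervals in milliseconds.
    scales: box sizes for the short-term exponent (typically 4..16 beats).
    """
    rr = np.asarray(rr_ms, dtype=float)
    # 1. Integrate the mean-centred series (the "profile").
    y = np.cumsum(rr - rr.mean())
    log_n, log_f = [], []
    for n in scales:
        n_boxes = len(y) // n
        if n_boxes < 2:
            continue
        boxes = y[: n_boxes * n].reshape(n_boxes, n)
        x = np.arange(n)
        # 2. Remove a linear trend within each box, collect the residuals.
        coeffs = np.polyfit(x, boxes.T, 1)  # per-box slopes and intercepts
        trends = np.outer(coeffs[0], x) + coeffs[1][:, None]
        f_n = np.sqrt(np.mean((boxes - trends) ** 2))
        log_n.append(np.log(n))
        log_f.append(np.log(f_n))
    # 3. alpha1 is the slope of log F(n) versus log n.
    return np.polyfit(log_n, log_f, 1)[0]
```

For uncorrelated data this estimate comes out near 0.5, and for strongly correlated data well above 1.0, which matches the intensity behaviour described above.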
Before we go into the technical background of our implementation, we would like to make a few remarks:
- The analysis of HRV data is very sensitive to single outliers. We would like to urge you to pay attention to the quality of the input data. Bruce Rogers has some recommendations in this regard in his FAQ listing (Bluetooth should be used instead of ANT+; if there are 3% artifacts or more, the results are not trustworthy).
- Estimating the aerobic threshold (or even the anaerobic threshold, see DFA a1 and the HRVT2) should only be done with dedicated ramp tests. Regression lines, as they can be displayed in Runalyze, are only meaningful when the activity consists solely of a ramp test.
- This is still quite a new topic. Despite our known time constraints, we will try to respond promptly as new information becomes available. Please remember: from our perspective, we are offering this as a beta feature to support current research.
- For the reason just mentioned, we are still far from having recalculated all existing activities. Therefore, we cannot show you long-term trends yet.
There are three important steps for the determination of the DFA-alpha1 values:
- Detrending of RR intervals: Since the heart rate changes significantly over the course of an activity, the RR intervals are subject to a long-term trend. This trend must be removed when preprocessing the data. The method for this (see Tarvainen et al. 2002) is very simple in its mathematical formulation, but very complex in the actual calculation.
- Artifact correction: As with all measurements, measurement errors can occur during the acquisition of HRV data. These must first be detected and corrected or removed before further calculations. There are numerous possible methods of varying complexity and quality.
- Segment-wise calculation of DFA-alpha1 values: DFA itself is a “standard tool” of mathematics; differences between implementations should be minimal.
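The detrending step (1) really can be written down in a few lines, as this Python sketch of the smoothness-priors method from Tarvainen et al. 2002 shows. The trend is obtained by solving (I + λ² D₂ᵀD₂) · trend = z, where D₂ is the second-difference matrix; λ = 500 is a commonly cited default for HRV data. Names and defaults here are illustrative, not Runalyze's actual code:

```python
import numpy as np

def smoothness_priors_detrend(rr_ms, lam=500.0):
    """Detrend RR intervals with the smoothness-priors approach
    (Tarvainen et al. 2002). lam controls how smooth the removed
    trend is; larger lam removes only slower trends."""
    z = np.asarray(rr_ms, dtype=float)
    n = len(z)
    # Second-order difference matrix D2 of shape (n-2, n).
    d2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        d2[i, i : i + 3] = [1.0, -2.0, 1.0]
    # trend = (I + lam^2 * D2' D2)^-1 z ; detrended = z - trend.
    a = np.eye(n) + lam**2 * (d2.T @ d2)
    trend = np.linalg.solve(a, z)
    return z - trend
```

Note that `np.linalg.solve` solves the linear system directly instead of explicitly forming the inverse, which is exactly the "system of equations" view described below.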
Runalyze is primarily written in PHP as a web application, and accordingly we did the first implementation of the feature in January in PHP as well. Let us just tell you: more complex mathematical methods in PHP are a horror. What works in e.g. Python, R or MATLAB with only a few lines of code and little computing time is complex and computationally intensive in PHP. Our first implementation therefore did without detrending (1) and used a simple method for artifact correction (2). The results were accordingly insufficient.
Our solution for detrending
In early May, we decided to invest the time and implement the method in R. The RHRV package already provides everything for (2) and (3), and detrending (1) can be implemented in a few lines of code, as mentioned. The result was usable: the values were much closer to those of the Kubios software, which is considered the gold standard for HRV analysis.
The only problem: performance. Detrending requires finding the inverse of an NxN matrix, where N is the number of RR intervals. For 1h of activity at an average of 150bpm, that's N = 9,000 values, which corresponds to a matrix with 81 million entries. Calculating the inverse is equivalent to solving a linear system of 9,000 equations. What sounds complicated, computers can actually do pretty fast nowadays. But for a 3h activity the matrix already has nine times as many entries, and inverting it takes roughly 27 times as long. For a web application like Runalyze, response times of > 1s are already very annoying, not to mention the > 30s needed here at the beginning.
The solution: we determine the detrending segment by segment as well (with some overlap). Our tests have shown that the bias (i.e. the systematic error we introduce this way) is of negligible magnitude. Another advantage: the matrix and its inverse depend only on the segment length, not on the RR intervals themselves, so we compute the inverse once and reuse it for all segments of that length. In particular, we never have to invert the full NxN matrix for the whole activity. In addition, we have reimplemented some of the RHRV package's methods ourselves to achieve even better performance.
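The reuse of one precomputed inverse across segments can be sketched as follows. This is a simplified illustration without the overlap we actually use, and all names are our own:

```python
import numpy as np

def precompute_detrend_operator(n, lam=500.0):
    """For a fixed segment length n, the smoothness-priors trend operator
    (I + lam^2 D2' D2)^-1 depends only on n and lam, not on the RR data,
    so it can be computed once and shared across all segments."""
    d2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        d2[i, i : i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.inv(np.eye(n) + lam**2 * (d2.T @ d2))

def detrend_segments(rr_ms, seg_len=300, lam=500.0):
    """Detrend a long RR series segment by segment with one shared inverse."""
    z = np.asarray(rr_ms, dtype=float)
    op = precompute_detrend_operator(seg_len, lam)  # O(seg_len^3), done once
    out = np.empty_like(z)
    for start in range(0, len(z) - seg_len + 1, seg_len):
        seg = z[start : start + seg_len]
        # Each segment now costs only a matrix-vector product, O(seg_len^2).
        out[start : start + seg_len] = seg - op @ seg
    # A leftover tail shorter than seg_len is left untrended here for brevity.
    n_full = (len(z) // seg_len) * seg_len
    out[n_full:] = z[n_full:]
    return out
```

The expensive O(N³) inversion is thus paid once for the small segment matrix instead of once for the entire activity, which is what brings the response times back into a range acceptable for a web application.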
Our previous solution for artifact correction
Just when we thought we were almost there, Bruce's comparison between Runalyze and Kubios (and HRV Logger) made us realize that our artifact correction (2) was insufficient. The RHRV package offers the FilterNIHR method, which uses time-dependent thresholds in a way relatively similar to the method used by Kubios (see Lipponen & Tarvainen 2019). Unfortunately, it internally restricts the thresholds to a fixed range, which may be useful for HRV data at rest but is insufficient for activity data.
Here we see an example of a 3h cycling session without filtering (red) and with the RHRV filtering (black). The first thing to note is that the percentage of artifacts is remarkably high, although in the plot this impression is amplified by the length of the activity: out of 25,003 data points, FilterNIHR detects 1,350 artifacts (5.4%). In black, however, we see that other obvious artifacts were not detected.
Therefore, we had to give up some of our hard-won performance and switch to the more computationally intensive implementation presented by Lipponen & Tarvainen and used by Kubios. The result is 1,485 detected artifacts (5.9%). Still, some artifacts remain undetected:
If we now combine both methods, i.e. first the one from Lipponen & Tarvainen and then again FilterNIHR, 1,516 artifacts (6.1%) are detected. At first glance, only one undetected artifact remains:
So even this combined method does not quite manage to capture all artifacts correctly. We also see the effect in the resulting DFA-alpha1 values (at 120s window width and 115s overlap). At about 1:05:00 there is a region where DFA-alpha1 jumps wildly between values around 1 and around 2. In the graph, these values are shown slightly transparent because the segments in this case exceed the set limit of 10ms due to the high SDNN of about 43ms.
We are working on refining the method further so that it detects this artifact as well. Until then you should, as always, pay attention to the best possible input data. The fewer artifacts there are in the data, the lower the probability that individual artifacts will go undetected. In this example, we would not trust the results anyway, given the total artifact percentage of 6.1%.
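The core idea behind such time-varying thresholds can be sketched in a few lines: compare each beat-to-beat change against a threshold derived from the local quartile deviation, so the filter adapts to the changing variability during exercise. This is a deliberately simplified illustration of the principle, not the full Lipponen & Tarvainen 2019 algorithm (which also classifies artifact types and corrects them); the window and scale parameters are our own:

```python
import numpy as np

def flag_artifacts(rr_ms, window=45, scale=5.2):
    """Flag suspicious RR intervals with a time-varying threshold based on
    the local quartile deviation of beat-to-beat changes (simplified sketch,
    loosely following the idea in Lipponen & Tarvainen 2019)."""
    rr = np.asarray(rr_ms, dtype=float)
    drr = np.abs(np.diff(rr, prepend=rr[0]))  # beat-to-beat change
    flags = np.zeros(len(rr), dtype=bool)
    half = window // 2
    for i in range(len(rr)):
        lo, hi = max(0, i - half), min(len(rr), i + half + 1)
        q1, q3 = np.percentile(drr[lo:hi], [25, 75])
        qd = (q3 - q1) / 2  # local quartile deviation (robust to outliers)
        flags[i] = drr[i] > scale * qd + 1e-9  # time-varying threshold
    return flags
```

Because the threshold follows the local variability instead of a fixed range, a missed or doubled beat stands out even when the overall HRV is high, which is exactly where a fixed-range filter like FilterNIHR falls short on activity data.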
- Gronwald, T., Rogers, B., Hoos, O.: Fractal Correlation Properties of Heart Rate Variability: A New Biomarker for Intensity Distribution in Endurance Exercise and Training Prescription?, Frontiers in Physiology, 11, p. 1152, 2020. doi:10.3389/fphys.2020.550572
- Rogers, B., Giles, D., Draper, N., Hoos, O., Gronwald, T.: A New Detection Method Defining the Aerobic Threshold for Endurance Exercise and Training Prescription Based on Fractal Correlation Properties of Heart Rate Variability, Frontiers in Physiology, 2020. doi:10.3389/fphys.2020.596567
- Rogers, B., Giles, D., Draper, N., Mourot, L., Gronwald, T.: Influence of Artefact Correction and Recording Device Type on the Practical Application of a Non-Linear Heart Rate Variability Biomarker for Aerobic Threshold Determination, Sensors, 21, 821, 2021. doi:10.3390/s21030821
- Rogers, B., Giles, D., Draper, N., Mourot, L., Gronwald, T.: Detection of the Anaerobic Threshold in Endurance Sports: Validation of a New Method Using Correlation Properties of Heart Rate Variability, J. Funct. Morphol. Kinesiol., 6, 38, 2021. doi:10.3390/jfmk6020038
There is also a wealth of information on the subject on Bruce Rogers’ blog.
12 thoughts on “HRV: Improved estimation of DFA-alpha1 values”
Thank you for your work on this. I am using DFA a1 as a guide to my aerobic threshold and Runalyze is a big part of how I look at my sessions.
Truly a wonderful job, guys! Well thought out, tested and implemented. Tough to get close to Kubios given they are using the MATLAB runtime on your own PC.
Would it make sense if I could choose only one segment of a training session's data? That way I could first warm up, then do e.g. n * 3 min ramp-up (n > 2), after which I could do whatever I'm willing to do in that session. In the analysis I would then choose only the ramp-up part for the DFA a1 regression.
Yes, that’s planned.
Great! Any schedule for that? :)
Is DFA a1 graphing currently possible in Runalyze for your Strava-linked data?
I just signed up and thought it was implemented but could not find it.
Strava doesn’t store that data, and as we are not able to get the original file from Strava’s API, there is no chance.
Is this feature available only for “supporters” or also free users?
That feature is available for everyone. Just make sure your uploaded data contains RR intervals.
Thanks for your work on this!
I have some comments, that I hope will be helpful, based on my personal experience and questions I have seen across various blogs.
Regarding Runalyze specific experience…
I did get the Aerobic Threshold Estimation to work with a Polar H10 recorded by a Polar M200, once I set up Automatic Sync and merged the Flow-exported HRV RR CSV file into the activity.
I was surprised I couldn’t merge an HRV file into a previously recorded activity manually imported from Polar Flow into Runalyze before I set up auto sync. I assumed that auto sync was basically auto-importing the TCX file from Polar Flow, but I guess the process using the API is different somehow. It doesn’t matter to me because I don’t have any old data that needs to be analyzed; I’m just pointing it out because I spent time one night testing with an old activity until I could record a new activity the next morning to be auto-synced, and I wasn’t sure I wouldn’t have the same experience with an auto-synced activity. Gladly, the auto-synced activity worked great!
I first manually uploaded an existing activity that I had previously recorded on my Polar M200/H10. I exported the Session (TCX) file and the HRV – RR (CSV) file from Polar Flow.
The TCX file uploaded fine, but I was not able to merge the HRV – RR CSV file into the activity. I was able to successfully open the same HRV – RR CSV file in Kubios standard so I feel the file was good.
Then I set up Runalyze to automatically sync with Polar, recorded a run with the H10/M200, exported the corresponding HRV RR CSV file from Polar Flow, merged that file into the newly recorded/synced activity in Runalyze, and then looked at the Aerobic Threshold Estimation. It seemed like a reasonable estimate to me (within 3 bpm of my previous best estimate based on internet research and HR data, lol), even though the run was very erratic and certainly not a ramp test. Runalyze reported only 0.4% artifacts.
Is it expected behavior for a manually exported/imported TCX not to accept a merge of the corresponding HRV RR CSV file in Runalyze? It’s not an issue for me since I am only concerned with new auto-synced data anyway. I’m just curious and wanted to save anyone else the time I spent on it.
Regarding Polar watches capable of recording HRV data for this purpose…
There was some confusion on other websites about whether a Vantage V2 is required to record this data.
My M200 seemed to work fine. It appears to me that most semi-current models, including my M200, can record this data from the H10. I didn’t need a third-party app to view this data in Runalyze or Kubios. I am curious whether the third-party apps offer any analysis not currently available in Runalyze. I know a lot of folks believe that certain apps or a Polar Vantage V2 are the only way to get this data from an H10, but I just wanted to clarify that the M200 and likely other Polar watches work fine.
Of course, if a third-party app keeps you from spending $150+ on a Polar watch, then that is great news. I just happened to already have a watch that worked.
This HRV article on Polar mentions only the Vantage V2 because of the on-watch analysis built into the V2, but the V2 is not needed to merge HRV data into an activity in Runalyze.
This article is a little better because it at least mentions the Polar recording methods that don’t work, but it still leaves you to infer that watches other than the V2 will work to record this data to be exported from Flow.
“HRV data export is not available in the following cases:
Training sessions are performed with the Polar Beat app or M600 even if the sessions were done using an ECG heart rate sensor.
Training sessions recorded to the internal memory of the H10 are processed in the Polar Beat app.
When an optical heart rate sensor is used as the heart rate source.”
Based on my experience with the M200, I would guess all current watches since the M200, except for the M600, will record the HRV data for export from Polar Flow.
I also read somewhere that, like recordings made with the Polar Beat mobile app, recordings made directly with the new functionality in the Polar Flow mobile app won’t work either, which makes sense because Flow seems to basically share that functionality with Beat now. It’s confusing, though, because the HRV file that comes from the Polar Flow web app has to be recorded by the watch and then synced to Polar Flow by the mobile app or USB.
I had previously assumed that recordings by the Beat and Flow mobile apps were the same as recordings by my M200 watch. It’s fine either way, because I do all of my recordings with my watch; I detest carrying a cell phone on a run.
Hope this helps!
No, you should be able to merge the HRV data into any activity. Maybe there was just a short temporary problem. Try it again and, if it’s not working, send me an email at firstname.lastname@example.org. Tell me an example activity where it wasn’t possible to merge the data and send the RR CSV attached.
You were absolutely correct. I just uploaded and merged the same corresponding HRV RR CSV file to the same activity that I had manually uploaded last weekend (before I turned on auto-sync with Polar Flow), and this time the same file merged with no issue.
Thanks for giving me the nudge to try the upload again. Using Auto-Sync going forward is great, but being able to manually upload a previously recorded run and merge the corresponding HRV data is also great for those who recorded important activities before turning on Auto-Sync.
Sorry for the confusion. I thought I had run across some info that could save someone time, but as you said, it was just a short temporary problem.
Hopefully documenting a couple of valid workflows is helpful for those considering the purchase of a watch or app to access this functionality.
Thanks again. You guys are great!