Article By: Mark Langley on his blog.
-The purpose of this article is to address the validity and reliability of velocity based training (VBT) measurement devices.
-Three devices were compared to video analysis: Beast, PUSH, and OpenBarbell V2. Of the three, OpenBarbell seems the most appropriate for powerlifting.
– “This will be covered largely from a powerlifting standpoint. Also, did I mention this is geared towards powerlifters? I mostly wrote this for people interested in improving their squat, bench press, and deadlift. Athletes that focus on these movements in particular are powerlifters. I wrote this for powerlifters. Powerlifting.”
-The most important considerations in a VBT device for powerlifters are affordability, reliability at low-end velocity, and support for powerlifting-specific movements (primary and secondary lifts).
-There is nothing that makes a tethered or accelerometer VBT device inherently better than the other options; they have to be tested across the specific movements you intend to perform under VBT conditions.
-The limitation of an analysis like this is that accuracy or reliability can be improved in some models with no hardware changes at all. An update to their respective apps could change how velocity is calculated, rendering this analysis null and void. This analysis was conducted in late January of 2017.
Velocity Based Training (VBT) is one method of auto-regulating training. It can auto-regulate load on the bar, number of reps within a set, total number of sets, any combination of those three, or any other relevant factor in training. It is beyond the scope of this article to make the case for VBT. It’s hard to make a case for VBT when you haven’t first established that the methods used to gauge velocity are valid and/or accurate. VBT has been a training methodology put forth for power athletes and team sports. It has gotten significantly less attention for strength athletes like powerlifters. Other coaches can speak more appropriately on the matter for strength and conditioning outside of powerlifting, and people like Bryan Mann, Carl Valle, Dan Baker, Eamonn Flanagan, and Mladen Jovanovic already have. On the powerlifting side of the house, the volume of writing and academic work is limited to Louie Simmons of Westside Barbell, Brandon Senn of Kabuki Strength, Mladen Jovanovic of Complementary Training, and arguably Mike Tuchscherer of Reactive Training Systems as well – although it’s more appropriate to say Mike uses velocity as a reference point, not a driver of training.
The case for VBT in powerlifting is much the same as the case for auto-regulated training in powerlifting. Rather than let this tangent overshadow the assessment of validity and reliability of the sensors, I will direct you to Brandon Senn’s article on the auto-regulation book of methods, or wait until I’m able to produce an article that addresses this specifically.
Powerlifting is not a money sport. If you want the best system with the best reliability and accuracy, you should probably do what universities do to test the validity and reliability of velocity based training (VBT) devices. They commonly use VICON 3D analysis systems, or even just run a cross-comparison to another system that has an established track record and vintage prices. Other systems that fit this condition include the T-Force and Tendo linear position transducer. All of these systems are priced out pretty high, so what we’re left with are consumer-grade options, which could arguably be good enough.
Keep in mind, not every device is marketed with the powerlifter in mind. In general, many of the options are aimed at strength and conditioning coaches for power athletes and sports, not for the exclusive use of strength or hypertrophy development. If you just want something to collect silos of data with no contribution to your training plan, there are no particular suggestions for you. If you mean to use it for strength and conditioning, there are many factors to consider – none of which I will cover. Other more qualified evaluators like Carl Valle have covered this in better depth than I could or care to (no offense, I’ll keep following you on Twitter Carl). Powerlifting is the odd man out here. There is little regard, little attention, and little support in the hardware or software to enable VBT for powerlifting.
Cost is probably the first thing to think about. Unless you’re a sponsored athlete, you’re probably going to consider the more affordable options. Once you’ve determined your budget, it’s essential to consider the coverage of exercises you intend to use it for. This is particularly important because some exercises bottom out at lower velocities than others. It’s also important to consider the direction of the manufacturer. If the manufacturer caters mostly to recreational weight lifters, it probably won’t be that appropriate for you. On the other hand, if it caters to another barbell sport like weightlifting, it might be appropriate for you. And lastly, if it caters to power athletes in team sports, powerlifting is probably outside of its scope of concern, but the device might dual-purpose well enough. One easy way to see the direction of the company is to look at the change log on its accompanying application. If many of the changes add exercise variety and features relevant to powerlifting, it might be headed in a direction that will be suitable down the road, if it isn’t already appropriate at start up.
And lastly, to anyone that says people should just use BarSense or IronPath: I challenge you to run a 6 week cycle of VBT strength training at intensities of 85-95% with at least 33% of all repetitions performed using VBT feedback. It won’t work, even if the applications work as advertised. The fact that there isn’t an app in the Google Play or Apple app store that utilizes a phone’s internal sensors is beyond my understanding, but no such feasible, low-cost/no-cost option currently exists. And it’s not because VBT is new, because tethered units that have filled a VBT capability, like the Tendo, have been available for a long time. Anyone recommending this option might as well be recommending a recumbent bike to drive the training of a weightlifter. This is an ignorant argument that doesn’t deserve more than a paragraph of concern.
THE LAY OF THE LAND
This whole section is worth skipping, but here it is… Here’s an obligatory, not-all-inclusive table, because people like tables.
(The table’s device-name column didn’t survive formatting; the recoverable details follow.)
Methods of measurement across the devices compared:
-Tether displacement w/o angle of pull
-Tether w/ angle of pull
-Accelerometer and barometer, attaches via bar collar
-Accelerometer, attaches to the bar
-Accelerometer on a barbell or strap
-Accelerometer and gyroscope, attaches to the forearm
-Tether w/o angle of pull
Time to feedback: real time for most; if not real time, after the set.
Target markets span academic and weightlifting settings, power athletes and recreational lifters, and anything involving a barbell.
So if anyone asks, “Bruh, Y NO GYMAWARE?” look at the price. Loan me one. I’d love to evaluate it. Will pay return shipping. I worked for a facility that was an early PUSH adopter. I became an early adopter. Greg Nuckols of powerlifting and Stronger by Science fame (formerly the better named StrengTheory) loaned me a Beast sensor. I bought an OpenBarbell. And Bob’s your uncle. Storytime’s over. Loan me any of these and I’ll try them out.
First let me explain Bland-Altman plots via scatter-plots.
The X axis is velocity (everything in meters per second) as measured by a Beast sensor. The Y axis is velocity determined using video shot at 60 fps and a free application called Tracker. I’m shilling for Big Physics. You could also use Kinovea, but for the volume of work I was doing alone and the quality of computers I have to work with, this was easier. The dotted line represents how the measurements are correlated to one another. A cursory check of Tracker’s determination of displacement and time appeared to line up with what I could confirm in more rudimentary ways. Many things could affect its suitability as a standard of comparison, and it may have been more appropriate to exclude the program in favor of something else, such as taking the average of all sensor values as the dependent variable. For reasons that I cover later, that was not the optimal designation of variables.
About the graph: the further a dot is from the line, the more the two measurements disagree. If the dot is north of the Mason-Dixon line, the video analysis measured the movement at a faster velocity than the Beast sensor did (or that sensor measured it slower than the video). If it’s south of there, then the sensor measured it faster than the video (or the video measured it slower than the sensor). The slope of the line shows the bias across different magnitudes. One potential scenario is that a device could be more accurate at slow velocities and less accurate at fast velocities. In a perfect world, if the graph were scaled 1:1 (length of a gridline matched the height of a gridline for the same number of units), this line would form a 45 degree angle. The problem with a scatter plot is that putting too many comparisons on it would look too busy, especially in our case, where we are measuring different squat-type movements (back squats, front squats, and pause squats) across three different devices. A Bland-Altman plot helps us visualize it in a greater context by essentially rotating the graph 45 degrees and giving new horizontal and vertical axes: video analysis velocity, and the difference of the sensors relative to the video’s measurement. Something like the picture on the right.
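If the rotate-and-replot description sounds abstract, the arithmetic behind a Bland-Altman plot is simple. Here's a minimal sketch; the function name and velocity numbers are my own and purely illustrative, not data from this analysis:

```python
import numpy as np

def bland_altman(reference, sensor):
    """Per-rep differences, mean bias, and 95% limits of agreement.

    reference: velocities from video analysis (m/s)
    sensor: velocities from the VBT device (m/s), same reps, same order
    """
    reference = np.asarray(reference, dtype=float)
    sensor = np.asarray(sensor, dtype=float)
    diff = sensor - reference          # positive = sensor read faster than video
    bias = diff.mean()                 # systematic over/under-reading
    sd = diff.std(ddof=1)              # sample standard deviation of differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return diff, bias, loa

# Hypothetical squat velocities (m/s): video reference vs. a sensor
video  = [0.95, 0.80, 0.62, 0.45, 0.33]
sensor = [0.98, 0.78, 0.60, 0.44, 0.31]
diff, bias, loa = bland_altman(video, sensor)
print(f"bias {bias:+.3f} m/s, LoA [{loa[0]:+.3f}, {loa[1]:+.3f}]")
```

A Bland-Altman chart is then just `diff` plotted against the reference velocity, with horizontal lines at the bias and the limits of agreement.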
So what does that look like in practice? Like this for a squat:
Firstly, notice that the horizontal axis (velocity of the bar’s vertical movement, as measured by video analysis) is reverse ordered, showing the fastest squats on the left and the slowest on the right. The vertical axis shows each sensor’s difference relative to the video, with no difference lying in the center at 0.00 m/s. The colors show the different sensors used, and the shapes of each point denote the type of squat. The legend helps you out here: back squat, front squat, and pause squat.
It’s important to note that the squat is likely going to be your longest movement in terms of distance. Since velocity is displacement over time, that longer range of motion means higher velocities: for most people, squats will be faster than bench or deadlift. As we decrease velocity (move to the right on the graph), we approach heavier loads. You’ve probably noticed in your own training that you aren’t able to lift your 1 rep max (1RM) as fast as your warm up weight. Velocity reflects this.
You can generally see that OpenBarbell clusters in a fairly straight line. At higher velocities, PUSH holds its ground, but it has a bit of scattergun spread going on further down the low end – the part that’s most pertinent to powerlifting. Beast is in roughly the same boat. More importantly, this is a comparison of a tethered system against two different accelerometer systems. The supposed superiority of tethered systems is that they are more accurate (closer to zero difference). This data doesn’t show that, but it does show that the tethered system is reliable.
Accuracy isn’t too important for VBT, but reliability is. Accuracy would reflect how “true” the measurement is to what it’s measuring. So if the 2×6 is 30 inches, it’s more accurate if it measures it at 32 inches than it would if it measured it at 48 inches. Reliability would be measuring the board 5 times with two different tape measures. One could give a reliable measurement (42, 41, 43, 42, 42) and another could give an unreliable measurement (30, 49, 15, 26, 52). Ideally you want both, but for VBT training purposes reliability is all that’s important. Bad carpenters blame their tools, really bad ones buy twice as much wood.
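The tape measure example can be put in numbers. A quick sketch, using the hypothetical readings above, shows why a tight spread matters more here than a true center:

```python
import statistics

# Hypothetical tape-measure readings of a board whose true length is 30 in.
reliable_but_inaccurate = [42, 41, 43, 42, 42]   # tight spread, wrong center
unreliable              = [30, 49, 15, 26, 52]   # center near truth, huge spread

for name, readings in [("reliable", reliable_but_inaccurate),
                       ("unreliable", unreliable)]:
    # mean shows accuracy (closeness to 30); stdev shows reliability (spread)
    print(name, statistics.mean(readings), round(statistics.stdev(readings), 1))
```

For VBT, a consistent offset can be worked around (your velocity targets just shift); a wide spread cannot.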
Back to reliability though: if you notice, the differences between sensors and video seem to shrink at slower velocities, i.e., they get closer to the 0 m/s difference on the Y axis. Slower reps of the same distance take more time. If you’re dividing the same distance (assuming all your squats have generally the same displacement) by increasing times to complete the upward portion of the lift, velocity drops, and the absolute size of any error drops with it. Put another way, if a device’s reliability is something like plus or minus 5%, smaller quantities (velocities) tend to have smaller differences. But that small difference could still be huge in practice. For example, a squat at 0.34 m/s could be tolerable, but a squat at 0.28 m/s could be slower than an individual is able to grind out – like a load above your 1RM that somehow dogmatically followed the trend line and ignored force capacity. It’s probably not helpful to think of VBT as an overly precise tool for prescribing load according to velocity, though. Usually when prescribing a velocity to train at, it’s better to aim for that velocity but accept a range of velocities above and below it.
The shapes also help us identify if there are movements that are particularly tricky for different systems to measure. In this case, PUSH does not measure front squats as reliably as back squats or pause squats (which is a back squat variant). I’ve always suspected this after loading the bar according to the load-velocity relationship at what should be 70% 1RM and finding myself only able to crank out 5 reps.
If you want to be super technical, here’s the individual points by video velocity and difference for Beast, PUSH, and OpenBarbell. Here’s the correlation between video velocity and Beast, PUSH, and OpenBarbell. If you’re happy with just the coefficients of determination for Beast, Push, and OpenBarbell, they are 0.93, 0.84, and 0.92.
“The coefficient of determination, denoted R2 or r2 and pronounced “R squared”, is a number that indicates the proportion of the variance in the dependent variable that is predictable from the independent variable(s).” Coefficients of determination range from 0 to 1, with values closer to 1 indicating that the regression line fits the data more closely.
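For the curious, here's a minimal sketch of how a coefficient of determination falls out of a simple linear fit. The velocity numbers are hypothetical, not the data behind the figures above:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a linear regression of y on x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)   # degree-1 (straight line) fit
    predicted = slope * x + intercept
    ss_res = ((y - predicted) ** 2).sum()    # residual sum of squares
    ss_tot = ((y - y.mean()) ** 2).sum()     # total sum of squares
    return 1 - ss_res / ss_tot

# Hypothetical video vs. sensor velocities (m/s)
video  = [0.95, 0.80, 0.62, 0.45, 0.33]
sensor = [0.98, 0.78, 0.60, 0.44, 0.31]
print(round(r_squared(video, sensor), 3))
```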
This does not give the full picture though. To date, I have had zero “dropped reps,” or reps that weren’t detected, with OpenBarbell. Dropped reps in VBT are what lag is to computer gaming: it’ll kill you ded. The squat is generally more reliable as an exercise because it’s faster, and manufacturers know that if you can’t get squats right you’re considered useless because DO SQUATS! PUSH failed to detect one front squat rep, and so did Beast. Beast also dropped one regular squat. This might not be a big deal on the surface, but if you were trying to determine velocity at 100% 1RM (which you could arguably figure out through an AMRAP) and your sensor dropped one of your 8 reps, that velocity could be lost to the gains goblins of the labyrinth. The opposite could happen too: the sensor could detect 14 reps when you only did 8. Beast calls these “ghost reps” and gives you the ability to choose which ones to cull – some are obviously wrong, and some could arguably be real measurements. PUSH, on the other hand, only lets you tell it how many reps you did (if you kept count), and determines which measurements to cull without further input (part of its rep detection algorithm). Sometimes it’s helpful to manipulate the system and say you completed one or two more reps than you actually performed just so you can see the full range of measurements, but this can get screwy: you could potentially make the system give readouts for reps you didn’t complete. In the case of these charts, dropped reps have been removed from the data set, and that biases the measurement to some degree. Keep that in mind throughout this. Reps that these units didn’t detect could possibly have been excluded because they were markedly different from adjacent reps and the system self-identified its own measurement variability. This is the technological equivalent of your dog running into the other room as soon as you come through the front door because they broke the lamp again.
What do I mean by removing dropped reps? How can I remove reps that don’t exist? For example: imagine three dropped reps that were measured by video at 0.3 m/s. If the reps were dropped, that essentially means the difference is -0.3 m/s. If it was a fast rep (which is less meaningful in powerlifting, but here’s an explanation regardless), a dropped rep can mean a difference of -0.75 m/s. This difference is huge. The effect of including them is that they misrepresent the bias trend for the rest of the recorded reps, which is of greater interest in the presentation of the data. The unfortunate downside of this is that I have to make well-reasoned assumptions as to which reps were dropped and which were ghosts.
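A toy example of that effect, with entirely hypothetical numbers: treating a single dropped rep as a real measurement drags the apparent bias for the whole set.

```python
import numpy as np

# Hypothetical sensor-vs-video differences (m/s) for a set of recorded reps
recorded_diffs = np.array([-0.02, 0.01, -0.03, 0.00, -0.01])

# A dropped rep that the video measured at 0.30 m/s reads as nothing on the
# sensor, so counting it as data would register a difference of -0.30 m/s.
with_dropped = np.append(recorded_diffs, -0.30)

print(f"bias excluding dropped rep: {recorded_diffs.mean():+.3f} m/s")
print(f"bias including dropped rep: {with_dropped.mean():+.3f} m/s")
```

One outlier of that size swamps the trend you actually care about, which is why the charts exclude dropped reps even though the exclusion flatters the devices that drop them.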
BANCH, BANCH, BANCH
These measurements do reflect one of my programs, which will be released later. As a result of that exercise selection, there are more data points for bench than there are for the other movements. This is an obvious excuse for why I’m shilling for Big Italy (such innuendo). Exit grapherrhea:
Again, OpenBarbell performs reliably, and arguably more accurately, than the others. The region of most importance, at 0.60 m/s and below, is very tight. Beast seems to handle pin presses very well, which is surprising because Beast usually gets confused when you change direction rapidly (like a barbell bouncing oh-so-softly on safety pins). It almost appears like PUSH has a sinusoidal-shaped bias, but it’s hard to tell. This could be a result of how the data is smoothed from its sample rate down to a usable signal. Even though Beast generally performs better at lower velocity, it also has some scatter at mid-range.
Bench press variants surprised me. In a previous article in a blog, I tested multiple PUSH bands at the same time. I tried to set up conditions that tested its reliability, with some possible failure points and some protocols I chose specifically because I was sure they would fail (by using the device incorrectly). At the time, the main validation article PUSH had under its belt utilized curls and multiple individuals at East Tennessee State University under Sato, Beckham, Haff, and Carroll (full author list because Carroll has written more on VBT since then). So I replicated it, broadening the conditions. My “obvious fail” condition was wearing the device in its normal configuration on the forearm and comparing that to wearing the device incorrectly at the wrist. The results were fairly consistent. It became obvious that it was measuring angular acceleration and from there determining angular velocity. In the account creation process, one of the inputs you have to give is your weight and height. This was supposed to be a failing condition of my test, but instead I figured (not me actually, my boss did) that it was likely referencing distance and position of the sensor in space given the proportions of your extremities to your height. Further explanation is beyond the scope of the article, but here are two links to get you started.
My assumption had been that because PUSH likely uses angular acceleration, it likely performs better than it should, in comparison to other devices, in movements that combine forward/backward movement with upward/downward movement. The bar path on the bench press is different from the squat and deadlift in that respect. Greg Nuckols covers that in detail here, as does MySquatMechanics, which is my cop out to abandon that tangent. Long story short, I expected PUSH to be better at this than OpenBarbell or Beast. I expected this even more so with reverse grip (RG) bench press, since the movement arcs down farther towards the upper stomach or bottom of the sternum. At the low end of velocity, it tends to do that for RG bench, but it appears scattered in a sinusoidal pattern throughout. Beast performs at a generally predictable bias, but is also scattered at higher velocities. OpenBarbell continues to perform well, especially at lower velocities where powerlifters will do much of their important work. Its data generally speaks for itself.
For point of reference, I usually grind out my slowest rep (1RM, no coincidence) at 0.10 m/s. In other studies, 1RM for bench is typically around 0.15 m/s (link to a review article). Most accelerometer systems tend to perform less reliably (anecdotal evidence mostly, but also some implication by manufacturers) at velocities slower than 0.15-0.40 m/s, or in my case ~80% 1RM and above for a bench press. Again, with squats this isn’t nearly as clutch, because 1RMs are usually around 0.30 m/s, so we’re not as close to the edge of the variability cliff. Many individual factors will influence your velocity, such as shortening the range of motion because you take a different stance or grip, or because you’re American by birth but of Asian height by the grace of God (I can say that, I’m Filipino). For reference to my 1RM velocities (MVT – minimum velocity threshold), I squat at hip width, bench wide, and pull sumo at 5’5”. My individual experience, thus, is very challenging for systems that can’t perform well under 0.40 m/s.
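Since I keep leaning on the load-velocity relationship and MVT, here's a rough sketch of the standard extrapolation. All warm-up numbers are hypothetical, and it assumes the load-velocity relationship is linear, which is itself only an approximation:

```python
import numpy as np

def estimate_1rm(loads_kg, velocities, mvt):
    """Fit a linear load-velocity profile and extrapolate the load at the
    minimum velocity threshold (MVT) — the velocity of a grinding 1RM."""
    slope, intercept = np.polyfit(velocities, loads_kg, 1)
    return slope * mvt + intercept

# Hypothetical bench warm-up data: heavier loads move slower
loads = [60, 80, 100, 110]          # kg
vels  = [0.85, 0.60, 0.35, 0.22]    # m/s, mean concentric velocity
print(round(estimate_1rm(loads, vels, mvt=0.10), 1))
```

Notice that the extrapolation lives entirely in the 0.10-0.35 m/s region, i.e., exactly where accelerometer systems are least reliable, which is why sensor error down there matters so much for powerlifters.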
No one wants to tell their best friend they have an ugly baby, but they should hear it from someone that loves them first: PUSH had 4 dropped reps, Beast had 3 dropped reps. As before, OpenBarbell does not drop reps in my experience of over 500 to date. As before, excluding this data means Beast and PUSH appear more accurate and reliable than they truly are.
Coefficients of determination: Beast=0.81, PUSH=0.92, and OpenBarbell=0.98.
THE GRITTY MESS THAT IS DEADLIFTS
Deadlifts are usually the least forgiving in terms of accelerometer systems dropping reps. I would love to prove that, but I only deadlift twice a week, and I only have 48 reps to populate data for PUSH and 35 for Beast. PUSH is notoriously bad at this, and it dropped 7 reps (14.5%). In my n=1 experience, Beast also drops lifts, but that is not substantiated in the current data. I can also tell you that because Beast over-records all movement, you will get significantly more ghost reps in a deadlift if you aren’t setting the bar down quietly enough for Planet Fitness to approve. This also introduces some bias into the measurement, because even though I’m fairly sure most of the measurements included were actually detected and weren’t ghost reps, I’m not absolutely sure. Ghost reps are a huge deal if they register at ~0.30 m/s (which they mostly do) and your deadlifts at 75-80% 1RM only move that fast or slower.
Nonetheless, you’ll see slightly more variation in Beast and PUSH than you will OpenBarbell. Most importantly, you’ll also see this most pronounced at lower velocities. The range of difference nearly doubles. I believe it’s more helpful to see all the individual comparisons separately on the same scale, which I have compiled here.
Given that some accelerometers are measuring angular acceleration, calculating angular velocity, and churning that out into vertical velocity, you can see why something like a deadlift could be challenging. If the unit detects best by a change in its own angle, and the arm is fixed in a downward position throughout the movement, then it doesn’t have much to detect.
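A crude physical sketch of the problem, my own simplification treating the forearm as a rigid segment rotating about a fixed joint: the vertical velocity at the end of the segment scales with the sine of its angle from vertical, so an arm hanging straight down contributes almost nothing detectable no matter how fast the bar moves.

```python
import math

def vertical_velocity(omega, segment_len, angle_from_vertical):
    """Vertical speed of the end of a rigid segment rotating about a fixed
    joint: v = L * omega * sin(theta), with theta measured from vertical.

    Toy model only — real devices fuse accelerometer and gyroscope data in
    ways the manufacturers don't publish.
    """
    return segment_len * omega * math.sin(angle_from_vertical)

omega = 2.0   # rad/s, same rotation speed in both cases
L = 0.30      # segment (forearm) length in metres
print(vertical_velocity(omega, L, math.radians(5)))   # arm near vertical: deadlift-like
print(vertical_velocity(omega, L, math.radians(60)))  # arm swung out: curl-like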
This data is very biased by design, but not purposely so. My deadlifts are slow and sumo is usually slower than conventional. As before, less ROM=less velocity. Additionally, I believe the reason PUSH drops reps so often for deadlifts could be because the sumo deadlift algorithm is fairly new – at least I think they use a different proprietary algorithm for sumo than they do conventional. Since I updated the chart, I think that is the case as 13 of 14 reps of conventional deadlifts recorded. The sumo deadlift algorithm is a year old whereas conventional has been a part of the system since release. Upon early release, the “fix” for rep detection on the deadlift was to complete the pull, drop the weight from hip height, and complete additional reps in the same manner. I think they have since changed this, but there is a possibility that this could also “fix” the sumo deadlift problem. If you’re addicted to data, I guess you should try that out. If you’re just trying to git sum dedlifts, then it might behoove you to find another way to auto-regulate deadlifts.
In general, deadlifts are perplexing for VBT, and I have no definite reason why. Everyone I’ve talked to from Mladen Jovanovic, to Brandon Senn, and other normal VBT device users of Reddit tend to agree with this.
If you’re really interested in VBT for powerlifting, Mladen Jovanovic and Branden Senn are the best place to start. Bryan Mann, Dan Baker, and others that focus on VBT don’t tend to touch on topics of absolute overlap with powerlifting, but Mladen and Brandon do cover topics of direct translation to the sport. No offense to Mann or Baker, their insight and lectures have been a great help.
For many exercises, the first rep of the set is the fastest, and following reps have successive speed decrements if the reps are performed with consistent form and maximum volitional velocity. The widow maker doesn’t do this. Trying to game the movement through starting the movement on the rack at knee height, touch and go reps, pull and reset to the floor reps, performing cluster sets, or pausing at different positions during the lift don’t seem to smooth out the movement’s behavior. This is the ugly baby that VBT proponents don’t like talking about, and opens the floor to many ways of trying to understand how to use the feedback VBT devices provide. It is entirely possible that VBT isn’t appropriate for deadlift autoregulation, or we just haven’t figured it out yet. Basing our assessments of performance on subjective measures might be better suited (like RPE) unless we’re talking about final reps left in the tank. This subject deserves an article unto itself.
This phenomenon is reflected in the correlations between video velocity and sensors. Beast has a coefficient of determination of 0.64, PUSH at 0.59, and OpenBarbell at 0.80. Given how well each system generally performed in previous lift classifications, these numbers pale in comparison. Keep in mind, these R2’s are based on the whole range of differences from 40% to 85% 1RM. If you were to increase the sample in velocity ranges typical for powerlifting, there might be completely different conclusions. I decided to include conventional deadlifts after initially publishing this to the interwebs, and I think the differences were only of consequence to PUSH. Because PUSH dropped so many reps, there was less of a sample size to work with. Knowing that conventional was a long standing detected exercise, I decided to include for clarity’s sake, although I realize this comes at a potential loss of comparison to the Beast sensor. The Beast sensor was not included for conventional deadlifts because by the time I decided to do this I had returned the sensor to the owner.
The point of this article was to cover the validity and reliability of different VBT devices. Some of these have had their own validation studies and others have studies in the works, but I hope this rudimentary analysis gives you an idea of realistic expectations. It may sound like I’ve been harsh to accelerometer systems, but I would like to bring to your attention that the orientation of this is for powerlifting. Some of these devices weren’t designed to operate within the parameters of powerlifting optimally but do excel in other areas that I have purposely excluded. This evaluation is not meant to be fair, it is meant to gauge narrow, appropriate implementation of VBT devices in powerlifting.
Of the three options, OpenBarbell does appear to be the best suited for powerlifting movements, however it’s application to deadlifts is questionable. On the surface, it might seem that it may only be appropriate for 66% of powerlifting exercises, but given that many use squats and their variations to build their deadlift I would argue it still has relevance. I would also argue again that this evaluation is a snapshot in time of late January in 2017 and does not speak for how future updates to the hardware or software will augment validity, reliability, or utility of the different VBT devices.
Here’s a table:
If you wish to see more comparisons added to this, I’m willing to evaluate other models as long as it doesn’t involve obligating me to purchase them. In particular, I would like to see Gym Aware added for comparison. I would not like to see the Form Lifting collar added yet as it is a new product and would be unfavorably skewed against an emerging technology which could arguably have valuable technological contribution to VBT. The manufacturer seems to agree with me on the matter in our exchange via email.
It’s worthwhile to point out the limitations of this analysis. Firstly, I’m using questionable video analysis software as the method of comparison across all exercises. On the surface, it does not seem to be a huge concern because the main anomaly in the measurements appears to be deadlifts, but given that other professionals seem to acknowledge the unique nature of deadlifts maybe that is real. However, it does seem odd to me that there are measurements near 1.35 m/s. I have never seen any device measure this high. That velocity is comparable to the peak velocity of weightlifting movements, and I will agree it is suspect. I’ll cover how I have used Tracker in a following article or video. It would seem to me that it is over-estimating velocity at least at the high end. This discrepancy appears to behave uniformly though, so it may be suitable to use it as a comparison. I could also reanalyze the data to use methods of comparison when there is no known validity, such as comparing the difference of one measurement to the average of all measurements of that given repetition. There are also inherent flaws in that method, especially when you factor in dropped reps. This method could unjustly punish devices that do detect repetitions.
It’s also suitable to consider that one subject isn’t appropriate to generate the data points and a sample of multiple individuals might smooth out variances within a group due to differences in movement proficiency, limb lengths relative to height, and a host of other factors. I hope the slack that I leave will be picked up by people that are currently evaluating these technologies with funding in universities, such as Dr. Zordous from Florida Atlantic University.
In terms of how to implement a reliable device into a powerlifting program and how to utilize feedback from VBT devices, I will cover these topics in other articles. As a first article, it doesn’t make sense to imply the efficacy of this technology in the sport without first establishing it’s validity and reliability.