
Is Beet Juice Really a Performance-Enhancing "Drug"? Digging In…

(Photo: Foodthinkers)

The following is a guest post by Mark McClusky, the editor of Wired.com and founding editor of Wired Playbook. Previously, he was a reporter at Sports Illustrated and a member of the baseball analytics collective, Baseball Prospectus.

Can “juicing” for performance enhancement sometimes involve juice alone?  Beet juice, spinach, celery, or chard, perhaps?  In this post, we look at fact versus fiction, dosing, and results you can potentially replicate.


I’ve added some thoughts of my own in brackets. In other random news, I’m finally on Instagram! Here I am, and here is a pic of Tony Robbins palming my entire face.

Now, back to our piece…

Enter Mark

The latest craze in sports drinks for Olympic athletes isn’t something citrusy from one of the big sports labs. It’s not chocolate milk, which has been shown in study after study to be a great, low-cost drink for muscular recovery…

No, today’s hottest sports drink is deep red and frothy, and tastes a little bit like dirt. Drink the recommended dosage, and you may find that your urine and feces become pink from taking it. But you also might find that you’re faster in your races. Ladies and gentlemen, I give you beet juice.

The key researcher into beet juice’s effect on athletes is Andy Jones, who became well known in sports science circles through his work with marathon world record holder Paula Radcliffe. How much has Jones, a professor at the University of Exeter, in the UK, become associated with the beverage? His Twitter handle is @andybeetroot.

So why beet juice? (Or beetroot juice, as it’s sometimes known, especially in the UK. They’re the same thing.) The key is the very high level of nitrate found in the juice. The body transforms nitrate into nitrite, and then into nitric oxide. According to Jones, nitric oxide has two major effects on an athlete. “The first is that it causes blood vessels to dilate, so you can provide more blood through them,” he says. “Simultaneously, it seems to make the mitochondria more efficient, so they are able to create the same energy while consuming less oxygen. So you really have two things happening. Lower oxygen cost because the mitochondria are more efficient, and then you have a higher oxygen supply—in terms of performance, that’s a pretty good combination.”

That combination does, in fact, seem to offer a strong performance boost for a particular kind of event. Jones’s group has published a study that seems to show a nearly 3 percent gain for athletes involved in efforts that last between five and thirty minutes.

We’re commonly told that nitrates and nitrites are potentially dangerous, and that we should limit our consumption of them. The fear is that inside the body, nitrates and nitrites can combine with meat proteins to form compounds known as nitrosamines. There is some evidence that these compounds are carcinogenic, which is the reason that most health organizations advise that we limit our intake of cured meats like bacon and hot dogs, which use sodium nitrite in the curing process.

But Jones and his team have shown that we’re still very early in our understanding of what nitrates and nitrites do in our bodies, especially when it comes to athletic performance. As opposed to cured meat, beet juice contains nitrate, not nitrite, and there’s no protein that could lead to the formation of nitrosamines. (Other vegetables also have high levels of nitrate, including spinach, celery, and chard. They presumably could have similar effects, but such studies haven’t been conducted yet.)

The protocol that’s been studied the most by sports scientists involves about 300 mg of nitrate delivered as beetroot juice between 2 and 2.5 hours before exercise. Jones and his group did a study that looked at the dose-response curve for beetroot juice—more juice did have a greater effect on nitrite levels in the blood, although there seemed to be a diminishing return when it came to the actual performance boost. The optimal level in their study was two concentrated shots equivalent to about 600 mL of juice.
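If you want to turn that protocol into numbers for your own juice, the arithmetic is simple. Here's a minimal sketch in Python; the nitrate concentration is a made-up placeholder, not a figure from the studies, so swap in whatever your product's label or an analysis actually reports.

```python
# Rough dosing arithmetic for the protocol described above.
# The concentration below is a hypothetical placeholder; real juices and
# concentrates vary widely, so use the figure from your own product.
TARGET_NITRATE_MG = 300            # roughly the dose most often studied
nitrate_mg_per_ml = 0.5            # assumed nitrate concentration of your juice

ml_needed = TARGET_NITRATE_MG / nitrate_mg_per_ml
print(f"You'd need about {ml_needed:.0f} mL of this juice to reach "
      f"{TARGET_NITRATE_MG} mg of nitrate, taken 2 to 2.5 hours before exercise.")
```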

It’s probably worth finding the minimum effective dose of beetroot juice, given some of the side effects. Many athletes suffer from gastric distress when they drink a lot of the juice. World Champion cyclist Mark Cavendish highlighted another side effect of beetroot juice in a tweet.

In contrast to the results for those 5-to-30-minute events, the results for longer events are more ambiguous. The Exeter researchers found that while there were small performance increases for cyclists in a fifty-mile time trial when using beet juice two hours before the event, they weren’t large enough to be statistically significant. [Note: There was no re-dosing in this time trial]

Of course, that raises an important question: is “statistical significance” [versus clinical or practical, for instance] the right measurement to use when we’re evaluating a study like this, especially for athletic performance?

LET’S TALK ABOUT STATISTICS

In the 1920s, English statistician Ronald Fisher popularized the concept of the p-value. The idea behind the p-value is that it expresses how likely it would be to see a result as large as the one observed in an experiment through random chance alone [that likelihood is the p-value], rather than as the result of an intervention or treatment.

So, when you’re doing a study that seeks to show the effect of beet juice on cycling time trial performance, you start out assuming that it will not have an effect—this is called the “null hypothesis.” [i.e. It will not work] After collecting your data and doing your analysis, you crunch the numbers and come up with a p-value. A smaller p-value means the evidence against the null hypothesis is stronger, and makes it more likely that the intervention (in this case, beet juice consumption) really did have an effect. The p-value tries to express the reliability of the conclusion that researchers draw from their experimental data.

[In other words, a high p-value increases the likelihood of random chance, coincidences, or dumb luck explaining your outcome.  The lower the p-value — ostensibly — the higher the likelihood that your “treatment”/intervention produced measurable differences.]

Fisher argued that a p-value of less than or equal to 0.05 was a good informal line to draw when it came to evaluating research. That means that, if the null hypothesis were true (i.e. if beet juice did nothing), there would be a 5 percent or lower chance of seeing an effect as large as the one in your data by chance alone. Researchers have traditionally treated those 19-out-of-20 odds as good enough evidence that the intervention, not luck, produced the effect. In general, if you’re a scientist and you do a study where p is greater than 0.05, you’re unlikely to ever see it published. If p is 0.05 or below, it’s assumed by many to be significant; most journals follow this cutoff.
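To make the mechanics concrete, here's a toy example in Python with entirely made-up numbers (not data from any of the studies discussed here): eight hypothetical riders, each doing one time trial on a placebo and one on beet juice, analyzed with a paired t-test.

```python
# Toy illustration of the p-value logic above, using invented data.
from scipy import stats

# Hypothetical finishing times in minutes for eight riders, placebo vs. beet juice.
placebo  = [138.2, 141.0, 135.5, 139.8, 132.4, 144.1, 137.0, 140.3]
beetroot = [137.1, 140.2, 135.9, 138.0, 131.5, 143.8, 136.2, 138.9]

# Null hypothesis: beet juice makes no difference to finishing time.
t_stat, p_value = stats.ttest_rel(placebo, beetroot)
print(f"p-value = {p_value:.3f}")
# By the usual convention, p <= 0.05 gets reported as "statistically significant";
# anything above that gets reported as no significant effect.
```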

Will Hopkins, a New Zealand sports scientist and statistics guru who has written a massive primer on the use of statistics in scientific papers, advocates a different way of evaluating sports science research.

Hopkins points out that a reliance on p-values below 0.05 completely falls apart when you’re looking at elite sports, where the margins are so tight. “You would see effects in studies where things would help an athlete, and yet it wasn’t statistically significant, so people would say there was no effect,” says Hopkins. “It’s crazy to say there’s no effect, it’s crazy to make that kind of decision. It’s clearly wrong. What matters is the probability of it helping an athlete. We need to decide what we take as sufficient evidence to use something or not use something with an athlete.”

The issue with the use of p-values is that if you have a small sample size for your experiment, you need a very strong effect to cross the threshold of statistical significance. But when you’re trying to do studies on elite athletes, you have, by definition, a pretty small cohort. And if you are studying something that might only have a small effect, you’re probably not going to get over the p-value threshold with that sample size.
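Here's a quick simulation of that squeeze, using assumed numbers rather than anything from the actual studies: a true average benefit of 1.2 minutes, rider-to-rider variability of 2.5 minutes, and eight riders per experiment. Even though the benefit is real by construction, most of the simulated experiments never clear the p < 0.05 bar.

```python
# Simulating the small-sample problem described above. The effect size and
# variability are illustrative assumptions, not values taken from the studies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_riders, true_benefit_min, sd_of_differences = 8, 1.2, 2.5

n_sims, hits = 10_000, 0
for _ in range(n_sims):
    # Each simulated study measures 8 paired differences (placebo minus beet juice).
    diffs = rng.normal(true_benefit_min, sd_of_differences, n_riders)
    _, p = stats.ttest_1samp(diffs, 0.0)
    hits += p < 0.05

print(f"Share of simulated studies reaching p < 0.05: {hits / n_sims:.0%}")
```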

Hopkins’s solution is to express the results of research in terms of confidence intervals.

Instead of a binary view, where a result is either significant or not, he talks about the likelihood of a benefit or negative effect. So, he’ll express the results of a study in terms of it being very likely beneficial, or almost certainly not harmful. Because when researchers and coaches are working with elite athletes, they have to conform to that central principle of medical ethics: First, do no harm.

“You’re trying to do better,” says Hopkins. “But you have to make sure not to do worse.”
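A simplified sketch of that way of thinking is below, in Python. The paired differences are made up, and the calculation uses an ordinary t-based confidence interval as a stand-in; it is not Hopkins's exact procedure, just the flavor of reporting a range and a likelihood of benefit instead of a yes/no verdict.

```python
# Sketch of reporting a confidence interval and a rough "likelihood of benefit"
# instead of a binary significant/not-significant call. Data are invented.
import numpy as np
from scipy import stats

# Hypothetical per-rider improvements in minutes (placebo time minus beet-juice time).
diffs = np.array([1.4, 0.8, -0.4, 1.8, 0.9, 0.3, 1.5, 1.1])

mean, sem = diffs.mean(), stats.sem(diffs)
ci_low, ci_high = stats.t.interval(0.95, df=len(diffs) - 1, loc=mean, scale=sem)

# Rough probability that the true mean improvement is greater than zero,
# using the t distribution as a crude stand-in for a fuller analysis.
p_benefit = 1 - stats.t.cdf(0.0, df=len(diffs) - 1, loc=mean, scale=sem)

print(f"Mean improvement: {mean:.2f} min, 95% CI [{ci_low:.2f}, {ci_high:.2f}] min")
print(f"Chance the true effect is a benefit: {p_benefit:.0%}")
```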

BACK TO THE BEETS

With all of that in mind, we can look more closely at the study I mentioned above, where Jones’s team looked at the effect of beet juice on a 50-mile cycling time trial. If you read the abstract of the study, you see the following:

In conclusion, acute dietary supplementation with beetroot juice did not significantly improve 50 mile TT [time trial] performance in well-trained cyclists. It is possible that the better training status of the cyclists in this study might reduce the physiological and performance response to NO3- [nitrate] supplementation compared with the moderately trained cyclists tested in earlier studies.

That seems pretty clear—“did not significantly improve.” But then you look more closely at the experiment. There were eight cyclists involved in the study, in a double-blind experiment where each rider did two time trials: one after drinking 500 mL of normal beet juice, and one after drinking the same amount of juice that had been nitrate-depleted. Here’s the key line of the paper with the results of those trials:

Compared to PL [placebo], BR [beetroot juice] supplementation resulted in a group mean reduction in completion time for the 50 mile TT of 0.8 % or 1.2 min (PL: 137.9 ± 6.4 vs. BR: 136.7 ± 5.6 min), but this difference did not attain statistical significance (P > 0.05).

So it wasn’t that the riders didn’t improve. When you look at the mean, they improved by 0.8 percent. But the sample size was small, and the improvement was small, so the p-value was above 0.05. That’s why the authors correctly note that it didn’t attain statistical significance.

[Note from Tim: Here’s what one researcher friend of mine added to this: “This is a classic problem of ‘underpower’ (beta). With only 8 subjects in a cross-over design, you’d need to see a 10% difference or so to achieve p<0.05! Let me put this in perspective, since TT [time-trial] is my sport. These races are won or lost by seconds, not minutes. 1.2 minutes is a long time for a 50-mile TT. The problem with this study was power, plain and simple. I can’t speak for Jones and his team, but what are you thinking when you design a trial with such a small N? A power table tells you up front how big a delta you need to see. I guess they assumed the difference would be 10x what it was, but then again, you’d think you’d know that from ‘pilot data.’”]
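For the curious, here is roughly what that kind of "power table" lookup amounts to, sketched with the statsmodels power calculator for a paired (one-sample) t-test. The standard deviation of the rider-to-rider differences isn't given in the excerpt above, so the conversions into minutes use assumed values purely for illustration.

```python
# Back-of-the-envelope power calculation: what effect could eight riders in a
# crossover design reliably detect at p < 0.05 with 80% power?
from statsmodels.stats.power import TTestPower

detectable_d = TTestPower().solve_power(nobs=8, alpha=0.05, power=0.80,
                                        alternative='two-sided')
print(f"Smallest reliably detectable effect size (Cohen's d): {detectable_d:.2f}")

# Turning d into minutes requires the SD of the paired differences, which we
# don't have, so try a few assumed values just to see the scale of the problem.
for sd_diff_min in (2.0, 5.0, 10.0):
    needed = detectable_d * sd_diff_min
    print(f"If riders' differences varied with SD {sd_diff_min:.0f} min, "
          f"you'd need about {needed:.1f} min of average improvement.")
```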

But Jones knows that part of his audience is scientists working with competitive athletes. Drawing on Hopkins’s concept of how to express the results of an experiment, the paper states:

It is noteworthy, however, that although the group mean improvement in 50 mile TT performance did not attain statistical significance, an improvement in completion time of 0.8 % would likely be practically meaningful during competition.

And digging even deeper into the study, you find something else. Of the eight subjects, five of them had their level of plasma nitrite increase by 30 percent or more after drinking the beet juice, which is what you’d expect when you drink something rich in nitrates.

But the other three riders didn’t get the same increase in nitrite in their blood—in fact, one actually had a decrease. This sort of individual difference in response is actually a common phenomenon across lots of things, from nutrition to how we react to a workout. (In my book, Faster, Higher, Stronger, I argue that those individual differences mean that each of us is doing an experiment with just one subject, a perspective that’s near and dear to many of the readers of this blog and Tim’s books.)

When you look at just the responders to beet juice, the improvements look very different. They had a mean time reduction of 2 percent, which would be more in line with what previous studies have found.
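To illustrate what that kind of responder split looks like, here is a toy version with invented numbers chosen only to mirror the pattern described above; the paper's individual rider data are not reproduced here.

```python
# Toy responder analysis: riders whose plasma nitrite rose by 30% or more are
# treated as "responders." All numbers are invented for illustration.
nitrite_change_pct   = [45, 38, 62, 33, 51, 12, 8, -5]
time_improvement_pct = [2.1, 1.8, 2.4, 1.9, 1.8, 0.3, -0.2, 0.1]

responders = [t for n, t in zip(nitrite_change_pct, time_improvement_pct) if n >= 30]
others     = [t for n, t in zip(nitrite_change_pct, time_improvement_pct) if n < 30]

print(f"Responders improved by {sum(responders) / len(responders):.1f}% on average")
print(f"Non-responders improved by {sum(others) / len(others):.1f}% on average")
```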

One other interesting result of Jones’s research is that beet juice seems to be a more effective ergogenic aid for regular athletes than it is for elite performers. “If you think of the things beet juice helps with, like blood flow and mitochondrial function, in elite athletes, those abilities are pretty well developed,” says Jones. “So there, you do have an issue of diminishing returns—any ergogenic aid might have a smaller benefit in the elite. But even if the benefit is just 0.1 of a percent, it’s probably worth trying.” This is one case in which regular folks like you and me might get more out of beet juice than an Olympian, but as Jones notes, even the small chance of a benefit for that elite athlete makes it worth trying. And they certainly have been trying it.

Jones tells me just about every top nation at the 2012 Olympics was using beet juice with its athletes. “It was actually pretty difficult to buy beet juice within ten miles of London,” he says.

SO WHAT DOES THIS MEAN FOR ME?

As Jones says, the good news is that beet juice is likely more effective for more normally athletic people like you and me. The best recommendations I have for you right now are:

###

My book, Faster, Higher, Stronger: How Sports Science Is Creating a New Generation of Superathletes, is an in-depth look at the science and technology that allows the world’s best athletes to push the boundaries of human performance.

Further reading:

