In this episode I discuss five tools that can help you tell good studies from bad studies. This was a fun one to record, and I hope you enjoy.
Script:
For today’s episode we are going to talk about identifying good and bad studies. Now, in episode 12 we discussed p-hacking, which is definitely bad science, and in episode 18 we discussed fraud. Since those have already been covered, I am going to avoid them today. This episode is all about some signs you can use to help identify studies that are likely to be problematic.
The first thing I always look at is study size. Study size is one of those cases where bigger is almost always better. The larger the study, the more confident we can be that the observed effect is due to the studied feature rather than random chance.
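To see why small studies are riskier, here is a minimal simulation sketch (the sample sizes and threshold are made up for illustration, not from any real study). It draws a "treatment" and "control" group from the same distribution, so the true effect is zero, and counts how often the groups still differ noticeably by chance alone:

```python
# Illustration: with zero true effect, small studies produce large
# spurious "effects" far more often than large ones. Numbers here are
# arbitrary choices for the demo.
import random

random.seed(1)

def spurious_effect(n, trials=2000, threshold=0.5):
    """Fraction of null-effect studies whose observed group difference
    exceeds `threshold` standard deviations, purely by chance."""
    count = 0
    for _ in range(trials):
        treatment = [random.gauss(0, 1) for _ in range(n)]
        control = [random.gauss(0, 1) for _ in range(n)]
        diff = sum(treatment) / n - sum(control) / n
        if abs(diff) > threshold:
            count += 1
    return count / trials

small = spurious_effect(n=10)   # 10 subjects per arm
large = spurious_effect(n=200)  # 200 subjects per arm
print(small, large)  # the small-study rate is far higher
```

Running this, the 10-subject studies cross the threshold a substantial fraction of the time, while the 200-subject studies almost never do, which is the intuition behind preferring bigger studies.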
The second thing I look at is study design. I am going to keep this as simple as possible for the purpose of this podcast. There is one important term we want to look for, and that word is “blind”. The best studies are almost always blinded. There are two types of blinding: single and double. In a single-blinded study the subjects do not know what group they are in, but the experimenters do. In a double-blinded study neither the subjects nor the experimenters know what group each subject is in. Double-blinded studies are almost always going to be the best for our purpose.
The third thing I look at is the control group. We need a control group that makes sense. For example, there are a bunch of studies on intermittent fasting that compare it to ad libitum eating, meaning eating whatever you want. This doesn’t really help us, because what we really need is intermittent fasting compared to calorie restriction. Figuring out what a treatment is being compared to is important for figuring out what the effect really is.
The fourth thing, though you do need to be careful with this one, is what journal the study is published in. There are some journals with really lax peer review, or that are even almost pay-to-publish. It can be difficult to determine which journals are reputable if it is not a field you are familiar with, so I recommend googling the journal name and double-checking before you rely on it.
Finally, I check the effect size. If they are reporting incredible results from relatively small interventions, I get very nervous and wait to see them replicated. If the effect doesn’t seem believable, I don’t believe it.
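One common way to put a number on “effect size” is Cohen’s d, the difference between group means divided by the pooled standard deviation. The sketch below uses made-up weight-change data for two hypothetical diet groups, just to show the calculation:

```python
# Hedged sketch: Cohen's d as a standardized effect size. The data
# below are invented for illustration, not from any real trial.
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference between two groups."""
    na, nb = len(group_a), len(group_b)
    pooled_var = (
        (na - 1) * statistics.variance(group_a)
        + (nb - 1) * statistics.variance(group_b)
    ) / (na + nb - 2)
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_var ** 0.5

# Hypothetical weight change (kg) in two diet groups:
diet = [-3.1, -2.4, -4.0, -2.9, -3.5, -2.2]
control = [-0.8, -1.2, -0.3, -1.0, -0.6, -1.1]
print(round(cohens_d(diet, control), 2))
```

A rough rule of thumb is that |d| ≈ 0.2 is small, 0.5 medium, and 0.8 large, so a modest intervention claiming a huge d is exactly the kind of result worth waiting to see replicated.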
So remember, number of subjects, study design, good control, what journal, and size of effect. Using those five tools will help you separate the good science from the bad.
##############################################################################
Special feature for this episode, I want you to go out and find a journal article (use scholar.google.com if you don’t know where to start) and then send me an email at scinutrient@gmail.com or DM me on Facebook at Scientific Nutrition with an explanation of why it is either good or bad. If you do I’ll send you a code to get a free copy of my book.