Science for Smart People

I think I may just blow a few minds here when I say this: I highly recommend Tom Naughton's recent lecture from the Low Carb Cruise, entitled "Science for Dummies Smart People".






I don't normally encourage getting one's science from non-scientists, but I can appreciate the value of learning this kind of thing from a layperson - and in humorous fashion, I might add.  For a change, Tom gets this mostly, if not entirely, right.  Where I have things to add, it's mostly that Tom doesn't always seem to take, or even understand, his own advice.



Yes, you guessed it, I'm about to itemize and expand upon some of those.  Most of my qualms are not about errors in logic or presentation, but rather about how many in the low carbosphere don't apply these same principles and critical thinking processes when it comes to their own gurus and/or research supporting "alternative wisdom".

Teenage Barbie:  "Math is tough"

Yep!  I often call this BWBS - aka Baffling With BS.  This was one point I made in Of Thermodynamics, Complexity, Closed Systems & Equilibrium.  Basically CICO is an overarching ... erm, tautology!  I'll leave it to others to find the supplements, macro ratios, timing of eating and/or exercise, etc. etc. that best create a caloric deficit or avoid a chronic surplus, but ... really ... the body-as-black-box model exploited by engineers since time immemorial works just fine for real-life humans in practice.  Figure out exactly what you're taking in (on average, over a suitable period of time) and you can lose weight by: (a) reducing intake, (b) increasing energy expenditure through exercise, or (c) speeding the process by doing both a & b, aka ELMM.  Ignore this Second Law nonsense or any attempts to convince you that it is all too complicated for little old you to get.  You get it!  You know you do!!
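Just to make the black box concrete, here's a back-of-envelope sketch using the common ~3500 kcal-per-pound-of-fat rule of thumb (an approximation, not a physiological constant; the function name and all the numbers are mine, for illustration only):

```python
# Back-of-envelope "body as black box" energy balance, assuming the
# rough rule of thumb of ~3500 kcal per pound of fat.

def weekly_fat_loss_lbs(intake_kcal_per_day, expenditure_kcal_per_day):
    """Estimate weekly fat loss from an average daily deficit."""
    daily_deficit = expenditure_kcal_per_day - intake_kcal_per_day
    return daily_deficit * 7 / 3500.0

# (a) Eat Less: cut intake for a 500 kcal/day deficit
print(weekly_fat_loss_lbs(1800, 2300))  # → 1.0
# (c) ELMM: same cut PLUS 250 kcal/day of exercise
print(weekly_fat_loss_lbs(1800, 2550))  # → 1.5
```

The point isn't precision - it's that averaged over a suitable period, the black-box arithmetic is all you need to act on.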

"Will we ever have enough clothes?"  - Well duh!  NO!!!

The Belief Engine:  We tend to latch onto claims we already believe, or at least ones that don't contradict what we already believe.  Two words:  Echo chamber.

Paleolithic Humans:  Umm, a nitpick here, but I think costumed rain-dancing tribal rituals probably postdate the paleolithic era.



Well, I'm not sure this is always "science" per se, I mean we could look at these in a context of consumer behavior (back to teenage Barbie and her clothes) for instance.  But I get his point and it is well presented at that, IMO.


This is excellent!  And, I might add, does NOT support this silly notion that the point of science is to disprove one's hypothesis.

Experiments are studies, not all studies are experiments:  In this vein Naughton, rightly IMO, takes aim at all the correlative studies that then go on to imply (overtly or by appealing to our innate human "belief engine") causality.  Kudos!  I would make the following observations, however:
  • Just because it's impossible to investigate correlations in living humans with experiments (timing and/or ethics being key issues there) doesn't mean those who study such are necessarily sci-morally bereft.
  • Read the ACTUAL scientific paper, not the press release from the research institution or, worse yet, just the headline and blather of a mainstream media account.  I can't count how often I've read a low carb blogger "debunk" some study only to find them railing against a journalist's account of the study while the researchers said NO such thing!
  • Apply this same standard to pro-low carb studies.  Sadly critical thinking is all but suspended in the face of media accounts heralding the rebel cause!
  • Just because observational studies aren't experiments doesn't mean they are totally meaningless!
Confounding Variables:  Tom gets this mostly right, but there are some nuances:
  • If C → A and C → B, we'll usually observe a correlation between A & B.  While we can describe C as a confounding variable, this is more generally referred to as co-correlation, with C being the "co-correlating" factor.  The thing is, in many cases A can be a more immediate manifestation of C than B is.  Therefore observing A can be predictive of B occurring later.  A is indicative of the presence of C, which may or may not be the known underlying cause, and may or may not be measurable.  This is the whole basis of biomarkers and lipid profiles.
  • Confounding variables can be factors *other than those being studied* that are actually responsible for the observed correlation.  So A correlates with B, but some factor C may cause B totally independent of A.  The observed correlation is a coincidence.  
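The C → A, C → B pattern is easy to demonstrate with a toy simulation.  In the sketch below (an entirely made-up model, no real data), A never causes B, yet the two correlate strongly because both are downstream of C:

```python
import random

random.seed(42)

# Made-up model: C causes both A and B; A never causes B.
n = 10_000
C = [random.gauss(0, 1) for _ in range(n)]
A = [c + random.gauss(0, 0.5) for c in C]  # A: earlier manifestation of C
B = [c + random.gauss(0, 0.5) for c in C]  # B: later manifestation of C

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(corr(A, B))  # strong (~0.8) despite zero causation between A and B
```

And this is also why A still works as a biomarker for B here: A really does predict B, just not because it causes it.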
So ... re:

Observational Studies:
  • Assigning risk based on correlations is dubious at times.  Surely the numbers can be played with.  But in Tom's example of high/low fat and high/low sugar he fails to mention the common practice (in well conducted studies) of *controlling for potential confounding variables*.   
  • While it still does not rise to a level of causality - and never will - if A correlates with B even when "controlling for C, D, elemenopee" this is reason for more concern IMO.   Using Tom's example, the investigators would compare the high and low fat (let's call that A) groups for just the high sugar consumers and separately compare the high and low fat groups for just the low sugar consumers.  If independent of sugar consumption CVD was still correlated with fat consumption, it's reason for at least concern.  Likewise, the investigators could look at high and low sugar consumption in just the low fatties or high fatties and see if there's a link to CVD.  
  • So, many, MANY, observational studies do just that ... control for potential confounders and STILL establish a correlation.  No, causation this still doth not prove, but "biomarker" ... that, a compelling case can be made for.
  • Tom's example of your group designation being at the whim of the investigator sounds good, but it's not really the issue in WELL designed observational studies that DO statistical analyses controlling for various variables.  So yeah - if a study proclaims "XYZ increases risk of ABC", I'm not gonna go out and load a bullet into my gun to hasten my demise (though a few recent readers might well be happy if I did!), but if the study says they controlled for elemenopee and they still showed a tight correlation, I'm going to at least pay attention to the potential ramifications.
  • It's not a huge quibble, but Tom does imply that in observational studies investigators cannot control their variables.  Not true!  
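That stratified comparison - high vs. low fat within each sugar stratum - can be sketched with toy numbers (all counts invented; real studies more often use regression adjustment, but the stratification logic is the same):

```python
# Each record: (fat_level, sugar_level, has_cvd). Invented counts in
# which sugar, not fat, drives the CVD rate.
subjects = (
    [("high", "high", True)] * 30 + [("high", "high", False)] * 70 +
    [("low",  "high", True)] * 28 + [("low",  "high", False)] * 72 +
    [("high", "low",  True)] * 12 + [("high", "low",  False)] * 88 +
    [("low",  "low",  True)] * 10 + [("low",  "low",  False)] * 90
)

def cvd_rate(fat, sugar):
    """CVD rate within one (fat, sugar) stratum."""
    group = [s for s in subjects if s[0] == fat and s[1] == sugar]
    return sum(1 for s in group if s[2]) / len(group)

# Compare high vs. low fat WITHIN each sugar stratum:
for sugar in ("high", "low"):
    print(sugar, cvd_rate("high", sugar), cvd_rate("low", sugar))
```

In this made-up data the fat effect within each stratum is a couple of points (0.30 vs. 0.28, and 0.12 vs. 0.10) while the sugar effect is large - sugar is the confounder, and stratifying reveals it.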
Experiments & the Scientific Method - KUDOS!!!

I will repeat my minor quibble from the last bullet point: the implication that you can't control for variables in an observational study, but that in clinical studies - e.g. "real scientific experiments" - you do.  Yes, you CAN, and well designed experiments do just that.  But many experiments, unfortunately, FAIL to properly control for confounding variables.  An experiment is not inherently more valuable in providing meaningful information than an observational study.

Tom brings up a great point - that the two groups being studied should start out "statistically the same" - which, unfortunately, is not always the case.  I can think of several studies I've read in the past week alone where a factor being analyzed differed significantly* in the baseline characteristics.  (*Often it would seem to rise to the level of statistical significance.)  He also fails to note that there are myriad experiments that are badly designed and/or executed.  But at least we're moving in the right direction:
  1. Observe
  2. Construct a hypothesis
  3. Test ...
But here's where Tom goes a bit over the top with his "Scientists are freakin' liars" schtick.  Granted, he states it's not all scientists, not even most, but then he goes on to discuss "Lies, Damned Lies and Medical Science", in which an MD is cited as claiming that up to 80% of science is wrong, depending on the source.  And so now I will implore all of you to do some critical thinking of your own:  If all of this science is wrong, doesn't that mean the science in support of carbohydrate restriction is wrong too?

But let's go to the source of this figure and take a look at Lies, Damned Lies and Medical Science
In the paper, Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right.
Now, I'm not defending bad science here, and I'm sure there are investigators (and science journalists) who engage in such behaviors, but please do keep in mind that Ioannidis created a mathematical model to predict the outcome of scientific experiments based on predictions of human behavior.  To wit:
His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.
So this is a prediction based on a mathematical model.  And this agrees with what was found to be wrong?  Based on looking at 49 articles?  Perhaps there's more?  I don't know, but using a mathematical model to "predict" what's wrong rather than doing a serious survey of thousands of articles to determine what is *actually* wrong - while cloistered up on a Greek island for a lengthy period of time - doesn't fly with me.
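For the record, the "most findings are wrong" prediction falls out of a fairly simple positive-predictive-value calculation.  Here is a minimal sketch of the core formula from Ioannidis' paper, PPV = (1 − β)R / (R + α − βR), with the bias and multiple-teams terms omitted; the parameter values below are my own illustrative choices, not numbers from the paper:

```python
# Core formula (bias term omitted):
#   PPV = (1 - beta) * R / (R + alpha - beta * R)
# R: pre-study odds the relationship is true; alpha: false-positive
# rate; (1 - beta): statistical power.

def ppv(R, alpha=0.05, power=0.8):
    """Probability that a 'positive' finding is actually true."""
    beta = 1 - power
    return power * R / (R + alpha - beta * R)

# Illustrative values (my choices):
print(round(ppv(0.1), 2))              # well-powered, plausible → 0.62
print(round(ppv(0.05, power=0.2), 2))  # underpowered long shot → 0.17
```

The prediction of mostly-wrong findings depends entirely on what you plug in for prior odds, power and bias - which is precisely why it's a model output, not an observed error rate.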

Still, this is the number that Tom presents to the audience:  80%  And he equates non-randomized studies with observational studies, which I'll get to in a bit.  Here is the quote from Naughton's slide:

"80 percent of the conclusions drawn from non-randomized (observational) studies turns out to be wrong."
Umm, no.  Ioannidis' mathematical model predicts (guesses?) that 80% of non-randomized studies will lead to wrong conclusions.  He did not demonstrate this to actually be the case analyzing a large representative body of scientific works.  Ioannidis does give higher marks to randomized trials - only 25% for smaller, 10% for larger studies.  Naughton again misstates this as "turning out to be wrong", when Ioannidis' model merely *predicts* this to be the case. 


Non-randomized does not equal Observational 

In Tom's defense, the use of "trials" for the latter two descriptions but "studies" for non-randomized does give the impression that Ioannidis is talking about observational studies in the former.  But a non-randomized trial can be one where the participants get to pick which study group they want to participate in.  Indeed Dr. Michael Dansinger discussed this issue with Jimmy Moore (~20:20 mark).  Dansinger is the author of the first of the "diet wars" studies, Comparison of the Atkins, Ornish, Weight Watchers, and Zone Diets for Weight Loss and Heart Disease Risk Reduction - A Randomized Trial.  He discusses the problem with doing a randomized trial because of participants not wanting to follow the program to which they are assigned.  Indeed even when one does a proper randomized trial, the dropouts can dramatically skew the results - through no fault of the scientists.  One study I looked at recently (can't recall which one) had a high attrition rate after randomization and before the study commenced b/c subjects didn't like their assigned group.  This means the remaining subjects in that group were likely more motivated to comply with their assigned plan.  

Lack of Control?  After that whole discussion of confounding variables and all that, citing Ioannidis doesn't address this issue one iota.  And randomizing does nothing to control for possible confounders.  There are hundreds of abhorrent studies where the subjects were randomized but the methods for or lack of even an attempt at controlling for confounding variables just don't cut it.

OK ... so let's move on with the lecture ...

Causality and Direction of Causality

Naughton's discussion here is good, but I find the example for direction of causality rather silly.  At least he didn't use the rogue fat accumulating causing a positive energy balance nonsense.  I'll just leave it at that.  

Naughton also doesn't distinguish between prospective observational studies and cross-sectional ones.  With the former - where characteristics are observed at the start and outcomes assessed at some future date - the correlations still don't assign causation, but they do become "predictive markers" when assessing risks.  Risk is an exercise in probabilities.  If X-consumers develop colon cancer at double the rate of non-X eaters, that might be information that's useful to consider when deciding whether or not to eat X.
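For the "double the rate" example, relative risk is just a ratio of incidences.  A minimal sketch, with all the counts invented for illustration:

```python
# Toy relative-risk computation; every count here is made up.

def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Risk ratio: incidence among the exposed vs. the unexposed."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# 20 colon cancer cases per 1000 X-eaters vs. 10 per 1000 non-eaters:
print(relative_risk(20, 1000, 10, 1000))  # → 2.0
```

Note that behind this hypothetical "double the risk" the absolute risks are still only 2% vs. 1% - worth keeping in mind when weighing whether to eat X.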

That said, I share probably everyone's frustration over the conflicting and ever-exaggerated claims made from observational studies ... poorly conducted ones at that.  I ignore most of those headlines.

Is A linked to B Consistently?       Bravo!  
Apply this to all claims.  Such as A = carbohydrates, B = excess fat accumulation.
Does consuming carbohydrates always lead to fat mass gain in study after study after study?  Ummm .... the floor just fell out ....
Animal Studies   I agree.  View with caution.  Please, apply this standard to ALL research, however.

Absolute v. Relative Change      Bravo again!  Let's apply this standard when evaluating ALL research!
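To put numbers on the absolute-vs-relative distinction, here's a minimal sketch (all figures invented) of how the same result looks framed three ways - relative reduction, absolute reduction, and number needed to treat:

```python
# Hypothetical trial: treatment cuts event risk from 2% to 1%.
control_risk, treated_risk = 0.02, 0.01

relative_reduction = (control_risk - treated_risk) / control_risk
absolute_reduction = control_risk - treated_risk
nnt = 1 / absolute_reduction  # number needed to treat for one fewer event

print(f"relative: {relative_reduction:.0%}")  # the headline number
print(f"absolute: {absolute_reduction:.1%}")
print(f"NNT: {nnt:.0f}")
```

"Cuts risk in half!" and "one fewer event per hundred people treated" describe the exact same made-up data - which framing you're shown is a choice.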

Significance of Results

Here Tom wanders off the scientific reservation a bit.  When scientists use this term they are applying a statistical measure and he gets this description correct.   So if his point is to remind you of this, OK.  But then he goes on to discuss a couple of examples where the outcomes were not very "significant" to him as an individual.  Now, I agree wholeheartedly that when you read a headline, or receive advice from your doctor to take a medication or eat certain foods or not eat certain foods, it is very important for you to think critically and evaluate if the "significant effect" seen statistically in a study is "significant" enough to you - especially when balanced by potential risks/side effects.

I mean let's stick with weight loss and the ... erm ... unpleasant gastrointestinal side effects that warrant the advice to wear dark clothing and carry a change of clothes with you.  If you're going to lose 5 more pounds in six months vs. not using the drug, maybe the wardrobe malfunction risk alone is enough to just be patient and lose that weight in seven or eight months.   So these are things to consider.  BUT, they have no bearing on judging the soundness of the science of the study.

I would rather Naughton had mentioned assigning significance to studies where the differences were NOT statistically significant.  We see this all the time in the media - and this is almost always why I read the last paragraph of articles before I read anything else.  This is usually where you find the weaselly wording "not statistically significant", although sometimes they're bold enough to just come out and say it.  It's also where you'll find those limitations that may well have rendered the study useless, complete with "future study is needed".  Now ... since science is rarely settled, future study is almost always needed, so this isn't a bad thing.  But future study to correct inadequacies of the current study is a concern.

Lastly I would add that many studies measure a laundry list of things and only one or two result in differences that are statistically significant.  The others may "trend".  It is human nature to view this as more "significant" than it is.
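A toy illustration of the flip side - how a trivial effect becomes "statistically significant" with a big enough sample.  All numbers are invented, and this is a simple two-sample z-test on simulated data:

```python
import math
import random

random.seed(0)

# Two "diets" whose true mean difference is a trivial 0.2 lb, measured
# in a huge sample. All numbers are invented for illustration.
n = 50_000
a = [random.gauss(10.0, 5.0) for _ in range(n)]  # lbs lost on diet A
b = [random.gauss(10.2, 5.0) for _ in range(n)]  # lbs lost on diet B

def mean(x):
    return sum(x) / len(x)

def var(x):
    m = mean(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

diff = mean(b) - mean(a)
z = diff / math.sqrt(var(a) / n + var(b) / n)

print(f"difference = {diff:.2f} lb, z = {z:.1f}")
# z far exceeds 1.96, so p < 0.05 - "significant" - for a difference
# most dieters would never notice.
```

With, say, 100 subjects per group instead, the same 0.2 lb difference would be nowhere near significant - the p-value tracks sample size as much as it tracks anything you'd actually care about.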

Controlling Variables:  Tom does come around to discuss controlling variables a bit.  I would have liked to see him expand more on this.

Compared to What?:  I think this is an excellent question to ask yourself when evaluating the *claims* made by the media or advocacy groups based on some study.  The example he uses is "more whole grains" and it is a valid one.  I think the ads for whole grains are atrocious.  And suggesting that we somehow aren't getting enough grain in our SADiets is absurd.  

Tom hits on another point when making this argument that is valid.  If increasing A decreases B, does decreasing A increase B?  Or vice versa?  Or if increasing A increases B then does decreasing A decrease B ?  Or vice versa? 

Let's apply this to Gary Taubes (you didn't really think I would get all the way through a post without bringing him up, did ya?).  In WWGF Taubes unambiguously fingers carbohydrate ingestion as the sole cause of fat accumulation - alpha and omega.  So if A = carbs and B = fat accumulation, we notice that decreasing A (Atkins diet, for example) tends to decrease B.  From this we conclude that increasing A increases B.  Oh, and Mommy knew bread and potatoes were fattening.  Very scientific.

Do the results support the conclusions:  I'm all for this.  He's right here.  Very often the results do not support the conclusions.  He gives excellent examples.   This is why I read studies in the following order:  Skim abstract to see if there's something of interest.  Download full text pdf or print HTML to pdf.  Come back to the paper a few days later.  Skip to Methods and Results.  Think critically for myself, then go back and read the Intro (if there was one) followed by the Discussion & Conclusions.


And now I ask you to listen again from about the 39 minute mark:  Scientists are freakin liars and we need to call them on it.

Couldn't have said it better myself, Mr. Naughton.  Where criticism and exposure are warranted, I encourage everyone to speak up.  Just make sure it's the scientists you're really calling out - often it is not the primary researcher; it's the institution or media that makes the absurd claims.

Science journalists and low carb gurus can be freakin liars too, and I'm just doing my civic duty calling them on it.  It's not a sign of mental instability and speaking up about it is not internet stalking.

Comments

Colby said…
You may be interested in Goodman & Greenland's response to the Ioannidis paper- PDF: http://www.bepress.com/cgi/viewcontent.cgi?article=1135&context=jhubiostat
CarbSane said…
Thanks Colby! I've put it in my library :)
CarbSane said…
Welcome StoragePro! I think you misunderstand me a little here.

It does not take a scientist to exercise your brain, think critically and measure what is being said.

I surely do agree!

The troubling aspect of this thinking - to me at least - is to indicate that 'lay people' (another religious term) do not have the capacity to think critically and find what is truth.

I don't think I said they couldn't.

Surely there are lots of scientists who argue pseudoscience as well. What I'm addressing with that comment is folks looking to people like Tom and, say, Mark Sisson for their information. Tom makes some arguments in his Big Fat Fiasco presentation, for example, that are at odds with what the scientific research shows. He addresses science quite a bit on his blog that seems to have a lively readership. I find that unfortunate because he doesn't seem to have a good grasp on the science (or he is hopelessly mired in dogma).
Sanjeev said…
Storagepro,
> We get to hear instead of how unworthy we are if
> we do not 'do science' daily

I have not personally noted that to a great extent.

We finally, FINALLY can these days listen to real scientists instead of reporters and science popularizers.

In the last couple of years I've listened to some scientists doing podcasts and blogs, and listened to a lot of scientists being interviewed on podcasts, and the humility and self-questioning has clearly stood out to me, especially if the interview veers outside the scientist's area of expertise.

The rare exceptions have mostly been in politicized areas.

Reflecting on what you wrote, I realized I've gotten the "attitude" you write of far more from MDs and narcissistic personality disordered technician-Radiologists ; ) who have delusions that they're deep thinkers because they've read stuff on Zen (and their clear display of right action showed they "got it") & they've read Wittgenstein (and their endless stream of straw men proved their philosophical sophistication), and I've unfortunately also seen much of it from those with my own training, engineering.
Unknown said…
As a scientist, I am incredibly grateful to watchdogs, inside or outside academia, who debunk unfounded claims. When successful, it saves an enormous amount of limited time and resources. I only wish there was this much public interest in solid-state physics. We could use more of this kind of outside scrutiny.
Sanjeev said…
> I only wish there was this much public
> interest in solid-state physics.

Solid state physics. You young-uns with your transistors & photolithography.

A faddish flash in the pan. I'll still have my tubes after solid state is a distant memory ; )