
Welcome all seeking refuge from low carb dogma!

“To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact”
~ Charles Darwin (it's evolutionary baybeee!)

Tuesday, July 3, 2012

Variability & Error (yeah ... and one more time Ebbeling et al.)

One of the issues raised in recent days is quite important:  How does measurement error factor in?  How does variability in what's being measured factor in?  

Statistical analyses do not address these two things.  I'm not sure which podcast it was -- there have been a few with the dynamic Chris duo (Kresser with Masterjohn) -- but Chris made the comment {paraphrasing big time} that the best way to change your LDL particle size favorably was to have a different test done.  I'd love a link, gang, if this rings a bell for you.  LDL is not even measured directly in some (most?  all?) analyses of blood lipids.  So first, Chris addressed the error in the measurement.  And I think they discussed the normal day-to-day variability in lipoprotein concentrations in that same installment as well.  Now I don't want to put numbers/words in Chris' mouth, but I've heard similar statements from countless others -- a figure of 20-point fluctuations in LDL comes to mind.  


I spent roughly five years of my life where about 50% of my job was to develop assays to measure (mostly) blood levels of drugs.  One of those assays was used for all of the analyses for a drug that was submitted for FDA approval.  Peer review is NOTHING like passing FDA review!!  There you submit reams of data, and a substantial part of the submissions in this regard from my department came from yours truly.  I developed the assay, and then the arduous task of validation was at hand.  My memory is a bit fuzzy on this, but it was at least several weeks' worth of work.  Who knows how much of what you see on CSI is true these days -- I haven't worked in the field for 25 years (gosh I'm old LOL) -- but certainly post-"physical"-analysis data processing has reached much greater heights.  The basics of validation, however, I doubt have changed.  Once I had an extraction/preparation, chromatography and detector scheme that worked, I had to validate, and every run of, say, 100 samples was preceded (and sometimes repeated at the end) by calibration samples.  

My calibration samples were plasma I had "spiked" with known amounts of the drug.  Even this difference -- that I added these molecules to plasma, rather than the molecule being dissolved, absorbed and, who knows, maybe even temporarily sticking to blood components before getting to the target organ (blood components centrifuged out in the samples I was analyzing) -- could mean that when I measured 5 ng/ml levels of X in Patient 8675309, their true circulating levels of "available X" could be 10 or 100.  Remind you of anything?  I'm reminded of the discussion with L. Ron Rosedale about plasma vs. whole blood glucose levels.  My point is, even for this seemingly straightforward measure, I can't even be sure how far the measured value is from the exact value.  This is the accuracy: how close a value is to the true value.  (If you're aiming at a dart board, accuracy is assessed by how close you come to the bullseye.)

Back to the calibration samples.  When I validated my assay, I made little "vats" of these samples of known concentration.  A range of concentrations the assay could be validated for was important.  The detector sensitivity was the issue here.  While a lot of digital instrumentation these days will automatically switch sensitivity levels, I'd have my detector set at one setting, and these ranges generally guarantee the quantity-of-compound vs. response curve is linear across that range.  Now, I was normally measuring levels in the 1 to 10 ng/ml range, so the detector responded exactly 10X as much to the 10 ng/ml sample as to the 1 ng/ml sample.  But down at 1 ng/ml, I was getting nearer to the limit of detection, because the signal-to-noise ratio starts to take over.  There was greater leeway in the variability for calibration samples down at this end.  Conversely, however, if a sample run through my system exceeded the high end, there were restrictions on how we could report it.  So one thing I did was analyze something like 5-to-10 replicate samples from my vats, use linear regression to generate one "master" calibration curve, and calculate the "actual" values of each of my replicates.  The standard deviation about the mean here was a measure of measurement error, and it was usually expressed as a % of value at each concentration.  This measured precision (how close my values were to each other) and accuracy (how close they were to the known value, assuming it was accurate to begin with).  This is my round-about way of getting at the fact that measurement error is not even constant across the range of values usually seen for whatever is being measured, even across ranges commonly assessed by the same exact measurement.  
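The workflow above can be sketched numerically.  This is a hypothetical toy (made-up responses and noise levels, nothing from my actual validation data): fit one "master" calibration line to replicate standards, back-calculate each replicate, and express precision (%CV) and accuracy (%bias) at each level -- note how the %CV balloons at the low end, where signal-to-noise is worst.

```python
import random
import statistics

random.seed(42)

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept (the 'master' calibration curve)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Nominal concentrations (ng/ml) and simulated detector responses,
# 5 replicates per level; constant absolute noise means the *relative*
# error is largest at the low end (worst signal-to-noise).
levels = [1, 2, 5, 10]
responses = {c: [100 * c + random.gauss(0, 20) for _ in range(5)] for c in levels}

xs = [c for c in levels for _ in range(5)]
ys = [r for c in levels for r in responses[c]]
slope, intercept = fit_line(xs, ys)

for c in levels:
    back = [(r - intercept) / slope for r in responses[c]]  # back-calculated conc.
    cv = 100 * statistics.stdev(back) / statistics.mean(back)  # precision, %CV
    bias = 100 * (statistics.mean(back) - c) / c               # accuracy, %bias
    print(f"{c:>4} ng/ml: %CV = {cv:5.1f}, %bias = {bias:+6.1f}")
```

Run it and the 1 ng/ml level shows roughly ten times the %CV of the 10 ng/ml level, which is exactly the "greater leeway at the low end" I'm describing.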

I ran a calibration set with each run -- generated the calibration curve for that run -- and calculated concentrations for all of those samples using that curve.  Now, the correlation coefficient here had to be high (>0.99 if memory serves for 5 samples, certainly >0.98), but each curve was ever so slightly different.  So the very same sample from the very same patient could contain 4.9 ng/ml one day and 5.1 the next (pulling numbers from a hat there) -- and this error would also be variable across the concentrations tested.  And the very same sample I prepared in bulk, analyzed on any particular day, could vary as well.  Bottom line: the concentration of each sample IS something, but what we "measure" can differ considerably.  

Does this mean it's all so hopelessly confounded by error that people are taking their lives in their hands every time they pop a pill?  Of course not.  But there are certain drugs with a narrow therapeutic index -- the therapeutic and toxic levels are close -- for which dose and corresponding blood levels can be quite variable due to bioavailability; theophylline comes to mind.  Where precision and accuracy are paramount, the assays for determining these values are even more rigorously scrutinized and perfected to minimize error.  The FDA set a pretty high bar for my validation (and I took an odd pride in coaxing my instrumentation to keep the variation at the low end below that allowed) -- I wonder how many of the research methods used in the studies we discuss here are similarly validated.  

So.   Patient Y has 5 ng/ml of X in their blood 1 hour after popping their 100 mg X pill on Monday.  But now we have to address variability.  Same Patient Y takes same 100 mg X pill on Tuesday, only this time when they took it "with food" the meals were different ... and now they only have 4 ng/ml the same 1 hour after popping.  That's a 20% variation.

And now we have the error reported with all of these trials we read and from which sweeping generalizations are drawn.  This error does NOT address measurement error or variability of the sort I've just discussed.  The underlying assumption is that whatever this error is, it will fall out in the wash through the magic of randomization.  Here is where sample size is critical, because with small sample sizes this assumption does what the word ASS.U.ME commonly brings to mind.  If there is a relatively high measurement error and/or a high variability in whatever it is you're measuring, they trump all.  You simply cannot assume that an equal number of values fell below the mean as above it, or that an equal number of values were, due to biological variability, lower on one day and higher the next. 

So are all small sample size studies ultimately useless?  No, I don't think so, necessarily.  But when evaluating them and drawing any inferences, there is all the more onus on the researchers to address the role measurement/variability "error" could have played in the results.  If a *statistically significant* difference in some measurement is found between groups, the researchers should address:
  • The magnitude of said difference in the context of the values reported (here is where % differences are often more informative)
  • The magnitude of said difference in the context of the usual day-to-day variability of what is being measured
  • The magnitude of said difference in the context of the magnitude of measurement error.
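To put a rough number on that ASS.U.ME problem, here's a hypothetical simulation (illustrative values only: an LDL-like day-to-day SD of 15 points, 10 subjects per group): even with NO true difference between groups, small trials routinely cough up sizable mean differences.

```python
import random
import statistics

random.seed(1)

TRUE_MEAN = 120        # same underlying mean in both groups (no real effect)
DAY_TO_DAY_SD = 15     # assumed LDL-like biological + measurement noise
N_PER_GROUP = 10

def one_null_trial():
    """Difference in group means when there is NO true difference."""
    a = [random.gauss(TRUE_MEAN, DAY_TO_DAY_SD) for _ in range(N_PER_GROUP)]
    b = [random.gauss(TRUE_MEAN, DAY_TO_DAY_SD) for _ in range(N_PER_GROUP)]
    return statistics.mean(a) - statistics.mean(b)

diffs = [one_null_trial() for _ in range(10_000)]
share_big = sum(abs(d) >= 10 for d in diffs) / len(diffs)
print(f"null trials showing a 10+ point group difference: {share_big:.1%}")
```

With these assumed numbers, something like one null trial in seven shows a 10-point-or-larger gap between groups purely from noise -- which is why a small study reporting such a gap owes us the three discussions bulleted above.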
Look ... certain factions of the low carb community seem enthralled by -- might I even suggest addicted to -- their glucometers.  I don't know where mine is at the moment, but I did a lot of testing back in 2009, and I bought strips on ebay -- this would mean different lots, and some came as just a bag of sealed cartridges (they looked almost like birth control pill packs) from various lots.  If you buy a box of strips, they come with a test solution.  I doubt many even bother with that (if you presume the strips are good, why waste the strip?), but what's more astounding is that the test liquid is a sugar solution, and if you do the test, the range of acceptable results is wide -- from memory, at least 20 points, which is more than the allowable BG "spike" in LLVLClueland.  And while I could never bring myself to obsessively assess the repeatability of these measures, I did on occasion test twice within minutes with considerably (>10 point) different results.  These meters are intended to help diabetics control their blood glucose levels -- in the case of T1's especially, to ward against potentially deadly hypoglycemia.  They are not intended for all these n=1 tests.  They are not useless -- I am grateful for having spent a relatively nominal amount of money (in medical cost terms) and tested quite a lot, in VLC and in cheatocarblandia, and I know I'm not IR or whatever.  But I recall being alarmed by an almost 50 point "spike" after a fairly rigorous 30 minute bike ride (into, gasp! the mid-to-high 120's!) in the IF-fasted (18 hrs) state (a one-time occurrence), and wondering if something was wrong when my fasting values rose inexplicably for a stretch -- a bad lot of strips?  Seemed so. 

I'm fairly confident that plasma glucose levels measured from a blood draw are infinitely more accurate & precise.  But everyone is well aware of the so-called dawn phenomenon, that IF can elevate FBG in some, or, as I've just described, that exercise can even "spike" BG.  So if you go for your yearly "well-(wo)man" visit, you haven't eaten since 10pm the night before, and perhaps your blood isn't drawn until 10am, you're starving or just Jonesing for your caffeine, you leisurely meandered to the clinic or raced out the door unshowered ... all of this can impact your BG more than what you ate the day before or even the entire month before.  Which is why I consider it criminal for some people to be labeled (beware folks, any mention of diabetes in your medical records can impact life insurance premiums, and who knows, maybe they find a way for it to impact auto insurance next!) "pre"-diabetic based on a single FBG over 100.  I've asked many "diabetics" how they were diagnosed, and I just don't get how the medical establishment gets away with labeling people with an "incurable" illness based on so little, or medically "slanders" millions (and scares them) with this notion of pre-diabetes.  Incidentally, I'd note that many long term low carbers would be diagnosed as prediabetic, right alongside their high carbing fellow man, if the same criteria were applied.  (Same goes for lipids in many cases, but I digress ...)

If you're reading this blog, you're hopefully interested in the many, many studies I do my best to analyze critically and in as open-minded and unbiased a fashion as possible.  I've probably addressed far more experiments than observational studies, and this is probably for a few reasons:
  • Cause and effect can never be ascertained from observational studies, and these should generally function as the impetus for proper experiments to try to do just that.  In other words, an OS that shows A correlates with B is interesting.  The scientists should then get busy designing experiments to see if A causes B, or B causes A, or perhaps C causes A & B simultaneously.  Thus I tend to focus on the "why," and experiments are where that's at.
  • I'm a scientist first, statistics buff down the list.  Scientists only do observational studies to -- ahem, Mr. Taubes -- formulate hypotheses to explain the observations.  They then test those hypotheses to a large degree before passing them by their peers in publication.  I would dare say my experience in actually doing this qualifies me to deconstruct experiments more so than the many who deem themselves qualified to pick winners and losers based on their biases (as GT did once more in the NYT ... sigh ... to blog or not to blog?) 
But as maligned as OS studies are, and as difficult (impossible?) as it is to control for every possible confounder in such studies, they generally have one thing going for them: large ... often huh-yooge sample sizes.  When you have hundreds and often thousands of values to work statistical wonders on, and you see a correlation between A & B, none of what I blathered on about in this post matters.  Random chance will do its magic.  If Joe gets his blood drawn in an agitated state and has his glucose level elevated, Jane is perhaps super relaxed.  And for every sample level reported on the high side, there will be one reported on the low side.  So long as your measurements are reasonably reproducible (e.g. a known sample is analyzed to contain no more or less than some acceptable percent difference on repeated analyses), statistically significant differences between groups can have meaning, even when they flirt with being on par with measurement error and normal fluctuations.  This is how a 10 point differential in the mean FBG between two large groups can be significant and, more importantly, "meaningful".  But while a 10 point differential in mean FBG between two small groups may get the nod from the stats fairy, it needs to be held up to scrutiny as to whether it has any meaning at all if it is anywhere close to the measurement error and normal variability.
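A back-of-the-envelope sketch of why the large-n magic works (assumed numbers: 15 points of individual noise, a true 10-point gap): the standard error of the difference in group means shrinks like 1/sqrt(n), so the same gap that drowns in noise at n=10 stands well clear at n=1000.

```python
import math

INDIVIDUAL_SD = 15   # assumed per-person noise in FBG, mg/dl
TRUE_GAP = 10        # assumed real difference between group means, mg/dl

for n in (10, 100, 1000):
    # standard error of the difference between two independent group means
    se_diff = INDIVIDUAL_SD * math.sqrt(2 / n)
    print(f"n = {n:>4} per group: SE of gap = {se_diff:5.2f}, "
          f"gap/SE = {TRUE_GAP / se_diff:5.2f}")
```

At n=10 the gap is only about 1.5 standard errors (indistinguishable from noise); at n=1000 it is nearly 15 standard errors -- significant and, yes, meaningful.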

In this regard, some of the old fashioned studies like my beloved Gray & Kipnis, where individual data were pretty much all that was reported, can be more instructive than all of these studies with relatively small group comparisons (anything <30, and sometimes I've seen the bar raised to 40) that just report mean and SD data.  Because we're assuming a "fixed" population from which we're sampling.  Consider 1000 marbles, 200 each of five different colors ... this is what I'm talking about: it is known that each of those marbles is one color and remains that way, and we can apply probability to predict the composition of various samples of different sizes randomly selected, to within degrees of confidence.  Now consider that those same 1000 marbles are made of different "mood ring" glasses.  They can randomly change to one of the other colors on either side of some circular continuum.  Can you predict the composition of a sample selected from this now?  Yes ... but the bar is much higher.  Because even if all marbles start out at their base colors in multiples of 200, one can only imagine the variability, at any instant in time, in snapshots of this "population" from which we're sampling.
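The marble analogy can be made concrete with a little simulation (hypothetical setup: 5 colors x 200 marbles): the fixed bag always contains exactly 200 red marbles, while even the population snapshot of the "mood ring" bag fluctuates before you draw a single sample.

```python
import random
import statistics

random.seed(7)

COLORS = ["red", "blue", "green", "yellow", "white"]
population = [c for c in COLORS for _ in range(200)]  # 1000 marbles, 200 per color

def mood_ring_snapshot(pop):
    """Each marble stays put or shifts to an adjacent color on the circular wheel."""
    out = []
    for color in pop:
        i = COLORS.index(color)
        out.append(COLORS[(i + random.choice((-1, 0, 1))) % len(COLORS)])
    return out

red_fixed = population.count("red")  # exactly 200, every single time
red_snapshots = [mood_ring_snapshot(population).count("red") for _ in range(500)]

print(f"fixed bag: {red_fixed} red marbles, always")
print(f"mood-ring snapshots: mean {statistics.mean(red_snapshots):.0f} red, "
      f"sd {statistics.stdev(red_snapshots):.1f}")
```

The snapshots average out to 200 red, but any single instant can be 10-plus marbles off before sampling error is even added on top -- the "higher bar" I mean.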

In these experiments, then, scrutiny must be paid to study design and statistical analysis, but also to outcomes in the context of experimental error and natural fluctuations.  I tend to give greater weight to those studies where the researchers take the time to address those issues outright, or avoid the term "significant" when the effect is hardly so in the context of the normal fluctuations and measurement error for the measure.  

This is ultimately why I spent so much time on that Ebbeling et al. study.  I had no idea of the biases of the researchers, but now I know that Ludwig, apparently the head researcher of this group, is a huge advocate of the low GI diet.  I don't have the time to confirm this, but here I'll accept non-sciencey gossip.  So when you look at this study, it pretty much epitomizes all that is wrong in the arena of obesity (and nutrition/diabetes/etc.) research.  I'd like to think it's not by design, but I am becoming increasingly jaded in my thinking that it almost has to be in the more blatant cases.  

Their main outcomes were REE and TEE.  They paid NO attention to the variability of these values ... like I've said, I don't recall the exact fluctuation, but I've seen it stated that REE varies day to day by 2-5% -- perhaps 2% is the average and 5% the high end of the range?  In any case, the supposed significant difference in REE was less than 5% ... no mention of this.  Secondly, this study HINGES on maintenance.  If these subjects were not weight stable and in energy balance according to their measure of TEE vs. calories supplied before their tests, why bother.  WHY BOTHER.  Eight months and countless dollars out the window, which is what I think happened here.  Because all the supposed control in the world can't fix a situation where scientists don't even pay passing homage to ensuring the integrity of their core outcomes.  They used only pooled-variation-based statistical probabilities to discuss error.  Criminal.  No mention of measurement error.  No mention of normal fluctuations.  

They "had the technology" (state of the art, no less) to measure REE and TEE ... especially TEE, and DID NOT DO IT after the most important part of the entire study -- the weight stabilization phase.  It's bad enough they weight-reduced them on a diet different from all three and yet supposedly tested metabolic adaptation vs. baseline over four weeks of a different diet.  Throw this out the window already folks, it tells us diddly squat.  If you believe the EE data, then it tells us that for 4 weeks, perhaps you increase EE when your protein intake flirts with an average of 200g, or perhaps you're a not-so-oddball who burns a bit more energy during the 4 weeks that most closely resemble the macronutrient content of the diet you've been consuming for 4 months.  But if they didn't do the due diligence on the weight stabilization, cue Metallica, NOTHING ELSE MATTERS.  And in this regard, once weight stabilized, before anything else, they should have done one more doubly labeled water test.  Measure TEE, control CI and make sure they match.  PERIOD.  Punkt! as my Mom would say.  Fini! 

And c'mon ... did it not occur to Ludwig or Ebbeling or any of these researchers whose names are on that paper that 300 calories/day over 4 weeks is a significant enough differential that it should show up SOMEWHERE in their measured data??  Where is it?  Poof?  NO attempt to account for it?  How about something like (if it happened) the individuals for whom TEE changed favorably with X diet did lose Y weight although this was not statistically significant??   How about acknowledging, for those who don't know, that this supposed change in REE is within the range of measurement error and reported daily variations?  

I'm "obsessing" on this because if we want to talk inanity, Mr. Taubes, this is the study, complete with the media mayhem!  If the group leader is taking to the presses, and what matters is this stupid notion of 300 cal/day = 1 hour moderate exercise per day -- rather than any mention of where these calories went vis a vis weight -- and we have this being uncritically reported by "the usual suspects" ... it's just a hot mess.  Kevin Hall, via Carson Chow -- those *young* NIH biophysicists who read every paper on the planet on the linchpin of TWICHOO, sent them to Taubes, and remain firmly committed to that "ridiculous" absolutism that is conservation of matter -- hit the nail on the head.  There has to be measurement error here -- and I'd add it is likely complicated by natural variation.  And I'm not a subscriber to James Krieger's Weightology, so I know not what he wrote about this, but I agree with the comment he left on Stephan's blog -- it is surprising this study passed through peer review.  JAMA no less.  Sad indeed.  Taubes frets only about the length of time.  Never mind that all of these subjects lost weight consuming a minimum of 100g carb/day, likely close to 200g carb/day on average, and who knows, perhaps some even upwards of 300g carb/day (we can't tell exactly from the summary data provided).  No, this sort of "good science" -- cough ... cough -- 

WOW ... I had FNS on TV in the background and they just discussed taxpayer $$ for WLS ... it got testy.  One "expert" actually quoted this study about the 300 cal/day diet ... and (wha???) then he goes on to talk about how exercise is helpful ... ummm, didn't he just buy into this utter nonsense that the EE differential of Atkins would supposedly replace one hour of that exercise?  And then the moderator sheepishly admits to buying pizza for (presumably) her child's team today -- with all those carbs!  Umm ... get real.  Pizza -- especially anything other than plain cheese or veggie toppings -- is not a carb bomb problem ... pretty much start with the base and everything "fattening" is adding fat calories with, hopefully, a bit of protein along for the ride ... otherwise we're talking 25-30g carb per average slice.  And pizza is relatively -- sometimes perplexingly, for the diabetic trying to properly dose insulin -- low GI.  Oh, the horror.

Not sure how to end this ... other than to just end it.  But let me stress one last time how statistical error, however robust, is not the whole story; especially for small studies, it should be viewed critically and in context with measurement error and normal fluctuations.  Ebbeling et al. may well go down in history for being more informative about what's wrong with science and "science" journalism than for its ultimately inconsequential results.  But rest assured ... more study is needed, for a longer time ... for more $$.  Sigh.

24 comments:

LeonRover said...

Hey Evelyn,

You have nailed "the Irishman in the bogpile" (as an Irishman I MAY say this) of this particular study.

"and DID NOT DO IT after the most important part of the entire study -- weight stabilization phase."

The efficacy of the weight loss phase, and its results in terms of all blood and metabolic outcomes, should have been shown for all to see - particularly as it used a diet comprising 60% of Weight Monitoring calories, but 25% Protein, likely providing the same number of Protein Grams as in Monitoring.

The study also neglects to say whether the diet Macro %ages in Stabilization are those of Monitoring (SAD ??) or Weight Loss sub-phases.

In effect, Weight Stabilization was another Maintenance Phase, with a Fourth Diet.
(As an aside, I note that all diets used are in excess of SAD 15% Protein.)

I have monitored my own Blood Pressures & Glucoses for over a decade, and control instrument and subject variability by taking multiple readings and averaging (six for B Pressures, 2-3 strips for B Glucose.)

I too NEED to see all the readings, so I can plot them if necessary. (I get so pissed off not being able to see the data!)

Slainte

bentleyj74 said...

They lost me with the fiber problem, I just lost interest after that hand was tipped and a dishonest framing was revealed.

Eric said...

Talk about twisting the results, I don't think you even need to go so far as to worry about measuring, they have hung themselves just fine on this one. ( see notes b and c on the study outcome slide )

REE, REE FFM, TEE RQ, TEE FFM, TSH, and Insulin sensitivity show no statistically significant difference between diets.

Three others, including Leptin and non-HDL, also show non-significance between two of the three diets.

If I'm reading correctly only 2 of the variables actually showed significance and neither had anything to do with energy balance.

The comment section completely ignores the statistical evidence (or lack thereof) on the study outcome.

Larry Eshelman said...

Links to the Kresser/Masterjohn podcasts:

September 8, 2011
http://chriskresser.com/episode-16-chris-masterjohn-on-cholesterol-heart-disease-part-2

February 8, 2012
http://chriskresser.com/chris-masterjohn-on-cholesterol-and-heart-disease-part-3

In the one on Sept 8, 2011 Chris Masterjohn talks about measurement error in lipid measurements, and this is discussed again in the podcast on February 8, 2012. The latter one includes a transcript, so one doesn't have to listen to the whole thing. A quote from the latter: "So, you always want to get two or three readings to look at that variation. And, you know, while you bring that up, that's a source of error. I have also seen cases where people go on a diet that seems to be helping, and they say: Why have my blood lipids increased? And it was a simple error like they were fasting one time and they weren't fasting the other time."

Larry Eshelman said...

I just found my notes from the second interview (on Sept 8, 2011). Chris Masterjohn gave the following standard deviations for biological fluctuations (e.g. from week to week) of lipid measurements:
TotalChol 17
LDL-C 15
HDL-C 4.5-5.0
Trigs 20
TC/HDL 0.4

garymar said...

I remember a character on CSI saying "Mass Spec Never Lies". I blew chunks!

When I was in the lab using MALDI mass spectrometry to generate data for my thesis, on most days the reality was "Mass Spec Never Gives a Straight Answer".

v/vmary said...

hi evelyn, i was wondering what currently is helping you to lose fat mass and build muscle mass. i think i read somewhere that you follow the perfect health diet and count calories within that framework. is that so? how is it working? thx.

Geoff 99 said...

In the absence of admitted or discovered real errors, and taking every thing at face value, this study currently demonstrates only that on a Very Low Carb diet you need to burn MORE calories to remain weight stable. A metabolic "dis-advantage" if you will.

That's the equivalent of one extra hour of exercise per day, just to maintain the same weight as the Low Fat and Low GI groups. If I was promoting VLC I'm not sure I would refer to this as a "very well conducted study".

Given the cost of these studies, their complexity, the possibility of introducing bias when "adjusting" for confounding variables and their possible impact in the popular press, it is regrettable that the raw data is not publicly available or subject to more rigorous review.

At least then there might be some possibility of unraveling where things went wrong - and where things went right. It is now a closed book with a relatively absurd conclusion.

Sanjeev said...
This comment has been removed by the author.
Sanjeev said...

Been too busy with deadlines to write or even think much but this is a great re-cast.

Thanks dude, that's cool & sharp of you.

> on a Very Low Carb diet you need to burn MORE calories to remain weight stable. A metabolic "dis-advantage" if you will
___

So supposing this study's results and measurements and statistics and some of the authors' conclusions are correct -both precise and accurate and NOT confounded by lack of metabolic chamber -

what does it say about the supposed "appetite suppression" of low carb?

has Rosedale commented on this?

Evelyn aka CarbSane said...

Thanks for the links Larry!

@garymar: I'm pretty sure it was CSI Miami I saw about 5 min of once while channel surfing. They analyzed some sample and in 15 seconds this 3D rotating molecule came up on this giant screen. Yeah ... unhuh!

@Geoff: What's most frustrating is all the money spent on the 12 weeks weight loss. If they're looking at metabolic adaptation for different maintenance diets they could have just recruited a bunch of formerly obese that could document weight loss. It would probably be better to have varied diets for weight loss if they're truly trying to assess differences in EE between maintenance diets. They don't need the pre-weight loss EE because the changes in EE's they purport to be looking at are for maintenance, and reduction in metabolic rate from weight loss is a different thing. Conversely, if they wanted to assess the difference in metabolic adaptation due to weight loss for different diets, they could have done that and compared before/after. I know this isn't scientific, but there have been peer review "studies" based on the Active Low Carbers discussion board so their stories are fair game. There are too many LC'ers who remain faithful to their LC but start to regain. You don't seem to see this with folks who lose weight other ways, the regain is from slipping back into old habits or ditching the diet entirely. Yeah you have your Fred's and Gary's and Tom's who were never that overweight to begin with who brag on how they can have two steaks, a pound of bacon and half dozen lobster tails drenched in butter and not gain, but then you have Eades and even Sisson who chronicle their intake from time to time, and it's really quite low. Weight regain from LC also seems to be faster. All in all these observations indicate EE probably decreases with long term carb restriction, hence why IF is so popular to reduce intake further ;)

@v/vmary: I'm not currently trying to lose fat mass. I've been eating more carbs these days now long enough that when I get motivated I'll probably give VLC another shot -- hopefully it spontaneously reduces my intake. If it doesn't work, I'll probably count calories, and the only macro I'll concern myself with is protein. I'd say my diet is pretty Zonish these days and I eat all foods but limited crap. I used to describe it as PHD'ish, but I have reservations about high fat diets -- and by high fat I mean in terms of amounts, percents mean nothing to me. I'm also not convinced there's a "sweet spot" for carbs, though I'm forever grateful for PHD's specific inclusion of starch in the diet. I also find PHD a bit fructophobic and I do enjoy the occasional grains and legumes.

Evelyn aka CarbSane said...

Yeah, I was surprised by the protein content of all diets, when I worked out the amounts in grams, the average for the LC'ers was close to 200g ... that's a lot of protein.

This study reminds me of the Westman LC vs. LGI one. We were given the data on all meds only for subjects who took insulin. It wasn't that big a study, why not all the data on all the meds?

Evelyn aka CarbSane said...

Not entirely related, but I'm tired of low fat diet as necessarily grain and highly-processed food based. Try a high-GI diet with carbs from potatoes fruits and other starchy veggies.

Geoff 99 said...

Hi Sanjeev,

The study authors went to great lengths to avoid any confounding effects of "appetite" :-) Each test diet supposedly had the same caloric content and was weighed and measured down to the last gram. Participants were even given rubber spatulas to ensure they consumed every last portion of their supplied meal.

Despite using the latest scientific techniques in assessing energy expenditure, exercise, markers for metabolic syndrome etc., their method for assessing hunger and well being consisted of "point your finger at the chart" (reading taken prior to breakfast only).

If you accept this as valid, then their findings were that for an identical number of calories, all diets resulted in roughly the same level of well-being and hunger (at least prior to breakfast).

Evelyn aka CarbSane said...

I just look at that scatter plot and think they must have done statistical massaging to get some of the p-values they report. It is also deceptive to report confidence intervals vs. standard deviation. Those CI's were constructed using a pooled variation, which makes it appear that the variability of each of the parameters reported that way was similar across the three diets. I have a hard time seeing this being the case given the type of scatter we see in the EE plots.

v/vmary said...

thx evelyn. i have kept my 10% weightloss since 2009 with the de vany diet. i avoid high glycemic carbs most of the time, but not all the time. i eat a lot of fruit until i hit a patch where i feel sleepy, then i back off from it. basically if my heart rate goes up noticeably and i get sleepy, i know i've overdone it with carbs-usually fruit. i don't eat grains at all. we have a lot of white rice in the house since my husband is chinese and i love it. i can't stop with one bowl. i used to eat it mixed with salad dressing- so good!!!! so i avoid white rice. when i was little i used to eat wonder bread by taking off the crust and mushing it into a ball. so good!!!! so i binge when it comes to bread and stuff- so i avoid it. i think avoiding these things plus not eating my old fast food/microwave diet has helped make maintenance easy. i could lose more, but that would probably require counting carbs or calories which stresses me out.

Evelyn aka CarbSane said...

We are in similar places it seems, though my weight is probably a bit higher and my weight loss a bit greater. I pretty easily maintained what was likely a >30% loss in weight from 2007-8 through most of 2010. The past year and a half has been more difficult, mostly because a stressful personal life has led to more convenience eating, eating to hubby's schedule & wants, and more alcohol that I probably need to eliminate entirely. I had pretty much stalled for a year in 2009, lost 10 more lbs and stalled out again. Did some recomposing. I'm not after perfection, wear reasonable clothing sizes, get a lot of male attention (sometimes I think more than when I weighed 110 lbs in high school!) ... all systems go. I don't have any binge problems anymore, it's more just eating too many calories. My metabolism has made a comeback, but there's just not a lot of wiggle room based on my 2009 tracking. Maintain? I can get away with a few more calories these days, but I think my metabolism adapts to deficits more quickly than it used to. Sucks ... and I sincerely hope all the snarky young ones won't get their comeuppance when the time comes.

Aside: I can't quite figure out why so many people get personally offended when I point out how easy it is to passively overeat on the SAD. That is how I got truly obese, and if I blame low carb for anything, it is mostly that the rebound from LC the first time is what got me to truly, truly plus-size obese. I was in my early 30's, no sign of hormonal upheaval or any issues.

I'd like to get down to around 170, but it's just not a priority. Like you, I'm wary of actually calorie counting because it's always in the back of my mind that I might trigger the obsessions of my youth. But in the right mindset and with some spousal cooperation, I could make it work. There's more to life than how much you weigh and what you eat to achieve that, though. Somewhere in there, quality of life factors in. It's just not on the priority list right now.

Balled up Wonder bread ... yep loved that too! Mom mostly gave me sandwiches on the equivalent of Ezekiel bread, always toasted. Soft smooshable bread was a novelty, LOL.

Evelyn aka CarbSane said...

BTW, this past year reduced activity has clearly factored in as well. I have a bad ankle that has seriously reduced my little NEAT-fests -- like doing FlashDance stuff while coffee heats in the microwave, or running up and down stairs to the second floor. Plus, I need to rein in this blogging thing ;-) but it's an outlet for lots of things I enjoy.

RRX said...

Re. the LDL test/result topic. I knew a chemist once who worked for a time with UCLA on various lab stuff. He said that one project had them sending out vials of blood from the same people to multiple labs to get a measure of various cholesterol levels. He said no two responses ever matched. Each lab came back with a different profile, despite the blood being drawn from the same person at the same time. What kind of confidence am I supposed to have in that test??

RRX said...

re.
"This is my round-about way of getting at the fact that measurement error is not even constant across a range of values usually seen for whatever is being measured, across ranges commonly assessed by the same exact measurement."

I've experienced this myself when building micro-implantable devices and gold-plating the wires.
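The quoted point, that assay error is often not constant across the measured range, can be shown with a small simulation. The constant "floor" and the 5% proportional component below are assumptions for illustration, not values from any real assay:

```python
import random

random.seed(0)

# Toy model of an assay whose noise has a constant floor plus a component
# proportional to the true value -- so absolute error grows with concentration.
FLOOR = 2.0   # constant noise in assay units (assumed)
REL = 0.05    # 5% proportional noise (assumed)

def measure(true_value):
    """Return one simulated readout of true_value with heteroscedastic noise."""
    sd = FLOOR + REL * true_value
    return random.gauss(true_value, sd)

# Repeatedly "measure" three true concentrations and compare observed spread.
spreads = {}
for true in (10, 100, 1000):
    reads = [measure(true) for _ in range(10_000)]
    mean = sum(reads) / len(reads)
    spreads[true] = (sum((r - mean) ** 2 for r in reads) / (len(reads) - 1)) ** 0.5
    print(f"true = {true:5d}   observed SD ~ {spreads[true]:6.1f}")
```

The observed SD at the high end of the range comes out many times larger than at the low end, even though it is the "same exact measurement" throughout.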

Eric said...

Oh, doubly labeled water has not been fully validated for use in obese subjects. Different levels of isotope sequestering can lead to a bias in the results. In fact, if they really did find a statistically significant difference of 300 calories/day for the one group, they should have also recorded a statistically significant change in weight during those four weeks.... did they? ( I know, corroborating evidence, who would have thought? )

Evelyn aka CarbSane said...

Thanks ... would appreciate a cite for the library if you have any regarding this for the obese. Given that they also report different values for RQ vs. FQ, I wonder how the high protein diet may have influenced this in the calculation. You are right: if they can measure EE to that degree of stat. sig., surely they should be able to measure mass and "see". Average masses were virtually identical, but the CI's (smaller than SD for all other data) spanned +/- ~9 lbs, or roughly +/- one-third of the weight lost. SIGH.
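The "should have seen a weight change" point is easy to sanity-check with back-of-envelope arithmetic. The 3500 kcal/lb conversion is the usual rough rule of thumb, not a claim about these particular subjects:

```python
# If one group really burned ~300 kcal/day more at identical intake,
# roughly how much weight change would that imply over a 4-week phase?
KCAL_PER_LB = 3500   # conventional rough approximation for body fat
daily_gap = 300      # kcal/day, the reported EE difference
days = 28            # four-week test diet period

expected_lb = daily_gap * days / KCAL_PER_LB
print(f"implied weight difference: {expected_lb:.1f} lb")  # ~2.4 lb
```

About 2.4 lb of expected divergence sits well inside a +/- 9 lb confidence interval, which is the measurement-resolution problem being complained about.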

Evelyn aka CarbSane said...

I remember reading about that.

For me, if my lipids suddenly go berserk, I'll worry. Or if they were wildly outta whack, I'd worry. Or if doing something normalized them dramatically, I'd celebrate. Most of the changes in all of these studies are inconsequential.

Evelyn aka CarbSane said...

LOL (well, not so funny) but part of the reason I don't have my PhD was the amount of mental energy wasted during the months spent characterizing impedance noise at low frequencies in my instrumentation -- noise that seemed to be altered by the very effect I was trying to study. Sigh ...

Post a Comment

Moderation is currently on. Thanks in advance for your patience.