Post by Admin/ Traveler on Mar 12, 2019 18:01:54 GMT
Advance warning: this will likely be a long post! But it's important information to understand.
I've already ranted (likely more than enough) about how faulty the testing is, and if it's that faulty, how can anyone rule out these infections properly? But now I'd like to take it a step further, and maybe you will see how ridiculous all of the 'other side's' flailing about really is as well.
There are lots of studies on tick-borne infections that use very small numbers of people in the groups. This is not a good thing, and I would like to take the time to explain why. My information comes from many different sources, but I have one main article to share that sums up pretty well why small groups are a problem in clinical studies.
First, the clinical study that made me want to do this (fortunately, all we need is the abstract for this):
Co-infections in Persons with Early Lyme Disease, New York, USA
----------------------------------------------------------------------------------------------------------------------------------------------------------------
Gary P. Wormser, Donna McKenna, Carol Scavarda, Denise Cooper, Marc Y. El Khoury, John Nowakowski, Praveen Sudhindra, Alexander Ladenheim, Guiqing Wang, Carol L. Karmen, Valerie Demarest, Alan P. Dupuis, and Susan J. Wong
Author affiliations: New York Medical College, Valhalla, New York, USA (G.P. Wormser, D. McKenna, C. Scavarda, D. Cooper, M.Y. El Khoury, J. Nowakowski, P. Sudhindra, A. Ladenheim, G. Wang, C.L. Karmen); New York State Department of Health, Albany, New York, USA (V. Demarest, A.P. Dupuis II, S.J. Wong)
"Abstract
In certain regions of New York state, USA, Ixodes scapularis ticks can potentially transmit 4 pathogens in addition to Borrelia burgdorferi: Anaplasma phagocytophilum, Babesia microti, Borrelia miyamotoi, and the deer tick virus subtype of Powassan virus.
In a prospective study, we systematically evaluated 52 adult patients with erythema migrans, the most common clinical manifestation of B. burgdorferi infection (Lyme disease), who had not received treatment for Lyme disease.
We used serologic testing to evaluate these patients for evidence of co-infection with any of the 4 other tickborne pathogens.
Evidence of co-infection was found for B. microti only; 4–6 patients were co-infected with Babesia microti. Nearly 90% of the patients evaluated had no evidence of co-infection.
Our finding of B. microti co-infection documents the increasing clinical relevance of this emerging infection."
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Right off the bat, their statement that erythema migrans (the bull's-eye rash) is "the most common clinical manifestation" is just BS. This one article completely blows that claim out of the water, and there are plenty of other studies that prove the same thing:
www.bayarealyme.org/blog/lyme-disease-bullseye-rash/
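Before moving on, here's a quick way to see in numbers why 52 patients is a small group. A proportion estimated from 52 people comes with a wide margin of error. This is my own illustration, not anything from the study itself: I take 5/52 as a midpoint of the abstract's "4–6 patients," and apply the Wilson score interval, a standard textbook method for binomial proportions.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - spread) / denom, (centre + spread) / denom

# The abstract reports roughly 4-6 of 52 patients co-infected with B. microti.
# Taking 5/52 (~9.6%) as a middle figure (my assumption, not the paper's):
lo, hi = wilson_ci(5, 52)
print(f"point estimate: {5/52:.1%}, 95% CI: {lo:.1%} to {hi:.1%}")
# The interval spans roughly 4% to 21% -- with only 52 patients, the true
# co-infection rate could plausibly be anywhere in that range.
```

In other words, a study this size can't pin the co-infection rate down to much better than "somewhere between a few percent and one in five."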
Now, for the article that speaks to the small study sizes:
The Crisis of Science
"TRANSCRIPT
In 2015 a study from the Institute of Diet and Health with some surprising results launched a slew of clickbait articles with explosive headlines:
“Chocolate accelerates weight loss” insisted one such headline.
“Scientists say eating chocolate can help you lose weight” declared another.
“Lose 10% More Weight By Eating A Chocolate Bar Every Day…No Joke!” promised yet another.
There was just one problem: This was a joke.
The head researcher of the study, “Johannes Bohannon,” took to io9 in May of that year to reveal that his name was actually John Bohannon, the “Institute of Diet and Health” was in fact nothing more than a website, and the study showing the magical weight loss effects of chocolate consumption was bogus. The hoax was the brainchild of a German television reporter who wanted to “demonstrate just how easy it is to turn bad science into the big headlines behind diet fads.”
Given how widely the study’s surprising conclusion was publicized—from the pages of Bild, Europe’s largest daily newspaper, to the TV sets of viewers in Texas and Australia—that demonstration was remarkably successful. But although it’s tempting to write this story off as a demonstration about gullible journalists and the scientific illiteracy of the press, the hoax serves as a window into a much larger, much more troubling story.
That story is The Crisis of Science.
This is The Corbett Report.
What makes the chocolate weight loss study so revealing isn’t that it was completely fake; it’s that in an important sense it wasn’t fake. Bohannon really did conduct a weight loss study and the data really does support the conclusion that subjects who ate chocolate on a low-carb diet lose weight faster than those on a non-chocolate diet. In fact, the chocolate dieters even had better cholesterol readings. The trick was all in how the data was interpreted and reported.
As Bohannon explained in his post-hoax confession:
“Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a ‘statistically significant’ result. Our study included 18 different measurements—weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc.—from 15 people. (One subject was dropped.) That study design is a recipe for false positives.”
You see, finding a “statistically significant result” sounds impressive and helps scientists to get their paper published in high-impact journals, but “statistical significance” is in fact easy to fake. If, like Bohannon, you use a small sample size and measure for 18 different variables, it’s almost impossible not to find some “statistically significant” result. Scientists know this, and the process of sifting through data to find “statistically significant” (but ultimately meaningless) results is so common that it has its own name: “p-hacking” or “data dredging.”
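Bohannon's point can actually be reproduced in a few lines. This is my own minimal simulation sketch, not his actual analysis: generate data with no real effect at all, run 18 measurements on two small groups, and count how often at least one measurement comes out "significant" anyway.

```python
import random
from statistics import NormalDist, mean

random.seed(1)

def fake_study(n_per_group=8, n_measurements=18):
    """One null study: no real effect exists in ANY measurement.
    Returns how many measurements still test 'significant' (p < 0.05)."""
    hits = 0
    for _ in range(n_measurements):
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        # z-test on the difference of means (sigma = 1 known by construction,
        # so this test is exact for the simulated data)
        z = (mean(a) - mean(b)) / (2 / n_per_group) ** 0.5
        p = 2 * (1 - NormalDist().cdf(abs(z)))
        if p < 0.05:
            hits += 1
    return hits

trials = 2000
studies_with_a_hit = sum(fake_study() > 0 for _ in range(trials))
print(f"{studies_with_a_hit / trials:.0%} of null studies found at least "
      f"one 'significant' result")
# Theory agrees: with 18 independent tests at alpha = 0.05,
# about 1 - 0.95**18, i.e. roughly 60%, of studies find something.
```

So roughly six out of ten completely effect-free studies will hand a researcher a publishable "finding" — which is exactly the recipe for false positives Bohannon described.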
But p-hacking only scrapes the surface of the problem. From confounding factors to normalcy bias to publication pressures to outright fraud, the once-pristine image of science and scientists as an impartial font of knowledge about the world has been seriously undermined over the past decade.
Although these types of problems are by no means new, they came into vogue when John Ioannidis, a physician, researcher and writer at the Stanford Prevention Research Center, rocked the scientific community with his landmark paper “Why Most Published Research Findings Are False.” The 2005 paper addresses head on the concern that “most current published research findings are false,” asserting that “for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.” The paper has achieved iconic status, becoming the most downloaded paper in the Public Library of Science and launching a conversation about false results, fake data, bias, manipulation and fraud in science that continues to this day.
JOHN IOANNIDIS: This is a paper that is practically presenting a mathematical modeling of what are the chances that a research finding that is published in the literature would be true. And it uses different parameters, different aspects, in terms of: What we know before; how likely it is for something to be true in a field; how much bias are maybe in the field; what kind of results we get; and what are the statistics that are presented for the specific result.
I have been humbled that this work has drawn so much attention and people from very different scientific fields—ranging not just bio-medicine, but also psychological science, social science, even astrophysics and the other more remote disciplines—have been attracted to what that paper was trying to do.
SOURCE: John Ioannidis on Moving Toward Truth in Scientific Research
Since Ioannidis’ paper took off, the “crisis of science” has become a mainstream concern, generating headlines in the mainstream press like The Washington Post, The Economist and The Times Higher Education Supplement. It has even been picked up by mainstream science publications like Scientific American, Nature and phys.org.
So what is the problem? And how bad is it, really? And what does it mean for an increasingly tech-dependent society that something is rotten in the state of science?
To get a handle on the scope of this dilemma, we have to realize that the “crisis” of science isn’t a crisis at all, but a series of interrelated crises that get to the heart of the way institutional science is practiced today."
There's lots more to the article, but it's long and I don't want to burden those who aren't up to reading that much. If you are interested in reading the whole article (it has LOTS of links showing where its information comes from!), then please do click on the link above!!