  LongeCity
              Advocacy & Research for Unlimited Lifespans


Methodology in medical research



#1 opales

  • Guest
  • 892 posts
  • 15
  • Location:Espoo, Finland

Posted 10 May 2006 - 03:04 AM


hi,

This thread is intended to help science-savvy members upgrade their ability to make proper judgements about the research findings they encounter. The ability to assess study quality and to make inferences beyond the authors' conclusions is the skill that separates great scientists from the rest. On the internet, that level of ability is a rare phenomenon indeed. Plus, I'll guarantee you, this is the skill that provides the most benefit in the shortest amount of time.

I'd be delighted especially if some of the ImmInst hard-core scientists would pop in to provide insights, or perhaps point us to sources of information they themselves have found valuable, remembering of course the level of knowledge of the audience they are addressing.

I'll begin by providing a few links that not only give great insights into the nature of evidence but might also give directly relevant information on the phenomena themselves. Take your time with these and enjoy:

Some basics (you may want to follow the links within the wikis yourself; I just collected some of the most important ones; a small simulation after the list illustrates the confounding problem several of them describe):

http://en.wikipedia....-based_medicine
http://en.wikipedia....ki/Study_design
http://www.cebm.net/study_designs.asp
http://en.wikipedia..../Clinical_trial
http://en.wikipedia....ontrolled_trial
http://en.wikipedia....ki/Case-control
http://en.wikipedia....ki/Cohort_study
http://clio.stanford...hort/index.html

http://en.wikipedia....i/Meta-analysis
http://en.wikipedia....stematic_review
http://en.wikipedia....y_heterogeneity

http://en.wikipedia....gitudinal_study
http://en.wikipedia....sectional_study
http://en.wikipedia....imental_control
http://en.wikipedia....iki/Confounding

http://www.montreali...principles.html
http://www.montreali...rug/stages.html
http://www.montreali...-target-id.html
http://www.montreali...ry-lead-id.html
http://www.montreali...timization.html
http://www.montreali...cal-trials.html
http://www.montreali...ev-phase-1.html
http://www.montreali...ev-phase-2.html
http://www.montreali...ev-phase-3.html
http://www.montreali...ev-phase-4.html

http://en.wikipedia....rimental_design
http://en.wikipedia....ry:Epidemiology
http://en.wikipedia....Clinical_trials
http://en.wikipedia....rising_outcomes
http://en.wikipedia....eline_(medical)
http://en.wikipedia....ensus_(medical)
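
To make the confounding and randomization ideas from the links above concrete, here is a minimal simulation of my own (not from any of the links; the variable names and effect sizes are invented). It shows how a confounder can bias an observational comparison while coin-flip assignment stays unbiased:

```python
# Minimal illustration (invented numbers): a hidden confounder ("health
# consciousness") drives both supplement use and the outcome, so the naive
# observational difference looks like a treatment effect even though the
# true treatment effect is zero.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
health = rng.normal(size=n)                    # hidden confounder

# Observational world: healthier people are more likely to take the supplement
takes_supp = (health + rng.normal(size=n)) > 0
outcome = 1.0 * health + rng.normal(size=n)    # true treatment effect = 0
naive = outcome[takes_supp].mean() - outcome[~takes_supp].mean()

# Randomized world: assignment is a coin flip, independent of health
assigned = rng.random(n) < 0.5
rct = outcome[assigned].mean() - outcome[~assigned].mean()

print(f"observational difference: {naive:+.3f}  (biased away from zero)")
print(f"randomized difference:    {rct:+.3f}  (close to the true effect, zero)")
```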

More specific subjects (a good way to learn more is to check the list of references):

This article discusses the relationship between diet and cancer while giving insights into the research methodology; not very hard-core stuff yet:
http://www.nypcancer...s/iss_ins.shtml

This is a rather rigorous paper on the reliability and nature of research findings:
http://medicine.plos.....Epmed.0020124

And here is our own John Schloendorn's commentary on statistical significance, with references of course (page 3; I don't know why the links never work):
http://www.imminst.o...13
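
Since that link seems unreliable, here is a minimal sketch (my own, assuming numpy and scipy are available; group sizes and effect size are invented) of what a "statistically significant" two-group comparison actually computes, and what it does not tell you:

```python
# A two-sample t-test on invented data: a small true effect (0.3 SD)
# in two groups of 40.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treated = rng.normal(loc=0.3, scale=1.0, size=40)
control = rng.normal(loc=0.0, scale=1.0, size=40)

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# p < 0.05 only means the observed difference would be unusual if the true
# difference were zero; it says nothing about effect size, bias, or how many
# similar comparisons were tried before this one got reported.
```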

Edited by opales, 10 May 2006 - 07:20 AM.


#2 scottl

  • Guest
  • 2,177 posts
  • 2

Posted 10 May 2006 - 04:07 AM

A few thoughts:

Make sure the patient population in the study is the one you wish to draw conclusions about, e.g. studies on patients with advanced disease (or pregnant patients [wis]) say nothing about what happens in you or me.

If you really wish to evaluate a study keep in mind:

--you can find one study to show anything. So you must know what other studies in the field have shown.

--don't read just the abstract (or, god forbid, what the sleazy press says about the study). Read the whole study for yourself.

--Do the people conducting the study have an axe to grind? Was it funded by a drug/supp company?

--Was the study constructed intelligently? Did they ask the right question? Do the data justify their conclusions?


#3 doug123

  • Guest
  • 2,424 posts
  • -1
  • Location:Nowhere

Posted 10 May 2006 - 04:14 AM

It seems you two already have the methodology down alright. I wish everyone were not as affected by hype. It's good to be skeptical; that's for sure.

I can't say I have never been affected by hype. Remember that ALCAR ARGINATE thing way back when? Increases growth of neurites twice as much as NGF! Then I saw those results were in culture...oh, so that's not in vivo? So that is pretty weak evidence. I know this now.

Back when I joined this forum, I never even knew what in vivo really was. I'd heard of it before... I wish we still got occasional visits from AORsupport so he could slap me around a little bit more..lol.

#4 opales

  • Topic Starter
  • Guest
  • 892 posts
  • 15
  • Location:Espoo, Finland

Posted 28 June 2006 - 11:33 AM

I posted this earlier in another thread, but here is an awesome read on how to evaluate medical studies:

http://www.cfsan.fda...s/qhc-gtea.html

some samples:

  Overview of Data and Eligibility for a Qualified Health Claim

A health claim characterizes the relationship between a substance and a disease or health-related condition (21 CFR 101.14(a)(1)).  The substance must be associated with a disease or health-related condition for which the general U.S. population, or an identified U.S. population subgroup is at risk (21 CFR 101.14(b)(1)).  Health claims characterize the relationship between the substance and a reduction in risk of contracting a particular disease.[2]  In a review of a qualified health claim, the agency first identifies the substance and disease or health-related condition that is the subject of the proposed claim and the population to which the claim is targeted.[3]  FDA considers the data and information provided in the petition, in addition to other written data and information available to the agency, to determine whether the data and information could support a relationship between the substance and the disease or health-related condition.[4] 

The agency then separates individual reports of human studies from other types of data and information.  FDA focuses its review on reports of human intervention and observational studies.[5] 

In addition to individual reports of human studies, the agency also considers other types of data and information in its review, such as meta-analyses,[6] review articles,[7] and animal and in vitro studies.  These other types of data and information may be useful to assist the agency in understanding the scientific issues about the substance, the disease or health-related condition, or both, but can not by themselves support a health claim relationship.  Reports that discuss a number of different studies, such as meta-analyses and review articles, do not provide sufficient information on the individual studies reviewed for FDA to determine critical elements such as the study population characteristics and the composition of the products used.  Similarly, the lack of detailed information on studies summarized in review articles and meta-analyses prevents FDA from determining whether the studies are flawed in critical elements such as design, conduct of studies, and data analysis.  FDA must be able to review the critical elements of a study to determine whether any scientific conclusions can be drawn from it.  Therefore, FDA uses meta-analyses, review articles, and similar publications[8] to identify reports of additional studies that may be useful to the health claim review and as background about the substance-disease relationship.  If additional studies are identified, the agency evaluates them individually.

FDA uses animal and in vitro studies as background information regarding mechanisms of action that might be involved in any relationship between the substance and the disease.  The physiology of animals is different than that of humans.  In vitro studies are conducted in an artificial environment and cannot account for a multitude of normal physiological processes such as digestion, absorption, distribution, and metabolism that affect how humans respond to the consumption of foods and dietary substances (Institute of Medicine, National Academies of Science, 2005).  Animal and in vitro studies can be used to generate hypotheses or to explore a mechanism of action but cannot adequately support a relationship between the substance and the disease.

FDA evaluates the individual reports of human studies to determine whether any scientific conclusions can be drawn from each study.  The absence of critical factors such as a control group or a statistical analysis means that scientific conclusions cannot be drawn from the study (Spilker et al., 1991, Federal Judicial Center, 2000).  Studies from which FDA cannot draw any scientific conclusions do not support the health claim relationship, and these are eliminated from further review.

Because health claims involve reducing the risk of a disease in people who do not already have the disease that is the subject of the claim, FDA considers evidence from studies in individuals diagnosed with the disease that is the subject of the health claim only if it is scientifically appropriate to extrapolate to individuals who do not have the disease.  That is, the available scientific evidence must demonstrate that: (1) the mechanism(s) for the mitigation or treatment effects measured in the diseased populations are the same as the mechanism(s) for risk reduction effects in non-diseased populations; and (2) the substance affects these mechanisms in the same way in both diseased people and healthy people.  If such evidence is not available, the agency cannot draw any scientific conclusions from studies that use diseased subjects to evaluate the substance-disease relationship.

Next, FDA rates the remaining human intervention and observational studies for methodological quality.  This quality rating is based on several criteria related to study design (e.g., use of a placebo control versus a non-placebo controlled group), data collection (e.g., type of dietary assessment method), the quality of the statistical analysis, the type of outcome measured (e.g., disease incidence versus validated surrogate endpoint), and study population characteristics other than relevance to the U.S. population (e.g., selection bias and whether important information about the study subjects (e.g., age, smoker vs. non-smoker) was gathered and reported).  For example, if the scientific study adequately addressed all or most of the above criteria, it would receive a high methodological quality rating.  Moderate or low quality ratings would be given based on the extent of the deficiencies or uncertainties in the quality criteria.  Studies that are so deficient that scientific conclusions cannot be drawn from them cannot be used to support the health claim relationship, and these are eliminated from further review.

Finally, FDA evaluates the results of the remaining studies.  The agency then rates the strength of the total body of publicly available evidence.[9]  The agency conducts this rating evaluation by considering the study type (e.g., intervention, prospective cohort, case-control, cross-sectional), study category, the methodological quality rating previously assigned, the quantity of evidence (number of the various types of studies and sample sizes), whether the body of scientific evidence supports a health claim relationship for the U.S. population or target subgroup, whether study results supporting the proposed claim have been replicated,[10] and the overall consistency[11] of the total body of evidence.[12]  Based on the totality of the scientific evidence, FDA determines whether such evidence is credible to support the substance/disease relationship, and, if so, determines the ranking that reflects the level of comfort among qualified scientists that such a relationship is scientifically valid.
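
The FDA text above repeatedly mentions meta-analyses; for anyone who has not seen one, this is the core arithmetic they perform. A minimal fixed-effect, inverse-variance sketch (the three study results below are invented for illustration):

```python
# Inverse-variance pooling: each study is weighted by 1/SE^2, so precise
# studies dominate. Fixed-effect only; real meta-analyses also test for
# heterogeneity before trusting a single pooled number.
import math

studies = [(0.30, 0.15), (0.10, 0.10), (0.25, 0.20)]  # (effect, standard error)

weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
half_width = 1.96 * math.sqrt(1.0 / sum(weights))

print(f"pooled effect = {pooled:.3f} "
      f"(95% CI: {pooled - half_width:.3f} to {pooled + half_width:.3f})")
```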



#5 emerson

  • Guest
  • 332 posts
  • 0
  • Location:Lansing, MI, USA

Posted 28 June 2006 - 12:59 PM

If I can offer a quick plug, my current pick for the best introduction to the subject is Experimental Psychology: A Case Approach. While the focus is of course on experimental psychology, the framework is for the most part universally applicable. What makes it so great is that it presents the information in short segments and always ties it together with actual experiments. The reader gets to read about the particular aspect being discussed, have a study presented to them, and then work out for themselves which components of the study were strong or weak. Finally, they're presented with a detailed breakdown of the experiment to compare with their own conclusions. And not to put down psychology, but the fact that the target audience is students within that discipline makes it an especially good introduction, partly because the specific terminology will be much less obscure than in many other sciences. Also, because many psychology students get a fair way into their studies before encountering any kind of design procedure, the book is careful not to presume any particular background in scientific methodology.

The price is a bit hefty, especially for such a short book. But if one can find it used, it's an excellent way to get one's foot in the door to understanding how, and how not, to put a study together.

#6 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands
  • NO

Posted 28 June 2006 - 01:02 PM

This thread is very much to my liking.... [thumb]

I was trying to compile some relevant information as well and to submit a far more controversial topic. :)

Lots of reading to do anyway.

#7 emerson

  • Guest
  • 332 posts
  • 0
  • Location:Lansing, MI, USA

Posted 28 June 2006 - 01:16 PM

A couple to add to the hint bottle while I'm still semi-awake.

Never allow yourself to fall into the trap of thinking you're a pinnacle of scientific thought and rationality, apart from the unthinking herd around you. We're all biased, and it's precisely when you decide there's no longer any need to be on guard, even if just out of disbelief that you possess any bias, that bias will lead you right off a cliff. You need to be twice as careful when looking over a study that tells you what you want to believe.

Just because it's published doesn't make it a good experiment. While the big names are especially likely to have a good ratio, there's no journal around that doesn't wind up with a real clunker of an article every now and again. Whether through culture, our own style of thought, or a million other factors, sometimes fairly large numbers of people just wind up blind to an otherwise obvious flaw. It can be a bit like finding Waldo: hidden until that moment of combined luck and determined searching. But once found, the Waldo-like error can seem so obvious and glaring that you'll never be able to lose sight of it again.

#8 emerson

  • Guest
  • 332 posts
  • 0
  • Location:Lansing, MI, USA

Posted 30 June 2006 - 10:27 AM

This thread is very much to my liking....  [thumb]


I just wish it were more visible, and more populated. If any post within the health section deserved a blink tag, this would be it. These principles form the alphabet of science. Grasp them, and one can read the stories and judge them on their merits. Ignore them, and one is at the mercy of science reporters to do the judging, which is often a rather dangerous wager.

Quite frankly, I think most of us could stand to be reminded of these principles on a rather frequent basis. Personally, once upon a time I had my act together when it came to experimental design. But it's amazing how quickly that skill fades when not put to use. Going through the links in this thread consisted in large part of moments of forehead-smacking as I once again yelled out, "Damn, I remember beating my head against that on a regular basis just a few years ago. How could that have fallen out of my day-to-day train of thought?" And yet, annoyingly, my confidence in my own ability to instantly recognise design flaws at the drop of a hat remains just as strong even as the skill fades. Not to mention that I'm sure there's a large percentage of people from fields totally disconnected from medical research who've never even encountered these as fleshed-out, defined tools, only as self-conceived pieces combined with procedures drawn from their own fields of study. The latter can be awesome and at times provide new insight, but few things beat a Phillips-head screwdriver made for those particular screws. A jury-rigged all-purpose pocket knife might also get the screw in, but it's a pretty clear choice between the two.

#9 opales

  • Topic Starter
  • Guest
  • 892 posts
  • 15
  • Location:Espoo, Finland

Posted 18 July 2006 - 10:42 AM

This Wikipedia main category pretty much covers everything one needs to know about conducting and evaluating clinical research:

http://en.wikipedia....inical_research

#10 doug123

  • Guest
  • 2,424 posts
  • -1
  • Location:Nowhere

Posted 24 October 2006 - 06:01 PM

The full text is available here for free:

Why Most Published Research Findings Are False
John P. A. Ioannidis

Summary
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.


John P. A. Ioannidis is in the Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece, and Institute for Clinical Research and Health Policy Studies, Department of Medicine, Tufts-New England Medical Center, Tufts University School of Medicine, Boston, Massachusetts, United States of America. E-mail: jioannid@cc.uoi.gr

Competing Interests: The author has declared that no competing interests exist.

Published: August 30, 2005

DOI: 10.1371/journal.pmed.0020124

Copyright: © 2005 John P. A. Ioannidis. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abbreviation: PPV, positive predictive value

Citation: Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124
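
The summary above compresses a simple model; here is a sketch of its positive predictive value (PPV) arithmetic. The formula with bias follows the paper's Table 2; the example inputs are mine, chosen to echo the paper's scenarios:

```python
# PPV of a claimed research finding, per Ioannidis (2005).
def ppv(R, alpha=0.05, beta=0.20, u=0.0):
    """R: pre-study odds that a probed relationship is true.
    alpha: type I error; beta: type II error (power = 1 - beta).
    u: bias, the fraction of would-be non-findings reported as findings."""
    num = (1 - beta) * R + u * beta * R
    den = R + alpha - beta * R + u - u * alpha + u * beta * R
    return num / den

# Adequately powered RCT, 1:1 pre-study odds, a little bias:
print(f"{ppv(R=1.0, beta=0.20, u=0.10):.2f}")   # ~0.85: probably true
# Underpowered exploratory study, long-shot hypotheses, same bias:
print(f"{ppv(R=0.01, beta=0.70, u=0.10):.3f}")  # ~0.025: probably false
```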



#11 doug123

  • Guest
  • 2,424 posts
  • -1
  • Location:Nowhere

Posted 17 December 2006 - 09:16 PM

Some things that might be worth looking at:

Systems to Rate the Strength of Scientific Evidence

...and

BBC: News source

...and in today's news:

Animal studies 'of limited use'


Tests of drugs on animals are not reliable in all cases, a study warns.

The British Medical Journal research looked at studies in six areas and found animal studies agreed with human trials in just three.

The high-profile London drug trial which left six men ill was carried out after animal studies showed the drug TGN1412 was effective.

This study, led by Professor Ian Roberts, suggests animal studies should be used, but not for all drug research.



Six men suffered serious organ failure early this year after taking part in a trial of the TGN1412 drug made by TeGenero.

In this study, a team from the London School of Hygiene and Tropical Medicine reviewed evidence from a wide range of human and animal trials looking at six areas of treatment.

They were: using corticosteroids to treat head injuries and respiratory illnesses in babies; antifibrinolytics to treat bleeding; thrombolysis and tirilazad for stroke; and bisphosphonates for osteoporosis.

But there was no consistent agreement between the animal and human studies.

Corticosteroids did not show any benefit for treating head injury in clinical trials but had done so in animal models.

Different results were also seen for tirilazad. Data from animal studies suggested a benefit but the human trials showed no benefit, and possible harm.

However, bisphosphonates increased bone mineral density in both clinical trials and animal studies, while corticosteroids reduced neonatal respiratory distress syndrome in animal studies and in clinical trials, although the data were sparse.

'Hysterical'


Professor Roberts said: "This is all about the predictive value of animal experiments.

"The debate over this issue is really quite hysterical. At the moment, there is too much emotion and not much science.

"Anti-vivisectionists say animal testing is of no use at all, and those who do them say we would have no safe and effective treatments if we didn't."

He said his investigations showed some animal studies were poorly carried out, involving too few animals and that they could be affected by design or publication bias.

Professor Roberts said animal experiments could be designed to better reflect human experience, and that there would be some areas of drug research where animal testing was relevant, and some where it was not.

"It could be that, as with the TeGenero drugs, because of the mechanism and the action, animal tests don't tell you very much about safety in humans; but others where having the right model in animals would help."

Story from BBC NEWS:
http://news.bbc.co.u...lth/6179687.stm

Published: 2006/12/15 01:28:03 GMT

© BBC MMVI
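
Roberts's complaint about studies "involving too few animals" is a statement about statistical power, and it is easy to quantify. A minimal normal-approximation sketch (my own; the effect size and group sizes are invented) of the power of a two-group comparison:

```python
# Approximate power of a two-sided, two-sample comparison with
# standardized effect size d and n subjects per group (z approximation).
import math
from scipy import stats

def power_two_sample(n_per_group, d, alpha=0.05):
    z_crit = stats.norm.ppf(1 - alpha / 2)      # significance threshold
    z_effect = d * math.sqrt(n_per_group / 2)   # expected test statistic
    return 1 - stats.norm.cdf(z_crit - z_effect)

print(f"n=8 per group,  d=0.5: power ~ {power_two_sample(8, 0.5):.2f}")   # ~0.17
print(f"n=64 per group, d=0.5: power ~ {power_two_sample(64, 0.5):.2f}")  # ~0.81
```

With eight animals per group, a moderate true effect is missed most of the time, which is exactly the situation in which a literature fills up with contradictory small studies.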


#12 scottl

  • Guest
  • 2,177 posts
  • 2

Posted 17 December 2006 - 10:40 PM

Animal studies 'of limited use'

1. I think some of the most misleading data used to bash supps, e.g. vit C, comes not even from animals but from cell culture work, which should be taken with a pillar of salt.

2. Animal data... well, yeah, ideally one would stick to human data. It is true that there are significant differences between rats and humans. But keep in mind that if one does this, the abstract Opales just posted, using diabetic rats:

antioxidants exacerbate AGE formation?

becomes irrelevant, as does much of MR's DHA data.

Anyway, I suppose what one makes of animal data depends on how aggressive one wishes to be and many other personal factors. Certainly the most conservative path is to wait for human data.



