
/sci/ - Science & Math



File: 983 KB, 500x700, m7qiiaJYLp1qjmcxto1_500.gif
No.5773678

How common is it for scientists to falsify reports?

Or skew results in favor of a particular outcome?

>> No.5773687 [DELETED] 

>>5773678
if you get caught skewing results, say bye-bye to your entire fucking career.
http://en.wikipedia.org/wiki/Hwang_Woo-suk

>> No.5773699

Falsifying reports tends not to happen; most robust science undergoes at least some peer review, and made-up results tend to get blasted apart. As a scientist, it's a death knell to make up results out of nothing - nobody will hire you again and everything you say will be suspect. At that point you go into homeopathy and tell people less dose = more response.

Skewing results is not so simple. It probably happens in all papers to some degree ("this totally supports that!"), but on a larger scale negative results usually aren't published and instead get shoved in a filing cabinet. When that happens enough, you effectively hide, say, 7 experiments that failed while presenting the 3 that succeeded, creating a bias.
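
(A rough sketch of that file-drawer effect in Python, with made-up numbers - the effect size, sample sizes, and publication rule below are purely illustrative, not taken from any real study:)

# Minimal simulation of the file-drawer effect described above: every lab is
# honest, but only runs that reach "significance" get written up, so the
# published record overstates the true effect.
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.1    # small real difference between groups, in SD units
N_PER_GROUP = 20     # small samples, as in many lab studies
N_EXPERIMENTS = 1000

published = []       # effect estimates that reached "significance"
all_results = []     # every effect estimate, filed away or not

for _ in range(N_EXPERIMENTS):
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_GROUP)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_GROUP)]
    estimate = statistics.mean(treated) - statistics.mean(control)
    # crude z-style test; a real analysis would use a proper t-test
    se = (statistics.variance(control) / N_PER_GROUP
          + statistics.variance(treated) / N_PER_GROUP) ** 0.5
    all_results.append(estimate)
    if estimate / se > 1.96:     # only "positive and significant" gets published
        published.append(estimate)

print(f"true effect:            {TRUE_EFFECT}")
print(f"mean of all estimates:  {statistics.mean(all_results):.3f}")
print(f"mean of published only: {statistics.mean(published):.3f}")
print(f"fraction published:     {len(published) / N_EXPERIMENTS:.2f}")

The published-only mean comes out several times larger than the true effect, even though no individual experiment was faked - that's the bias you get from hiding the failed runs.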

Be happy we're not like economists, who dictated policy in many governments based on a spreadsheet that didn't calculate averages correctly.

>> No.5773716

>>5773699
>peer review
You mean replication. Peer review can only check for mistakes in your reasoning process.

>> No.5773724

I actually did a report on this subject matter recently. As it would happen, 0% of scientists use or make falsified reports.

>> No.5773734

>>5773724
Do you have a single fact to back that up?

>> No.5773753
File: 86 KB, 328x349, laugh_patachu's_little_sister.jpg

>>5773724
I see what you did there.

>> No.5773756

>>5773734
lol idiot.

>> No.5773761

OP, scientists don't usually cheat or lie about their results; it's pretty bad if you get caught.

Generally it's the people paying for the experiments that want things fudged to look better in their favor.

They will usually do this by designing an experiment to give them good results, or by fudging the written interpretation of the scientist's results.

>> No.5773774

30% of all science papers are flawed

>> No.5773776

Intentionally skewing or falsifying data is complete academic suicide.

>> No.5773780

As others have already said, it's rare to see that happen

>inb4 op reveals his true nature with climate change/race/other /pol/ bullshit

>> No.5773786

Deliberately falsifying or skewing results is a great way to destroy your career, if you get caught. And you'll almost certainly get caught - if your paper matters at all, it won't be long before someone duplicates your process and gets totally different results. Whoops!

But unintentionally skewing results? That happens more often, I think. Generally towards what you expected to see to begin with. There are a lot of famous examples of this - the stuff I'm working with right now was "discovered" several years after it was detected, since people kept thinking it was experimental error.

>> No.5773787

The real question is how often "peer review" actually gets done by real peers. Answer: not very often.
Usually people only read an article when they need to use results from it in their own work.

>> No.5773790

>>5773761
It's very hard to prove that they "cheated". They can just say it was a "mistake".
Since the goal of most scientific papers is to secure more funding, it's thought of as a victimless crime.

>> No.5773791

>>5773776
No it's not. That's how you get funding for the next project :-)

>> No.5773798

>>5773791

No, it's how you get fired. :-)

>> No.5773801

>>5773798
Ahh, to be young again...

>> No.5773804

>>5773801
>says the 20 year old

>> No.5773806

Outright falsifying data is rare. However, just about everyone engages in "polishing" data. Inconvenient data points are eliminated (it's easy to justify this in most data sets), people cherry pick their best looking tissue samples to show pictures of in papers as if they're representative of everything else, failed experiments are left out of the paper, statistical procedures and other mathematical adjustments are made to try to mask ugly data, etc.
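
(A hypothetical sketch of how much that kind of "polishing" can move things, with invented numbers - start from pure noise, drop the handful of points that most contradict the effect you want, and what's left looks much friendlier:)

# Toy illustration of data "polishing": the raw data have no real effect,
# but deleting the five most "inconvenient" points manufactures one.
import random
import statistics

random.seed(1)

# 30 measurements of an effect whose true value is exactly zero
data = [random.gauss(0.0, 1.0) for _ in range(30)]

def mean_and_t(values):
    """Sample mean and a one-sample t statistic against zero."""
    m = statistics.mean(values)
    se = statistics.stdev(values) / len(values) ** 0.5
    return m, m / se

# "Polish": declare the 5 most negative points to be outliers and drop them
polished = sorted(data)[5:]

print("raw data:  mean %+.2f, t = %+.2f" % mean_and_t(data))
print("polished:  mean %+.2f, t = %+.2f" % mean_and_t(polished))

Each individual deletion can be given a plausible-sounding justification, but the aggregate effect is a signal that was never in the data.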

>>5773687
Not just you, but everyone under you and any major collaborators will also get fucked. That's what's scary about being a student or post-doc. If your PI happens to be cheating, you lose your career too.

>> No.5773813

>>5773787
All articles have to go through peer review before getting accepted. Thing is, the person doing the peer review often isn't the scientist who's supposed to be doing it, but a student. On the other hand, students actually tend to be overly critical, to the point where they make the most retarded criticisms that hold up your paper's acceptance. Not just things like making a big deal out of something that's pretty minor, but also bashing you over something that's totally correct because they have a misunderstanding of the concept.

>> No.5773815

OP again. How reliable would you say peer reviews are? What is there to guarantee that the peers doing the reviewing don't also have their own biases or reasons for approving a faulty research paper? I'm not a tinfoil-hat-wearing basement dweller or anything, I just want to better understand how the process works.

>> No.5773822

>>5773804
I'm 32 actually

>> No.5773819

>>5773806
Never stop believing - that's what JB taught us!

>> No.5773820

>>5773678
There's a lot of polishing and cherry-picking of results, but outright false results are mainly the domain of the Chinese.

>> No.5773827

>>5773815
It's impossible to determine. But there is usually no incentive for the scientist to submit false results. False results mean no funding. No funding means no more experiments. No experiments means unemployment. All are things the scientist wishes to avoid.

>> No.5773832

>>5773822

How's that life coming?

>> No.5773833

>>5773815
Listen to >>5773820. As long as a scientist isn't Chinese, what you're looking at is probably reliable.

>> No.5773835

>>5773832
Not bad actually, thanks for asking

>> No.5773837

>>5773815
>What is to guarantee that this peers reviewing might not also have their own biases or reasons for approving a faulty research paper?

Well, their reputation is on the line too. It's impossible to just keep perpetuating false information forever, because reality is always there to contradict you.

>> No.5773839

Honestly, it'll probably get more common as it gets harder and harder to get funded. I know a lot of kids think they'd never resort to data fabrication, but imagine the following scenario:

You're in your mid-to-late 30s. Your career only finally got started a few years ago when you somehow managed to snag a tenure-track position. You had start-up funds that carried you for a while, but they've run out. You don't get paid now, because your whole salary comes from grant money. You have a family to support. You also have grad students in your lab, and if your lab goes under, they have to start over, which could ruin their chances at a PhD. Furthermore, if you lose your job you'll be hard-pressed to find another one. Academia probably won't give you a second chance, and PhDs are worthless outside of academia. You need to get funding NOW, and you know you don't have anything with a high likelihood of getting a grant.

You COULD fabricate some data though. That'll get you into a top tier journal, which will be a massive CV boost and make you much more likely to get funded. You can also make up some preliminary data while at it, and use all of that to apply for a grant. You'll be very likely to get funded then, and you're pretty sure your hypothesis is true anyway. After that, you'll be able to be honest again.

Can you resist the temptation?

>> No.5773836

>>5773827
Who knows they are false? God?

>> No.5773844

>>5773836
>Who knows they are false?

The first person to try to do anything based on your work.

>> No.5773845

>>5773844
What if a tree falls in the forest? Then what?
The sheer number of papers being published makes it very unlikely this will ever happen, unless it's high profile.

>> No.5773849

>>5773845

If you publish something with no practical applications whatsoever and which nobody cares about and which leads to no further research, then yes, it would be theoretically possible to conspire to publish false results and get away with it.

>> No.5773848

>>5773839
By that stage it is no longer a matter of "temptation"; it is just how things are done. Everyone knows it.
You can only publish so many meta-analyses.

>> No.5773854

>>5773815
Peer reviews, if anything, tend to be overly critical. It's almost unheard of for a paper to be accepted to a journal without at least one revision. Keep in mind that revisions in the world of scientific papers often mean adding new statistical analyses, changing the presentation of data, and even performing new experiments. Even low-impact journals can have surprisingly hostile peer reviews.

As someone who's been through both ends of the peer review process multiple times, I can assure you that it's pretty reliable. It won't catch well-falsified data though; that's impossible to detect looking at a lab from the outside. The way frauds get found out is either other labs try to replicate the results, everyone talks to each other and finds out they can't replicate what the paper got, or people working in the lab come forward and accuse the PI of falsifying data.

>> No.5773859

>>5773776
>http://en.wikipedia.org/wiki/Hwang_Woo-suk

Nope.

source: http://www.masslive.com/business-news/index.ssf/2013/04/umass_thomas_herndon_shines_light_on_aus.html

>> No.5773867

>>5773849
It should really be pointed out though that there's no point in faking results like what's described in this post, so it's not a real thing to be concerned about. Insignificant papers like those are not only unlikely to get published, but will also do nothing to further your career or increase your chances of getting funded. All they do is put you at risk of losing your career, which is what you're trying to avoid in the first place by committing research fraud.

>> No.5773873

>>5773813
This is true - my advisor often delegates reviews to me. I try my best, but I sometimes don't feel qualified.

>> No.5773951

>>5773839
well, it's nice knowing what my advisor (an assistant professor) might be experiencing

>> No.5773975

I falsified an entire experiment for a college project, but since it didn't go to a journal anyway, I'm not feeling too bad.

The experiment itself was designed well. It's just that when we actually ran it, our results came back inconclusive - nothing significant at all. The professor said it was well done, but without interesting results we wouldn't get a good grade.

So I pulled some numbers out of my ass and we did a new report on that. Got an A.

>> No.5773982

Falsify?

Very uncommon in the West. Extremely, ridiculously, insanely common in China, and less so (but still common) in Japan and India. There is anecdotal evidence that at least some of that may actually be the result of lazy authors who merely copy English-language content out of preexisting journal articles, not intentional fabrication.

Skewing?

Extremely, ridiculously common worldwide. How much skewing is "ignorable"? Depends.

Rephrasing your question: "How common is it for researchers to skew their results/data/analysis in a significant way?"

Uncommon, but not so uncommon as to be a rare instance. Full-blown fabrication, however, is very rare in any publication that is even worth discussing.

Throughout this discussion I have assumed that we completely ignore publications that do not receive notoriety, respect, patronage, support, and all of the other important concerns of a good publication. As a simple, generic example (not specifically referring to any journal in particular): there are actually publications that exist solely for the purpose of allowing undergraduates to publish. They are not educational publications where new 3rd-year P-chem experiments are published... They are sort-of-kind-of like communications, but they are described as full-blown articles.

Such publications are totally irrelevant, and the only people who pay any attention to them are the sad undergraduates who are doomed to a BS from a university ranked above #70... Outside of that oddly insular world, such publications are extremely uncommon subjects of discussion.

>> No.5773987

The only skewing you see regularly is merely cautious editing... Basically, the journal article itself receives all of the best data, compiled into the best tabulated formats, while all of the horrendous data (e.g., scatter plots where you should get a linear trend) goes into the Supplemental section.

>> No.5773996

>>5773951
PhDs are worthless outside of academia? I wonder how the HEP theorists' students keep getting hired by banks then.

>> No.5774002
File: 144 KB, 960x640, I have already won.jpg

>>5773987
I've seen a lot of misapplied log-log plots, which completely invalidates any claims to power laws.
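
(For the unconvinced, a quick sketch of why a straight-ish log-log plot by itself is weak evidence: the data below are lognormal, not a power law, yet the log-log rank-size relation in the upper tail still fits a straight line very well. Purely illustrative; a serious check would compare candidate distributions with likelihood ratios or goodness-of-fit tests.)

# Lognormal samples (no power law anywhere) still give a nearly straight
# log-log rank-size plot in the tail. Needs Python 3.10+ for
# statistics.correlation.
import math
import random
import statistics

random.seed(2)

# Heavy-tailed but NOT power-law data
samples = sorted((random.lognormvariate(0.0, 2.0) for _ in range(5000)),
                 reverse=True)
tail = samples[:500]    # the upper tail, where power-law claims are usually made

# The usual "straight line on log-log axes = power law" plot
log_rank = [math.log(rank) for rank in range(1, len(tail) + 1)]
log_value = [math.log(v) for v in tail]

r = statistics.correlation(log_rank, log_value)
print(f"correlation of log(rank) vs log(value): {r:.3f}")

The printed correlation comes out close to -1 even though nothing here is a power law, which is why the straight line alone proves very little.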

http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.
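
(The abstract above rests on a simple formula; here's a hedged sketch of the no-bias version from that paper: PPV = (1 - beta)R / (R - beta*R + alpha), where R is the pre-study odds that a probed relationship is real, 1 - beta is statistical power, and alpha is the significance threshold. The example scenarios below are illustrative, not taken from the paper.)

# Positive predictive value of a "significant" finding, ignoring bias and
# multiple competing teams (both of which only make things worse).
def ppv(power: float, alpha: float, prior_odds: float) -> float:
    """Probability that a statistically significant finding is actually true."""
    true_positives = power * prior_odds   # (1 - beta) * R
    false_positives = alpha * 1.0         # alpha on the "no relationship" side
    return true_positives / (true_positives + false_positives)

# Well-powered confirmatory study in a field where half the tested ideas are real
print(f"power=0.8, alpha=0.05, R=1.0  -> PPV = {ppv(0.8, 0.05, 1.0):.2f}")
# Underpowered exploratory study where only 1 in 20 tested ideas is real
print(f"power=0.2, alpha=0.05, R=0.05 -> PPV = {ppv(0.2, 0.05, 0.05):.2f}")

In the second scenario the PPV comes out well under one half, i.e. a "significant" result is more likely false than true, which is the paper's headline claim.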

>> No.5774006

>>5773996
I think you meant to reply to >>5773839

>> No.5774015

>>5774002
This seems to be about clinical trials and not science as a whole. Clinical trials and other biological experiments often suffer from small test populations, which leads people to say screwed-up things about their results simply because there is some reason to believe a causative mechanism exists and there is a very minor data bump that supports it, even though it is often below the noise level. Physics experiments, on the other hand, often have a massive test population; statistically significant conclusions are much easier to see, and falsified claims are much easier to kill off.
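
(To put a hedged toy example on that: the same tiny true effect measured with clinical-trial-sized samples bounces around the noise, while a physics-sized sample pins it down. The effect size and sample sizes below are invented for illustration.)

# Small studies vs. one huge dataset, measuring the same tiny true effect.
import math
import random

random.seed(3)

TRUE_EFFECT = 0.05   # small real effect, in units of one standard deviation

def estimate(n: int) -> float:
    """Mean of n noisy measurements of TRUE_EFFECT."""
    return math.fsum(random.gauss(TRUE_EFFECT, 1.0) for _ in range(n)) / n

small_runs = [estimate(25) for _ in range(10)]   # ten small "clinical" studies
big_run = estimate(1_000_000)                    # one huge "physics" dataset

print("small studies:", ", ".join(f"{e:+.2f}" for e in small_runs))
print(f"huge dataset:  {big_run:+.3f}  (true effect is {TRUE_EFFECT})")

The small runs scatter widely on both sides of zero; the huge one lands right on the true value. That's the difference between a "very minor data bump below the noise level" and a statistically solid conclusion.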