Science: How to learn from failure in scientific experimentation

TSconvivencia
post Dec 31 2009, 08:46 PM

idiot
Senior Member
2,675 posts

Joined: Dec 2008
Too often, we assume that a failed experiment is a wasted effort. But not all anomalies are useless. Here's how to make the most of them.

Source

1. Check Your Assumptions

Ask yourself why this result feels like a failure. What theory does it contradict? Maybe the hypothesis failed, not the experiment.

2. Seek Out the Ignorant

Talk to people who are unfamiliar with your experiment. Explaining your work in simple terms may help you see it in a new light.

3. Encourage Diversity

If everyone working on a problem speaks the same language, then everyone has the same set of assumptions.

4. Beware of Failure-Blindness

It’s normal to filter out information that contradicts our preconceptions. The only way to avoid that bias is to be aware of it.
azarimy
post Dec 31 2009, 11:40 PM

mister architect: the arrogant pr*ck
Elite
10,672 posts

Joined: Jul 2005
From: shah alam - skudai - shah alam


anyone who's ever been involved in REAL research would know this. first we need a hypothesis, and later, through the experiment, prove the hypothesis right or wrong. both right and wrong are results of the research. research is only bad if it cannot return any result.
TSconvivencia
post Jan 1 2010, 02:09 PM

idiot
Senior Member
2,675 posts

Joined: Dec 2008
QUOTE(azarimy @ Dec 31 2009, 11:40 PM)
anyone who's ever been involved in REAL research would know this. first we need a hypothesis, and later, through the experiment, prove the hypothesis right or wrong. both right and wrong are results of the research. research is only bad if it cannot return any result.
*
what if the research data all come out wrong?

do you discard the data or do you change the hypothesis, or both?

or do you start a new branch of research on why the data come out so weird?

the 3rd option may prove to be very useful. sometimes it might result in unexpected new discoveries
lin00b
post Jan 1 2010, 05:23 PM

nobody
Senior Member
3,592 posts

Joined: Oct 2005
check your equipment. check your method.
C-Note
post Jan 1 2010, 06:50 PM

starry starry night
Senior Member
3,037 posts

Joined: Dec 2007
From: 6-feet under


You learn from people's failures. Life's too short to learn from your own mistakes all the time.
azarimy
post Jan 1 2010, 10:06 PM

mister architect: the arrogant pr*ck
Elite
10,672 posts

Joined: Jul 2005
From: shah alam - skudai - shah alam


QUOTE(convivencia @ Jan 1 2010, 06:09 AM)
what if the research data all come out wrong?

do you discard the data or do you change the hypothesis, or both?

or do you start a new branch of research on why the data come out so weird?

the 3rd option may prove to be very useful. sometimes it might result in unexpected new discoveries
*
if the method is right but the results are "wrong", then it IS STILL a valid result.

for example, u hypothesize at the beginning that "eating an apple a day will keep the dentist away". the null hypothesis would be "eating an apple a day will NOT keep the dentist away".

so after studying and researching all ur data, u find that "eating an apple a day will not keep the dentist away". all the data is still correct, but u still need to go to the dentist every 6 months. so the result is clear: apples don't do anything. it's the whole mentality of people nowadays that keeps them going back to the dentist.

if u're doing a PhD, a null result is STILL a result.
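As an aside, azarimy's apple-a-day example maps onto a standard two-proportion z-test. Below is a minimal sketch; the visit counts (40 of 100 apple eaters vs 42 of 100 controls needing the dentist) are invented purely for illustration.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: the two groups have the same underlying proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# hypothetical data: dentist visits among 100 apple eaters vs 100 controls
z = two_proportion_z(40, 100, 42, 100)
print(f"z = {z:.3f}")  # well inside the +/-1.96 band at the 5% level
print("reject H0" if abs(z) > 1.96 else "fail to reject H0 -- still a result")
```

Here the test fails to reject the null hypothesis, which is exactly the "null result is still a result" point above: the experiment worked, the data are fine, and the honest conclusion is that no effect was detected.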
Alexes
post Feb 3 2010, 10:06 PM

-=alx=-
Senior Member
1,314 posts

Joined: Aug 2007
From: Sarikei --> Kajang


a hypothesis is just an early assumption about the conclusion of ur experiment... it may be right, it may also be wrong...

no matter what the outcome of ur experiment or practical, even if it disagrees with the hypothesis, it's still a result... one can't exclude that from the conclusion...

but it's fine if u can repeat it with another method or technique to get a conclusion that agrees with ur hypothesis...

but we have to explain... as long as there is a reasonable explanation supported by data from the experiment or practical, that is a good result...
ZeratoS
post Feb 4 2010, 06:06 PM

Oh you.
Senior Member
1,044 posts

Joined: Dec 2008
From: 127.0.0.1


Yes, what is a failure? You got your results from the experiment, did you not, albeit not the ones you wanted/expected.
Chobits
post Feb 4 2010, 08:28 PM

Cutest piece of technology on the planet
Senior Member
721 posts

Joined: Jul 2007
From: Chii ?


There are no failures in experiments; in the end u will get results which later guide u on how to reset the parameters of the experiment.

Failure to follow the Hypothesis, Experimentation, Conclusion cycle will lead to nothing but chaos in the subject under study.
Critical_Fallacy
post Nov 26 2013, 09:30 PM

∫nnộvisεr
VIP
3,713 posts

Joined: Nov 2011
From: Torino
Hi Fellows Blofeld, jonoave, mycolumn,

How would you reduce the likelihood of control failures with innovations in the Design of Experiments?

Would you recommend Taguchi methods?
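For readers unfamiliar with the term: Taguchi methods score each factor setting by a signal-to-noise (SN) ratio rather than the raw mean, rewarding settings that are robust to noise. A minimal sketch of the "larger-is-better" SN ratio follows; the yield numbers are invented for illustration.

```python
import math

def sn_larger_is_better(ys):
    """Taguchi larger-is-better signal-to-noise ratio, in dB:
    SN = -10 * log10(mean(1 / y_i^2)). Higher SN means a more robust setting."""
    return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

# hypothetical yields from repeated runs at two factor settings
setting_a = [92.0, 95.0, 91.0]   # consistent
setting_b = [97.0, 72.0, 98.0]   # higher peaks but far noisier

for name, ys in [("A", setting_a), ("B", setting_b)]:
    mean = sum(ys) / len(ys)
    print(f"setting {name}: mean = {mean:.1f}, SN = {sn_larger_is_better(ys):.2f} dB")
```

With these made-up numbers, setting A earns the higher SN ratio despite B's higher peak yield, which is the Taguchi rationale: prefer the setting whose performance is least sensitive to uncontrolled variation.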
Blofeld
post Nov 27 2013, 04:30 PM

Look at all my stars!!
Senior Member
4,697 posts

Joined: Mar 2012
QUOTE(Critical_Fallacy @ Nov 26 2013, 09:30 PM)
Hi Fellows Blofeld, jonoave, mycolumn,

How would you reduce the likelihood of control failures with innovations in the Design of Experiments?

Would you recommend Taguchi methods?
*
Experimental research is not my forte.
Critical_Fallacy
post Nov 27 2013, 04:41 PM

∫nnộvisεr
VIP
3,713 posts

Joined: Nov 2011
From: Torino
QUOTE(Blofeld @ Nov 27 2013, 04:30 PM)
Experimental research is not my forte.
Oh I see... So, am I safe to assume Survey Research in the Life Sciences doesn't require experimental design?

This post has been edited by Critical_Fallacy: Nov 27 2013, 04:41 PM
Blofeld
post Nov 27 2013, 08:23 PM

Look at all my stars!!
Senior Member
4,697 posts

Joined: Mar 2012
QUOTE(Critical_Fallacy @ Nov 27 2013, 04:41 PM)
Oh I see... So, am I safe to assume Survey Research in the Life Sciences doesn't require experimental design?
*
I'm curious about that too.
Critical_Fallacy
post Nov 27 2013, 09:28 PM

∫nnộvisεr
VIP
3,713 posts

Joined: Nov 2011
From: Torino
QUOTE(Blofeld @ Nov 27 2013, 08:23 PM)
I'm curious about that too.
For example, Abraham Maslow published his paper "A Theory of Human Motivation" in 1943, known today as Maslow's hierarchy of needs. How do researchers validate the existence of universal human needs?
jonoave
post Nov 28 2013, 07:13 AM

On my way
Junior Member
659 posts

Joined: May 2013


QUOTE(Critical_Fallacy @ Nov 26 2013, 04:30 PM)
Hi Fellows Blofeld, jonoave, mycolumn,

How would you reduce the likelihood of control failures with innovations in the Design of Experiments?

Would you recommend Taguchi methods?
*
Not sure what the Taguchi method is.

First of all, check whether the result is consistent with your expectation. At this step it's sometimes good to get feedback from others. For example, I recently finished a script to run some data analysis. The results looked ok to me, but some of my labmates and my boss thought some of the numbers were a bit low.

I took a look back at the script and found a little bug in the part that takes and processes the numbers, which caused them to be reported incorrectly in the results. So for me, the first step is to take a thorough look at your methods to see if there were any possible errors - and that includes your reagents.

If the methods and materials seem fine, then try to analyse why the results are as they are. Sometimes unexpected results can be interesting too. This happens very often in phylogenetics studies (the study of relationships using genetic data) - e.g. crocodiles being found to be closely related to birds, the divergence of a species, etc.

And checking the method is important here too. There is so much variance and noise in a phylogenetic dataset (missing data, fast-evolving genes, horizontal gene transfers, etc.) that it can easily produce misleading conclusions.
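The sanity-checking step jonoave describes can be made automatic, so a misreported number trips an assertion before the results ever reach a labmate. A toy sketch, with made-up data and bounds:

```python
def analyse(counts):
    """Toy analysis step: normalise raw counts to percentages."""
    total = sum(counts)
    return [100.0 * c / total for c in counts]

def sanity_check(values, lo=0.0, hi=100.0):
    """Fail fast if any reported number falls outside the plausible range,
    or if the percentages don't sum to ~100 -- the kind of check that
    would flag a number-reporting bug before the results go out."""
    assert all(lo <= v <= hi for v in values), f"value out of range: {values}"
    assert abs(sum(values) - 100.0) < 1e-6, f"percentages sum to {sum(values)}"

results = analyse([30, 50, 20])
sanity_check(results)
print("results pass sanity checks:", [f"{v:.1f}%" for v in results])
```

The point is not this particular check but the habit: encode your expectations about the output as assertions, so "the numbers look a bit low" becomes a failure you catch yourself rather than one a reviewer catches later.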
Critical_Fallacy
post Nov 30 2013, 03:27 AM

∫nnộvisεr
VIP
3,713 posts

Joined: Nov 2011
From: Torino
QUOTE(jonoave @ Nov 28 2013, 07:13 AM)
Thank you for your explanations. When designing physics experiments, we are encouraged to use the Response Surface Method to find the optimal process control settings for peak performance. An example of a response surface with two independent parameters (driving factors / inputs) is shown below:

[image: example response surface plot over two input parameters]
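In the Response Surface Method, once a second-order model y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2 has been fitted to the experimental runs, the candidate optimum is the stationary point where the gradient vanishes. A minimal sketch; the coefficients are hypothetical, standing in for values estimated from a designed experiment.

```python
def stationary_point(b1, b2, b11, b22, b12):
    """Solve grad(y) = 0 for the fitted second-order model, i.e. the 2x2 system
        2*b11*x1 + b12*x2 = -b1
        b12*x1 + 2*b22*x2 = -b2
    via Cramer's rule, returning the stationary point (x1, x2)."""
    a, b, c, d = 2 * b11, b12, b12, 2 * b22
    det = a * d - b * c
    x1 = (-b1 * d - b * -b2) / det
    x2 = (a * -b2 - -b1 * c) / det
    return x1, x2

# concave surface (b11, b22 < 0), so the stationary point is a maximum
x1, x2 = stationary_point(b1=4.0, b2=6.0, b11=-1.0, b22=-1.0, b12=0.5)
print(f"optimal settings: x1 = {x1:.3f}, x2 = {x2:.3f}")
```

When b11 or b22 is positive, or the determinant is near zero, the stationary point is a minimum, saddle, or ridge rather than a peak, which is why RSM practice checks the signs of the quadratic terms before declaring the settings optimal.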

 
