KillerMovies - Movies That Matter!

Home » Community » General Discussion Forum » Religion Forum » Double standards in experiments

Double standards in experiments
Started by: Deadline

Deadline
Junior Member

Gender: Male
Location: United Kingdom

Double standards in experiments

Two interesting articles.

http://www.ics.uci.edu/~jutts/Sweden.pdf



How are anomalous cognition (AC) results (remote viewing and ganzfeld) different from aspirin results?
• If the same standard is applied, AC results are much stronger.
• The aspirin studies had more opportunity for fraud and experimenter effects than did the AC studies.
• The aspirin studies were at least as frequently funded and conducted by those with a vested interest in the outcome.
• Both used heterogeneous methods and participants.



http://bigthink.com/ideas/24951

What's really interesting is this; this sort of thing is also mentioned in the first PDF.

By chance, then, the students should have been right exactly half the time. Instead, they predicted correctly just over 53 percent of the time. Not a big difference, but, as Melissa Burkley blogged last month, effects of that size are what support claims that aspirin can prevent heart attacks or that eating calcium helps build healthy bones.
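The arithmetic behind that claim is easy to check. Below is a minimal Python sketch using the normal approximation to the binomial (an assumption; the studies' own analyses may have used other statistics), showing why a 53% hit rate means nothing in 100 trials but becomes "significant" in 10,000:

```python
import math

def z_score(hits, n, p0=0.5):
    """Normal-approximation z for an observed hit count vs chance rate p0."""
    se = math.sqrt(p0 * (1 - p0) / n)
    return (hits / n - p0) / se

def two_sided_p(z):
    """Two-sided p-value from the standard normal CDF (via erf)."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 53% correct is unremarkable in 100 trials but highly significant in 10,000:
for n in (100, 10_000):
    p = two_sided_p(z_score(round(0.53 * n), n))
    print(f"n={n:>6}: p = {p:.2g}")
```

At a fixed effect size, significance is mostly a function of how many trials were run.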

So basically people just dictate to us what we should believe.


__________________
Watch what people are cynical about, and one can often discover what they lack.
- General George Patton Jr

Old Post Aug 19th, 2011 02:47 PM
Bardock42
Junior Member

Gender: Unspecified
Location: With Cinderella and the 9 Dwarves

Well, let's hope this experiment can be repeated and that its methodology was indeed as flawless as claimed.


__________________

Old Post Aug 19th, 2011 03:12 PM
Deadline
Junior Member

Gender: Male
Location: United Kingdom

quote: (post)
Originally posted by Bardock42
Well, let's hope this experiment can be repeated and that its methodology was indeed as flawless as claimed.


You can always find something wrong with experiments if you're looking for mistakes. Anyway, I think there's a link to his experiment if you're interested. In terms of psi experiments with repeatable results, you have them in the first PDF.

http://www.ics.uci.edu/~jutts/Sweden.pdf


Results of Free Response Experiments
(Used in 1995 report I wrote for U.S. Government)
Hit rates assume there were four choices; chance = 25%

U.S. Government Studies in Remote Viewing:
• SRI International (1970s and 1980s)
  966 trials, p-value = 4.3 × 10⁻¹¹, hit rate = 34%, 2-sided 95% C.I. 31% to 37%
• SAIC
  455 trials, p-value = 5.7 × 10⁻⁷, hit rate = 35%, C.I. 30% to 40%
Ganzfeld:
• Psychophysical Research Laboratories, Princeton (1980s)
  355 trials, p-value = .00005, hit rate = 34.4%, C.I. 29.4% to 39.6%
• University of Amsterdam, Netherlands (1990s)
  124 trials, p-value = .0019, hit rate = 37%, C.I. 29% to 46%
• University of Edinburgh, Scotland (1990s)
  97 trials, p-value = .0476, hit rate = 33%, C.I. 25% to 44%
• Rhine Research Institute, North Carolina (1990s)
  100 trials, p-value = .0446, hit rate = 33%, C.I. 24% to 42%


That's hundreds of results.
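Those figures can be roughly reproduced from the trial counts alone. A Python sketch of the SRI row, using a normal-approximation z and a Wald interval (an assumption; the original report likely used exact binomial methods):

```python
import math

def binomial_summary(phat, n, p0=0.25):
    """Normal-approximation z vs chance rate p0, and 95% Wald CI, for a hit rate."""
    z = (phat - p0) / math.sqrt(p0 * (1 - p0) / n)
    half = 1.96 * math.sqrt(phat * (1 - phat) / n)
    return z, (phat - half, phat + half)

# SRI row: 966 trials, 34% hit rate, 25% chance
z, (lo, hi) = binomial_summary(0.34, 966)
print(f"z = {z:.2f}, 95% CI = {lo:.0%} to {hi:.0%}")
```

This lands on the table's 31% to 37% interval, and a z near 6.5 corresponds to a p-value on the order of 10⁻¹⁰, in the neighborhood of the quoted 4.3 × 10⁻¹¹.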


__________________
Watch what people are cynical about, and one can often discover what they lack.
- General George Patton Jr

Old Post Aug 19th, 2011 03:24 PM
dadudemon
Senior Member

Gender: Male
Location: Bacta Tank.

I'd like to see hit rates of over 50% for four choices and 75% for 2 choices.





I've been right on dice rolls (50% chance with 2) as much as 20+ times in a row. I certainly do not have any "psychic" powers or latent psi abilities.



The tests would have to be run over many days and thousands of times (tens of thousands) in order to be legit, to me.





In statistics, it's not unheard of for a professor to assign the ol' flip-a-coin-100-times homework. If your results don't include a run of at least 6 of the same side, you fail the assignment, because 100 coin flips should produce at least one side coming up 6 times in a row.
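That classroom rule of thumb is easy to check by simulation. A Python sketch (the specific pass/fail threshold is the poster's, not a standard) estimating the chance that 100 fair flips contain a run of at least 6 of the same side:

```python
import random

def longest_run(flips):
    """Length of the longest run of identical outcomes in a sequence."""
    best = cur = 1
    for a, b in zip(flips, flips[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

random.seed(0)
trials = 10_000
prob = sum(
    longest_run([random.randint(0, 1) for _ in range(100)]) >= 6
    for _ in range(trials)
) / trials
print(f"P(run of 6+ in 100 flips) is roughly {prob:.2f}")
```

The estimate comes out around 0.8, so most students flipping honestly will show such a run, and a submission without one looks fabricated.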


__________________

Old Post Aug 19th, 2011 03:30 PM
tsilamini
Junior Member

Gender: Unspecified
Location:

some of those stats aren't correct....

if you are comparing the observed mean to 25%, and the 95% CI contains the value 25, your p-value, by definition, can't be significant.
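That duality is exact for t-tests, but for proportions it can blur right at the boundary, because the test statistic uses the standard error under the null while the usual Wald interval uses the standard error at the observed rate. A Python sketch of the Edinburgh row (treating the rounded 33% as 32 hits of 97 is an assumption; the table's figures likely came from exact binomial calculations):

```python
import math

# Edinburgh row: 97 trials, roughly 32 hits (33%), chance = 25%.
n, phat, p0 = 97, 32 / 97, 0.25

# The test statistic uses the standard error under the null (chance) rate...
z = (phat - p0) / math.sqrt(p0 * (1 - p0) / n)

# ...while the Wald CI uses the standard error at the observed rate,
# so near the boundary the two can disagree about "significance".
half = 1.96 * math.sqrt(phat * (1 - phat) / n)
print(f"z = {z:.2f}, 95% CI = {phat - half:.0%} to {phat + half:.0%}")
```

So a borderline study can sit on both sides of the line depending on which standard error is used, which is consistent with a p-value near .05 alongside a CI that touches 25%.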


__________________
yes, a million times yes

Old Post Aug 19th, 2011 03:36 PM
Bardock42
Junior Member

Gender: Unspecified
Location: With Cinderella and the 9 Dwarves

quote: (post)
Originally posted by Deadline
You can always find something wrong with experiments if you're looking for mistakes. Anyway, I think there's a link to his experiment if you're interested. In terms of psi experiments with repeatable results, you have them in the first PDF.

http://www.ics.uci.edu/~jutts/Sweden.pdf


Results of Free Response Experiments
(Used in 1995 report I wrote for U.S. Government)
Hit rates assume there were four choices; chance = 25%

U.S. Government Studies in Remote Viewing:
• SRI International (1970s and 1980s)
  966 trials, p-value = 4.3 × 10⁻¹¹, hit rate = 34%, 2-sided 95% C.I. 31% to 37%
• SAIC
  455 trials, p-value = 5.7 × 10⁻⁷, hit rate = 35%, C.I. 30% to 40%
Ganzfeld:
• Psychophysical Research Laboratories, Princeton (1980s)
  355 trials, p-value = .00005, hit rate = 34.4%, C.I. 29.4% to 39.6%
• University of Amsterdam, Netherlands (1990s)
  124 trials, p-value = .0019, hit rate = 37%, C.I. 29% to 46%
• University of Edinburgh, Scotland (1990s)
  97 trials, p-value = .0476, hit rate = 33%, C.I. 25% to 44%
• Rhine Research Institute, North Carolina (1990s)
  100 trials, p-value = .0446, hit rate = 33%, C.I. 24% to 42%


That's hundreds of results.


I am of course referring not to the discredited and abandoned older research, but to the new research that your second link claims is so professional and accepted.


__________________

Old Post Aug 19th, 2011 03:43 PM
Deadline
Junior Member

Gender: Male
Location: United Kingdom

quote: (post)
Originally posted by dadudemon


I've been right on dice rolls (50% chance with 2) as much as 20+ times in a row. I certainly do not have any "psychic" powers or latent psi abilities.



Maybe, but you do say a lot of stuff.

quote: (post)
Originally posted by dadudemon



The tests would have to be run many days and thousands of times (tens of thousands) in order to be legit, to me.


Yes, because hundreds of trials aren't enough. I see where you're coming from.



quote: (post)
Originally posted by dadudemon


In statistics, it's not unheard of for a professor to assign the ol' flip-a-coin-100-times homework. If your results don't include a run of at least 6 of the same side, you fail the assignment, because 100 coin flips should produce at least one side coming up 6 times in a row.


Dunno about that.

quote: (post)
Originally posted by inimalist
some of those stats aren't correct....

if you are comparing the observed mean to 25%, and the 95% CI contains the value 25, your p-value, by definition, can't be significant.


You might be right, but you're probably lying.


quote: (post)
Originally posted by Bardock42
I am of course referring not to the discredited and abandoned older research, but to the new research that your second link claims is so professional and accepted.


*shrug* Sounds as if you're taking the piss.


__________________
Watch what people are cynical about, and one can often discover what they lack.
- General George Patton Jr

Last edited by Deadline on Aug 19th, 2011 at 03:48 PM

Old Post Aug 19th, 2011 03:45 PM
dadudemon
Senior Member

Gender: Male
Location: Bacta Tank.

quote: (post)
Originally posted by Deadline
Maybe, but you do say a lot of stuff.


No, not "maybe". A mystic would say I have "abilities" on hearing that story.

A statistician would say that even 100 in a row becomes probable given millions of trials.



quote: (post)
Originally posted by Deadline
Yes because 100s of trials aren't enough, I see where you're coming from.


Correct.


Some only had 100 in the data set.


Not only would you want to see thousands, but you'd want to see the results duplicated. (Not necessarily thousands, but something like that; it would need a lot of evidence to get the scientific community to believe.)




quote: (post)
Originally posted by Deadline
Dunno about that.


Well, I do, and it's true. The larger your "set", the higher the probability that you'll have "straight" runs of one side or the other.


__________________

Old Post Aug 19th, 2011 03:57 PM
Bardock42
Junior Member

Gender: Unspecified
Location: With Cinderella and the 9 Dwarves

I'd be very happy if psi were real. That would be an enormous finding. I just think that a lot of the tests I have heard about are by no means valid.

Why do you think, if there is such good evidence, that the scientific community tries to hide it? I mean, that's what you propose, yes? That there is a conspiracy to hide the significant findings in psi from the general public.


__________________

Old Post Aug 19th, 2011 03:59 PM
tsilamini
Junior Member

Gender: Unspecified
Location:

woah... ummm, I think you guys just gave Cohen a stroke....


the way those statistical tests are done, more subjects and more trials actually bias them in favor of a positive result. Increasing the N in an experiment, by definition, decreases the standard error, and thus shrinks the 95% CI to a smaller range that is therefore less likely to contain the null. It is one of the major problems with those statistical methods, and a full power analysis (something with its own problems) would be required to know exactly how many subjects and trials would be appropriate.

The idea that more people doing more trials is good for science is false, 100% false
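The interval-shrinking effect being described is mechanical and easy to demonstrate; a Python sketch holding the hit rate fixed at 34% while only the trial count changes:

```python
import math

def ci_width(phat, n):
    """Width of a 95% Wald confidence interval for a proportion."""
    return 2 * 1.96 * math.sqrt(phat * (1 - phat) / n)

# Same 34% hit rate; only the number of trials changes:
for n in (100, 1000, 10_000):
    print(f"n={n:>5}: CI width = {ci_width(0.34, n):.1%}")
```

The observed rate never moves, yet at large enough N the interval becomes too narrow to contain 25%.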


__________________
yes, a million times yes

Old Post Aug 19th, 2011 04:02 PM
Deadline
Junior Member

Gender: Male
Location: United Kingdom

quote: (post)
Originally posted by dadudemon
No, not "maybe". A mystic would say I have "abilities" on hearing that story.

A statistician would say that even 100 in a row becomes probable given millions of trials.


As far as I'm aware, you usually have a bigger number than that in psi experiments. They don't just stop at 20 guesses and call it a job done.


quote: (post)
Originally posted by dadudemon

Correct.


Some only had 100 in the data set.


Not only would you want to see thousands, but you'd want to see the results duplicated. (Not necessarily thousands, but something like that; it would need a lot of evidence to get the scientific community to believe.)






Well, I do, and it's true. The larger your "set", the higher the probability that you'll have "straight" runs of one side or the other.


I kinda see what you're saying, but the reason these experiments were set up this way is that they were within the protocols of accepted science, and, I might be wrong, but other areas of research follow the same criteria.

What you're saying now is that even though the experiments followed scientific protocols, it's not good enough. In fact, there's evidence that psi experiments actually have better standards than other experiments.

I bet you any money that there are other areas of science which don't accept the criteria you mentioned.


__________________
Watch what people are cynical about, and one can often discover what they lack.
- General George Patton Jr

Old Post Aug 19th, 2011 04:08 PM
dadudemon
Senior Member

Gender: Male
Location: Bacta Tank.

quote: (post)
Originally posted by inimalist
The idea that more people doing more trials is good for science is false, 100% false


K.


I'll be over here, not doing "real" science with more trials. My quack statisticians like this thing called "effect size".
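Effect size is precisely the quantity that does not change as N grows. A Python sketch of Cohen's h (one common effect-size measure for proportions; its use here is illustrative, not anything from the thread) for the 34%-vs-25% hit rates quoted earlier:

```python
import math

def cohens_h(p1, p2):
    """Cohen's h: an effect size for two proportions, independent of N."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# 34% hits vs 25% chance, regardless of how many trials produced the 34%:
print(f"h = {cohens_h(0.34, 0.25):.2f}")
```

By Cohen's conventional benchmarks, h ≈ 0.2 is a small effect, however many trials produced it.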


__________________

Old Post Aug 19th, 2011 04:08 PM
Deadline
Junior Member

Gender: Male
Location: United Kingdom

quote: (post)
Originally posted by Bardock42
I'd be very happy if psi were real. That would be an enormous finding. I just think that a lot of the tests I have heard about are by no means valid.

Why do you think, if there is such good evidence, that the scientific community tries to hide it? I mean, that's what you propose, yes? That there is a conspiracy to hide the significant findings in psi from the general public.




quote: (post)
Originally posted by inimalist
woah... ummm, I think you guys just gave Cohen a stroke....


the way those statistical tests are done, more subjects and more trials actually bias them in favor of a positive result. Increasing the N in an experiment, by definition, decreases the standard error, and thus shrinks the 95% CI to a smaller range that is therefore less likely to contain the null. It is one of the major problems with those statistical methods, and a full power analysis (something with its own problems) would be required to know exactly how many subjects and trials would be appropriate.

The idea that more people doing more trials is good for science is false, 100% false



If there were fewer subjects, you would find something else to complain about. There was a telephone telepathy experiment done by Rupert Sheldrake, and in those experiments they complained there weren't enough subjects. Damned if you do, damned if you don't.

Which is the point I've been making all along. You don't make constructive criticism; you invent stuff to complain about. You've already made your mind up. If this were an experiment about something else, you wouldn't be saying that.


__________________
Watch what people are cynical about, and one can often discover what they lack.
- General George Patton Jr

Old Post Aug 19th, 2011 04:15 PM
tsilamini
Junior Member

Gender: Unspecified
Location:

quote: (post)
Originally posted by dadudemon
K.


I'll be over here, not doing "real" science with more trials.


the most common scientific tests are t-tests or z-tests (ANOVAs and F-tests too, but they are much more complicated than I want to explain on a forum....).

In a t or z test, you compare your observed mean to a null mean, divided by the standard error of your results:

t = (mean1 - mean2)/SE

when calculating the SE, the square root of N (the number of trials/participants) is in the denominator:

SE = SD/√N

as N increases, the SD/√N value decreases, thus SE decreases. As SE decreases, the t value increases, as the difference between the means is divided by an increasingly smaller number. The higher your t value, the more likely the result is significant. It is a basic law of the equations used.

like, seriously, I'm not even exaggerating at this point, null hypothesis testing has major, major problems
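The N dependence can be seen directly: with the observed difference and spread held fixed, a z statistic of the form (mean difference) / (SD/√N) grows like √N. A minimal Python sketch with arbitrary illustrative numbers:

```python
import math

def z_stat(mean_obs, mean_null, sd, n):
    """One-sample z: the mean difference divided by the standard error SD/sqrt(N)."""
    return (mean_obs - mean_null) / (sd / math.sqrt(n))

# The same 2-point difference with the same spread, at growing N:
for n in (25, 100, 400):
    print(f"n={n:>3}: z = {z_stat(52, 50, 10, n):.1f}")
```

Quadrupling N doubles z, with no change at all in the underlying effect.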


__________________
yes, a million times yes

Old Post Aug 19th, 2011 04:15 PM
tsilamini
Junior Member

Gender: Unspecified
Location:

quote: (post)
Originally posted by Deadline
If there were fewer subjects, you would find something else to complain about. There was a telephone telepathy experiment done by Rupert Sheldrake, and in those experiments they complained there weren't enough subjects.


I wouldn't have made that complaint

quote: (post)
Originally posted by Deadline
Which is the point I've been making all along. You don't make constructive criticism; you invent stuff to complain about.


like, I'm working on a post that tries to explain confidence intervals to you, is this even worth my time?


__________________
yes, a million times yes

Old Post Aug 19th, 2011 04:19 PM
Bardock42
Junior Member

Gender: Unspecified
Location: With Cinderella and the 9 Dwarves

quote: (post)
Originally posted by Deadline



I'm not trying to insult you. That is what you think, isn't it?


__________________

Old Post Aug 19th, 2011 04:25 PM
Deadline
Junior Member

Gender: Male
Location: United Kingdom

Every man and his dog is complaining about psi experiments not being repeatable. How the hell do you disprove that? By having more trials. Now somebody's trying to tell me more trials is bad?

If this isn't an example of trying to take the piss, I don't know what is.


__________________
Watch what people are cynical about, and one can often discover what they lack.
- General George Patton Jr

Old Post Aug 19th, 2011 04:27 PM
dadudemon
Senior Member

Gender: Male
Location: Bacta Tank.

quote: (post)
Originally posted by inimalist
the most common scientific tests are t-tests or z-tests (ANOVAs and F-tests too, but they are much more complicated than I want to explain on a forum....).

In a t or z test, you compare your observed mean to a null mean, divided by the standard error of your results:

t = (mean1 - mean2)/SE

when calculating the SE, the square root of N (the number of trials/participants) is in the denominator:

SE = SD/√N

as N increases, the SD/√N value decreases, thus SE decreases. As SE decreases, the t value increases, as the difference between the means is divided by an increasingly smaller number. The higher your t value, the more likely the result is significant. It is a basic law of the equations used.

like, seriously, I'm not even exaggerating at this point, null hypothesis testing has major, major problems



If I'm not mistaken, I was just referring to two-choice and four-choice probabilities and the likelihood of guessing the results.

Is that not what those tests were?

There's also the literal requirement of a significant sample size. All the time I read articles about a new drug or a new method that has fewer than 20 samples, and the journal concludes: "More testing will be required to determine if x is effective."

In z-tests, you must have a significant sample size in order to overcome the null hypothesis... if you're trying to show the desired results. In fact, not having a large enough sample size in your z-testing can result in shitty results or wacky SDs (nuisance parameters).

In the medical community, 30 is usually considered acceptable for z-testing. It's the same for physics.
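Rather than a fixed rule of thumb like 30, the sample size for a target effect is usually chosen with a power calculation. A Python sketch for detecting the 34%-vs-25% hit rates discussed above (5% two-sided alpha and 80% power are conventional defaults, not figures from the thread):

```python
import math

def n_for_power(p0, p1, alpha_z=1.96, power_z=0.8416):
    """Approximate trials needed to detect rate p1 vs chance p0
    (defaults: 5% two-sided significance, 80% power)."""
    num = alpha_z * math.sqrt(p0 * (1 - p0)) + power_z * math.sqrt(p1 * (1 - p1))
    return math.ceil((num / (p1 - p0)) ** 2)

# Detecting a 34% hit rate against 25% chance:
print(n_for_power(0.25, 0.34))
```

This comes out near 200 trials, which puts the several-hundred-trial studies in the quoted table roughly in the right range for an effect of that size.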


__________________

Old Post Aug 19th, 2011 04:28 PM
Deadline
Junior Member

Gender: Male
Location: United Kingdom

quote: (post)
Originally posted by dadudemon

There's also the literal requirement of a significant sample size. All the time I read articles about a new drug or a new method that has fewer than 20 samples, and the journal concludes: "More testing will be required to determine if x is effective."


Exactly. What on earth is he talking about?

quote: (post)
Originally posted by Bardock42
I'm not trying to insult you. That is what you think, isn't it?


Go ahead. If trolling makes you feel good, that says a lot about you. Yes, I do think there might be a 'conspiracy', but I think it may just be more like prejudice.


__________________
Watch what people are cynical about, and one can often discover what they lack.
- General George Patton Jr

Last edited by Deadline on Aug 19th, 2011 at 04:33 PM

Old Post Aug 19th, 2011 04:30 PM
tsilamini
Junior Member

Gender: Unspecified
Location:

quote: (post)
Originally posted by Deadline
Every man and his dog is complaining about psi experiments not being repeatable. How the hell do you disprove that? By having more trials. Now somebody's trying to tell me more trials is bad?

If this isn't an example of trying to take the piss, I don't know what is.


oh, ok

there is a difference between more trials in a single experimental design (the N value) and replicating experimental designs using similar N values.

within a single experiment, yes, there are problems with increasing N, because of how it affects the calculation of the standard error; however, across experiments this isn't the case, as SD and SE are calculated as unique values within individual experimental designs.

within a single experiment, increasing N will only have the effect of lowering your p-value (increasing the likelihood of significant results), and is something I am personally loath to do [even when my adviser thinks it is OK. Actually, in terms of philosophical approaches to data, I am a very strict purist].

Replication, on the other hand, verifies previous findings and has no impact on their statistical results.

Do you know about meta-analysis?
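A fixed-effect meta-analysis of the four ganzfeld rows quoted earlier can be sketched in a few lines of Python (inverse-variance pooling of the hit rates; the rates are taken from the rounded table values, so this is only illustrative):

```python
import math

# (trials, hit rate) for the four ganzfeld studies quoted above
studies = [(355, 0.344), (124, 0.37), (97, 0.33), (100, 0.33)]

# Inverse-variance (fixed-effect) pooling: weight each study by 1/variance,
# where the variance of a proportion is p(1-p)/n.
weights = [n / (p * (1 - p)) for n, p in studies]
pooled = sum(w * p for w, (n, p) in zip(weights, studies)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"pooled hit rate = {pooled:.1%} ± {1.96 * se:.1%}")
```

Pooling narrows the interval without changing any single study's own statistics, which is the sense in which replication, rather than inflating one experiment's N, accumulates evidence.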


__________________
yes, a million times yes

Old Post Aug 19th, 2011 04:32 PM