How are anomalous cognition (AC) results (remote viewing and ganzfeld) different from aspirin results?
If the same standard is applied, the AC results are much stronger.
The aspirin studies had more opportunity for fraud and
experimenter effects than did the ac studies.
The aspirin studies were at least as frequently funded and
conducted by those with a vested interest in the outcome.
Both used heterogeneous methods and participants.
What's really interesting is that this sort of thing is mentioned in the first PDF:
By chance, then, the students should have been right exactly half the time. Instead, they predicted correctly just over 53 percent of the time. Not a big difference, but, as Melissa Burkley blogged last month, effects of that size are what support claims that aspirin can prevent heart attacks or that eating calcium helps build healthy bones.
So basically people just dictate to us what we should believe.
__________________ Watch what people are cynical about, and one can often discover what they lack.
- General George Patton Jr
You can always find something wrong with experiments if you're looking for mistakes. Anyway, I think there's a link to his experiment if you're interested. In terms of psi experiments with repeatable results, you have them in the first PDF.
Results of Free Response Experiments
(Used in 1995 report I wrote for U.S. Government) Hit rates assume there were four choices; chance = 25%
U.S. Government Studies in Remote Viewing:
SRI International (1970's and 1980's)
966 trials, p-value = 4.3 × 10⁻¹¹, hit rate = 34%, two-sided 95% C.I. 31% to 37%
SAIC
455 trials, p-value = 5.7 × 10⁻⁷, hit rate = 35%, C.I. 30% to 40%
Ganzfeld:
Psychophysical Research Laboratories, Princeton (1980's)
355 trials, p-value = .00005, hit rate = 34.4%, C.I. 29.4% to 39.6%
University of Amsterdam, Netherlands (1990's)
124 trials, p-value = .0019, hit rate = 37%, C.I. 29% to 46%
University of Edinburgh, Scotland (1990's)
97 trials, p-value = .0476, hit rate = 33%, C.I. 25% to 44%
Rhine Research Institute, North Carolina (1990's)
100 trials, p-value = .0446, hit rate = 33%, C.I. 24% to 42%
That's hundreds of results.
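For what it's worth, rows like the SRI one can be roughly reproduced with a one-sided z test against the 25% chance rate plus a normal-approximation 95% confidence interval. This is a sketch only: the original report may well have used an exact binomial method, and the 329-hit count is my back-calculation from the quoted 34% rate, not a figure from the report.

```python
from math import sqrt
from statistics import NormalDist

def hit_rate_summary(hits, trials, chance=0.25):
    """One-sided z test of a hit rate against chance, plus a
    two-sided 95% normal-approximation confidence interval."""
    p_hat = hits / trials
    # standard error under the null (chance) rate, used for the test
    se0 = sqrt(chance * (1 - chance) / trials)
    p_value = 1 - NormalDist().cdf((p_hat - chance) / se0)
    # standard error at the observed rate, used for the interval
    se = sqrt(p_hat * (1 - p_hat) / trials)
    return p_hat, p_value, (p_hat - 1.96 * se, p_hat + 1.96 * se)

# SRI row: 966 trials, ~34% hit rate (329 hits is an assumed count)
rate, p, (lo, hi) = hit_rate_summary(329, 966)
# lands near the quoted figures: rate ~ 34%, p on the order of 1e-11,
# CI roughly 31% to 37%
```

The same function reproduces the ganzfeld rows to within rounding, which at least shows the table's arithmetic is internally consistent.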
I'd like to see hit rates of over 50% for four choices and 75% for 2 choices.
I've been right on dice rolls (a 50% chance with two outcomes) as many as 20+ times in a row. I certainly do not have any "psychic" powers or latent psi abilities.
The tests would have to be run over many days and thousands of times (tens of thousands) in order to be legit, to me.
In statistics, it's not unheard of for a professor to assign the ol' flip-a-coin-100-times homework. If your record doesn't show at least one run of 6 flips in a row, you fail the assignment, because 100 coin flips should produce at least one side coming up 6 times in a row.
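That homework rule mostly checks out in simulation. A quick sketch (hypothetical code, not anything from the thread) estimating how often 100 fair flips contain a same-side run of six or more:

```python
import random

def longest_run(n_flips, rng):
    """Length of the longest same-side run in n_flips fair coin flips."""
    best = cur = 1
    prev = rng.random() < 0.5
    for _ in range(n_flips - 1):
        flip = rng.random() < 0.5
        cur = cur + 1 if flip == prev else 1
        best = max(best, cur)
        prev = flip
    return best

rng = random.Random(1)
sessions = 10_000
share = sum(longest_run(100, rng) >= 6 for _ in range(sessions)) / sessions
# share comes out around 0.8: a run of six or more shows up in roughly
# four out of five 100-flip sessions
```

Which also means the professor's rule flunks roughly one honest student in five, so it's a heuristic for catching fabricated data, not a hard law.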
I am of course referring not to the discredited and abandoned older research, but to the new research that your second link claims is so professional and accepted.
No, not "maybe". A mystic would say I have "abilities" when hearing that story.
A statistician would say that with millions of trials there is a real probability of getting 100 in a row.
Correct.
Some only had 100 in the data set.
Not only would you want to see thousands, but you'd want to see the results duplicated. (Not really thousands... but something like that; it would need a lot of evidence to get the scientific community to believe.)
Well I do and it's true. The larger your "set", the higher the probability that you'll have 'straight' runs of one side or the other.
I'd be very happy if Psi would be real. That would be an enormous finding. I just think that a lot of the tests I have heard about are by no means valid.
Why do you think, if there is such good evidence, that the scientific community tries to hide it? I mean, that's what you propose, yes? That there is a conspiracy to hide the significant findings in Psi from the general public.
woah... ummm, I think you guys just gave Cohen a stroke....
the way those statistical tests are done, more subjects and more trials actually bias them in favor of a positive result. Increasing the N in an experiment, by definition, decreases the standard error, and thus shrinks the 95% CI to a smaller range that is therefore less likely to contain the null. It is one of the major problems with those statistical methods, and a full power analysis (something with its own problems) would be required to know exactly how many subjects and trials would be appropriate.
The idea that more people doing more trials is good for science is false, 100% false
As far as I'm aware you usually have a bigger number than that in psi experiments. They don't just stop at 20 guesses and call it done.
I kinda see what you're saying, but the reason these experiments were set up this way is that they were within the protocols of accepted science, and (I might be wrong) other areas of research follow the same criteria.
What you're saying now is that even though the experiments followed scientific protocols, it's not good enough. In fact, there's evidence that psi experiments actually have better standards than other experiments.
I bet you any money that there are other areas of science which don't accept the criteria you mentioned.
If there were fewer subjects, you would find something else to complain about. There was a telephone telepathy experiment done by Rupert Sheldrake, and in those experiments they complained there weren't enough subjects. Damned if you do, damned if you don't.
Which is the point I've been making all along. You don't make constructive criticism; you invent stuff to complain about. You've already made your mind up. If this were an experiment about something else you wouldn't be saying that.
the most common scientific tests are t-tests or z-tests (ANOVAs and F-tests too, but they are much more complicated than I want to explain on a forum....).
In a t or z test, you compare your observed mean to a null mean, divided by the standard error of your results:
t = (mean1 - mean2)/SE
The standard error is the standard deviation divided by the square root of N (the number of trials/participants):
SE = SD/sqrt(N)
as N increases, SD/sqrt(N) decreases, thus SE decreases. As SE decreases, the t value increases, as the difference between the means is divided by an increasingly smaller number. The higher your t value, the more likely the result is significant. It is a basic consequence of the equations used.
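You can watch that mechanism with the 53%-versus-50% figure quoted earlier in the thread. A sketch (the N values are my own picks, not from any actual study): the gap between the rates never changes, only the sample size does.

```python
from math import sqrt
from statistics import NormalDist

def z_test(p_hat, p0, n):
    """Two-sided z test of an observed proportion against a null rate.
    The standard error shrinks like 1/sqrt(n), so the same fixed gap
    yields a larger z, and a smaller p, as n grows."""
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

for n in (100, 1_000, 10_000):
    z, p = z_test(0.53, 0.50, n)
    print(n, round(z, 2), p)
# the identical 3-point gap goes from p ~ 0.55 at n = 100 (nothing)
# to p ~ 2e-9 at n = 10,000 ("highly significant")
```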
like, seriously, I'm not even exaggerating at this point, null hypothesis testing has major, major problems
Every man and his dog is complaining about psi experiments not being repeatable. How the hell do you disprove that? By having more trials. Now somebody's trying to tell me more trials is bad?
If this isn't an example of trying to take the piss I don't know what is.
If I'm not mistaken, I was just referring to two-choice and four-choice probabilities and the likelihood of guessing results.
Is that not what those tests were?
There's also the literal requirement of a significant sample size. All the time I read articles about a new drug or a new method with fewer than 20 samples, and the journal concludes: "More testing will be required to determine if x is effective."
In z-tests, you must have a large enough sample size to overcome the null hypothesis... if you're trying to show the desired results. In fact, not having a large enough sample size in your z-testing can result in shitty results or wacky SDs (nuisance parameters).
In the medical community, 30 is usually considered acceptable for z-testing. It's the same for physics.
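For the 53%-versus-50% effect the thread keeps coming back to, the textbook normal-approximation sample-size formula says 30 is nowhere near enough. A sketch using standard constants (a real design would run a fuller power analysis, as noted above):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_needed(p0, p1, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a two-sided test that a
    true rate p1 differs from a null rate p0."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_b = nd.inv_cdf(power)          # ~0.84 for 80% power
    reach = z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))
    return ceil((reach / (p1 - p0)) ** 2)

# detecting a 53% hit rate against a 50% chance baseline
n = n_needed(0.50, 0.53)  # on the order of 2,000+ trials, not 30
```

By contrast, a 34% hit rate against 25% chance needs only around 200 trials by the same formula, which is one reason the free-response tables quoted earlier reach significance with a few hundred trials each.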
Go ahead. If trolling makes you feel good, that says a lot about you. Yes, I do think there might be a 'conspiracy', but I think it may just be more like prejudice.
Last edited by Deadline on Aug 19th, 2011 at 04:33 PM
there is a difference between more trials in a single experimental design (the N value) and replicating experimental designs using similar N values.
within a single experiment, yes, there are problems with increasing N, because of how it affects the calculation of standard deviation or standard error; however, across experiments this isn't the case, as SD and SE are calculated as unique values within individual experimental designs.
within a single experiment, increasing N will only have the effect of lowering your p value (increasing the likelihood of significant results), and is something I am personally loath to do [even when my adviser thinks it is OK. Actually, in terms of philosophical approaches to data, I am a very strict purist].
Replication, on the other hand, verifies previous findings and has no impact on their statistical results.