Double standards in experiments

Started by inimalist · 4 pages

Originally posted by dadudemon
If I'm not mistaken, I was just referring to 2 side and 4 side probabilities and the likelihood of guessing results.

Is that not what those tests were?

at the end of the day, it's all t-tests

Originally posted by dadudemon
There's also the literal requirement of a sufficiently large sample size. All the time I read articles about a new drug or a new method that has fewer than 20 samples, and the journal concludes: "More testing will be required to determine if x is effective."

In z-tests, you must have a sufficiently large sample in order to overcome the null hypothesis... if you're trying to show the desired results. In fact, not having a large enough sample size in your z-testing can result in shitty results or wacky SDs (nuisance parameters).

In the medical community, 30 is usually considered acceptable for z-testing. It's the same for physics.

no, totally

sorry, I'm never sure how much people know about stats.

z-tests have far more problems associated with them than t-tests, like the one you mentioned above. There is also the problem that the SD in a z-test is assumed to be the known population SD rather than an estimate from the sample.
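
A quick sketch of that difference in Python (scipy and statsmodels are my tool choices here, not anything from the thread). Both tests compute the same statistic on the same data, but the z-test reads it off a normal distribution, so its p-value is always the smaller, i.e. overconfident, one:

```python
# Sketch: the z-test vs t-test gap at small N. The z-test pretends the
# sample SD is the known population SD, so its p-value is always
# smaller than the t-test's on the same data.
import numpy as np
from scipy import stats
from statsmodels.stats.weightstats import ztest

rng = np.random.default_rng(42)
sample = rng.normal(loc=27, scale=3, size=10)  # small N = 10, hypothetical data

t_stat, t_p = stats.ttest_1samp(sample, popmean=25)
z_stat, z_p = ztest(sample, value=25)

print(f"t-test: t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"z-test: z = {z_stat:.2f}, p = {z_p:.4f}  (always < the t-test p)")
```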

This really doesn't change the fact that inflating N gives more positive results. There is a theoretical "sweet spot" for any design based on the variance in the sample; however, there are only indirect and, imho, half-assed ways to calculate it (power analysis, for instance). It is one of the main reasons why so many people have complained about NHST in recent years, and why I am teaching myself Bayes for my own data.
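
For what it's worth, here is what that power-analysis calculation looks like in practice (a minimal sketch; the effect size of 0.5 is an assumption, not a number from the thread):

```python
# Sketch of a power analysis: solve for the N that gives 80% power to
# detect an assumed effect size of d = 0.5 at alpha = .05 in a
# one-sample t-test. Change the assumptions and the "sweet spot" moves.
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"required N: {n:.1f}")  # roughly 34 subjects under these assumptions
```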

Originally posted by inimalist
oh, ok

there is a difference between more trials in a single experimental design (the N value) and replicating experimental designs using similar N values.

My point.

I don't mean having someone "flex" their "psi" abilities 1,000 times in a single sitting.

I'm referring to testing over and over again. Keeping bitches fresh for measurements. 😆

Originally posted by inimalist

within a single experiment, increasing N will only have the effect of lowering your p value (increasing the likelihood of significant results), and is something I am personally loathe to do [even when my adviser thinks it is ok. Actually, in terms of philosophical approaches to data, I am a very strict purist].

What experiment are you looking at? I think they mentioned the number of trials. What's your def of an experiment: 1 trial or many?

Originally posted by inimalist

Do you know about meta-analysis?

A bit.

Originally posted by dadudemon
My point.

I don't mean having someone "flex" their "psi" abilities 1,000 times in a single sitting.

I'm not even sure they did that.

Originally posted by dadudemon

I'm referring to testing over and over again. Keeping bitches fresh for measurements. 😆

Yes, by having lots of trials.

Originally posted by Deadline
Go ahead. If trolling makes you feel good, that says a lot about you. Yes, I do think there might be a 'conspiracy', but I think it may just be more like prejudice.

I don't mean to troll you, I would like to understand where you are coming from.

Originally posted by Deadline
What experiment are you looking at?

I'm actually just speaking in general here about stats more abstractly. These N problems apply to any science that uses null hypothesis testing

Originally posted by Deadline
I think they mentioned the number of trials.

they do, and I could probably look up and run a power analysis, but I'm not actually arguing that any of those results have an issue with N [like, the n in the study I'm running is about 2000 (20 subjects, 100 trials each), so it really doesn't seem like that would be a problem here... I'd have to look closer at the methodology though...]. I was more commenting on you and ddm talking about wanting to run more trials.

Though I may have confused what you guys were talking about, i.e., running more experiments to replicate previous results, with simply running thousands of trials within single experiments.

Originally posted by Deadline
What's your def of an experiment: 1 trial or many?

I wouldn't define an experiment by the number of trials, actually. For sure, there has to be more than one. Optimally, you just have to be smart as a researcher. I've seen studies that had an N over 5000, but only 2 subjects (over 2500 trials per subject). They had awesome p-values, but 2 subjects, no matter how many trials, isn't really good science. The opposite is true too: 1000 subjects with 2 trials each isn't worth much at all.

I'm actually trying to teach myself new statistical methods that aren't tied to things like N or the other problems in NHST, namely, Bayesian probability. I honestly do not trust p-values anymore.

Originally posted by Deadline
A bit.

🙂 ok, so the only time too much replication would be a problem is when someone does a meta-analysis. There is something called the "file-drawer" effect: studies that don't produce significant results don't get published. So, I can run statistical analyses over many studies in an attempt to see if there is any overarching pattern; however, if only the successful studies get published, the meta-analysis will be biased in favor of a positive result.

another issue with modern science is that you can never get null (negative) results published.
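
You can see the bias in a toy simulation (a sketch with invented numbers; every simulated study has a true effect of exactly zero):

```python
# Sketch of the file-drawer effect: simulate 1000 studies of a true
# null, "publish" only the positive significant ones, and look at the
# effect the published literature implies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
published = []
for _ in range(1000):
    data = rng.normal(loc=0.0, scale=1.0, size=30)  # no real effect at all
    t, p = stats.ttest_1samp(data, popmean=0.0)
    if p < 0.05 and t > 0:      # journals take only positive, significant results
        published.append(data.mean())

print(f"'published': {len(published)}/1000 studies, every one a false positive")
print(f"their pooled mean effect: {np.mean(published):.2f} SDs (true effect: 0)")
```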

Originally posted by inimalist
I'm actually just speaking in general here about stats more abstractly. These N problems apply to any science that uses null hypothesis testing

they do, and I could probably look up and run a power analysis, but I'm not actually arguing that any of those results have an issue with N [like, the n in the study I'm running is about 2000 (20 subjects, 100 trials each), so it really doesn't seem like that would be a problem here... I'd have to look closer at the methodology though...]. I was more commenting on you and ddm talking about wanting to run more trials.

Though I may have confused what you guys were talking about, i.e., running more experiments to replicate previous results, with simply running thousands of trials within single experiments.

I wouldn't define an experiment by the number of trials, actually. For sure, there has to be more than one. Optimally, you just have to be smart as a researcher. I've seen studies that had an N over 5000, but only 2 subjects (over 2500 trials per subject). They had awesome p-values, but 2 subjects, no matter how many trials, isn't really good science. The opposite is true too: 1000 subjects with 2 trials each isn't worth much at all.

I'm actually trying to teach myself new statistical methods that aren't tied to things like N or the other problems in NHST, namely, Bayesian probability. I honestly do not trust p-values anymore.

🙂 ok, so the only time too much replication would be a problem is when someone does a meta-analysis. There is something called the "file-drawer" effect: studies that don't produce significant results don't get published. So, I can run statistical analyses over many studies in an attempt to see if there is any overarching pattern; however, if only the successful studies get published, the meta-analysis will be biased in favor of a positive result.

Yeah, I see what you mean, but I don't think that applies to anything I posted.

Originally posted by inimalist

another issue with modern science is that you can never get null (negative) results published.

Dunno about that, pretty sure I've seen failed replications of psi.

Originally posted by Deadline
Yeah, I see what you mean, but I don't think that applies to anything I posted.

sort of, it does apply to how to replicate the research

the part about not trusting NHST is applicable /shrug

Originally posted by Deadline
Dunno about that, pretty sure I've seen failed replications of psi.

fair enough

Take that to mean they are rare. You see them occasionally, but most journals won't publish nulls. You certainly don't see even a sizeable fraction of null results published.

Originally posted by inimalist
🙂 ok, so the only time too much replication would be a problem is when someone does a meta-analysis. There is something called the "file-drawer" effect: studies that don't produce significant results don't get published. So, I can run statistical analyses over many studies in an attempt to see if there is any overarching pattern; however, if only the successful studies get published, the meta-analysis will be biased in favor of a positive result.

another issue with modern science is that you can never get null (negative) results published.

Those meta-analyses are instantly criticized and some will not even get "funded" before they can commence the research.

Maybe I'm more of an optimist and think that department heads don't approve projects that have a propensity toward publication bias.

IMO, it can't be considered a meta-analysis unless "inconclusive" and "negative" results are also included (where applicable).

But what about negative or inconclusive results that were considered "improperly" conducted as I often see from the primary researchers? They scream, "they didn't do the test according to the parameters so their results must be thrown out! RAWR!"

Other times, meta-analyses can reveal quite awesome information via the "aggregate conclusion" phenomenon.

About the other stuff.

The "N", especially when testing humans, HAS to be in separate "testing" events due to fatigue. You can't have someone do 500 trials in one sitting without expecting results to change towards the end.

Also, depending on what is being tested, it very well may BE legitimate to use 1000 subjects with only 1 or 2 trials for each. You should still see the realization of the central limit theorem in such a case. That really does depend on the test being done, because you can't test how people will respond to 4 pictures, one of them being the target object, and then measure only one viewing while randomizing the target among them. Or can you?
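
A toy simulation of that central-limit-theorem point (a sketch; the 25%-chance guessing task and all numbers are assumptions):

```python
# Sketch: 1000 subjects, 2 trials each, on a pure-chance 25% guessing
# task. Each subject's hit rate is coarse (0, .5, or 1), but by the CLT
# the grand mean across subjects is tightly and normally distributed.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, trials_each, chance = 1000, 2, 0.25

hit_rates = rng.binomial(trials_each, chance, size=n_subjects) / trials_each
sem = hit_rates.std(ddof=1) / np.sqrt(n_subjects)
print(f"grand mean hit rate: {hit_rates.mean():.3f} +/- {sem:.3f} (chance = 0.25)")
```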

DAMNIT! Soooo much headache.

Originally posted by Deadline
You might be right, but you're probably lying.

well, it is actually really simple

All of the data you have presented is based on what is called a "null hypothesis significance test" (NHST). In an NHST, you compare the data you obtain through an experiment to a "null" value that represents chance.

So, in most studies data is compared to a null hypothesis (H0) of 0, meaning there is no effect. In the cases you presented, instead of testing against no effect, they tested against a chance null, H0=25%. This is fine. Now, say I have results where my observed mean (u1) is 30. This means there is an absolute difference of 5% between my observed mean hit rate, and the assumed null chance hit rate (u0):

u1-u0=5

However, in my test, not every subject had a hit rate of exactly 30. Some were above and others were below. So, we then take a measure of how much, on average, subjects differ from 30, known as the standard deviation (SD).

There are other things here, like probability distributions, etc, but for the sake of simplicity, just trust me that as a mathematical law, you can say what percentage of subject scores will fall within any number of standard deviations from the observed mean.

So, let's say that the SD in these results was 3. It is known that ~68% of all subjects will fall within one standard deviation of the mean, so our 68% confidence interval would be 27-33.

Now, there is something called an alpha value (a). Alpha can be seen as, ummmmm, let's say the "opposite" of a confidence interval. The tradition in science is to use an alpha value of .05, or 5%, meaning that the typical CI used in experiments is 95%. Basically, the CI represents all the values consistent with your observed data, and alpha represents those that fall outside.

A 95% confidence interval is all data within ~2 standard deviations of the observed mean, so in this case, 30 +/- 6, or 24-36.

Because 25 falls within this range, we cannot say our observed mean is statistically different from the null mean. Based on the variance in the data, a score of 25% would not be unexpected; therefore, the result is non-significant. Let me know if this doesn't make sense.
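
Here is that exact calculation as a sketch in Python (scipy is my choice; following the post in treating the SD of 3 as the standard error of the mean):

```python
# Worked version of the numbers above: observed mean 30, chance null 25,
# spread 3 (treated as the standard error, as the post does).
from scipy import stats

mu_obs, mu_null, se = 30.0, 25.0, 3.0

z = (mu_obs - mu_null) / se                                 # 5/3 ~ 1.67
p = 2 * stats.norm.sf(abs(z))                               # two-sided p ~ .096
lo, hi = stats.norm.interval(0.95, loc=mu_obs, scale=se)    # ~ (24.1, 35.9)

print(f"z = {z:.2f}, p = {p:.3f}, 95% CI = ({lo:.1f}, {hi:.1f})")
# 25 sits inside the CI and p > .05: non-significant, as the post says.
```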

Also, about the H0=25%. That null isn't the most appropriate, as in many cases the probability of something occurring is based not simply on a raw percentage estimate, but can be influenced by previous trials and other mundane things. Think about it like this: I have a deck of cards and I say "deadline, predict the suit of the next card". Now, the raw probability is 1/4, or 25%. However, as we go through the deck, the suits of the previous cards influence how probable it is that another suit will come up. So, if you see a string of hearts, you as the experimental subject would then, even if subconsciously, know not to select hearts, as there are now fewer hearts in the deck than the other suits, making the probability of the other suits greater than 25%.

This isn't nitpicking either. The studies I did in my undergrad had subjects distinguish between an L and a T on target objects. There was nothing important about the T or the L, but even in that case, they would often ask "why were there more Ts/Ls?". This type of probability is something our brains are intrinsically aware of, and it could certainly cause difficulty in determining what a proper null percentage would be in these experiments.

Another example of this is from previous stuff you showed me that said that a particular subject seemed to have a talent for remote viewing military installations. However, the studies were conducted on military bases (iirc), meaning that the context may have played a role in priming a certain type of response in the individual. It could be even more mundane, though, as cognitive biases like that could be produced by someone simply being a fan of Command and Conquer games.
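
The deck arithmetic is easy to check (a sketch; four hearts in a row is just an example):

```python
# Sketch of the depleting-deck point: after four hearts have gone by,
# the "25% per suit" null no longer holds for the next card.
remaining = {"hearts": 13, "spades": 13, "diamonds": 13, "clubs": 13}
for _ in range(4):              # a string of four hearts is dealt
    remaining["hearts"] -= 1

deck = sum(remaining.values())  # 48 cards left
for suit, count in remaining.items():
    print(f"P(next is {suit}) = {count}/{deck} = {count / deck:.3f}")
# hearts: 9/48 = .188, every other suit: 13/48 = .271 -- not 25% anymore
```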

Additionally, as I posted in the Atheism thread, when you apply more rigorous statistical methods, like Bayesian probability analysis, most of the significant results in psi have been seen to evaporate. There are a number of reasons for this. For one, an NHST does not tell you how likely it is that your hypothesis is true, but rather how likely it is that chance alone is responsible for your results. Studies have looked at the correlation between p-values (the probability of chance explaining your results) and true hypotheses, and found the R value to be just over .35 (extremely low), and this number drops when you restrict p-values to only those that would find significant results. (Please ask if this doesn't make sense; I'm sure stats aren't as exciting to you as they are to me, but if you want to talk about double standards in science, you need to understand how stats work.) These results are interesting, but really only show that, in a few experiments, the pattern of results isn't what would be expected due to chance alone.
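
To make the contrast concrete, here is a minimal sketch of one Bayesian alternative: a simple binomial Bayes factor against the 25% chance null, with a uniform prior on the hit rate. This is not the default Bayesian t-test used in the papers below, and the 280-hits-in-1000-trials figure is invented:

```python
# Sketch: a hit rate that is "significant" by p-value but unconvincing
# by Bayes factor. H0: p = .25 (pure chance); H1: p unknown, uniform prior.
from math import exp, log
from scipy import stats

def bf10(hits, n, p0=0.25):
    # Under a uniform prior, the marginal likelihood of any outcome is 1/(n+1).
    log_m1 = -log(n + 1)
    # Likelihood of the data under the chance hypothesis:
    log_m0 = stats.binom.logpmf(hits, n, p0)
    return exp(log_m1 - log_m0)

hits, n = 280, 1000
p_val = stats.binomtest(hits, n, p=0.25).pvalue
print(f"NHST:  p = {p_val:.3f}  (significant at .05)")
print(f"Bayes: BF10 = {bf10(hits, n):.2f}  (< 1: the data slightly favor chance)")
```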

this was too long to include as part of the last post, but here are studies that have begun analyzing previous psi results using Bayesian methods

Originally posted by inimalist
hmmmmm, can't think of a better place to post this, so a little OT, but I think you guys might be interested.

So, in the field of statistical analysis, there is this new concept known as Bayesian Probability (BP), which can replace Null-Hypothesis significance testing (NHST), as a way of determining whether you have an effect in your data. I don't really understand it at this point (actually, the reason I found these articles was looking for intros to BP on pubmed for my own research), but if you look at statistical science, it is pretty much taken as a given that BP is superior to NHST for a number of reasons (in fact, NHST has massive and fatal problems, but for some reason, psychologists seem to be the last people to abandon it).

Anyways, in my search for such tutorials, I came across a pair of articles that will undoubtedly re-awaken some old debates, but that I feel most people here are going to get a kick out of.

They are both psi studies that reevaluated results found using NHST with BP. In the first, a series of 9 studies with over 1000 participants, which were all positive results using NHST, were found to be completely non-significant using BP:

Why psychologists must change the way they analyze their data: the case of psi: comment on Bem (2011)

Does psi exist? D. J. Bem (2011) conducted 9 studies with over 1,000 participants in an attempt to demonstrate that future events retroactively affect people's responses. Here we discuss several limitations of Bem's experiments on psi; in particular, we show that the data analysis was partly exploratory and that one-sided p values may overstate the statistical evidence against the null hypothesis. We reanalyze Bem's data with a default Bayesian t test and show that the evidence for psi is weak to nonexistent. We argue that in order to convince a skeptical audience of a controversial claim, one needs to conduct strictly confirmatory studies and analyze the results with statistical tests that are conservative rather than liberal. We conclude that Bem's p values do not indicate evidence in favor of precognition; instead, they indicate that experimental psychologists need to change the way they conduct their experiments and analyze their data.

http://www.ncbi.nlm.nih.gov/pubmed/21280965

The second, while still finding 3 out of 6 positive results using BP, overturned a 6 out of 6 positive result using NHST:

Extraordinary claims require extraordinary evidence: the case of non-local perception, a classical and bayesian review of evidences

Starting from the famous phrase "extraordinary claims require extraordinary evidence," we will present the evidence supporting the concept that human visual perception may have non-local properties, in other words, that it may operate beyond the space and time constraints of sensory organs, in order to discuss which criteria can be used to define evidence as extraordinary. This evidence has been obtained from seven databases which are related to six different protocols used to test the reality and the functioning of non-local perception, analyzed using both a frequentist and a new Bayesian meta-analysis statistical procedure. According to a frequentist meta-analysis, the null hypothesis can be rejected for all six protocols even if the effect sizes range from 0.007 to 0.28. According to Bayesian meta-analysis, the Bayes factors provides strong evidence to support the alternative hypothesis (H1) over the null hypothesis (H0), but only for three out of the six protocols. We will discuss whether quantitative psychology can contribute to defining the criteria for the acceptance of new scientific ideas in order to avoid the inconclusive controversies between supporters and opponents.

http://www.ncbi.nlm.nih.gov/pubmed/21713069

I'm really interested in reading the second, simply to see what evidence still remains, but the takeaway from this post is that, even where psi phenomena may have been discovered in past tests, we now see that superior statistical methods actually reduce, if not eliminate entirely, the salience of that evidence. This is nothing new, the same is typically found with tighter controls, etc, just something that tickled me in the right way this morning.

...

[an interesting reply to the first study]

A Bayes factor meta-analysis of Bem's ESP claim

In recent years, statisticians and psychologists have provided the critique that p-values do not capture the evidence afforded by data and are, consequently, ill suited for analysis in scientific endeavors. The issue is particular salient in the assessment of the recent evidence provided for ESP by Bem (2011) in the mainstream Journal of Personality and Social Psychology. Wagenmakers, Wetzels, Borsboom, and van der Maas (Journal of Personality and Social Psychology, 100, 426-432, 2011) have provided an alternative Bayes factor assessment of Bem's data, but their assessment was limited to examining each experiment in isolation. We show here that the variant of the Bayes factor employed by Wagenmakers et al. is inappropriate for making assessments across multiple experiments, and cannot be used to gain an accurate assessment of the total evidence in Bem's data. We develop a meta-analytic Bayes factor that describes how researchers should update their prior beliefs about the odds of hypotheses in light of data across several experiments. We find that the evidence that people can feel the future with neutral and erotic stimuli to be slight, with Bayes factors of 3.23 and 1.57, respectively. There is some evidence, however, for the hypothesis that people can feel the future with emotionally valenced nonerotic stimuli, with a Bayes factor of about 40. Although this value is certainly noteworthy, we believe it is orders of magnitude lower than what is required to overcome appropriate skepticism of ESP.



Originally posted by dadudemon
Those meta-analyses are instantly criticized and some will not even get "funded" before they can commence the research.

Maybe I'm more of an optimist and think that department heads don't approve projects that have a propensity toward publication bias.

IMO, it can't be considered a meta-analysis unless "inconclusive" and "negative" results are also included (where applicable).

But what about negative or inconclusive results that were considered "improperly" conducted as I often see from the primary researchers? They scream, "they didn't do the test according to the parameters so their results must be thrown out! RAWR!"

Other times, meta-analyses can reveal quite awesome information via the "aggregate conclusion" phenomenon.

actually, the trick is that once you have a meta-analysis score, you can run some stats to see how many null results would have to be included in the meta-analysis to make it non-significant.

So like, if you find that your significant meta-analysis would only need one or two disconfirming studies to show there is no actual pattern in the data, you can probably conclude the meta-analysis is reflecting some type of file-drawer effect. If the number is like 400000 or something, then you can probably be confident that there is a real effect. This is actually the case with the research done on violence in media and violence in people. The last such analysis I saw said that something like only 20 null results would reduce the effect to non-significance.
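
That calculation is essentially Rosenthal's "fail-safe N". A minimal sketch (the per-study z-scores are invented):

```python
# Sketch of a fail-safe N: how many unpublished null (z = 0) studies
# would drag a significant combined result below the .05 threshold?
from math import ceil

def fail_safe_n(z_scores, z_crit=1.645):
    # Stouffer combination: z_combined = sum(z) / sqrt(k). Solve for the
    # number of zero-effect studies that pushes it down to z_crit.
    k, s = len(z_scores), sum(z_scores)
    return ceil((s / z_crit) ** 2 - k)

studies = [2.1, 1.8, 2.5, 0.9, 2.2]   # hypothetical per-study z-scores
print(f"fail-safe N = {fail_safe_n(studies)} null studies")  # ~29 here
```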

You are right about null results though; a lot aren't published because they just aren't good research, rather than just not being a good narrative. It's a double-edged sword though, and it also applies to research that finds different results: stuff I did in undergrad was almost impossible to publish because it provided significant evidence against prevailing theories.

Originally posted by dadudemon
About the other stuff.

The "N", especially when testing humans, HAS to be in separate "testing" events due to fatigue. You can't have someone do 500 trials in one sitting without expecting results to change towards the end.

Also, depending on what is being tested, it very well may BE legitimate to use 1000 subjects with only 1 or 2 trials for each. You should still see the realization of the central limit theorem in such a case. That really does depend on the test being done because you can't test to see how people will respond to 4 pictures, one the object, and then measure only one viewing while randomizing the object among them. Or can you?

sure, but now we are differentiating statistics from experimental design. 1 subject doing 10000000000 trials is going to net you an impressively low p-value, and will conform to general statistical rules, but in terms of being proper or well-designed research, it is nonsense.

my own bias is that, no, 2 trials per participant is not enough. It is stuff like that which causes me and my lab mates to laugh heartily at social psych research that depends on survey results and the like.

Originally posted by dadudemon
DAMNIT! Soooo much headache.

pfft, how can you not like stats?

Originally posted by dadudemon
Maybe I'm more of an optimist and think that department heads don't approve projects that have a propensity toward publication bias.

man, if you only knew what a cluster**** funding is

Originally posted by inimalist
well, it is actually really simple

All of the data you have presented is based on what is called a "null hypothesis significance test" (NHST). In an NHST, you compare the data you obtain through an experiment to a "null" value that represents chance.

So, in most studies data is compared to a null hypothesis (H0) of 0, meaning there is no effect. In the cases you presented, instead of testing against no effect, they tested against a chance null, H0=25%. This is fine. Now, say I have results where my observed mean (u1) is 30. This means there is an absolute difference of 5% between my observed mean hit rate, and the assumed null chance hit rate (u0):

u1-u0=5

However, in my test, not every subject had a hit rate of exactly 30. Some were above and others were below. So, we then take a measure of how much, on average, subjects differ from 30, known as the standard deviation (SD).

There are other things here, like probability distributions, etc, but for the sake of simplicity, just trust me that as a mathematical law, you can say what percentage of subject scores will fall within any number of standard deviations from the observed mean.

So, let's say that the SD in these results was 3. It is known that ~68% of all subjects will fall within one standard deviation of the mean, so our 68% confidence interval would be 27-33.

Now, there is something called an alpha value (a). Alpha can be seen as, ummmmm, let's say the "opposite" of a confidence interval. The tradition in science is to use an alpha value of .05, or 5%, meaning that the typical CI used in experiments is 95%. Basically, the CI represents all the values consistent with your observed data, and alpha represents those that fall outside.

A 95% confidence interval is all data within ~2 standard deviations of the observed mean, so in this case, 30 +/- 6, or 24-36.

Because 25 falls within this range, we cannot say our observed mean is statistically different from the null mean. Based on the variance in the data, a score of 25% would not be unexpected; therefore, the result is non-significant. Let me know if this doesn't make sense.

Also, about the H0=25%. That null isn't the most appropriate, as in many cases the probability of something occurring is based not simply on a raw percentage estimate, but can be influenced by previous trials and other mundane things. Think about it like this: I have a deck of cards and I say "deadline, predict the suit of the next card". Now, the raw probability is 1/4, or 25%. However, as we go through the deck, the suits of the previous cards influence how probable it is that another suit will come up. So, if you see a string of hearts, you as the experimental subject would then, even if subconsciously, know not to select hearts, as there are now fewer hearts in the deck than the other suits, making the probability of the other suits greater than 25%.

This isn't nitpicking either. The studies I did in my undergrad had subjects distinguish between an L and a T on target objects. There was nothing important about the T or the L, but even in that case, they would often ask "why were there more Ts/Ls?". This type of probability is something our brains are intrinsically aware of, and it could certainly cause difficulty in determining what a proper null percentage would be in these experiments.

Another example of this is from previous stuff you showed me that said that a particular subject seemed to have a talent for remote viewing military installations. However, the studies were conducted on military bases (iirc), meaning that the context may have played a role in priming a certain type of response in the individual. It could be even more mundane, though, as cognitive biases like that could be produced by someone simply being a fan of Command and Conquer games.

Additionally, as I posted in the Atheism thread, when you apply more rigorous statistical methods, like Bayesian probability analysis, most of the significant results in psi have been seen to evaporate. There are a number of reasons for this. For one, an NHST does not tell you how likely it is that your hypothesis is true, but rather how likely it is that chance alone is responsible for your results. Studies have looked at the correlation between p-values (the probability of chance explaining your results) and true hypotheses, and found the R value to be just over .35 (extremely low), and this number drops when you restrict p-values to only those that would find significant results. (Please ask if this doesn't make sense; I'm sure stats aren't as exciting to you as they are to me, but if you want to talk about double standards in science, you need to understand how stats work.) These results are interesting, but really only show that, in a few experiments, the pattern of results isn't what would be expected due to chance alone.

Look man, it's debatable whether Bayesian analysis is more rigorous, and I don't think it makes past results null and void. Bayesian analysis enables you to get whatever results you want depending on how biased you are. I don't think you have that with the other method. I think this is explained at the beginning here.

http://www.psy.unipd.it/~tressold/cmssimple/uploads/includes/ExtraordinaryClaim011.pdf

Too busy to read all of it.

Looks like a lot of medical experiments could have cognitive bias if you read comics.

Originally posted by Deadline
Look man, it's debatable whether Bayesian analysis is more rigorous, and I don't think it makes past results null and void. Bayesian analysis enables you to get whatever results you want depending on how biased you are. I don't think you have that with the other method. I think this is explained at the beginning here.

http://www.psy.unipd.it/~tressold/cmssimple/uploads/includes/ExtraordinaryClaim011.pdf

Too busy to read all of it.

Looks like a lot of medical experiments could have cognitive bias if you read comics.

I've read and, in fact, posted that study to you in the past... It shows that a Bayes analysis reduces iirc 6 significant results to 3... That isn't a strong positive for previous psi research. If you are really interested you can look up my summation of it and the experiments immediately preceding it in the "Atheism" thread.

Can you explain how "Bayesian analysis enables you to get whatever results you want depending on how biased you are"? I haven't found that in the article or in any of the reading I have done on Bayes theory. I suspect you are confusing the assignment of initial probability with some type of bias, but even then, there are very conservative and non-subjective fixes for that abundant in the literature... Even then, however, after the first iteration of the analysis, that initial probability becomes essentially moot...
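
That last point is easy to demonstrate (a sketch with invented data; the two priors are deliberately extreme):

```python
# Sketch of priors washing out: a skeptic's and a believer's Beta priors
# on a hit rate converge to nearly the same posterior after 1000 trials.
from scipy import stats

hits, misses = 260, 740                       # invented data from a ~25% task

priors = {"skeptic": (1, 9),                  # expects a low hit rate
          "believer": (9, 1)}                 # expects a high hit rate

for name, (a, b) in priors.items():
    post = stats.beta(a + hits, b + misses)   # conjugate Beta-binomial update
    print(f"{name}: posterior mean = {post.mean():.3f}")
# Both land at ~0.26 -- with this much data the starting bias barely matters.
```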

EDIT: WTF, lol, nvm dude, I re-posted that already, like 5 posts above this one. If you are too busy to read my argument, don't post a reply; it really makes you look sloppy...

Originally posted by inimalist
I've read and, in fact, posted that study to you in the past... It shows that a Bayes analysis reduces iirc 6 significant results to 3... That isn't a strong positive for previous psi research. If you are really interested you can look up my summation of it and the experiments immediately preceding it in the "Atheism" thread.

Can you explain how "Bayesian analysis enables you to get whatever results you want depending on how biased you are"? I haven't found that in the article or in any of the reading I have done on Bayes theory. I suspect you are confusing the assignment of initial probability with some type of bias, but even then, there are very conservative and non-subjective fixes for that abundant in the literature... Even then, however, after the first iteration of the analysis, that initial probability becomes essentially moot...

EDIT: WTF, lol, nvm dude, I re-posted that already, like 5 posts above this one. If you are too busy to read my argument, don't post a reply; it really makes you look sloppy...

Haven't been on here for a while, but I just need to say some things. I have little doubt that you're probably twisting things around. I also noticed you only started mentioning Bayesian analysis after I kept posting experiments with stats to back it up. I suspect you had probably never even heard of Bayesian analysis before and just started using it.

Anyway, I don't have enough in-depth knowledge to challenge you in detail (or time to read up), but again, I've asked people who have more knowledge about this subject than you do, and they don't agree with your opinion.

I really can't be bothered; if I had time, I'd probably find out you haven't read the pdf properly. That pdf was actually recommended. I can assume what you're going to do, and sometimes it's not really worth the effort.

you outdo yourself every time

LOL @ Deadline.

Originally posted by inimalist
you outdo yourself every time

Dammm you inimalist!!!

I don't see what you want here, deadline...? Like, you seem determined to keep this active, but your posts amount to little more than accusations of lying and insults, whereas you are the first person to cry foul when you perceive even the smallest slight from other members. You clearly don't read what I'm writing, and you have essentially refused to support any study or answer any claim I have made in this thread.

It really seems like you are looking for some kind of validation or vindication of your position from me, which, tbh, really isn't that important. If you believe this stuff, sure, that is cool, I don't judge you for that. However, if you want to substantiate the idea that psi phenomena are being specifically ignored and that there is a vast conspiracy among psychologists to suppress them, you might have to do better than suggesting I don't understand things that I've made (in retrospect far too) long posts describing.

I mean, you asked people who know better than I do, wtf do you care what I think? Obviously I don't know what I'm talking about. It's not like I do this for a living. Every. Day. Of. My. Life.

Originally posted by Deadline
Haven't been on here for a while, but I just need to say some things. I have little doubt that you're probably twisting things around. I also noticed you only started mentioning Bayesian analysis after I kept posting experiments with stats to back it up. I suspect you had probably never even heard of Bayesian analysis before and just started using it.

You keep making accusations like this, but never back them up. I would think you'd enjoy someone taking as much time as inimalist has to engage the topic with you, regardless of whether or not you disagree. Because, contrary to what your posts suggest you think, most people's motivating factor in these discussions is not to win debates on the internet by making things up.

Originally posted by Deadline
Anyway, I don't have enough in-depth knowledge to challenge you in detail (or time to read up), but again, I've asked people who have more knowledge about this subject than you do, and they don't agree with your opinion.

Professed ignorance followed by an appeal to authority.

Originally posted by Deadline
I really can't be bothered; if I had time, I'd probably find out you haven't read the pdf properly. That pdf was actually recommended. I can assume what you're going to do, and sometimes it's not really worth the effort.

Lack of effort followed by another veiled accusation (also unsupported).

Note: I haven't once commented on your position, just your methods of debate. You can change to a more agreeable form of communication without rubbing everyone the wrong way, and without having to change your positions on any topic.