Occult.Digital.Mobilization

"Remote Viewing" -


Post by Khephra on Mon Jul 27, 2009 1:20 pm

Most of this was pooled from Radin's The Conscious Universe. It was originally intended to serve as a repository where I could direct fundamentalist skeptics, but I figured I'd add it here too:

Dr. Ray Hyman, a professor of psychology at the University of Oregon, has an extensive record as a skeptic of psi phenomena. He chaired the National Research Council's review committee on parapsychology. He stated in a 1988 interview with the Chronicle of Higher Education that "Parapsychologists should be rejoicing. This was the first government committee that said their work should be taken seriously." (Chronicle of Higher Education, September 14, 1988, p. A5).

In early 1989 the Office of Technology Assessment issued a report of a workshop on the status of parapsychology. The end of the report stated that "It is clear that parapsychology continues to face strong resistance from the scientific establishment. The question is - how can the field improve its chances of obtaining a fair hearing across a broader spectrum of the scientific community, so that emotionality does not impede objective assessment of the experimental results? Whether the final result of such an assessment is positive, negative, or something in between, the field appears to merit such consideration." (Office of Technology Assessment 1989).

In 1995 the American Institutes for Research reviewed formerly classified government-sponsored psi research for the CIA at the request of the US Congress. Statistician Jessica Utts of the University of California, Davis, one of the two principal reviewers, concluded that, "The statistical results of the studies examined are far beyond what is expected by chance. Arguments that these results could be due to methodological flaws in the experiments are soundly refuted. Effects of similar magnitude to those found in government-sponsored research ... have been replicated at a number of laboratories across the world. Such consistency cannot be readily explained by claims of flaws or fraud. ... It is recommended that future experiments focus on understanding how this phenomenon works, and on how to make it as useful as possible. There is little benefit to continuing experiments designed to offer proof." (Utts, J. 1996. An assessment of the evidence for psychic functioning. Journal of Scientific Exploration 10:3-30).

The other principal reviewer, skeptic Ray Hyman, agreed: "The statistical departures from chance appear to be too large and consistent to attribute to statistical flukes of any sort. ... I tend to agree with Professor Utts that real effects are occurring in these experiments. Something other than chance departures from the null hypothesis has occurred in these experiments." (Hyman, R. 1996. Evaluation of a program on anomalous mental phenomena. Journal of Scientific Exploration 10:57).

These opinions have even been reflected in college textbooks. One of the most popular books in the history of college publishing is Introduction to Psychology, by Richard L. Atkinson and three coauthors. A portion of the preface in the 1990 edition of this textbook reads: "Readers should take note of a new section in Chapter 6 entitled 'Psi Phenomena.' We have discussed parapsychology in previous editions but have been very critical of the research and skeptical of the claims made in the field. And although we still have strong reservations about most of the research in parapsychology, we find the recent work on telepathy worthy of careful consideration."


Randomization:
In psi experiments, the way a target is selected matters: if participants can consciously or unconsciously infer what the targets are, and they are guessing many targets in a row, as in an ESP card test, then their responses can look like psi when they are really educated guesses.

Say that an ordinary deck of playing cards was accidentally unbalanced so that it contained fewer clubs than it should. With repeated guessing, and with feedback about the result of each trial, participants might notice that clubs did not show up as often as expected. If they then slightly undercalled clubs in subsequent guesses, this could slightly inflate the number of successful hits they got on the remaining cards. Successful results in such a test would not indicate psi, but rather a clever (or unconscious) application of statistics.
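To make the card-deck scenario concrete, here is a minimal simulation sketch. The numbers are hypothetical (a 48-card deck accidentally short four clubs) and are not taken from Radin's book; the point is only that a guesser who learns from feedback to stop calling clubs beats the nominal 25% chance rate without any psi.

```python
import random

random.seed(1)

def build_biased_deck():
    """Hypothetical 48-card deck with only 9 clubs instead of 13."""
    deck = (["clubs"] * 9 + ["hearts"] * 13 +
            ["spades"] * 13 + ["diamonds"] * 13)
    random.shuffle(deck)
    return deck

def run_session(avoid_clubs):
    """Guess the suit of every card; feedback lets a savvy guesser avoid clubs."""
    hits = 0
    for card in build_biased_deck():
        choices = (["hearts", "spades", "diamonds"] if avoid_clubs
                   else ["clubs", "hearts", "spades", "diamonds"])
        if random.choice(choices) == card:
            hits += 1
    return hits

n_sessions = 10_000
naive = sum(run_session(avoid_clubs=False) for _ in range(n_sessions)) / n_sessions
savvy = sum(run_session(avoid_clubs=True) for _ in range(n_sessions)) / n_sessions

print(f"naive guesser: {naive:.1f} hits per 48 cards (12.0 expected by chance)")
print(f"club-avoider:  {savvy:.1f} hits per 48 cards (inflated, no psi involved)")
```

The inflation is small (roughly one extra hit per run through the deck in this toy setup), but it is exactly the kind of artifact that careful target randomization is meant to rule out.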

In a ganzfeld study, however, the process of randomizing the targets is much less important because only one target is used per session, and most participants serve in only one session. So there is no possibility of learning any guessing strategies based on inadequate randomization. However, a critic could argue (and did) that if all the target pictures within each target pool were not selected uniformly over the course of the study, this could still produce inflated hit rates.

The reasoning goes like this: A person who has participated in the study tells a friend about her ganzfeld experience where the target was, say, a Santa Claus picture. Later, if the friend participated in the study, and he got the same target pool, and during the judging period he also selected the Santa Claus because of what his friend said, and the randomization procedure was poor, and Santa Claus was selected as the target again, then what looked like psi wasn't really psi after all, but a consequence of poor randomization.

A similar concern arises for the method of randomizing the sequence in which the experimenter presents the target and the three decoys to the receiver during the judging process. If, for example, the target is always presented second in the sequence of four, then again, a subject may tell a friend, and the friend, armed with knowledge about which of the four targets is the real one, could successfully select the real target without the use of psi.
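As a rough illustration of how such a flaw could be checked, here is a hedged sketch using an invented session log rather than data from any actual ganzfeld study: a chi-square test of whether the true target's position in the four-item judging sequence is uniformly distributed across sessions.

```python
from collections import Counter
from scipy.stats import chisquare

# Hypothetical log: position (1-4) of the real target in each of 20 sessions,
# deliberately skewed toward position 2 to show what a flaw would look like.
positions = [2, 4, 2, 2, 3, 2, 2, 4, 1, 2, 3, 2, 4, 2, 1, 2, 3, 2, 2, 4]

counts = Counter(positions)
observed = [counts.get(p, 0) for p in (1, 2, 3, 4)]
expected = [len(positions) / 4] * 4          # uniform expectation: 5 per position

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"observed counts by position: {observed}")
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
# A small p-value flags non-uniform placement (here the target lands second
# far too often), a bias a tipped-off receiver could exploit without any psi.
```

The same kind of uniformity check applies to how often each picture within a target pool is chosen as the target over the course of a study.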

Although these scenarios are implausible, skeptics have always insisted on nailing down even the most unlikely hypothetical flaws. Hyman and Dr. Charles Honorton disagreed regarding the importance of these randomization flaws. Hyman claimed that he saw a significant relationship between randomization flaws and study outcomes, and Honorton did not. The sources of this disagreement can be traced to Honorton's and Hyman's differing definitions of "randomization flaws," to how the two analysts rated these flaws in the individual studies, and to how they statistically treated the quality ratings.

These sorts of complicated disagreements are not unexpected given the diametrically opposed convictions with which Honorton and Hyman began their analyses. When such discrepancies arise, it is useful to consider the opinions of outside reviewers who have the technical skills to assess the disagreements. In this case, ten psychologists and statisticians supplied commentaries alongside the published Honorton-Hyman debate that appeared in 1986. None of the commentators agreed with Hyman, while two statisticians and two psychologists not previously associated with this debate explicitly agreed with Honorton. (Harris, M.J., and R. Rosenthal. 1986. Interpersonal expectancy effects and human performance research; Postscript to interpersonal expectancy effects and human performance research; and Human performance research: An overview. Washington, DC: National Academy Press; Saunders, D.R. 1985. On Hyman's factor analysis. Journal of Parapsychology 49:86-88; Utts, J.M. 1986. The ganzfeld debate: A statistician's perspective. Journal of Parapsychology 50:393-402.)

In two separate analyses conducted later, Harvard University behavioral scientists Monica Harris and Robert Rosenthal (the latter a world-renowned expert in methodology and meta-analysis) used Hyman's own flaw ratings and failed to find any significant relationships between the supposed flaws and the study outcomes. They wrote, "Our analysis of the effects of flaws on study outcome lends no support to the hypothesis that ganzfeld research results are a significant function of the set of flaw variables." (Utts, J.M. 1991. Rejoinder. Statistical Science 6:396-403).
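The Harris-Rosenthal reanalysis amounts to asking whether a study's flaw rating predicts its outcome. A minimal sketch of that kind of check, using invented numbers rather than Hyman's actual ratings or the real ganzfeld effect sizes, might look like this:

```python
from scipy.stats import spearmanr

# Hypothetical flaw ratings (0 = cleanest design) and per-study effect sizes.
# These values are invented for illustration only.
flaw_ratings = [0, 1, 1, 2, 0, 3, 1, 2, 0, 1, 2, 3]
effect_sizes = [0.25, 0.30, 0.20, 0.18, 0.19, 0.31,
                0.22, 0.27, 0.28, 0.24, 0.26, 0.21]

rho, p_value = spearmanr(flaw_ratings, effect_sizes)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.2f}")
# If flaws drove the results, more flawed studies should show larger effects;
# a weak, non-significant correlation (as Harris and Rosenthal reported for
# the real data) lends no support to that hypothesis.
```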

In other words, everyone agreed that the ganzfeld results were not due to chance, nor to selective reporting, nor to sensory leakage. And everyone, except one confirmed skeptic, also agreed that the results were not plausibly due to flaws in randomization procedures.


RV Procedures:
In typical RV experiments, a "viewer" is asked to sketch or to describe (or both) a "target." The target might be a remote location or individual, or a hidden photograph, object, or video clip. All possible paths for sensory leakage are blocked, typically by separating the target from the viewer by distance, sometimes thousands of miles, or by hiding the target in an opaque envelope, or by selecting a target in the future.

Sometimes the viewer is assisted by an interviewer who asks questions about the viewer's impressions. Of course, in such cases the interviewer is also blind to the target so he or she cannot accidentally provide cues. In some RV studies, a sender visits the remote site or gazes at a target object during the session; these experiments resemble classic telepathy tests. In other studies there are no senders at the remote site. In most tests, viewers eventually receive feedback about the actual target, raising the possibility that the results could be thought of as precognition rather than real-time clairvoyance.


Judging the Results:
All but the very earliest studies at SRI (Stanford Research Institute, mentioned in an earlier comment) evaluated the results using a method called "rank-order judging." This is similar to the technique employed in dream-telepathy experiments. After a viewer had remote-viewed a target (a geographic site, a hidden object, a photograph, or a video clip), a judge who was blind to the true target looked at the viewer's response (a sketch and a paragraph or two of verbal description) along with photographs or videos of five possible targets. Four of these targets were decoys and one was the real target.

The actual target was always selected at random from this pool of five possibilities, to ensure that neither the viewers nor the judges could infer which was the actual target. The judge was asked to assign a rank to each of the possible targets, where a rank of 1 meant that the possible target matched the response most closely and a rank of 5 meant that it matched least. The final score for each remote-viewing trial was simply the rank that the judge assigned to the actual target.
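To illustrate how rank-order judging turns into a statistic, here is a hedged sketch with made-up ranks (not the actual SRI or SAIC records): the series score is the sum of the ranks given to the true targets, compared against the chance expectation of a mean rank of 3 when there are five possible targets.

```python
import random

# Hypothetical ranks (1-5) a blind judge assigned to the true target on each
# of 15 trials; 1 = best match. Under chance the expected mean rank is 3.0.
ranks = [1, 3, 2, 1, 4, 2, 1, 3, 2, 5, 1, 2, 3, 1, 2]
n = len(ranks)
observed_sum = sum(ranks)                     # lower sum = better performance

# Monte Carlo estimate of how often pure chance (uniform ranks 1-5) does at
# least this well over the same number of trials.
random.seed(0)
n_sims = 100_000
as_good = sum(
    1 for _ in range(n_sims)
    if sum(random.randint(1, 5) for _ in range(n)) <= observed_sum
)

print(f"mean rank = {observed_sum / n:.2f} (chance expectation 3.00)")
print(f"Monte Carlo p-value = {as_good / n_sims:.4f}")
```

The published analyses used formal sum-of-ranks statistics rather than this toy simulation; the point is simply that a mean rank reliably below 3 is what such a test detects.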

The SAIC experiments (conducted at Science Applications International Corporation from 1989 to 1993) were the government-sponsored RV research that the CIA-commissioned review examined. The SAIC studies provided a rigorously controlled set of experiments that had been supervised by a distinguished oversight committee of experts from a variety of scientific disciplines. The committee included a Nobel laureate physicist; internationally known experts in statistics, psychology, neuroscience, and astronomy; and a retired US Army major general who was also a physician.

Of ten government-sponsored experiments conducted at SAIC, six involved RV. Because the SRI studies had previously established the existence of RV to the satisfaction of most of the government sponsors, the SAIC experiments were not conducted as "proof-oriented" studies, but rather as a means of learning how psi perception worked.

Jessica Utts (mentioned earlier) ended her review as follows:

It is clear to this author that anomalous cognition is possible and has been demonstrated. This conclusion is not based on belief, but rather on commonly accepted scientific criteria. The phenomenon has been replicated in a number of forms across laboratories and cultures.

I believe that it would be wasteful of valuable resources to continue to look for proof. No one who has examined all of the data across laboratories, taken as a collective whole, has been able to suggest methodological or statistical problems to explain the ever-increasing and consistent results to date. (Utts, J. 1996. An assessment of the evidence for psychic functioning. Journal of Scientific Exploration 10:3-30)

Ray Hyman, after viewing the same evidence, concluded:

I agree with Jessica Utts that the effect sizes reported in the SAIC experiments and in the recent ganzfeld studies probably cannot be dismissed as due to chance. Nor do they appear to be accounted for by multiple testing, file-drawer distortions, inappropriate statistical testing or other misuse of statistical inference. ... So, I accept Professor Utts's assertion that the statistical results of the SAIC and other parapsychologists' experiments "are far beyond what is expected by chance."

The SAIC experiments are well-designed and the investigators have taken pains to eliminate the known weaknesses in previous parapsychological research. In addition, I cannot provide suitable candidates for what flaws, if any, might be present. (Hyman, R. 1996. Evaluation of a program on anomalous mental phenomena. Journal of Scientific Exploration 10:31-58)

"Remote Viewing" - Empty Re: "Remote Viewing" -

Post by emperorzombie on Mon Jul 27, 2009 4:53 pm

It's kind of funny: the scientific community is extremely harsh on remote viewing, but the scrutiny provides more credibility in the end. One of those "belief defines reality, at least temporarily" type deals.

Were I into this a while ago, with better results than I have now, i.e. more talent, I would have just started predicting the stock market and bypassed the scientific community altogether. Science can have trouble arguing with money.
