
Grading

p-value

To calculate a p-value with the binomial calculator at https://stattrek.com/online-calculator/binomial.aspx, enter the first three fields; the p-value is the number at the bottom, and at 95% confidence it should be less than 0.0500. The p-value is useful when evaluating the performance of ARV sessions.
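The same calculation can be done in code. Here is a minimal sketch using Python's scipy library, with hypothetical numbers (28 hits in 100 trials at a 25% chance per trial) standing in for real session results:

    from scipy.stats import binomtest

    # Hypothetical example numbers: 100 trials, 28 hits, 25% chance per trial.
    trials = 100
    hits = 28
    chance = 0.25

    # One-sided test: how likely are at least this many hits by chance alone?
    result = binomtest(hits, trials, chance, alternative="greater")
    print(f"p-value: {result.pvalue:.4f}")  # significant at 95% confidence if < 0.0500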

Individual correlation grade

If I compare my report to the target feedback, I can make a subjective qualitative assessment of the correlation and assign it a grade of A, B, C, or F. The grade reflects both the accuracy and the detail of the report. This grading does not follow any exact format, and if I grade the same report on two different occasions I might give it a different grade. These grades are perhaps not an actual measure of remote viewing ability in itself, but they are extremely valuable, because as long as my grading is consistent, they will show the difference in remote viewing performance from one protocol to another. These grades are very good for comparing protocols: when we know which variable or variables differ between two protocols, a significant difference in individual grades suggests that those variables have an enhancing or inhibiting effect on the remote viewing ability.

The interesting thing is that, provided the types of target pictures are roughly similar from one protocol to another, if no remote viewing ability existed at all, then the individual correlation grades should remain roughly the same! The existence of variables that significantly affect the individual correlation grades from one protocol to the next can therefore be taken as evidence that remote viewing is a real ability.

Grade C - Basic main elements are identified in the report as colors and shapes, though some may be falsely labeled. However, there is a lack of a more advanced target landscape and a lack of target identity.

Grade B - Meets the requirements of grade C, but in addition there are more details, some of the elements have been pieced together into the beginnings of a target landscape, there is some understanding of what the target is about, and accuracy is better. Target identity is still lacking.

Grade A - Meets the requirements of grade B, but in addition there is a more detailed target landscape, better accuracy, and usually a target identity has been achieved.

Grade F - There is no sufficient correlation between the report and the target feedback, or, if there is sufficient correlation, there is so much additional inaccuracy that it subtracts from it.

Accuracy %

If each listed element is given a rating of yes, no, or ?, depending on whether it matches the target feedback, does not match it, or cannot be determined either way, then the number of yes ratings divided by the total of yes and no ratings gives the accuracy %. An accuracy % can also be produced in the same way by rating each of the individual statements in a report summary.
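As a minimal sketch, assuming element ratings are recorded as the strings "yes", "no", and "?", the accuracy % could be computed like this in Python:

    def accuracy_percent(ratings):
        # "?" ratings are excluded; only yes and no count toward the total.
        yes = ratings.count("yes")
        no = ratings.count("no")
        if yes + no == 0:
            return None  # undefined when no element could be rated yes or no
        return 100.0 * yes / (yes + no)

    # Hypothetical report with eight rated elements:
    print(accuracy_percent(["yes", "no", "yes", "?", "yes", "no", "?", "yes"]))  # 66.67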

A high accuracy % is in itself a good result, provided that the target pool is diverse enough that targets do not all contain most of the typical elements, and provided that a sufficient number of elements or statements have been made. Accuracy % is less subjective, perhaps at best even fully objective, meaning that different people might arrive at the same accuracy % for a report. Accuracy % is also a quantitative measurement of remote viewing performance, because it produces a number score. It is a percent value ranging from 0% to 100%, where 0% is the lowest possible performance and 100% is the highest.

One additional way of determining the significance of an accuracy % score is to first measure it for reports against their own targets. Then all reports could instead be given an accuracy % against the previous or following target. Comparing these two data sets could indicate the significance of the accuracy %.
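A minimal sketch of that comparison, with hypothetical accuracy % scores and using scipy's Mann-Whitney U test as one reasonable choice for comparing the two data sets (the text does not prescribe a specific test):

    from scipy.stats import mannwhitneyu

    # Hypothetical accuracy % scores: each report scored against its own target,
    # and the same reports scored against the previous target in the series.
    own_target = [72.0, 65.0, 80.0, 58.0, 70.0]
    shifted_target = [45.0, 50.0, 38.0, 55.0, 42.0]

    # One-sided test: are own-target scores significantly higher than shifted ones?
    stat, p = mannwhitneyu(own_target, shifted_target, alternative="greater")
    print(f"p-value: {p:.4f}")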

Yes/No Matching

In this method, a report is matched to one of several target options, where one is the actual target that was remote viewed and the others are non-targets or decoys. If the report is matched to its own target, it is a correct match; if it is matched to one that was not its target, it is an incorrect match. This produces a simple yes or no, all-or-nothing score. If there are two options in total, a correct match has a 1 in 2, or 50%, probability by chance. If there are four options in total, a correct match has a 1 in 4, or 25%, probability by chance. A larger total number of options makes a correct match more significant. However, a larger number of options is difficult to set up fairly, because the target images become more similar to each other, and any actual remote viewing ability has to perform better, with more detailed remote viewing work, in order to show itself.
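The significance of a whole series of such matchings follows the same binomial logic as the p-value section above. As a sketch, assuming a hypothetical series of 20 four-option matchings of which 9 were correct:

    from scipy.stats import binomtest

    # Hypothetical series: 20 matchings with 4 options each (25% chance), 9 correct.
    matchings = 20
    correct = 9
    chance = 1 / 4

    result = binomtest(correct, matchings, chance, alternative="greater")
    print(f"p-value: {result.pvalue:.4f}")  # < 0.0500 would be significant at 95% confidence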

Matching a report to one of several picture options can only be done by someone who is not the remote viewer. The remote viewer must go straight from remote viewing a target to the feedback. I do not recommend any type of self-matching, especially not when it is used to try to measure the skill of the remote viewer, because displacement is likely to happen, which ruins the correlation to the intended target and gives a false measure of the remote viewing ability.