Cohen's kappa is a statistical coefficient that expresses the degree of accuracy and reliability of a classification. It measures agreement between two raters (judges) who each assign objects to mutually exclusive categories. The statistic was introduced by Jacob Cohen in 1960 in the journal Educational and Psychological Measurement. It is defined as

κ = (po − pe) / (1 − pe),

where po is the relative observed agreement between the raters and pe is the hypothetical probability of agreement by chance.

The Lin concordance calculator is used to assess the degree of concordance between two continuous variables, such as chemical or microbiological concentrations. It computes the value of Lin's concordance correlation coefficient: values of ±1 indicate perfect concordance and perfect discordance respectively, while a value of zero indicates no concordance at all.

Thirty-four subjects were identified. All kappa coefficients were interpreted using the guideline of Landis and Koch (1977), which grades the strength of agreement as: 0.01–0.20 slight; 0.21–0.40 fair; 0.41–0.60 moderate; 0.61–0.80 substantial; 0.81–1.00 almost perfect. Of the thirty-four subjects, eleven showed fair agreement, five moderate agreement, four substantial agreement, and four almost perfect agreement.

The development of the kappa and Lin concordance calculator tools was funded by the New Zealand Department of Health, the Foundation for Research, Science and Technology (FRST), and Environmental Diagnostics Ltd.

Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159-174.

To interpret your Cohen's kappa results, you can refer to the guidelines above (see Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159-174). If you already know what Cohen's kappa means and how to interpret it, go straight to the calculator.

Reliability is an important part of any research study. Statistics Solutions' kappa calculator evaluates inter-rater reliability between two raters on a single target. In this easy-to-use calculator, enter the frequencies of agreement and disagreement between the raters, and the calculator computes your kappa coefficient. The calculator also provides reference points to help you qualitatively assess the degree of agreement.
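The relationship between po and pe can be made concrete with a short sketch; the function name and the 2×2 example table below are illustrative, not part of any of the calculators described here:

```python
# Minimal sketch of Cohen's kappa from a square contingency table,
# assuming rows are rater A's categories and columns are rater B's.
def cohens_kappa(table):
    n = sum(sum(row) for row in table)           # total number of rated objects
    k = len(table)
    po = sum(table[i][i] for i in range(k)) / n  # relative observed agreement
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    # hypothetical probability of agreement by chance
    pe = sum(row_totals[i] * col_totals[i] for i in range(k)) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical example: two raters classifying 50 objects as present/absent.
table = [[20, 5],
         [10, 15]]
print(round(cohens_kappa(table), 3))  # → 0.4, "fair" on the Landis-Koch scale
```

Here po = 0.7 and pe = 0.5, giving κ = 0.4, which the Landis and Koch (1977) guideline grades as fair agreement.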
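Lin's concordance correlation coefficient can likewise be sketched in a few lines, using the population (1/n) variance and covariance from Lin's original definition; the data below are made up for illustration:

```python
# Sketch of Lin's concordance correlation coefficient for two paired
# continuous measurements (e.g. chemical concentrations).
def lin_ccc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx2 = sum((v - mx) ** 2 for v in x) / n            # variance of x
    sy2 = sum((v - my) ** 2 for v in y) / n            # variance of y
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n  # covariance
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

x = [1.0, 2.0, 3.0, 4.0]
print(lin_ccc(x, x))                 # identical series: perfect concordance, 1.0
print(lin_ccc(x, [5 - v for v in x]))  # reversed series: perfect discordance, -1.0
```

As the text notes, ±1 marks perfect concordance or discordance, and values near zero mark its absence.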

The kappa calculator is used to assess the degree of agreement between two dichotomous variables, i.e. variables that can take only one of two values, for example presence or absence. It computes the value of Cohen's kappa (κ). A value of κ = 1 represents perfect agreement; κ = 0 represents agreement no better than chance alone.

The detection calculator is used to study and calculate various test properties for three types of hypotheses: one-sided, point-null, and equivalence. You can use either one or two groups of samples, all assumed to be randomly drawn from a normal distribution. For the first two types of hypotheses, you work with four quantities: the sample size, the significance level, the size of the detectable effect, and its probability of detection. You specify any three of them and the calculator computes the fourth. For equivalence tests, only the probability of detection can currently be calculated (other options will be added later). The results are accompanied by an instructive graphical comparison of how sample size influences the detection probabilities of tests of the inequivalence, equivalence, and point-null hypotheses.
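The "specify three, compute the fourth" idea can be illustrated for the simplest one-sided case, a one-sample z-test with known unit variance; the function names and interface here are hypothetical sketches, far simpler than the calculator itself:

```python
# Sketch: power and sample size for a one-sided one-sample z-test,
# H0: mu = 0 vs H1: mu = effect (in standard-deviation units).
from statistics import NormalDist

def power(n, alpha, effect):
    """Probability of detecting a true effect of the given size."""
    z_crit = NormalDist().inv_cdf(1 - alpha)        # one-sided critical value
    return NormalDist().cdf(effect * n ** 0.5 - z_crit)

def sample_size(alpha, effect, target_power):
    """Smallest n whose detection probability reaches target_power."""
    n = 1
    while power(n, alpha, effect) < target_power:
        n += 1
    return n

# Illustrative numbers: n = 25, alpha = 0.05, effect = 0.5 sd
print(power(25, 0.05, 0.5))           # detection probability around 0.80
print(sample_size(0.05, 0.5, 0.8))    # n needed for 80% detection
```

Increasing n raises the detection probability toward 1, which is exactly the relationship the calculator's graphical comparison displays.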
