If two groups of raters (or the same group of raters observed on two occasions) rate the exact same group of subjects, then any pair of agreement coefficients computed from their ratings (e.g. Fleiss' generalized kappa, Gwet's AC₁, Conger's generalized kappa, the Brennan-Prediger coefficient, or Krippendorff's alpha) will be correlated, making the variance of their difference very difficult to calculate due to the embedded correlation structure. Gwet (2016) proposed the linearization method to resolve this problem. The approach consists of replacing each agreement coefficient with a linear approximation, i.e. an average of subject-level components, which yields the equivalent of a paired t-test on the difference. R users may use the **R functions** that I developed, which implement the linearization method to test the difference between two agreement coefficients for statistical significance.
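To make the paired-test idea concrete, here is a minimal Python sketch (not the author's R implementation). For simplicity it uses raw percent agreement, whose per-subject components are exactly linear, instead of a chance-corrected coefficient, which would require Gwet's linearization to obtain approximate per-subject components. The ratings, rater names, and helper functions below are all hypothetical illustrations.

```python
import math
import statistics

def percent_agreement_components(r1, r2):
    """Per-subject agreement indicators: 1 if the two raters agree, else 0.
    Percent agreement is the mean of these components, so it is exactly linear."""
    return [1.0 if a == b else 0.0 for a, b in zip(r1, r2)]

def paired_test(u1, u2):
    """Paired t-test on the per-subject components of two correlated coefficients.
    Because both component vectors come from the same subjects, differencing
    them absorbs the correlation, just as in a standard paired t-test."""
    d = [a - b for a, b in zip(u1, u2)]
    n = len(d)
    mean_d = sum(d) / n                 # estimated difference of coefficients
    se = statistics.stdev(d) / math.sqrt(n)
    return mean_d, mean_d / se, n - 1   # difference, t statistic, df

# Hypothetical ratings of 10 subjects by three raters A, B, C
a = [1, 2, 2, 1, 3, 3, 2, 1, 2, 3]
b = [1, 2, 2, 1, 3, 2, 2, 1, 2, 3]   # agrees with A on 9/10 subjects
c = [1, 1, 2, 1, 2, 3, 2, 2, 2, 3]   # agrees with A on 7/10 subjects

u_ab = percent_agreement_components(a, b)
u_ac = percent_agreement_components(a, c)
diff, t_stat, df = paired_test(u_ab, u_ac)
print(diff, t_stat, df)  # → 0.2 1.0 9 (up to floating point)
```

With a chance-corrected coefficient, the only change under Gwet's (2016) approach is that `percent_agreement_components` is replaced by the coefficient's linearized subject-level components; the paired test itself is unchanged.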

Bibliography:

*Gwet, K. L. (2016). Testing the difference of correlated agreement coefficients for statistical significance. Educational and Psychological Measurement, 76(4), 609-637.*