Appendix Methods

Statistical analysis: Diagnostic agreement

Cohen’s kappa coefficient (κ) was used to evaluate interobserver agreement for diagnosis, and Cohen’s weighted kappa coefficient (κw) was used to evaluate interobserver agreement for the estimated probability of each diagnosis. This approach has been used in previous investigations of interobserver agreement for the diagnosis of diffuse lung diseases (1, 2). The percentage diagnostic likelihood given for each diagnosis was converted to a 5-point scale (0–4), representing clinically useful probabilities: 0 = condition not included in the differential diagnosis, 1 = low probability (5–25%), 2 = intermediate probability (30–65%), 3 = high probability (70–95%), and 4 = pathognomonic (100%). For example, if the differential diagnoses given by a physician were IPF (70% diagnostic likelihood), NSIP (20% diagnostic likelihood) and hypersensitivity pneumonitis (10% diagnostic likelihood), the probability grades for IPF, NSIP and hypersensitivity pneumonitis for this case would be 3, 1 and 1, respectively.
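
The sketch below illustrates the likelihood-to-grade conversion and the two agreement statistics described above. The bin boundaries follow the text; the handling of likelihoods that fall between bins, the choice of linear weights for the weighted kappa, and the observer data are assumptions for illustration, not taken from the source.

```python
# Minimal sketch, assuming scikit-learn is available; the weighting scheme
# ("linear") and the example observer ratings are illustrative assumptions.
from sklearn.metrics import cohen_kappa_score


def probability_grade(likelihood_pct: float) -> int:
    """Convert a percentage diagnostic likelihood to the 0-4 probability grade."""
    if likelihood_pct == 0:
        return 0   # condition not included in the differential diagnosis
    if likelihood_pct <= 25:
        return 1   # low probability (5-25%)
    if likelihood_pct <= 65:
        return 2   # intermediate probability (30-65%)
    if likelihood_pct < 100:
        return 3   # high probability (70-95%)
    return 4       # pathognomonic (100%)


# Worked example from the text: IPF 70%, NSIP 20%, HP 10%
print([probability_grade(p) for p in (70, 20, 10)])  # [3, 1, 1]

# Interobserver agreement between two hypothetical observers:
# unweighted kappa for the categorical diagnosis, weighted kappa
# for the ordinal probability grades.
obs1_diagnosis = ["IPF", "NSIP", "HP", "IPF"]
obs2_diagnosis = ["IPF", "IPF", "HP", "IPF"]
kappa = cohen_kappa_score(obs1_diagnosis, obs2_diagnosis)

obs1_grades = [3, 1, 2, 4]
obs2_grades = [3, 2, 2, 3]
kappa_w = cohen_kappa_score(obs1_grades, obs2_grades, weights="linear")
print(f"kappa = {kappa:.2f}, weighted kappa = {kappa_w:.2f}")
```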
