PARTICIPANT RESPONSES. All 34 images were reviewed by 22 experts, for a total of 748 diagnosis and quality responses. A diagnosis of cannot determine was made in 18 of 748 cases (2%). Among the 748 cases, image quality was scored as adequate in 656 (88%), possibly adequate in 72 (10%), and inadequate for diagnosis in 20 (3%). Overall diagnostic responses are summarized in the Table. Three of 34 images (9%) were classified as plus by all 22 experts. In the 3-level categorization, 1 image (3%) was classified as neither plus nor pre-plus by all 22 experts, and no images were classified as pre-plus by all 22 experts. In the 2-level classification, 4 of 34 images (12%) were classified as not plus by all experts who provided a diagnosis. Representative images and responses are shown in Figure 1.

INTEREXPERT AGREEMENT. Figure 2 shows absolute agreement in plus disease diagnosis, based on the percentage of experts who assigned the same diagnosis to each image. For example, the same 3-level diagnosis was made by at least 90% of experts in 6 images (18%) and by at least 80% of experts in 7 images (21%). The same 2-level diagnosis was made by at least 90% of experts in 20 images (59%) and by at least 80% of experts in 24 images (71%). The mean κ statistics for each expert compared with all others are shown in Figure 3. In the 3-level categorization, the mean weighted κ statistic for each expert compared with all others was between 0.21 and 0.40 (fair agreement) for 7 experts (32%) and between 0.41 and 0.60 (moderate agreement) for 15 experts (68%). In the 2-level categorization, the mean κ statistic for each expert compared with all others was between 0 and 0.20 (slight agreement) for 1 expert (5%), between 0.21 and 0.40 (fair agreement) for 3 experts (14%), between 0.41 and 0.60 (moderate agreement) for 12 experts (55%), and between 0.61 and 0.80 (substantial agreement) for 6 experts (27%). There were no statistically significant differences in mean κ or weighted κ statistics based on the following expert characteristics: working vs not working in an institution with a RetCam, having published at least 5 vs fewer than 5 peer-reviewed ROP manuscripts, type of ophthalmologist (pediatric vs retina specialist), self-reported level of experience interpreting RetCam images (extensive, limited, or none), status as a principal investigator vs not a principal investigator in the CRYO-ROP or ETROP study, or status as a certified investigator vs not a certified investigator in either of those studies.
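The agreement measures reported above are standard and can be reproduced with common tools. The following Python sketch is illustrative only, not the study's analysis code: the ratings array is placeholder data, and the 0/1/2 coding of the 3-level scale (neither, pre-plus, plus) and the use of scikit-learn's cohen_kappa_score are assumptions made for the example.

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Placeholder ratings: 22 experts x 34 images, coded 0 = neither,
# 1 = pre-plus, 2 = plus (hypothetical data, not the study's ratings).
rng = np.random.default_rng(0)
ratings = rng.integers(0, 3, size=(22, 34))

def absolute_agreement(ratings):
    """For each image, the fraction of experts giving the modal diagnosis."""
    return np.array([np.bincount(col, minlength=3).max() / len(col)
                     for col in ratings.T])

def mean_pairwise_kappa(ratings, weights="linear"):
    """Mean kappa for each expert compared with all other experts."""
    n = ratings.shape[0]
    means = np.zeros(n)
    for i in range(n):
        ks = [cohen_kappa_score(ratings[i], ratings[j], weights=weights)
              for j in range(n) if j != i]
        means[i] = np.mean(ks)
    return means

pct = absolute_agreement(ratings)
print(f"images with >=80% absolute agreement: {(pct >= 0.8).sum()} of {len(pct)}")
print("mean weighted kappa per expert:", mean_pairwise_kappa(ratings).round(2))

Linear weights penalize adjacent disagreements (plus vs pre-plus) less than distant ones (plus vs neither), which suits the ordered 3-level scale; for the 2-level classification the unweighted statistic (weights=None) would apply. The resulting values would then be binned with the same benchmarks used in the text (0 to 0.20 slight, 0.21 to 0.40 fair, 0.41 to 0.60 moderate, 0.61 to 0.80 substantial agreement).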
