Reliability of ultrasound ovarian-adnexal reporting and data system amongst less experienced readers before and after training
2022-09-30 | Prayash Katlariwala, Mitchell Wilson, Yeli Pi, Baljot Chahal, Roger Croutze, Deelan Patel, Vimal Patel, Gavin Low
INTRODUCTION
Building on the original ovarian-adnexal reporting and data system (O-RADS) publication in 2018, the American College of Radiology (ACR) O-RADS working group has recently introduced risk stratification and management recommendations to supplement the detailed reporting lexicon for this classification system[1,2]. These guidelines aim to provide consistent language, accurate characterization, and standardized recommendations for ovarian/adnexal lesions identified on ultrasound, ultimately improving the quality of communication between ultrasound examiners, referring clinicians and patients. Recent studies have validated the O-RADS system as an effective tool for the detection of ovarian malignancies, with high diagnostic accuracy and robust inter-reader reliability even without formalized training[3,4]. For its future directions, the O-RADS working group specifically calls for additional studies validating this system in North American institutions and amongst less experienced readers[1]. Thus, the primary objective of the present study was to assess the inter-reader reliability of the O-RADS system amongst North American radiology trainees before and after training.
MATERIALS AND METHODS
This is a single-center retrospective study performed at the University of Alberta. Institutional Health Research Ethics Board (HREB) approval was acquired prior to the study (Pro00097690). Patient consent for individual test cases was waived by the HREB, as cases were retrospectively retrieved from the institutional Picture Archiving and Communication System (PACS) and de-identified prior to review by individual readers.
Patient selection
The University of Alberta institutional PACS was reviewed between May 2017 and July 2020 for all pelvic ultrasounds in adult female patients that demonstrated at least 1 ovarian/adnexal lesion of adequate diagnostic quality, including the presence of transvaginal 2D and Doppler sonographic images of the lesion(s) of interest. Studies were excluded if limited by technical factors such as bowel gas, large lesion size, adnexal location, or inability to tolerate transvaginal ultrasound (O-RADS 0)[1].
A total of 100 diagnostic, non-consecutive cases were selected by a Steering Committee of three authors including the senior author (Wilson MP, Patel V, Low G). In patients with more than one ovarian lesion, only distinct ipsilateral lesions were used, with each individual lesion extracted as an independent blinded case when presented to study readers; the lesion of interest was designated with an arrow in each respective case. No concurrent contralateral lesions were used within the same patient. Cases were selected non-consecutively to acquire an approximately equal range of O-RADS 1 to O-RADS 5 lesions. From these 100 cases, 50 were allocated to a 'Training' group and 50 to a 'Testing' group. All cases were then de-identified, leaving only the patient age, with 50 years of age used as a threshold for menopausal status. The cases were then listed as a teaching file in our institutional PACS (IMPAX 6, AGFA Healthcare) with a randomly assigned case number. All available static and cine imaging for each case was included in the teaching case file, with the additional inclusion of a 'key image' identifying the lesion intended for risk stratification with an arrow.
Training and testing
Three PGY-4 diagnostic radiology residents from a single institution volunteered as readers for the present study, henceforth referred to as R1, R2 and R3. The residents had no prior formal experience with the O-RADS, SRU or IOTA systems for adnexal lesions, but had been exposed to ultrasonography in routine clinical practice totaling up to 12 wk. The residents were provided a copy of the O-RADS US Risk Stratification and Management System publication for independent review[1], and were subsequently asked to independently analyze all 50 'Testing' cases, assigning the best O-RADS risk stratification score and lexicon descriptor. Answers were collected using an online Google Forms survey. Following completion of the testing file, an interval of six weeks was selected to prevent case recall. The senior author (Low G) then provided the residents with a presentation reviewing the O-RADS system, including lexicon descriptors, differentiating nuances for scoring, and separate examples of lesions in each O-RADS category (with no overlap with cases used in the study design). The residents were then provided access to the 50 'Training' cases together with an answer key, for practice purposes and to establish familiarity with the O-RADS system. Following the training session, and after the readers had reviewed the 'Training' cases, the 50 'Testing' cases were re-randomized and independently scored again by all 3 readers in similar fashion to the pre-training format.
For both the pre-training and post-training assessments, the reference gold standard was determined by independent consensus reading of three fellowship-trained body imaging radiologists with experience in gynaecologic ultrasound (5, 13, and > 25 years of ultrasound experience; Wilson MP, Patel V, Low G).
Statistical analysis
The diagnostic accuracy of each individual reader and the inter-observer variability between readers were evaluated both pre-training and post-training. Continuous variables were expressed as the mean ± standard deviation. Fleiss kappa was used to calculate overall inter-reader agreement and weighted quadratic kappa was used to calculate pairwise agreement. Kappa (k) values were interpreted as follows[5]: k < 0.20, poor agreement; k = 0.21-0.40, fair agreement; k = 0.41-0.60, moderate agreement; k = 0.61-0.80, good agreement; and k = 0.81-1.00, very good agreement. Diagnostic accuracy measurements including sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were calculated per O-RADS category for each individual reader. Receiver operating characteristic (ROC) analysis was used to evaluate the area under the ROC curve (AUC) for each reader. All statistical analyses were conducted using IBM SPSS (version 26) and MedCalc (version 19.6.1). A P value of < 0.05 was considered statistically significant.
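To make these agreement calculations concrete, the following is a minimal Python sketch, not the study's actual analysis (which was performed in SPSS and MedCalc), showing how an overall Fleiss kappa, pairwise quadratically weighted kappas, and the interpretation labels above could be computed. The `scores` array and the `interpret` helper are illustrative assumptions.

```python
# Minimal sketch of the agreement statistics described above (hypothetical data,
# not the study's SPSS/MedCalc analysis). Assumes statsmodels and scikit-learn.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa
from sklearn.metrics import cohen_kappa_score

# One row per case, one column per reader (R1, R2, R3); values are O-RADS 1-5.
scores = np.array([
    [1, 1, 1],
    [2, 2, 3],
    [3, 3, 3],
    [4, 4, 3],
    [5, 4, 5],
])

# Overall agreement across the three readers (Fleiss kappa).
counts, _ = aggregate_raters(scores)              # cases x categories count table
overall_kappa = fleiss_kappa(counts, method="fleiss")

# Pairwise agreement (quadratically weighted kappa) for each reader pair.
pairwise = {
    f"R{i + 1}-R{j + 1}": cohen_kappa_score(scores[:, i], scores[:, j], weights="quadratic")
    for i, j in [(0, 1), (0, 2), (1, 2)]
}

def interpret(k):
    """Agreement labels used in this study: poor/fair/moderate/good/very good."""
    if k < 0.20:
        return "poor"
    if k <= 0.40:
        return "fair"
    if k <= 0.60:
        return "moderate"
    if k <= 0.80:
        return "good"
    return "very good"

print(f"Fleiss kappa = {overall_kappa:.2f} ({interpret(overall_kappa)})")
for pair, k in pairwise.items():
    print(f"{pair}: weighted kappa = {k:.2f} ({interpret(k)})")
```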
RESULTS
Cumulatively, the testing portion of the study comprised 50 cases. The average age of the patients in the test cohort was 40.1 ± 16.2 years (range, 17-85 years). According to the reference standard, there were 10 cases (20%) of O-RADS 1, 10 cases (20%) of O-RADS 2, 7 cases (14%) of O-RADS 3, 12 cases (24%) of O-RADS 4 and 11 cases (22%) of O-RADS 5. Of the complete test cohort, 24 lesions (48%) were lateralized to the left and 24 (48%) to the right, with 2 lesions (4%) located centrally in the pelvis with an indeterminate site of origin.
Overall, the lesion sizes ranged from 1.2 cm to 22.5 cm, with an average size of 6.9 ± 4.7 cm. Mean lesion size by O-RADS category was: 2.1 ± 0.5 cm for O-RADS 1, 5.1 ± 1.4 cm for O-RADS 2, 10.6 ± 5.8 cm for O-RADS 3, 7.8 ± 4.6 cm for O-RADS 4 and 9.4 ± 4.4 cm for O-RADS 5 (P < 0.001).
Inter-reader reliability
The overall inter-reader agreement for the 3 readers as a group on the pre-training assessment was considered good (k = 0.76 [0.68 to 0.84, 95% confidence interval {CI}], P < 0.001). Kappa values for agreement on individual O-RADS categories were good or very good, as follows: O-RADS 1, k = 0.82 (0.66 to 0.98), P < 0.001; O-RADS 2, k = 0.78 (0.62 to 0.94), P < 0.001; O-RADS 3, k = 0.74 (0.58 to 0.90), P < 0.001; O-RADS 4, k = 0.73 (0.57 to 0.89), P < 0.001; O-RADS 5, k = 0.72 (0.56 to 0.88), P < 0.001.
The overall inter-reader agreement for the 3 readers as a group on the post-training assessment was considered good (k = 0.77 [0.69 to 0.86, 95%CI], P < 0.001). Kappa values for agreement on individual O-RADS categories were good or very good, as follows: O-RADS 1, k = 0.96 (0.80 to 1.00), P < 0.001; O-RADS 2, k = 0.81 (0.65 to 0.97), P < 0.001; O-RADS 3, k = 0.65 (0.49 to 0.81), P < 0.001; O-RADS 4, k = 0.74 (0.58 to 0.90), P < 0.001; O-RADS 5, k = 0.70 (0.54 to 0.86), P < 0.001.
Pairwise inter-reader agreement, as evaluated using weighted kappa, was good to very good, as follows: Pre-training: R1 and R2, k = 0.79 (0.62 to 0.96), P < 0.001; R1 and R3, k = 0.77 (0.59 to 0.95), P < 0.001; R2 and R3, k = 0.87 (0.73 to 1.00), P < 0.001. Post-training: R1 and R2, k = 0.86 (0.73 to 0.99), P < 0.001; R1 and R3, k = 0.85 (0.71 to 0.99), P < 0.001; R2 and R3, k = 0.89 (0.78 to 0.99), P < 0.001.
Diagnostic accuracy
The respective sensitivity, specificity, NPV, and PPV for each reader per O-RADS category are included in Table 1 for the pre-training assessment and Table 2 for the post-training assessment. All readers showed excellent specificities (85%-100% pre-training and 91%-100% post-training) and NPVs (89%-100% pre-training and 91%-100% post-training) across the O-RADS categories. Sensitivities ranged from 90%-100% in both pre-training and post-training for O-RADS 1 and O-RADS 2, 71%-100% pre-training and 86%-100% post-training for O-RADS 3, 75%-92% in both pre-training and post-training for O-RADS 4, and 55%-82% pre-training and 64%-82% post-training for O-RADS 5. Readers misclassified 22 (14.7%) of 150 cases on the pre-training assessment and 17 (11.3%) of 150 cases on the post-training assessment. Misclassified cases and their respective lexicon descriptors are included in Table 3.
ROC analysis
The ROC analyses evaluating the diagnostic accuracy of the readers are included in Figure 1A for the pre-training assessment and Figure 1B for the post-training assessment. Given that higher O-RADS scores (i.e., O-RADS 4 and O-RADS 5) are predictors of malignancy, reader AUC values were as follows: Pre-training: R1, AUC of 0.87 (0.75 to 0.95), P < 0.001; R2, AUC of 0.95 (0.84 to 0.99), P < 0.001; R3, AUC of 0.89 (0.77 to 0.96), P < 0.001. Post-training: R1, AUC of 0.96 (0.86 to 0.99), P < 0.001; R2, AUC of 0.98 (0.89 to 1.00), P < 0.001; R3, AUC of 0.94 (0.83 to 0.99), P < 0.001. Pairwise comparison of the ROC curves showed a significant improvement post-training vs pre-training for R1 (P = 0.04) but not for R2 (P = 0.29) or R3 (P = 0.21).
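As an illustration of how the per-category accuracy metrics and the ROC analysis above could be reproduced, here is a minimal Python sketch using hypothetical reference and reader scores; the `reference` and `reader` arrays are invented, and the dichotomization of reference O-RADS 4-5 as the positive class simply mirrors the description above. The published analysis was performed in SPSS and MedCalc, not with this code.

```python
# Minimal sketch (hypothetical data) of per-category sensitivity/specificity/PPV/NPV
# and of the AUC computed with the ordinal O-RADS score as the predictor of malignancy.
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical consensus reference: roughly 10 cases per O-RADS category.
reference = np.repeat([1, 2, 3, 4, 5], 10)
# Hypothetical reader: agrees mostly, but downgrades every 7th case by one category.
reader = reference.copy()
reader[::7] = np.clip(reader[::7] - 1, 1, 5)

def one_vs_rest(ref, pred, category):
    """Sensitivity, specificity, PPV and NPV for one O-RADS category vs the rest."""
    tp = np.sum((ref == category) & (pred == category))
    tn = np.sum((ref != category) & (pred != category))
    fp = np.sum((ref != category) & (pred == category))
    fn = np.sum((ref == category) & (pred != category))
    div = lambda n, d: n / d if d else float("nan")   # guard against empty strata
    return dict(sens=div(tp, tp + fn), spec=div(tn, tn + fp),
                ppv=div(tp, tp + fp), npv=div(tn, tn + fn))

for cat in range(1, 6):
    print(f"O-RADS {cat}:", one_vs_rest(reference, reader, cat))

# ROC analysis: reference O-RADS 4-5 treated as the positive (malignancy-risk) class,
# with the reader's O-RADS score used as the ordinal test result.
auc = roc_auc_score((reference >= 4).astype(int), reader)
print(f"AUC = {auc:.2f}")
```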
DISCUSSION
This study demonstrates 'good' to 'very good' inter-reader agreement amongst less experienced readers in a North American institution, with pairwise and overall kappa values spanning 0.76 to 0.89 (P < 0.001). The high degree of reliability is concordant with the findings of a prior study by Cao et al[4]. In their study, performed at a tertiary care hospital and a cancer hospital in China, the pairwise inter-reader agreement between a first-year radiology resident and a staff radiologist with 9 years of experience in gynaecologic ultrasound was assessed. The authors found a kappa of 0.714 for the O-RADS system and a kappa of 0.77 for classifying lesion categories (P < 0.001).
Our study also highlights the excellent diagnostic accuracy of resident readers when compared with a reference standard of three fellowship-trained body imaging radiologists with experience in gynaecologic ultrasound. Solely with self-review of the O-RADS guidelines, the readers achieved high specificities (greater than 0.85) and NPVs (greater than 0.89). These results persisted post-training, showing a significant improvement in one resident (P = 0.04) and a trend towards improved accuracy amongst the other readers. The otherwise non-significant differences are due in part to excellent overall diagnostic accuracy without pre-test training as well as inadequate power to detect small differences. The study suggests that individual review of the O-RADS risk stratification system is sufficient for less experienced readers with respect to specificity and AUC values. In this regard, this study validates the use of O-RADS risk classification amongst less experienced readers in a North American institution, a cohort specifically requiring validation by the ACR O-RADS committee[1].
An important risk amongst less experienced readers is the potential to misclassify potentially malignant lesions as benign. The sensitivity results in this study were variable in both the pre-training and post-training assessments, particularly in the higher O-RADS categories. In the respective pre-training and post-training assessments, sensitivities were 64%-82% and 75%-92% for O-RADS 4 and 55%-82% and 64%-82% for O-RADS 5. The most frequent error on pre-training assessment was classifying a solid lesion as O-RADS 2 with a "typical dermoid cyst < 10 cm" lexicon descriptor. This error accounted for 45% (10/22) of misclassified cases in the pre-training assessment, with a reduction to 27% (4/17) of misclassified cases following training. This pitfall may be mitigated by comparing the hyperechoic component of a solid ovarian lesion to the surrounding pelvic and subcutaneous fat. The lesion should be classified as a dermoid only if it is isoechoic to the internal reference and/or demonstrates one of three typical features: (1) hyperechoic component with shadowing; (2) hyperechoic lines and dots; or (3) floating echogenic spherical structures[1,2]. In reviewing the test cases, all the solid lesions misclassified as dermoids had echogenicity lower than the intrapelvic fat. An example of this misclassification is shown in Figure 2.
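The teaching point above reduces to a simple check. The sketch below is an illustrative simplification of that decision logic only; the findings structure and function names are invented for illustration and are not part of the O-RADS lexicon or any clinical software.

```python
# Illustrative simplification of the dermoid pitfall check described above:
# only call a solid-appearing lesion a typical dermoid if it is isoechoic to the
# pelvic/subcutaneous fat reference and/or shows one of the three typical features.
from dataclasses import dataclass

@dataclass
class SolidLesionFindings:
    isoechoic_to_fat_reference: bool          # compared with pelvic/subcutaneous fat
    hyperechoic_component_with_shadowing: bool
    hyperechoic_lines_and_dots: bool
    floating_echogenic_spheres: bool

def supports_typical_dermoid(f: SolidLesionFindings) -> bool:
    typical_feature = (f.hyperechoic_component_with_shadowing
                       or f.hyperechoic_lines_and_dots
                       or f.floating_echogenic_spheres)
    return f.isoechoic_to_fat_reference or typical_feature

# A hypoechoic solid lesion with none of the typical features should not be
# labelled a dermoid (and therefore should not be downgraded to O-RADS 2).
print(supports_typical_dermoid(SolidLesionFindings(False, False, False, False)))  # False
```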
A second frequent error occurred in multilocular lesions with an irregular inner wall and/or irregular septation (O-RADS 4). These lesions were downgraded to O-RADS 1 through O-RADS 3 lesions with variable lexicon descriptors used. Most commonly, they were characterized as a multilocular lesion with a smooth inner wall (O-RADS 3) in both the pre-training and post-training assessments, suggesting that specific training on this finding was not sufficient in the current study. In this scenario, it is important that readers comprehensively evaluate the entire lesion on the cine clips, as irregularity of the inner wall/septation may be a subtle finding seen only in a small area within the lesion. An example of this misclassification is shown in Figure 3. Unlike the dermoid misclassification, however, this downgrade still results in a recommendation for evaluation by an ultrasound specialist or MRI and gynecology referral, reducing the risk of adverse complications from this misclassification. Despite these misclassifications, the negative predictive value for O-RADS 4 and O-RADS 5 lesions remained high in both the pre-training and post-training assessments (89%-97% and 91%-97%, respectively).
This study is subject to several limitations. Firstly, this was a retrospective, non-consecutive review. As menopausal status was often not provided in the clinical information, an arbitrary age cut-off of 50 years was used to differentiate pre-menopausal (< 50 years) from post-menopausal patients (≥ 50 years), an approach that has also been used in previous epidemiologic studies[6-8]. Secondly, we did not use a pathological reference standard. Our reference standard was an expert panel of three fellowship-trained radiologists with experience in gynaecologic ultrasound. However, as O-RADS is a risk stratification system designed to be applied universally in the clinical setting, and as our study was designed primarily to evaluate inter-reader agreement, an expert consensus panel is arguably a reasonable reference standard, and one that simulates 'real world' clinical practice. A similar approach has been taken in previous O-RADS accuracy studies[3,9]. Thirdly, our sample size of 50 training cases was fairly small. A large multi-center inter-observer variability study in North America would be useful to evaluate the generalizability of our findings. Despite these limitations, we believe that the rigorous study design and specific reader cohort provide valuable insight into a needed area of validation identified by the ACR O-RADS committee.
CONCLUSION
In summary, this study validates the use of the ACR O-RADS risk stratification system in less experienced readers, showing excellent specificities and AUC values when compared with a consensus reference standard, and high pairwise inter-reader reliability. Less experienced readers may be at risk of misclassifying potentially malignant lesions, and specific training around common pitfalls may help improve sensitivity.
ARTICLE HIGHLIGHTS
Research background
The 2018 Ovarian-Adnexal Reporting and Data System (O-RADS) guidelines aim to provide a system for consistent reporting and risk stratification of ovarian lesions found on ultrasound. They provide key characteristics and findings for lesions, a lexicon of descriptors to communicate findings, and risk categorization with associated follow-up recommendations. However, the O-RADS guidelines have not been validated in North American institutions.
Research motivation
The O-RADS ultrasound risk stratification requires validation in less experienced North American readers.
Research objectives
To evaluate the diagnostic accuracy and inter-reader reliability of ultrasound O-RADS risk stratification amongst less experienced readers in a North American institution, without and with pre-test training.
Research methods
A single-center retrospective study was performed using 100 ovarian/adnexal lesions of varying O-RADS scores. Of these cases, 50 were allotted to a training cohort and 50 to a testing cohort via a non-randomized group selection process, in order to achieve an approximately equal distribution of O-RADS categories both within and between groups. Reference standard O-RADS scores were established through consensus of three fellowship-trained body imaging radiologists. Three PGY-4 residents were independently evaluated for diagnostic accuracy and inter-reader reliability without and with pre-test O-RADS training. Sensitivity, specificity, positive predictive value, negative predictive value, and area under the curve (AUC) were used to measure accuracy. Fleiss kappa and weighted quadratic (pairwise) kappa values were used to measure inter-reader reliability.
Research results
Excellent specificities (85%-100%), AUC values (0.87-0.98) and very good pairwise reliability can be achieved by trainees in North America regardless of formal pre-test training. Less experienced readers may be subject to down-grade misclassification of potentially malignant lesions, and specific training about typical dermoid features and smooth vs irregular margins of ovarian lesions may help improve sensitivity.
Research conclusions
Less experienced readers in North America achieved excellent specificities and AUC values with very good pairwise inter-reader reliability, though they may be subject to misclassification of potentially malignant lesions. Training around dermoid features and smooth vs irregular inner wall/septation morphology may improve sensitivity.
Research perspectives
This study supports the applied utilization of the O-RADS ultrasound risk stratification tool by less experienced readers in North America.
FOOTNOTES
All authors contributed equally to the paper.

Supported by RSNA Research & Education Foundation Medical Student Grant #RMS2020.
Institutional Health Research Ethics Board (HREB) approval was acquired from the University of Alberta prior to the study (Pro00097690).
Institutional ethics approval was obtained for this study, which also waived the requirement for informed consent. Please see the institutional HREB approval document for details.
The authors have no conflicts of interest to declare.
No additional data available.
This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Country of origin: Canada
Prayash Katlariwala 0000-0002-5822-1071; Mitchell P Wilson 0000-0002-1630-5138; Vimal Patel 0000-0003-2972-5980; Gavin Low 0000-0002-4959-8934.
REFERENCES
1 Andreotti RF, Timmerman D, Strachowski LM, Froyman W, Benacerraf BR, Bennett GL, Bourne T, Brown DL, Coleman BG, Frates MC, Goldstein SR, Hamper UM, Horrow MM, Hernanz-Schulman M, Reinhold C, Rose SL, Whitcomb BP, Wolfman WL, Glanc P. O-RADS US Risk Stratification and Management System: A Consensus Guideline from the ACR Ovarian-Adnexal Reporting and Data System Committee. Radiology 2020; 294: 168-185 [PMID: 31687921 DOI: 10.1148/radiol.2019191150]
2 Andreotti RF, Timmerman D, Benacerraf BR, Bennett GL, Bourne T, Brown DL, Coleman BG, Frates MC, Froyman W, Goldstein SR, Hamper UM, Horrow MM, Hernanz-Schulman M, Reinhold C, Strachowski LM, Glanc P. Ovarian-Adnexal Reporting Lexicon for Ultrasound: A White Paper of the ACR Ovarian-Adnexal Reporting and Data System Committee. J Am Coll Radiol 2018; 15: 1415-1429 [PMID: 30149950 DOI: 10.1016/j.jacr.2018.07.004]
3 Pi Y, Wilson MP, Katlariwala P, Sam M, Ackerman T, Paskar L, Patel V, Low G. Diagnostic accuracy and inter-observer reliability of the O-RADS scoring system among staff radiologists in a North American academic clinical setting. Abdom Radiol (NY) 2021; 46: 4967-4973 [PMID: 34185128 DOI: 10.1007/s00261-021-03193-7]
4 Cao L, Wei M, Liu Y, Fu J, Zhang H, Huang J, Pei X, Zhou J. Validation of American College of Radiology Ovarian-Adnexal Reporting and Data System Ultrasound (O-RADS US): Analysis on 1054 adnexal masses. Gynecol Oncol 2021; 162: 107-112 [PMID: 33966893 DOI: 10.1016/j.ygyno.2021.04.031]
5 Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977; 33: 159-174 [PMID: 843571]
6 Phipps AI, Ichikawa L, Bowles EJ, Carney PA, Kerlikowske K, Miglioretti DL, Buist DS. Defining menopausal status in epidemiologic studies: A comparison of multiple approaches and their effects on breast cancer rates. Maturitas 2010; 67: 60-66 [PMID: 20494530 DOI: 10.1016/j.maturitas.2010.04.015]
7 Hill K. The demography of menopause. Maturitas 1996; 23: 113-127 [PMID: 8735350 DOI: 10.1016/0378-5122(95)00968-x]
8 Im SS, Gordon AN, Buttin BM, Leath CA 3rd, Gostout BS, Shah C, Hatch KD, Wang J, Berman ML. Validation of referral guidelines for women with pelvic masses. Obstet Gynecol 2005; 105: 35-41 [PMID: 15625139 DOI: 10.1097/01.AOG.0000149159.69560.ef]
9 Basha MAA, Metwally MI, Gamil SA, Khater HM, Aly SA, El Sammak AA, Zaitoun MMA, Khattab EM, Azmy TM, Alayouty NA, Mohey N, Almassry HN, Yousef HY, Ibrahim SA, Mohamed EA, Mohamed AEM, Afifi AHM, Harb OA, Algazzar HY. Comparison of O-RADS, GI-RADS, and IOTA simple rules regarding malignancy rate, validity, and reliability for diagnosis of adnexal masses. Eur Radiol 2021; 31: 674-684 [PMID: 32809166 DOI: 10.1007/s00330-020-07143-7]