Learning from AI-Generated Annotations for Medical Image Segmentation
Learning from AI-generated annotations is well recognized as a key advance of deep learning techniques in medical image segmentation. In this direction, this paper investigates two questions: (1) how to accurately measure the loss on AI-generated annotations that often contain errors, and (2) how to effectively update a model's parameters when the loss value is no longer a correct supervision signal for medical image segmentation. The main results are that (1) 'error-tolerant' loss functions exist and (2) 'cross-training', updating a model using data that incurs a small loss under its 'twin' model, can tolerate errors in the loss to some extent. Building on these results, we derive a robust training algorithm, called confidence-regularized co-teaching, that helps deep models combat annotation errors in medical image segmentation. The algorithm simultaneously trains two 'twin' segmentation models and updates each model's parameters by cross-training on disagreement-confident data, i.e., data predicted differently by the two models, thereby enabling learning from data with annotation errors. Empirical evidence on a publicly available dataset shows that the new algorithm combats annotation errors better than existing methods for medical image segmentation, opening the opportunity to use AI-generated annotations to train segmentation models.
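The cross-training step described in the abstract can be sketched as a sample-selection rule: among samples on which the two twin models disagree, each model keeps its smallest-loss samples and hands them to its twin for the parameter update. The function name, the `keep_ratio` parameter, and the use of hard per-sample disagreement are illustrative assumptions for this sketch; the paper's exact selection and regularization rule may differ.

```python
import numpy as np

def coteach_select(preds_a, preds_b, loss_a, loss_b, keep_ratio=0.7):
    """Co-teaching-style selection (sketch).

    preds_a, preds_b: per-sample predictions of the two twin models.
    loss_a, loss_b:   per-sample loss values of the two twin models.
    Returns indices used to update model A and model B, respectively.
    """
    # Disagreement-confident data: samples predicted differently by the twins.
    disagree = np.flatnonzero(preds_a != preds_b)
    k = max(1, int(keep_ratio * len(disagree)))
    # Model A's small-loss disagreement samples train model B, and vice versa,
    # so each model is updated with data its twin is confident about.
    update_b = disagree[np.argsort(loss_a[disagree])[:k]]
    update_a = disagree[np.argsort(loss_b[disagree])[:k]]
    return update_a, update_b
```

In a full training loop, each epoch would compute per-sample losses for both models, call a rule like this, and apply a gradient step to each model on the indices its twin selected.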
History
Refereed
- Yes
Publication title
- IEEE Transactions on Consumer Electronics

ISSN
- 0098-3063

External DOI

Publisher
- Institute of Electrical and Electronics Engineers

File version
- Accepted version

Item sub-type
- Article

Affiliated with
- School of Computing and Information Science Outputs