Researchers cite safety concerns after uncovering ‘harmful behavior’ of fracture-detecting AI model

Researchers are cautioning overly optimistic AI enthusiasts after a recent algorithmic audit revealed potentially “harmful behavior” in a validated model intended to detect femoral fractures.

A study in The Lancet Digital Health reports that a previously validated, high-performing AI model committed troublesome errors when confronted with atypical anatomy while seeking out subtle proximal femur fractures. Researchers noted that despite the model’s exceptional performance on external validation, its preclinical performance revealed barriers that would inhibit the…

