2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Technical Program

Paper Detail

Paper ID: HLT-15.4
Paper Title: ANALYSING BIAS IN SPOKEN LANGUAGE ASSESSMENT USING CONCEPT ACTIVATION VECTORS
Authors: Xizi Wei, University of Birmingham, United Kingdom; Mark J. F. Gales, Kate M. Knill, Cambridge University, United Kingdom
Session: HLT-15: Language Assessment
Location: Gather.Town
Session Time: Thursday, 10 June, 16:30 - 17:15
Presentation Time: Thursday, 10 June, 16:30 - 17:15
Presentation: Poster
Topic: Human Language Technology: [HLT-LACL] Language Acquisition and Learning
Abstract: A significant concern with deep learning based approaches is that they are difficult to interpret, which means detecting bias in network predictions can be challenging. Concept Activation Vectors (CAVs) have been proposed to address this problem. These use representations - perturbations of activation function outputs - of interpretable concepts to analyse how the network is influenced by the concept. This work applies CAVs to assess bias in a spoken language assessment (SLA) system, a regression task. One of the challenges with SLA is the wide range of concepts that can introduce bias in training data, for example L1, age, acoustic conditions, and particular human graders, or the grading instructions. Simply generating large quantities of expert marked data to check for all forms of bias is impractical. This paper uses CAVs applied to the training data to identify concepts that might be of concern, allowing a more targeted dataset to be collected to assess bias. The ability of CAVs to detect bias is assessed on the BULATS speaking test using both a standard system and a system to which bias was artificially introduced. A strong bias identified by CAVs on the training data matches the bias observed in expert marked held-out test data.
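
For readers unfamiliar with CAVs, the sketch below illustrates the core idea referenced in the abstract: a CAV is learned as the unit normal of a linear classifier separating concept examples from counterexamples in a chosen layer's activation space, and the model's sensitivity to the concept is the directional derivative of the (regression) output along that vector. This is a minimal, hedged illustration assuming NumPy/scikit-learn and stand-in activation and gradient arrays; it is not the authors' implementation, and the layer choice, concept data, and grader network are placeholders.

```python
# Minimal CAV sketch (after Kim et al., 2018), adapted to a regression output.
# All arrays here are synthetic stand-ins for activations/gradients that would
# come from a hidden layer of an SLA grader network.
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """Fit a linear classifier separating concept examples from random
    counterexamples in activation space; the CAV is the unit normal of the
    separating hyperplane."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    w = clf.coef_.ravel()
    return w / np.linalg.norm(w)

def concept_sensitivities(layer_grads, cav):
    """Directional derivative of the predicted score w.r.t. the layer
    activations, taken along the CAV, for each example."""
    return layer_grads @ cav

# Example usage with synthetic data (hypothetical shapes: 64-dim layer).
rng = np.random.default_rng(0)
concept_acts = rng.normal(0.5, 1.0, size=(100, 64))   # e.g. one L1 group
random_acts = rng.normal(0.0, 1.0, size=(100, 64))    # random counterexamples
cav = compute_cav(concept_acts, random_acts)

layer_grads = rng.normal(size=(200, 64))  # d(score)/d(activations) per example
s = concept_sensitivities(layer_grads, cav)
print("fraction of examples with positive concept sensitivity:", np.mean(s > 0))
```

A fraction far from 0.5 would suggest the score is systematically pushed up or down along the concept direction, which is the kind of signal the paper uses to flag concepts worth collecting targeted, expert-marked data for.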