Model Card for wav2vec 2.0 Pretrained on LittleBeats and LENA Home Recordings


We explore the benefits of unsupervised pretraining of wav2vec 2.0 (W2V2) using large-scale unlabeled home recordings collected with LittleBeats (LB) and LENA (Language Environment Analysis) devices. LittleBeats is a new infant wearable multi-modal device that we developed, which simultaneously records audio, infant movement, and heart-rate variability. We use W2V2 to advance the LB audio pipeline so that it automatically provides reliable speaker diarization and vocalization classification labels for family members, including infants, parents, and siblings, at home. We show that W2V2 pretrained on thousands of hours of large-scale unlabeled home audio outperforms an oracle W2V2, pretrained on 52k hours of audio and released by Facebook/Meta, on automatic family audio analysis tasks.

For more details about LittleBeats, check out https://littlebeats.hdfs.illinois.edu/

Model Sources

For more information regarding this model, please check out our paper.

Model Description

Two versions of W2V2 models pretrained using fairseq are available:

One version of the W2V2 model, fine-tuned on labeled LB and LENA data using SpeechBrain, is available:

Two pretrained ECAPA-TDNN speaker embedding models are available:
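As a non-authoritative illustration, the snippet below sketches how an ECAPA-TDNN checkpoint packaged in SpeechBrain's EncoderClassifier format is typically loaded to extract speaker embeddings; the `source` path is a placeholder, not the exact identifier of our released checkpoints.

<pre><code>
# Hypothetical loading sketch; replace "your/path/to/ecapa_tdnn" with the actual checkpoint location
import torchaudio
from speechbrain.pretrained import EncoderClassifier

classifier = EncoderClassifier.from_hparams(
    source="your/path/to/ecapa_tdnn",
    savedir="pretrained_ecapa_tdnn",
)
signal, fs = torchaudio.load("sample.wav")
embeddings = classifier.encode_batch(signal)  # one speaker embedding per utterance
</code></pre>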

Uses

We develop our complete fine-tuning recipe using the SpeechBrain toolkit, available at

Quick Start

If you wish to use the fairseq framework, the following code snippet provides two functions: one for loading our pretrained W2V2 model and one for extracting features.

<pre><code>
import torch
import torch.nn.functional as F
from torch import nn
import fairseq
import torchaudio

def load_model(model_path, freeze=True):
    '''
    This function loads a pretrained model using the fairseq framework.

    Arguments
    ---------
    model_path : str
        Path and filename of the pretrained model.
    freeze : bool (default: True)
        If True, the model is frozen with no parameter updates during training.
    '''
    model, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task([model_path])
    model = model[0]

    if freeze:
        model.eval()
        # Freeze parameters
        for param in model.parameters():
            param.requires_grad = False
    else:
        model.train()
        for param in model.parameters():
            param.requires_grad = True

    # Remove components that are only needed for pretraining
    model.quantizer = None
    model.project_q = None
    model.target_glu = None
    model.final_proj = None

    return model

def extract_features(model, wav, input_norm=None, output_norm=True, tgt_layer=None, output_all_hiddens=False):
    '''
    This function extracts features from a W2V2 model. It extracts the last
    transformer layer's features by default; it can also extract features from
    a specific layer, or from all layers.

    Arguments
    ---------
    model : fairseq wav2vec
        The pretrained wav2vec 2.0 model returned by load_model.
    wav : tensor
        Audio wav for feature extraction.
    input_norm : bool (default: None)
        If True, a layer_norm (affine) will be applied to the input waveform.
    output_norm : bool (default: True)
        If True, a layer_norm (affine) will be applied to the output obtained from the wav2vec model.
    tgt_layer : int (default: None)
        Target transformer layer to extract features from, 0-indexed.
    output_all_hiddens : bool (default: False)
        Whether to extract features from all layers. Requires tgt_layer to be None.
    '''
    if input_norm:
        wav = F.layer_norm(wav, wav.shape)

    # Extract wav2vec output
    if isinstance(tgt_layer, int):
        # Features from a single transformer layer
        out = model.extract_features(wav, padding_mask=None, mask=False, layer=tgt_layer)['x']
    elif output_all_hiddens:
        # Disable LayerDrop so every transformer layer is applied (only matters in training mode)
        model.encoder.layerdrop = 0
        # Features from all transformer layers, stacked along a new first dimension
        layer_results = model.extract_features(wav, padding_mask=None, mask=False)['layer_results']
        features = [feat[0].transpose(0, 1) for feat in layer_results]
        out = torch.stack(features)
    else:
        # Features from the last transformer layer (default)
        out = model.extract_features(wav, padding_mask=None, mask=False)['x']

    if output_norm:
        out = F.layer_norm(out, out.shape)
    return out

model = load_model("your/path/to/LL_4300/checkpoint_best.pt")
audio, fs = torchaudio.load("sample.wav")              # (channels, samples)
audio = audio.transpose(0, 1).squeeze(1)               # mono waveform of shape (samples,)
features = extract_features(model, audio.unsqueeze(0)) # add the batch dimension expected by fairseq
</code></pre>
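For reference, a brief usage sketch of the two optional arguments is shown below; the layer index and tensor shapes assume a W2V2-base model (12 transformer layers, 768-dimensional hidden states), and the exact number of entries returned in `layer_results` may differ slightly across fairseq versions.

<pre><code>
# Features from one specific transformer layer (0-indexed); layer 6 is only an example
layer_feats = extract_features(model, audio.unsqueeze(0), tgt_layer=6)

# Features from all transformer layers, stacked along the first dimension
all_feats = extract_features(model, audio.unsqueeze(0), output_all_hiddens=True)
print(layer_feats.shape)  # (batch, frames, 768) for a W2V2-base model
print(all_feats.shape)    # approximately (num_layers, batch, frames, 768)
</code></pre>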

Evaluation

We test 4 unlabeled datasets for unsupervised pretraining of W2V2-base models:


Additionally, we improve model performance by adding relevant labeled home recordings and by applying data augmentation techniques, namely SpecAugment and noise/reverberation corruption. For more details on the experiments and results, please refer to our paper.
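As a rough, generic illustration of SpecAugment-style masking (not the exact augmentation recipe used in our experiments), time and frequency masks can be applied to spectrogram features, for example with torchaudio:

<pre><code>
# Generic SpecAugment-style masking example (illustrative only)
import torch
import torchaudio

waveform = torch.randn(1, 16000)  # placeholder 1-second waveform at 16 kHz
spec = torchaudio.transforms.MelSpectrogram(sample_rate=16000)(waveform)
freq_mask = torchaudio.transforms.FrequencyMasking(freq_mask_param=15)
time_mask = torchaudio.transforms.TimeMasking(time_mask_param=35)
augmented = time_mask(freq_mask(spec))  # randomly mask frequency bands and time steps
</code></pre>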

Paper/BibTeX Citation

If you find this model helpful, please cite us as

<pre><code>
@article{li2023towards,
  title={Towards Robust Family-Infant Audio Analysis Based on Unsupervised Pretraining of Wav2vec 2.0 on Large-Scale Unlabeled Family Audio},
  author={Li, Jialu and Hasegawa-Johnson, Mark and McElwain, Nancy L},
  journal={Interspeech},
  year={2023}
}
</code></pre>

Model Card Contact

Jialu Li (she, her, hers)

Ph.D. candidate @ Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign

E-mail: jialuli3@illinois.edu

Homepage: https://sites.google.com/view/jialuli/

Our team: https://littlebeats.hdfs.illinois.edu/team/