This is a wav2vec2-base model trained on a dataset of Japanese bird songs. The example below shows how to load the model and extract frame-level features from a WAV file:

```python
import librosa
import torch
from transformers import Wav2Vec2ForPreTraining

sound_file = 'sample.wav'

# Load the audio and resample it to 16 kHz, the sampling rate the model expects
sound_data, _ = librosa.load(sound_file, sr=16000)

model_id = "kojima-r/wav2vec2-bird-jp-all"
model = Wav2Vec2ForPreTraining.from_pretrained(model_id)
model.eval()

# Run the model on a batch of one waveform and take the projected hidden states
with torch.no_grad():
    result = model(torch.from_numpy(sound_data).unsqueeze(0))
hidden_vecs = result.projected_states
print(hidden_vecs.shape)
```

For example, the output of this program looks like:

```
torch.Size([1, 444, 256])
```

where `hidden_vecs` is a tensor of shape (#samples) × (#time steps) × (hidden vector dimension). Note that #samples is always one in this case, since a single waveform is passed to the model, and the number of time steps depends on the length of the input audio (the wav2vec2 feature encoder produces roughly one frame per 20 ms of 16 kHz audio).
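
If you need a single fixed-size embedding per recording (for example, for clustering or a downstream classifier), one common approach is to average the frame-level vectors over the time axis. The snippet below is a minimal sketch of that idea, continuing from the `hidden_vecs` variable above; the mean-pooling strategy is an assumption, not something prescribed by this model.

```python
# Mean-pool over the time axis (dim=1) to obtain one 256-dimensional
# vector per input clip; this pooling choice is one common option.
clip_embedding = hidden_vecs.mean(dim=1)  # shape: (#samples, 256)
print(clip_embedding.shape)               # e.g. torch.Size([1, 256])
```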