A Deep 2D Convolutional Network for Waveform-Based Speech Recognition
Due to limited computational resources, acoustic models of early automatic speech recognition (ASR) systems were built in low-dimensional feature spaces that incur considerable information loss at the outset of the process. Several comparative studies of automatic and human speech recognition suggest that this information loss can adversely affect the robustness of ASR systems. To mitigate this and allow for the learning of robust models, we propose a deep 2D convolutional network that operates in the waveform domain. The first layer of the network decomposes waveforms into frequency sub-bands, thereby representing them in a structured high-dimensional space. This is achieved by means of a parametric convolutional block defined via cosine modulations of compactly supported windows. The next layer embeds the waveform in an even higher-dimensional space of high-resolution spectro-temporal patterns, implemented via a 2D convolutional block. This is followed by a gradual compression phase that selects the most relevant spectro-temporal patterns using wide-pass 2D filtering. Our results show that the approach significantly outperforms alternative waveform-based models on both noisy and spontaneous conversational speech (24% and 11% relative error reduction, respectively). Moreover, this study provides empirical evidence that learning directly from the waveform domain could be more effective than learning from hand-crafted features.
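To make the first two stages of the architecture concrete, the following is a minimal PyTorch sketch of a cosine-modulated filterbank layer followed by a 2D convolutional block over the resulting (sub-band, time) plane. It assumes a fixed Hann window, 64 sub-bands, 25 ms kernels with a 10 ms hop at 16 kHz, learnable modulation frequencies, and 5x5 2D kernels; all of these choices and the class names are illustrative assumptions rather than the paper's exact configuration, and the gradual compression phase with wide-pass 2D filtering is omitted.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class CosineModulatedFilterbank(nn.Module):
    # First-layer sub-band decomposition: each 1D kernel is a cosine
    # modulation of a compactly supported (here Hann) window. The window
    # choice, 64 filters, 25 ms kernels, and 10 ms hop are assumptions
    # made for this sketch, not the paper's exact configuration.
    def __init__(self, num_bands=64, kernel_size=401, stride=160):
        super().__init__()
        self.kernel_size = kernel_size
        self.stride = stride
        # Learnable modulation (centre) frequencies, normalised by the
        # sampling rate and initialised on a linear grid (assumed).
        self.freqs = nn.Parameter(torch.linspace(0.02, 0.45, num_bands))
        # Fixed compactly supported Hann window shared by all filters.
        n = torch.arange(kernel_size, dtype=torch.float32)
        window = 0.5 - 0.5 * torch.cos(2 * math.pi * n / (kernel_size - 1))
        self.register_buffer("window", window)

    def forward(self, wav):  # wav: (batch, 1, samples)
        n = torch.arange(self.kernel_size, device=wav.device, dtype=wav.dtype)
        n = n - (self.kernel_size - 1) / 2  # centre the support at zero
        # Kernels: (num_bands, kernel_size) = window * cosine modulation.
        kernels = self.window * torch.cos(
            2 * math.pi * self.freqs.unsqueeze(1) * n.unsqueeze(0)
        )
        # Sub-band decomposition -> (batch, num_bands, frames).
        return F.conv1d(wav, kernels.unsqueeze(1), stride=self.stride,
                        padding=self.kernel_size // 2)


class SpectroTemporalBlock(nn.Module):
    # Second-stage 2D convolution over the (sub-band, time) plane that
    # extracts spectro-temporal patterns; the kernel size and channel
    # count are again illustrative.
    def __init__(self, out_channels=32):
        super().__init__()
        self.conv = nn.Conv2d(1, out_channels, kernel_size=5, padding=2)

    def forward(self, subbands):  # subbands: (batch, num_bands, frames)
        x = subbands.unsqueeze(1)  # add a channel axis for Conv2d
        return F.relu(self.conv(x))  # (batch, channels, bands, frames)


# One second of 16 kHz audio through the two stages.
wav = torch.randn(4, 1, 16000)
bands = CosineModulatedFilterbank()(wav)  # (4, 64, 100)
feats = SpectroTemporalBlock()(bands)     # (4, 32, 64, 100)
```

Because the kernels are synthesised from a shared window and a small set of modulation frequencies, the first layer stays parametric: it has far fewer free parameters than an unconstrained 1D convolution of the same width, while still being trainable end to end from the raw waveform.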