
2019 | Book

Statistical Language and Speech Processing

7th International Conference, SLSP 2019, Ljubljana, Slovenia, October 14–16, 2019, Proceedings

About this book

This book constitutes the proceedings of the 7th International Conference on Statistical Language and Speech Processing, SLSP 2019, held in Ljubljana, Slovenia, in October 2019.

The 25 full papers presented together with one invited paper in this volume were carefully reviewed and selected from 48 submissions. They were organized in topical sections named: Dialogue and Spoken Language Understanding; Language Analysis and Generation; Speech Analysis and Synthesis; Speech Recognition; Text Analysis and Classification.

Table of Contents

Frontmatter

Invited Talk

The Time-Course of Phoneme Category Adaptation in Deep Neural Networks
Abstract
Both human listeners and machines need to adapt their sound categories whenever a new speaker is encountered. This perceptual learning is driven by lexical information. In previous work, we have shown that deep neural network-based (DNN) ASR systems can learn to adapt their phoneme category boundaries from a few labeled examples after exposure (i.e., training) to ambiguous sounds, as humans have been found to do. Here, we investigate the time-course of phoneme category adaptation in a DNN in more detail, with the ultimate aim of investigating the DNN’s ability to serve as a model of human perceptual learning. We do so by providing the DNN with an increasing number of ambiguous retraining tokens (in 10 bins of 4 ambiguous items), and comparing classification accuracy on the ambiguous items in a held-out test set for the different bins. Results showed that DNNs, like human listeners, exhibit a step-like learning function: the DNNs show perceptual learning already after the first bin (only 4 tokens of the ambiguous phone), with little further adaptation for subsequent bins. In follow-up research, we plan to test specific predictions made by the DNN about human speech processing.
Junrui Ni, Mark Hasegawa-Johnson, Odette Scharenborg
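The binned retraining protocol described in the abstract can be sketched with a toy 1-D classifier standing in for the DNN; the boundary value, learning rate, and token distribution below are invented for illustration only.

```python
import random

def adapt_boundary(boundary, tokens, lr=0.5):
    """Nudge a 1-D category boundary toward each labeled ambiguous token."""
    for value, label in tokens:
        # place the boundary just below (label 1) or above (label 0) the token
        target = value - 0.1 if label == 1 else value + 0.1
        boundary += lr * (target - boundary)
    return boundary

random.seed(0)
# 40 ambiguous tokens near the old boundary, all labeled as category 1
ambiguous = [(random.uniform(0.45, 0.55), 1) for _ in range(40)]
test_set = [(v, 1) for v, _ in ambiguous[:10]]

boundary = 0.7                       # the initial boundary misclassifies them
accuracies = []
for i in range(0, 40, 4):            # 10 bins of 4 tokens, as in the paper
    boundary = adapt_boundary(boundary, ambiguous[i:i + 4])
    acc = sum((v > boundary) == (y == 1) for v, y in test_set) / len(test_set)
    accuracies.append(acc)
```

With this setup the accuracy curve is step-like: nearly all of the gain arrives after the first bin, mirroring the behaviour the abstract reports for the DNN.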

Dialogue and Spoken Language Understanding

Towards Pragmatic Understanding of Conversational Intent: A Multimodal Annotation Approach to Multiparty Informal Interaction – The EVA Corpus
Abstract
The present paper describes a corpus for research into the pragmatic nature of how information is expressed synchronously through language, speech, and gestures. The outlined research stems from ‘growth point theory’ and the ‘integrated systems hypothesis’, which propose that co-speech gestures (including hand gestures, facial expressions, posture, and gaze) and speech originate from the same representation, but are not necessarily based solely on the speech production process; i.e. ‘speech affects what people produce in gesture and that gesture, in turn, affects what people produce in speech’ ([1]: 260). However, the majority of related multimodal corpora ‘ground’ non-verbal behavior in linguistic concepts such as speech acts or dialog acts. In this work, we propose an integrated annotation scheme that enables us to study linguistic and paralinguistic interaction features independently and to interlink them over a shared timeline. To analyze multimodality in interaction, a high-quality multimodal corpus based on informal discourse in a multiparty setting was built.
Izidor Mlakar, Darinka Verdonik, Simona Majhenič, Matej Rojc
Lilia, A Showcase for Fast Bootstrap of Conversation-Like Dialogues Based on a Goal-Oriented System
Abstract
Recently, many works have proposed to cast human-machine interaction as a sentence generation scheme. Neural network models can learn how to generate a probable sentence based on the user’s statement along with a partial view of the dialogue history. While appealing to some extent, these approaches require huge training sets of general-purpose data and lack a principled way to intertwine language generation with information retrieval from back-end resources to fuel the dialogue with up-to-date and precise knowledge. As a practical alternative, in this paper, we present Lilia, a showcase for fast bootstrap of conversation-like dialogues based on a goal-oriented system. First, a comparison of goal-oriented and conversational system features is conducted; then a conversion process for the fast bootstrap of a new system is described, finalised with online training of the system’s main components. Lilia is dedicated to a chit-chat task, where speakers exchange viewpoints on a displayed image while trying collaboratively to derive its author’s intention. Evaluations with user trials showed its efficiency in a realistic setup.
Matthieu Riou, Bassam Jabaian, Stéphane Huet, Fabrice Lefèvre
Recent Advances in End-to-End Spoken Language Understanding
Abstract
This work investigates spoken language understanding (SLU) systems in the scenario when the semantic information is extracted directly from the speech signal by means of a single end-to-end neural network model. Two SLU tasks are considered: named entity recognition (NER) and semantic slot filling (SF). For these tasks, in order to improve the model performance, we explore various techniques including speaker adaptation, a modification of the connectionist temporal classification (CTC) training criterion, and sequential pretraining.
Natalia Tomashenko, Antoine Caubrière, Yannick Estève, Antoine Laurent, Emmanuel Morin

Language Analysis and Generation

A Study on Multilingual Transfer Learning in Neural Machine Translation: Finding the Balance Between Languages
Abstract
Transfer learning is an interesting approach to tackling the machine translation problem for low-resource languages. As a machine learning technique, it requires several choices to be made, such as selecting the training data, and more particularly the language pairs and their available quantity and quality. Other important choices must be made during the preprocessing step, such as selecting the data used to learn subword units and, subsequently, the model’s vocabulary. It is still unclear how to optimize this transfer. In this paper, we analyse the impact of such early choices on the performance of the systems. We show that system performance depends on the quantity of available data and the proximity of the involved languages, as well as on the protocol used to determine the subword unit model and, consequently, the vocabulary. We also propose a multilingual approach to transfer learning involving a universal encoder. This multilingual approach is comparable to a multi-source transfer learning setup where the system learns from multiple languages before the transfer. We analyse the subword unit distribution across different languages and show that, once again, preprocessing choices impact the systems’ overall performance.
Adrien Bardet, Fethi Bougares, Loïc Barrault
A Deep Learning Approach to Self-expansion of Abbreviations Based on Morphology and Context Distance
Abstract
Abbreviations and acronyms are shortened forms of words or phrases that are commonly used in technical writing. In this study we focus specifically on abbreviations and introduce a corpus-based method for their expansion. The method divides the processing into three key stages: abbreviation identification, full form candidate extraction, and abbreviation disambiguation. First, potential abbreviations are identified by combining pattern matching and named entity recognition. Both acronyms and abbreviations exhibit similar orthographic properties, so additional processing is required to distinguish between them. To this end, we implement a character-based recurrent neural network (RNN) that analyses the morphology of a given token in order to classify it as an acronym or an abbreviation. A Siamese RNN that learns the morphological process of word abbreviation is then used to select a set of full form candidates. Having considerably constrained the search space, we take advantage of the Word Mover’s Distance (WMD) to assess the semantic compatibility between an abbreviation and each full form candidate based on their contextual similarity. This step does not require any corpus-based training, thus making the approach highly adaptable to different domains. Unlike the vast majority of existing approaches, our method does not rely on external lexical resources for disambiguation, yet with a macro F-measure of 96.27% it is comparable to the state of the art.
Daphné Chopard, Irena Spasić
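The disambiguation step above can be illustrated with a relaxed variant of the WMD, in which each word simply travels to its nearest neighbour in the other document (a standard lower bound on the full transport problem); the 2-D embeddings and the abbreviation "hr" below are hypothetical values chosen for the sketch.

```python
import math

def relaxed_wmd(doc_a, doc_b, emb):
    """Lower bound on the Word Mover's Distance: each word in doc_a
    travels to its nearest neighbour in doc_b."""
    def dist(u, v):
        return math.dist(emb[u], emb[v])
    return sum(min(dist(w, x) for x in doc_b) for w in doc_a) / len(doc_a)

# toy 2-D embeddings (hypothetical values for illustration)
emb = {
    "hr":        (0.0, 0.0), "human": (0.1, 0.0),
    "resources": (0.0, 0.2), "heart": (5.0, 5.0),
    "rate":      (5.2, 5.1),
}
# score the abbreviation "hr" against two candidate expansions
d1 = relaxed_wmd(["hr"], ["human", "resources"], emb)
d2 = relaxed_wmd(["hr"], ["heart", "rate"], emb)
```

The candidate with the smaller distance ("human resources" in this toy setup) would be selected as the expansion, mirroring the contextual-similarity criterion in the abstract.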
Word Sense Induction Using Word Sketches
Abstract
We present three methods for word sense induction based on Word Sketches. The methods are being developed as part of a semiautomatic dictionary creation system, providing annotators with the summarized semantic behavior of a word. Two of the methods are based on the assumption that a word has a single sense per collocation. In the first method, we cluster the Word Sketch-based collocations by their co-occurrence behavior. The second method clusters the collocations using a word embedding model. The last method is based on clustering of Word Sketch thesauri. We evaluate the methods and demonstrate their behavior on representative words.
Ondřej Herman, Miloš Jakubíček, Pavel Rychlý, Vojtěch Kovář
Temporal “Declination” in Different Types of IPs in Russian: Preliminary Results
Abstract
The paper explores temporal changes within an intonational phrase in Russian. The main question we aim to answer is whether we can speak about temporal “declination” in a similar way we speak about melodic declination. In order to answer this question, we analysed stressed vowel duration in intonational phrases (IPs) of different types using a speech corpus. We have found that (1) most intonational phrases in Russian do not have temporal “declination” or “inclination” in the pre-nuclear part: the tempo is relatively stable until the nucleus, where a noticeable lengthening is observed; (2) the rarely occurring temporal “declination” or “inclination” in certain types of IPs can be considered a specific speaker’s trait; (3) the amount of lengthening on the last stressed vowel within the IP may play a role in distinguishing final and non-final IPs, rising vs. falling nuclei, but this is also speaker-specific.
Tatiana Kachkovskaia
Geometry and Analogies: A Study and Propagation Method for Word Representations
Abstract
In this paper we discuss the well-known claim that language analogies yield almost parallel vector differences in word embeddings. On the one hand, we show that this property, while it does hold for a handful of cases, fails to hold in general, especially in high dimensions, using the best known publicly available word embeddings. On the other hand, we show that this property is not crucial for basic natural language processing tasks such as text classification. We achieve this with a simple algorithm which yields updated word embeddings where this property holds: we show that with these word representations, text classification tasks have about the same performance.
Sammy Khalife, Leo Liberti, Michalis Vazirgiannis
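The parallel-offset claim discussed above can be checked on toy vectors; the four 3-D embeddings below are constructed by hand so that the analogy holds exactly, which, as the abstract notes, real high-dimensional embeddings generally do not guarantee.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# toy embeddings chosen so the analogy holds exactly (hypothetical values)
emb = {"man": (1, 0, 0), "woman": (0, 1, 0),
       "king": (1, 0, 1), "queen": (0, 1, 1)}

# king - man + woman: the classic analogy offset
offset = tuple(k - m + w for k, m, w in
               zip(emb["king"], emb["man"], emb["woman"]))
best = max((w for w in emb if w != "king"),
           key=lambda w: cosine(emb[w], offset))
```

Here the nearest neighbour of the offset is "queen"; the paper's point is that this clean geometry is the exception rather than the rule in real embeddings.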
Language Comparison via Network Topology
Abstract
Modeling relations between languages can offer understanding of language characteristics and uncover similarities and differences between languages. Automated methods applied to large textual corpora can be seen as opportunities for novel statistical studies of language development over time, as well as for improving cross-lingual natural language processing techniques. In this work, we first propose how to represent textual data as a directed, weighted network using the text2net algorithm. We next explore how various fast, network-topological metrics, such as network community structure, can be used for cross-lingual comparisons. In our experiments, we employ eight different network topology metrics, and empirically showcase on a parallel corpus how the methods can be used for modeling the relations between nine selected languages. We demonstrate that the proposed method scales to large corpora consisting of hundreds of thousands of aligned sentences on an off-the-shelf laptop. We observe that properties such as community structure capture some of the known differences between the languages, while others can be seen as novel opportunities for linguistic studies.
Blaž Škrlj, Senja Pollak
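A minimal version of the text-to-network step might look as follows; this is a simplified sketch, not the authors' exact text2net algorithm, and the window size of 1 is an assumption.

```python
from collections import Counter

def text_to_net(text, window=1):
    """Build a directed, weighted word co-occurrence network from raw
    text (a simplified stand-in for the paper's text2net algorithm)."""
    words = text.lower().split()
    edges = Counter()
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + 1 + window, len(words))):
            edges[(w, words[j])] += 1
    return edges

edges = text_to_net("the cat sat on the mat the cat slept")

# one simple topology metric: weighted out-degree per node
out_degree = Counter()
for (src, _dst), weight in edges.items():
    out_degree[src] += weight
```

Once the text is a weighted graph, any of the topological metrics the abstract mentions (degree distributions, community structure, and so on) can be computed and compared across the aligned languages.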

Speech Analysis and Synthesis

An Incremental System for Voice Pathology Detection Combining Possibilistic SVM and HMM
Abstract
Voice pathology detection using automatic classification systems is a useful way to diagnose voice diseases. In this paper, we propose a novel tool for voice pathology detection based on an incremental possibilistic SVM-HMM method, which can be applied to several practical applications involving non-stationary or very large-scale data, with the aim of reducing the memory issues faced when storing the kernel matrix. The proposed system uses an SVM to incrementally compute possibilistic probabilities, which are then used by an HMM to detect voice pathologies. We evaluated the proposed method on the task of detecting voice pathologies using voice samples from the Massachusetts Eye and Ear Infirmary Voice and Speech Laboratory (MEEI) database. The detection rates obtained by our system show that it is robust, efficient and fast when applied to a voice pathology detection task.
Rimah Amami, Rim Amami, Hassan Ahmad Eleraky
External Attention LSTM Models for Cognitive Load Classification from Speech
Abstract
Cognitive Load (CL) refers to the amount of mental demand that a given task imposes on an individual’s cognitive system, and it can affect his/her productivity in very high load situations. In this paper, we propose an automatic system capable of classifying the CL level of a speaker by analyzing his/her voice. We focus on the use of Long Short-Term Memory (LSTM) networks with different weighted pooling strategies, such as mean-pooling, max-pooling, last-pooling and a logistic regression attention model. In addition, as an alternative to the previous methods, we propose a novel attention mechanism, called the external attention model, that uses external cues, such as log-energy and fundamental frequency, for weighting the contribution of each LSTM temporal frame, overcoming the need for a large amount of data for training the attention model. Experiments show that the LSTM-based system with the external attention model significantly outperforms the baseline system based on Support Vector Machines (SVM) as well as the LSTM-based systems with the conventional weighted pooling schemes and with the logistic regression attention model.
Ascensión Gallardo-Antolín, Juan M. Montero
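The external attention idea can be sketched as a softmax over a per-frame cue; the frame vectors and log-energies below are invented, and in the real system the frames would be LSTM outputs rather than hard-coded lists.

```python
import math

def external_attention_pool(frames, energies):
    """Pool frame-level features with weights given by a softmax over
    an external cue (here, per-frame log-energy)."""
    m = max(energies)                         # subtract max for stability
    exps = [math.exp(e - m) for e in energies]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(frames[0])
    return [sum(w * f[d] for w, f in zip(weights, frames))
            for d in range(dim)]

frames = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
energies = [0.1, 0.1, 5.0]   # the third frame is far more energetic
pooled = external_attention_pool(frames, energies)
```

Because the weights come from an external signal rather than a learned attention network, no extra training data is needed for the pooling step, which is the advantage the abstract highlights.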
A Speech Test Set of Practice Business Presentations with Additional Relevant Texts
Abstract
We present a test corpus of audio recordings and transcriptions of presentations of students’ enterprises together with their slides and web pages. The corpus is intended for evaluation of automatic speech recognition (ASR) systems, especially in conditions where the prior availability of in-domain vocabulary and named entities is beneficial. The corpus consists of 39 presentations in English, each up to 90 s long. The speakers are high school students from European countries with English as their second language. We benchmark three baseline ASR systems on the corpus and show their imperfections.
Dominik Macháček, Jonáš Kratochvíl, Tereza Vojtěchová, Ondřej Bojar
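Benchmarking ASR systems on such a corpus typically means computing word error rate; a minimal sketch of the standard edit-distance computation (not the paper's evaluation code) is below.

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with dynamic-programming edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / len(ref)

# one substitution in five reference words -> WER 0.2
wer = word_error_rate("our company sells solar lamps",
                      "our company cells solar lamps")
```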
Investigating the Relation Between Voice Corpus Design and Hybrid Synthesis Under Reduction Constraint
Abstract
Hybrid TTS systems generally try to optimise their cost function with the voice provided to generate the best signal. The voice is based on a speech corpus usually designed for a specific purpose. In this paper, we consider that the voice creation is realized through a corpus design step under reduction constraints. During this stage, a recording script is crafted to be optimal for the target TTS engine and its purpose. In this paper, we investigate the impact of sharing information between the corpus design step and the hybrid TTS optimisation step.
We start from a reduced voice optimized for a unit selection system using a CNN-based model. This baseline is compared to a hybrid TTS system that uses, as its target cost, a linguistic embedding built for the recording script design step. This approach is also compared to a standard hybrid TTS system trained only on the voice, which therefore has no information about the corpus design process.
Objective measures and perceptual evaluations show how the integration of the corpus design embedding as target cost outperforms a classical hard-coded target cost. However, the feed-forward DNN acoustic model from the standard hybrid TTS system remains the best. This emphasizes the importance of acoustic information in the TTS target cost, which is not directly available before the voice recording.
Meysam Shamsi, Damien Lolive, Nelly Barbot, Jonathan Chevelu

Speech Recognition

An Amharic Syllable-Based Speech Corpus for Continuous Speech Recognition
Abstract
Speech recognition systems play an important role in solving problems such as spoken content retrieval. Thus, we are interested in the task of speech recognition for low-resource languages, such as Amharic. The main challenges in Amharic speech recognition are the limited availability of corpora and the complex morphological nature of the language. This paper presents a new corpus for the low-resource Amharic language which is suitable for training and evaluation of speech recognition systems. The prepared corpus contains 90 h of speech data with word- and syllable-based annotation. Moreover, the use of syllable units for acoustic and language modeling is compared with a morpheme-based model. A syllable-based triphone speech recognition system achieves a word error rate of 16.82% on a subset of the dataset, and a syllable-based hybrid deep neural network with a hidden Markov model achieves a 14.36% word error rate.
Nirayo Hailu Gebreegziabher, Andreas Nürnberger
Building an ASR Corpus Based on Bulgarian Parliament Speeches
Abstract
This paper presents the methodology we applied for building a new corpus of Bulgarian speech suitable for training and evaluating modern speech recognition systems. The Bulgarian Parliament ASR (BG-PARLAMA) corpus is derived from the recordings of the plenary sessions of the Bulgarian Parliament. The manually transcribed texts and the audio data of the speeches are processed automatically to build an aligned and segmented corpus. NLP tools and resources for Bulgarian are utilized for the language-specific tasks. The resulting corpus consists of 249 hours of speech from 572 speakers and is freely available for academic use. First experiments with an ASR system trained on the BG-PARLAMA corpus, using a time-delay deep neural network (TD-DNN) architecture, show a word error rate of around 7% on parliament speeches from unseen speakers. The BG-PARLAMA corpus is, to our knowledge, the largest speech corpus currently available for Bulgarian.
Diana Geneva, Georgi Shopov, Stoyan Mihov
A Study on Online Source Extraction in the Presence of Changing Speaker Positions
Abstract
Multi-talker speech and moving speakers still pose a significant challenge to automatic speech recognition systems. Assuming an enrollment utterance of the target speaker is available, the so-called SpeakerBeam concept has recently been proposed to extract the target speaker from a speech mixture. If multi-channel input is available, spatial properties of the speaker can be exploited to support the source extraction. In this contribution we investigate different approaches to exploiting such spatial information. In particular, we are interested in the question of how useful this information is if the target speaker changes his/her position. To this end, we present a SpeakerBeam-based source extraction network that is adapted to work on moving speakers by recursively updating the beamformer coefficients. Experimental results are presented on two data sets, one with artificially created room impulse responses, and one with real room impulse responses and noise recorded in a conference room. Interestingly, spatial features turn out to be advantageous even if the speaker position changes.
Jens Heitkaemper, Thomas Fehér, Michael Freitag, Reinhold Haeb-Umbach
Improving Speech Recognition with Drop-in Replacements for f-Bank Features
Abstract
While a number of learned feature representations have been proposed for speech recognition, employing f-bank features often leads to the best results. In this paper, we focus on two alternative methods of improving this existing representation. First, triangular filters can be replaced with Gabor filters, a compactly supported filter that better localizes events in time, or with psychoacoustically-motivated Gammatone filters. Second, by rearranging the order of operations in computing filter bank features, the resulting coefficients will have better time-frequency resolution. By merely swapping f-banks with other types of filters in modern phone recognizers, we achieved significant reductions in error rates across repeated trials.
Sean Robertson, Gerald Penn, Yingxue Wang
Investigation on N-Gram Approximated RNNLMs for Recognition of Morphologically Rich Speech
Abstract
Recognition of Hungarian conversational telephone speech is challenging due to the informal style and morphological richness of the language. A Recurrent Neural Network Language Model (RNNLM) can provide a remedy for the high perplexity of the task; however, two-pass decoding introduces a considerable processing delay. In order to eliminate this delay we investigate approaches aiming at reducing the complexity of RNNLMs while preserving their accuracy. We compare the performance of conventional back-off n-gram language models (BNLM), BNLM approximations of RNNLMs (RNN-BNLM) and RNN n-grams in terms of perplexity and word error rate (WER). Morphological richness is often addressed by using statistically derived subwords - morphs - in the language models, hence our investigations are extended to morph-based models as well. We found that using RNN-BNLMs, 40% of the RNNLM perplexity reduction can be recovered, which is roughly equal to the performance of an RNN 4-gram model. Combining morph-based modeling and the approximation of the RNNLM, we were able to achieve an 8% relative WER reduction and preserve the real-time operation of our conversational telephone speech recognition system.
Balázs Tarján, György Szaszák, Tibor Fegyó, Péter Mihajlik
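The back-off idea behind BNLMs can be sketched with a tiny bigram model; the sketch below uses stupid backoff (unnormalised scores, an assumption made for brevity), whereas real BNLMs use properly estimated back-off weights.

```python
import math
from collections import Counter

def train_bnlm(corpus, alpha=0.4):
    """Tiny bigram model with stupid backoff to unigrams, a toy
    stand-in for the back-off n-gram LMs (BNLMs) discussed above."""
    tokens = corpus.split()
    uni = Counter(tokens)
    bi = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)

    def score(prev, word):
        if bi[(prev, word)]:
            return bi[(prev, word)] / uni[prev]   # seen bigram
        return alpha * uni[word] / n              # back off to the unigram

    return score

score = train_bnlm("a b a b a c")

def perplexity(score, tokens):
    """Perplexity of a token sequence under the bigram scores."""
    lp = sum(math.log(score(p, w)) for p, w in zip(tokens, tokens[1:]))
    return math.exp(-lp / (len(tokens) - 1))

pp = perplexity(score, ["a", "b", "a", "b"])
```

Approximating an RNNLM with such an n-gram model, as the paper does, trades some of the perplexity gain for single-pass, real-time decoding.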
Tuning of Acoustic Modeling and Adaptation Technique for a Real Speech Recognition Task
Abstract
We started by developing a Czech telephone acoustic model, evaluating various Kaldi recipes on a 500-h Czech telephone Switchboard-like corpus. We selected the Time-Delay Neural Network (TDNN) model variant “d” with i-vector adaptation as the best performing model on a held-out set from the corpus. The TDNN architecture with an asymmetric time-delay window also fulfilled our real-time application constraint. However, the model totally failed on a real call center task, and we investigated why. The main problem was in the i-vector estimation procedure: the training data are split into short utterances, and in the recipe, 2-utterance pseudospeakers are made and i-vectors are evaluated for them. The real call center utterances, however, are much longer, on the order of several minutes or even more. The TDNN model was thus trained on i-vectors that did not match the test ones. We propose two ways to normalize the statistics used for i-vector estimation, making the test data i-vectors more compatible with the training data i-vectors. In the paper, we also discuss various additional ways of improving the model accuracy on the out-of-domain real task, including the use of LSTM-based models.
Jan Vaněk, Josef Michálek, Josef Psutka

Text Analysis and Classification

Automatic Identification of Economic Activities in Complaints
Abstract
In recent years, public institutions have undergone a progressive modernization process, bringing several administrative services to be provided electronically. Some institutions are responsible for analyzing citizen complaints, which come in huge numbers and are mainly provided in free-form text, demanding some automatic way to process them, at least to some extent. In this work, we focus on the task of automatically identifying economic activities in complaints submitted to the Portuguese Economic and Food Safety Authority (ASAE), employing natural language processing (NLP) and machine learning (ML) techniques for Portuguese, which is a language with few resources. We formulate the task as several multi-class classification problems, taking into account the economic activity taxonomy used by ASAE. We employ features at the lexical, syntactic and semantic level using different ML algorithms. We report the results obtained on this task and present a detailed analysis of the features that impact the performance of the system. Our best setting obtains an accuracy of 0.8164 using SVM. When looking at the three most probable classes according to the classifier’s prediction, we report an accuracy of 0.9474.
Luís Barbosa, João Filgueiras, Gil Rocha, Henrique Lopes Cardoso, Luís Paulo Reis, João Pedro Machado, Ana Cristina Caldeira, Ana Maria Oliveira
Automatic Judgement of Neural Network-Generated Image Captions
Abstract
Manual evaluation of individual results of natural language generation tasks is a major bottleneck: it is very time-consuming and expensive, for example when crowdsourced. In this work, we address this problem for the specific task of automatic image captioning. We automatically generate human-like judgements on grammatical correctness, image relevance and diversity of the captions obtained from a neural image caption generator. For this purpose, we use pool-based active learning with uncertainty sampling and represent the captions using fixed-size vectors from Google’s Universal Sentence Encoder. In addition, we test common metrics, such as BLEU, ROUGE, METEOR, Levenshtein distance, and n-gram counts, and report the F1 score for the classifiers used under the active learning scheme. To the best of our knowledge, our work is the first in this direction and promises to reduce time, cost, and human effort.
Rajarshi Biswas, Aditya Mogadala, Michael Barz, Daniel Sonntag, Dietrich Klakow
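One of the listed metrics is easy to sketch: clipped unigram precision, the building block of BLEU. The candidate and reference captions below are invented for illustration.

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision: each candidate word counts only as
    often as it appears in the reference (the core of BLEU-1)."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    clipped = sum(min(c, ref[w]) for w, c in cand.items())
    return clipped / sum(cand.values())

p = unigram_precision("a cat sits on the mat", "the cat sat on a mat")
```

Here five of the six candidate words are covered by the reference, so the precision is 5/6; such surface metrics are exactly what the paper compares against learned judgements.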
Imbalanced Stance Detection by Combining Neural and External Features
Abstract
Stance detection is the task of determining the perspective, or stance, of pairs of text. Classifying the stance (e.g. agree, disagree, discuss or unrelated) expressed in news articles with respect to a certain claim is an important step in detecting fake news. Many neural and traditional models predict well on the unrelated and discuss classes while performing poorly on the other, minority-represented classes in the Fake News Challenge-1 (FNC-1) dataset. We present a simple neural model that combines similarity and statistical features through an MLP network for news stance detection. Adding augmented training instances to overcome the data imbalance problem, together with batch-normalization and Gaussian-noise layers, enables the model to prevent overfitting and to improve class-wise and overall accuracy. We also conduct additional experiments with a LightGBM and an MLP network using the same features and text augmentation to show their effectiveness. In addition, we evaluate the proposed model on the Argument Reasoning Comprehension (ARC) dataset to assess the generalizability of the model. Our models outperform the current state of the art in our experiments.
Fuad Mire Hassan, Mark Lee
Prediction Uncertainty Estimation for Hate Speech Classification
Abstract
As a result of the popularity of social networks, the hate speech phenomenon has increased significantly in recent years. Due to its harmful effect on minority groups as well as on large communities, there is a pressing need for hate speech detection and filtering. However, automatic approaches must not jeopardize free speech, so they should accompany their decisions with explanations and an assessment of uncertainty. Thus, there is a need for predictive machine learning models that not only detect hate speech but also help users understand when texts cross the line and become unacceptable.
The reliability of predictions is usually not addressed in text classification. We fill this gap by proposing the adaptation of deep neural networks that can efficiently estimate prediction uncertainty. To reliably detect hate speech, we use Monte Carlo dropout regularization, which mimics Bayesian inference within neural networks. We evaluate our approach using different text embedding methods. We visualize the reliability of results with a novel technique that aids in understanding the classification reliability and errors.
Kristian Miok, Dong Nguyen-Doan, Blaž Škrlj, Daniela Zaharie, Marko Robnik-Šikonja
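Monte Carlo dropout itself is easy to sketch on a single linear unit: dropout stays active at test time and the spread of the stochastic predictions estimates the uncertainty. The weights, input, and dropout rate below are invented; the paper applies the idea to trained deep networks over text embeddings.

```python
import math
import random
import statistics

def mc_dropout_predict(weights, x, p_drop=0.5, passes=200, seed=0):
    """Keep dropout active at prediction time (MC dropout) and summarise
    the resulting prediction distribution as mean and spread."""
    rng = random.Random(seed)
    preds = []
    for _ in range(passes):
        # randomly drop inputs, rescaling the survivors by 1/(1 - p)
        s = sum(w * xi for w, xi in zip(weights, x)
                if rng.random() > p_drop) / (1 - p_drop)
        preds.append(1 / (1 + math.exp(-s)))   # sigmoid score
    return statistics.mean(preds), statistics.stdev(preds)

mean, spread = mc_dropout_predict([2.0, -1.0, 0.5], [1.0, 1.0, 1.0])
```

A large spread flags predictions the model is unsure about, which is the kind of reliability signal the abstract argues hate speech classifiers should expose.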
Authorship Attribution in Russian in Real-World Forensics Scenario
Abstract
Recent demands in authorship attribution, specifically cross-topic authorship attribution with small numbers of training samples and very short texts, impose new challenges on corpus design and on feature and algorithm development. In the current work we address these challenges by performing authorship attribution on a specifically designed dataset in Russian. We present a dataset of short written texts in Russian, where both authorship and topic are controlled. We propose a pairwise classification design closely resembling a real-world forensic task. Semantic coherence features are introduced to supplement well-established n-gram features in challenging cross-topic settings. Distance-based measures are compared with machine learning algorithms. The experimental results support the intuition that for very small datasets, distance-based measures perform better than machine learning techniques. Moreover, the pairwise classification results show that in difficult cross-topic cases, content-independent features, i.e., part-of-speech n-grams and semantic coherence, are promising. The results are supported by a feature significance analysis for the proposed dataset.
Polina Panicheva, Tatiana Litvinova
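The distance-based side of such a comparison can be sketched with character n-gram frequency profiles and cosine distance; the example texts below are invented, and real forensic features would include the part-of-speech n-grams and coherence measures the abstract describes.

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-gram frequency profile of a text."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def profile_distance(a, b):
    """Cosine distance between two n-gram frequency profiles."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return 1 - dot / (na * nb)

same = profile_distance(char_ngrams("the quick brown fox"),
                        char_ngrams("the quick brown dog"))
diff = profile_distance(char_ngrams("the quick brown fox"),
                        char_ngrams("zzz completely unlike zzz"))
```

In a pairwise design, a text pair is attributed to the same author when this distance falls below a threshold, which is why such measures remain competitive on very small datasets.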
RaKUn: Rank-based Keyword Extraction via Unsupervised Learning and Meta Vertex Aggregation
Abstract
Keyword extraction is used for summarizing the content of a document and supports efficient document retrieval, and is as such an indispensable part of modern text-based systems. We explore how load centrality, a graph-theoretic measure applied to graphs derived from a given text can be used to efficiently identify and rank keywords. Introducing meta vertices (aggregates of existing vertices) and systematic redundancy filters, the proposed method performs on par with state-of-the-art for the keyword extraction task on 14 diverse datasets. The proposed method is unsupervised, interpretable and can also be used for document visualization.
Blaž Škrlj, Andraž Repar, Senja Pollak
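A heavily simplified sketch of graph-based keyword ranking follows; weighted degree centrality stands in for the load centrality used by RaKUn, and meta vertices and redundancy filters are omitted. The stop-word list and example text are invented.

```python
from collections import Counter

def extract_keywords(text, top_k=2, stop=("the", "a", "of", "and", "because")):
    """Rank words by weighted degree in a word co-occurrence graph
    (degree centrality as a simple stand-in for load centrality)."""
    words = [w for w in text.lower().split() if w not in stop]
    degree = Counter()
    for u, v in zip(words, words[1:]):      # adjacent words share an edge
        if u != v:
            degree[u] += 1
            degree[v] += 1
    return [w for w, _ in degree.most_common(top_k)]

keys = extract_keywords(
    "graph methods rank keywords because keywords repeat and "
    "keywords link many graph vertices")
```

Words that sit on many co-occurrence edges rise to the top, which is the intuition behind centrality-based keyword extraction; RaKUn refines this with load centrality over meta vertices.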
Backmatter
Metadata
Title
Statistical Language and Speech Processing
Editors
Carlos Martín-Vide
Matthew Purver
Senja Pollak
Copyright Year
2019
Electronic ISBN
978-3-030-31372-2
Print ISBN
978-3-030-31371-5
DOI
https://doi.org/10.1007/978-3-030-31372-2
