International Journal of Computational Linguistics & Chinese Language
Processing
Vol. 23, No. 1, June 2018
Title:
Sentiment Analysis on Social Network: Using Emoticon Characteristics for
Twitter Polarity Classification
Author:
Chia-Ping Chen, Tzu-Hsuan Tseng and Tzu-Hsuan Yang
Abstract:
In this paper, we describe a sentiment analysis system implemented for the
semantic-evaluation task of message polarity classification for English on
Twitter. Our system contains modules of data pre-processing, word embedding,
and sentiment classification. In order to decrease the data complexity and
increase the coverage of the word vector model for better learning, we perform
a series of data pre-processing tasks, including emoticon normalization,
specific suffix splitting, and hashtag segmentation. For word embedding, we
utilize the pre-trained word vectors provided by GloVe.
We believe that emojis in tweets are important
characteristics for Twitter sentiment classification, but most pre-trained sets
of word vectors contain few or no emoji representations. Thus, we propose
embedding emojis into the vector space with neural network models. We train
the emoji vectors on relevant words drawn from the descriptions and contexts
of emojis. The models of
long short-term memory (LSTM) and convolutional neural network (CNN) are used
as our sentiment classifiers. The proposed emoji embedding is evaluated on the
SemEval 2017 task. With emoji embedding, we achieve recall rates of 0.652 with
the LSTM classifier and 0.640 with the CNN classifier.
Keywords: Sentiment Analysis,
Polarity Classification, Machine Learning, Neural Network, Word Embedding
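
To make the emoji-embedding idea above concrete, here is a minimal, hedged sketch in Python. The paper trains emoji vectors with neural network models on words from emoji descriptions and contexts; as a simple stand-in, this sketch averages pre-trained GloVe vectors of an emoji's description words and feeds the resulting token sequence to a small LSTM polarity classifier. All names (glove, EMOJI_DESCRIPTIONS, PolarityLSTM) and sizes are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: emoji vectors approximated by averaging the
# pre-trained GloVe vectors of the words in each emoji's description.
import numpy as np
import torch
import torch.nn as nn

DIM = 50  # GloVe dimensionality (assumption)

# Stand-in for a loaded GloVe table: {word: vector of shape (DIM,)}
glove = {w: np.random.randn(DIM).astype(np.float32)
         for w in "face with tears of joy smiling heart eyes".split()}

EMOJI_DESCRIPTIONS = {  # hypothetical description source
    "😂": "face with tears of joy",
    "😍": "smiling face with heart eyes",
}

def emoji_vector(emoji: str) -> np.ndarray:
    """Average the GloVe vectors of the emoji's description words."""
    words = [w for w in EMOJI_DESCRIPTIONS[emoji].split() if w in glove]
    return np.mean([glove[w] for w in words], axis=0)

class PolarityLSTM(nn.Module):
    """Minimal LSTM polarity classifier over pre-embedded tokens."""
    def __init__(self, dim=DIM, hidden=64, classes=3):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, classes)  # positive/neutral/negative

    def forward(self, x):            # x: (batch, seq_len, dim)
        _, (h, _) = self.lstm(x)     # final hidden state summarizes the tweet
        return self.out(h[-1])

# Usage: a toy "tweet" of two word vectors plus one emoji vector.
tokens = np.stack([glove["joy"], glove["heart"], emoji_vector("😂")])
logits = PolarityLSTM()(torch.from_numpy(tokens).unsqueeze(0))
print(logits.shape)  # torch.Size([1, 3])
```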
Title:
A Lexical Coherence Representation Computational Framework using LSTM Forget
Gate For Autism Recognition
Author:
Yu-Shuo Liu, Chin-Po Chen, Susan Shur-Fen
Gau and Chi-Chun Lee
Abstract:
Autistic children are less able to tell a fluent story than typically
developing children, so measuring verbal fluency is an important indicator
when diagnosing autism. Fluency assessment, however, requires time-consuming
manual tagging or expert-designed characteristics as indicators. This study
therefore proposes a coherence representation learned by a directly
data-driven architecture, using the forget gate of a long short-term memory
(LSTM) model to extract a lexical coherence representation. We also use the
ADOS codes related to the evaluation of narration to test the proposed
representation. Our lexical coherence representation achieves a high accuracy
of 92% on the task of distinguishing children with autism from typically
developing children, a significant improvement over traditional measures of
grammar, word frequency, and latent semantic analysis.
This paper further randomly shuffles the word order and sentence order so
that the typical children's story content becomes disfluent. By visualizing
the data samples after dimension reduction, we observe the distribution of
the fluent, disfluent, and artificially disfluent samples. We find that the
artificially disfluent typical samples move closer to the disfluent autistic
samples, which suggests that the extracted features capture the concept of
coherence.
Keywords:
Behavioral
Signal Processing, Lexical Coherence Representation, LSTM, Autism Spectrum
Disorder, Story-telling
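
As a rough illustration of reading coherence-related signals out of an LSTM forget gate, here is a hedged Python sketch. It assumes PyTorch's gate ordering (input, forget, cell, output) in LSTMCell weights; the mean-over-time pooling and all sizes are illustrative assumptions, not the authors' exact representation.

```python
# Hedged sketch: expose per-step forget-gate activations of an LSTMCell.
import torch

def forget_gate_trace(cell: torch.nn.LSTMCell, x: torch.Tensor):
    """Return per-step forget-gate activations for a (seq, dim) input."""
    hidden = cell.hidden_size
    h = torch.zeros(1, hidden)
    c = torch.zeros(1, hidden)
    gates = []
    for x_t in x:                        # step through the word sequence
        z = (x_t.unsqueeze(0) @ cell.weight_ih.T + cell.bias_ih
             + h @ cell.weight_hh.T + cell.bias_hh)
        f_t = torch.sigmoid(z[:, hidden:2 * hidden])  # forget-gate slice
        gates.append(f_t)
        h, c = cell(x_t.unsqueeze(0), (h, c))         # normal LSTM update
    return torch.cat(gates)              # (seq, hidden)

# Usage: a coherence-style vector as the mean forget-gate activation.
cell = torch.nn.LSTMCell(input_size=50, hidden_size=32)
story = torch.randn(20, 50)              # 20 word embeddings (hypothetical)
coherence_repr = forget_gate_trace(cell, story).mean(dim=0)
print(coherence_repr.shape)              # torch.Size([32])
```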
Title:
Joint Modeling of Individual Neural Responses using a Deep Voting Fusion Network for Automatic Emotion Perception Decoding
Author:
Wan-Ting Hsieh and Chi-Chun Lee
Abstract:
In an era of increasingly communal life, affective computing and emotion
recognition are closely bound to daily life and have a great impact on social
ability. Understanding individual differences is a significant factor that
should not be ignored in fMRI analysis, yet most fMRI brain studies seldom
truly address it. We therefore build a system that accounts for individual
variability to recognize the emotion of vocal stimuli from BOLD signals. We
propose a novel method using multimodal fusion in a voting DNN framework, in
which a mask on the weight matrix of the fusion layer learns an individually
influenced weight matrix and realizes voting within the network, achieving
53.10% UAR on a four-class emotion recognition task. Our analysis shows that
the multimodal voting network is effective at encoding individual differences
and thus enhances emotion recognition. Adding audio features further boosts
the result to 56.07%.
Keywords: Individual Difference, fMRI, Vocal Emotion, Perception, Deep Voting Fusion Neural Net
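
To illustrate the masked fusion-layer idea, here is a hedged Python sketch in which the fusion weight matrix is elementwise-masked so that each subject's features vote into their own fusion unit. The block-diagonal mask pattern, the tanh nonlinearity, and all sizes are assumptions for illustration, not the authors' configuration.

```python
# Hedged sketch of a masked fusion layer realizing per-subject voting.
import torch
import torch.nn as nn

class MaskedFusion(nn.Module):
    def __init__(self, n_subjects: int, feat_dim: int, classes: int = 4):
        super().__init__()
        in_dim = n_subjects * feat_dim
        self.weight = nn.Parameter(torch.randn(n_subjects, in_dim) * 0.01)
        # Block-diagonal mask: fusion unit s sees only subject s's features.
        mask = torch.zeros(n_subjects, in_dim)
        for s in range(n_subjects):
            mask[s, s * feat_dim:(s + 1) * feat_dim] = 1.0
        self.register_buffer("mask", mask)
        self.out = nn.Linear(n_subjects, classes)  # votes -> emotion class

    def forward(self, x):                      # x: (batch, n_subjects*feat_dim)
        votes = x @ (self.weight * self.mask).T    # one vote per subject
        return self.out(torch.tanh(votes))

# Usage: 8 subjects' BOLD features (hypothetical 16-dim each) voting on a
# four-class emotion decision.
net = MaskedFusion(n_subjects=8, feat_dim=16)
logits = net(torch.randn(2, 8 * 16))
print(logits.shape)                            # torch.Size([2, 4])
```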