Author:
Chung-Chi Huang and Jason S. Chang
Abstract:
We propose a version of the Inversion Transduction Grammar (ITG) model with an IBM-style notion of fertility to improve word-alignment performance. In our approach, binary context-free grammar rules of the source language, accompanied by orientation preferences of the target language and word fertilities, are leveraged to construct a syntax-based statistical translation model. Our model, which inherently retains the ITG alignment restrictions while allowing many consecutive words to align to a single word and vice versa, outperforms the Bracketing Transduction Grammar (BTG) model and GIZA++, a state-of-the-art word aligner, not only in alignment error rate (23% and 14% error reduction, respectively) but also in consistent phrase error rate (13% and 9% error reduction). Better performance on these two evaluation metrics suggests that more accurate phrase pairs may be acquired from our word alignments, leading to better machine translation quality.
Keywords:
Inversion Transduction Grammar, Syntax-based Statistical Translation Model, Word Alignment
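As a concrete reference point for the alignment error rate (AER) figures cited above, the short Python sketch below computes AER from a predicted alignment and gold sure/possible links, following the standard definition AER = 1 - (|A∩S| + |A∩P|) / (|A| + |S|); the set-of-index-pairs representation and the example links are illustrative assumptions, not the authors' code.

# AER from sets of (source_index, target_index) links; `sure` is assumed
# to be a subset of `possible`, as in standard gold alignment annotations.
def aer(predicted, sure, possible):
    a, s, p = set(predicted), set(sure), set(possible)
    return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))

# Hypothetical alignments for a three-word sentence pair.
gold_sure = {(0, 0), (1, 2)}
gold_possible = gold_sure | {(2, 1)}
hypothesis = {(0, 0), (1, 2), (2, 2)}
print(f"AER = {aer(hypothesis, gold_sure, gold_possible):.3f}")  # 0.200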
Author:
Chia-Hung Tai, Jia-Zen Fan, Shu-Ling Huang, and Keh-Jiann Chen
Abstract:
In this paper, we take determinative-measure (DM) compounds as an example to demonstrate how the E-HowNet semantic composition mechanism derives the sense representation of a newly coined DM compound. First, we exhaustively define the sense of each individual determiner and measure word in these closed sets in E-HowNet representation. Next, we devise semantic composition rules to produce candidate sense representations for a newly coined DM compound. We then review a development set to design sense disambiguation rules, and use these heuristic rules to determine the appropriate context-dependent sense of a DM compound and its E-HowNet representation. The experiment shows that the current system reaches 89% accuracy in DM sense derivation and disambiguation.
Keywords:
Semantic Composition, Determinative-Measure Compounds, Sense Representations, Extended HowNet (E-HowNet), HowNet
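To make the compose-then-disambiguate pipeline above easier to picture, here is a toy Python sketch; the lexicon entries, the sense notation, and the single context rule are invented placeholders and do not reflect E-HowNet's actual representation format.

# Hypothetical determiner and measure-word lexicons; the second sense of the
# measure word is a made-up ambiguity purely to illustrate disambiguation.
DETERMINERS = {"三": {"quantity": 3}}
MEASURES = {"本": ["classifier:bound_volumes", "classifier:generic"]}

def compose_candidates(det, measure):
    # Semantic composition: pair the determiner sense with every measure sense.
    return [{"measure_sense": m, **DETERMINERS[det]} for m in MEASURES[measure]]

def disambiguate(candidates, head_noun=None):
    # Toy heuristic rule: a book-like head noun selects the bound-volume sense.
    if head_noun in {"書", "雜誌"}:
        for cand in candidates:
            if cand["measure_sense"] == "classifier:bound_volumes":
                return cand
    return candidates[0]

print(disambiguate(compose_candidates("三", "本"), head_noun="書"))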
Author:
Shu-Yen Lin, Cheng-Chao Su, Yu-Da Lai, Li-Chin Yang, and Shu-Kai Hsieh
Abstract:
Although some traditional readability formulas have shown high predictive validity, with correlations in the r = 0.8 range and above (Chall & Dale, 1995), they are generally not based on genuine linguistic processing factors but on statistical correlations (Crossley et al., 2008). Improving readability assessment should therefore focus on finding variables that truly represent the comprehensibility of a text, as well as indices that accurately measure those correlations. In this study, we explore the hierarchical relations between lexical items based on the conceptual categories advanced by Prototype Theory (Rosch et al., 1976). According to this theory and its later development, basic level words like guitar represent the objects humans interact with most readily; they are acquired by children earlier than their superordinate words like stringed instrument and their subordinate words like acoustic guitar. Accordingly, the readability of a text is presumably associated with the ratio of basic level words it contains. WordNet (Fellbaum, 1998), a network of meaningfully related words, provides the best online open-source database for studying such lexical relations. Our study shows that a basic level noun can be identified by the ratio of compounds it forms (e.g. chair → armchair) and by the length difference between the noun and its hyponyms. We compared graded readings for American children and high school English readings for Taiwanese students using several readability formulas and in terms of basic level noun ratios (i.e. the number of basic level noun types divided by the number of noun types in a text). The results suggest that basic level noun ratios provide a robust and meaningful index of lexical complexity, which is directly associated with text readability.
Keywords:
Readability, Prototype Theory, WordNet, Basic Level Words, Compounds
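The two lexical cues mentioned above can be probed directly with WordNet; the Python sketch below uses NLTK's WordNet interface to compute, for a given noun, the share of hyponym lemmas formed by compounding on it (guitar → acoustic_guitar) and the average length difference from its hyponyms. Restricting attention to the first (most frequent) synset is a simplification, and no claim is made that these are the paper's exact thresholds or formulas.

from nltk.corpus import wordnet as wn

def basic_level_cues(noun):
    # Use the most frequent noun sense as a simplifying assumption.
    synsets = wn.synsets(noun, pos=wn.NOUN)
    if not synsets:
        return None
    hypo_lemmas = [lemma.name().lower()
                   for hypo in synsets[0].hyponyms()
                   for lemma in hypo.lemmas()]
    if not hypo_lemmas:
        return 0.0, 0.0
    # Cue 1: proportion of hyponyms that are compounds ending in the noun itself.
    compound_ratio = sum(l.endswith("_" + noun) or l.endswith(noun)
                         for l in hypo_lemmas) / len(hypo_lemmas)
    # Cue 2: mean character-length difference between hyponym lemmas and the noun.
    length_diff = sum(len(l) - len(noun) for l in hypo_lemmas) / len(hypo_lemmas)
    return compound_ratio, length_diff

print(basic_level_cues("guitar"))

A basic level noun such as guitar is expected to score high on the first cue, whereas a superordinate like stringed instrument or a subordinate like acoustic guitar is not.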
Author:
Yuen-Hsien Tseng
Abstract:
A Chinese news summarization method is proposed to support message services that broadcast news briefs over cell phones. The problem is unique in that a strict length limit (69 or 45 characters) is imposed on the summaries, which calls for some form of automatic sentence fusion rather than sentence selection alone. In the proposed method, important sentences are first identified based on the news content. Each sentence is then matched against the news headline to determine a suitable position at which it can be concatenated with the headline to form a candidate summary, and the candidates are ranked by their length and fitness for manual selection. In our evaluation, over 75% (80%) of the best candidates for the 40 short news updates in the inside testing set yield acceptable summaries without manual editing under the length limit of 69 (45) characters. These figures, however, drop to 70.7% (53.3%) on the outside testing set of 75 news stories of ordinary length, suggesting that the shorter the length limit, the harder it is to derive a summary from long stories. Nevertheless, the proposed method has the potential not only to reduce the cost of manual operation but also to integrate and synchronize with other media in such services in the future.
Keywords:
Cell Phone Service, News Brief Message, Automated Summarization, Chinese News
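As an informal illustration of the candidate-generation step described above (fusing the headline with an important sentence under the message length limit), here is a simplified Python sketch; the importance scoring, the headline-matching heuristic, and the fitness ranking are stand-ins, not the paper's actual procedure.

def make_candidates(headline, sentences, limit=69):
    # `sentences` is assumed to be already ranked by content-based importance.
    candidates = []
    for sent in sentences:
        # Toy matching: skip any leading part of the sentence that overlaps
        # with the tail of the headline before concatenating.
        overlap = sent.find(headline[-2:]) if len(headline) >= 2 else -1
        tail = sent[overlap + 2:] if overlap >= 0 else sent
        fused = headline + "，" + tail
        if len(fused) <= limit:            # enforce the 69- or 45-character limit
            candidates.append(fused)
    # Toy fitness: prefer candidates that use more of the available length.
    return sorted(candidates, key=len, reverse=True)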
Author:
Wen-Hsiang Tu and Jeih-weih Hung
Abstract:
Feature statistics normalization techniques have been shown to be very successful in improving the noise robustness of speech recognition systems. In this paper, we propose an associative scheme that obtains a more accurate estimate of the statistical information used by these techniques. By properly integrating codebook and utterance knowledge, the resulting associative cepstral mean subtraction (A-CMS), associative cepstral mean and variance normalization (A-CMVN), and associative histogram equalization (A-HEQ) perform significantly better than the conventional utterance-based and codebook-based versions in additive-noise environments. On the Aurora-2 clean-condition training task, the proposed associative histogram equalization (A-HEQ) achieves an average recognition accuracy of 90.69%, compared with 87.67% for utterance-based HEQ and 86.00% for codebook-based HEQ.
Keywords:
Speech Recognition, Noise-Robust Feature, Codebook
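The abstract does not spell out the associative estimation itself, so the Python sketch below only illustrates the general idea of blending utterance-level and codebook-level statistics inside cepstral mean and variance normalization (CMVN); the fixed interpolation weight and the per-dimension codebook statistics are assumptions for illustration.

import numpy as np

def associative_cmvn(features, codebook_mean, codebook_var, weight=0.5):
    # features: (frames, dims) matrix of cepstral coefficients.
    utt_mean, utt_var = features.mean(axis=0), features.var(axis=0)
    # Blend utterance and codebook statistics (the blending rule is assumed).
    mean = weight * utt_mean + (1.0 - weight) * codebook_mean
    var = weight * utt_var + (1.0 - weight) * codebook_var
    return (features - mean) / np.sqrt(var + 1e-8)

# Toy usage with random "cepstral" features and dummy codebook statistics.
feats = np.random.randn(200, 13)
normed = associative_cmvn(feats, codebook_mean=np.zeros(13), codebook_var=np.ones(13))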