REBIUN - ODA

Title details

Title description

Phonological Parsing in Speech Recognition
Springer US 1987

It is well known that phonemes have different acoustic realizations depending on the context. Thus, for example, the phoneme /t/ is typically realized with a heavily aspirated strong burst at the beginning of a syllable, as in the word Tom, but without a burst at the end of a syllable in a word like cat. Variation such as this is often considered to be problematic for speech recognition: (1) "In most systems for sentence recognition, such modifications must be viewed as a kind of 'noise' that makes it more difficult to hypothesize lexical candidates given an input phonetic transcription. To see that this must be the case, we note that each phonological rule [in a certain example] results in irreversible ambiguity: the phonological rule does not have a unique inverse that could be used to recover the underlying phonemic representation for a lexical item. For example ... schwa vowels could be the first vowel in a word like 'about' or the surface realization of almost any English vowel appearing in a sufficiently destressed word. The tongue flap [ɾ] could have come from a /t/ or a /d/." [65, pp. 548-549] This view of allophonic variation is representative of much of the speech recognition literature, especially during the late 1970s. One can find similar statements by Cole and Jakimik [22] and by Jelinek [50].
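The non-unique inverse that this passage describes can be made concrete with a short sketch. The following Python fragment is not from the book; the segment labels ("FLAP", "SCHWA", "TH") and the inverse table are illustrative assumptions. It expands a surface phonetic transcription into every underlying phonemic hypothesis compatible with it, which is the ambiguity a recognizer faces when it treats allophonic variation as noise.

# A minimal sketch (illustrative, not from the book): mapping surface
# allophones back to every underlying phoneme they could have come from.
# Inverse allophone table: surface segment -> possible underlying phonemes.
INVERSE_ALLOPHONES = {
    "FLAP": {"t", "d"},                  # a tongue flap may come from /t/ or /d/
    "SCHWA": {"a", "e", "i", "o", "u"},  # schwa: almost any destressed English vowel
    "TH": {"t"},                         # syllable-initial aspirated [t] is unambiguous
}

def underlying_candidates(surface):
    """Expand a surface transcription into all underlying phonemic hypotheses."""
    hypotheses = [[]]
    for segment in surface:
        sources = INVERSE_ALLOPHONES.get(segment, {segment})
        hypotheses = [h + [p] for h in hypotheses for p in sorted(sources)]
    return [" ".join(h) for h in hypotheses]

# A flapped, destressed stretch such as the middle of "writer" or "rider":
print(underlying_candidates(["r", "ay", "FLAP", "SCHWA", "r"]))
# 2 x 5 = 10 hypotheses, since the flap and the schwa each have several possible sources.

Each additional ambiguous segment multiplies the number of hypotheses, which is why the book argues for exploiting allophonic detail as a parsing constraint rather than discarding it as noise.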

Monograph

More details

Title:
Phonological Parsing in Speech Recognition / by Kenneth W. Church
Publisher:
Boston, MA : Springer US, 1987
Physical description:
1 online resource (272 pages)
Series statement:
The Kluwer International Series in Engineering and Computer Science, VLSI, Computer Architecture and Digital Signal Processing, 0893-3405 ; 38
Contents:
1. Introduction -- 1.1 Historical Background and Problem Statement -- 1.2 Allophonic Constraints are Useful -- 1.3 Problems with Rewrite-Rules -- 1.4 Trends Toward Larger Constituents -- 1.5 Parsing and Matching -- 1.6 Summary -- 1.7 Outline of What's To Come -- 2. Representation of Segments -- 2.1 Stevens' Theory of Invariant Features -- 2.2 Our Position -- 2.3 What's New -- 2.4 Motivations for Representing Phonetic Distinctions -- 2.5 Capturing Generalizations -- 2.6 Summary -- 3. Allophonic Rules -- 3.1 Flapping and Syllable Level Generalizations -- 3.2 Non-Linear Formulations of Flapping -- 3.3 Implementation Difficulties and the Lexical Expansion Solution -- 4. An Alternative: Phrase-Structure Rules -- 4.1 PS Trees Bear More Fruit Than You Would Have Thought -- 4.2 The Constituency Hypothesis -- 4.3 Advantages of Phrase-Structure Formulation -- 4.4 Summary -- 5. Parser Implementation -- 5.1 An Introduction to Chart Parsing -- 5.2 Representation Issues -- 5.3 A Parser Based on Matrix Operations -- 5.4 No Recursion -- 5.5 Order of Evaluation -- 5.6 Feature Manipulation -- 5.7 Additional Lattice Operations -- 5.8 Debugging Capabilities -- 5.9 Summary -- 6. Phonotactic Constraints -- 6.1 The Affix Position -- 6.2 The Length Restriction -- 6.3 The Sonority Hierarchy -- 6.4 Practical Applications of Phonotactic Constraints -- 6.5 Summary -- 7. When Phonotactic Constraints are Not Enough -- 7.1 Basic Principles -- 7.2 Against Stress Resyllabification -- 7.3 Practical Applications of Vowel Resyllabification -- 7.4 Automatic Syllabification of Lexicons -- 7.5 Summary -- 8. Robustness Issues -- 8.1 Alternatives in the Input Lattice -- 8.2 Problems for Parsing -- 8.3 Relaxing Phonological Distinctions -- 8.4 Conservation of Distinctive Features -- 8.5 Probabilistic Methods -- 8.6 Distinctive Features -- 8.7 Summary -- 9. Conclusion -- 9.1 Review of the Standard Position -- 9.2 Review of Nakatani's Position -- 9.3 Review of the Constituency Hypothesis -- 9.4 Review of Phonotactic Constraints -- 9.5 Comparison with Syntactic Notions of Constituency -- 9.6 Contributions -- References -- Appendix I. The Organization of the Lexicon -- I.1. Linear Representation and Linear Search -- I.2. Non-Recursive Discrimination Networks -- I.3. Recursive Discrimination Networks -- I.4. Hash Tables Based on Equivalence Class Abstractions -- I.5. Shipman and Zue -- I.6. Morse Code -- I.7. Selecting the Appropriate Gross Classification -- I.8. Summary -- Appendix II. Don't Depend Upon Syntax and Semantics -- II.1. Higher Level vs. Lower Level Constraints -- II.2. Too Much Dependence in the Past -- II.3. How Much Can Higher Constraints Help? -- II.4. Detraction from the Important Low-Level Issues -- II.5. New Directions: Recognition without Understanding -- II.6. Lower-Level Constraints Bear More Fruit -- II.7. Summary -- Appendix III. Lexical Phonology -- III.1. Difference Between + and # -- III.2. Pipeline Design -- III.3. Distinctions Between Lexical and Postlexical Rules -- III.4. Which Rules are Lexical and Which are Postlexical? -- III.5. The Implementation of Lexical and Postlexical Rules -- Appendix IV. A Sample Grammar -- Appendix V. Sample Lexicon -- Appendix VI. Sample Output
ISBN:
9781461320135 (electronic bk.)
1461320135 (electronic bk.)
9781461292005
146129200X
Subject:
Link to additional physical format:
Print version: 9781461292005
Additional series access point - Title:
Kluwer international series in engineering and computer science. VLSI, computer architecture, and digital signal processing ; 38
