Details

Automatic Text Summarization


1st edition

by: Juan-Manuel Torres-Moreno

139,99 €

Publisher: Wiley
Format: EPUB
Published: 25.09.2014
ISBN/EAN: 9781119044079
Language: English
Number of pages: 376

DRM-protected eBook; to read it you need e.g. Adobe Digital Editions and an Adobe ID.

Description

<p>Textual information in the form of digital documents quickly accumulates into huge amounts of data. The majority of these documents are unstructured: they consist of unrestricted text that has not been organized into traditional databases. Processing such documents is therefore a cursory task, mostly due to a lack of standards, and it has thus become extremely difficult to implement automatic text analysis tasks. Automatic Text Summarization (ATS), by condensing the text while maintaining the relevant information, can help to process this ever-increasing, difficult-to-handle mass of information.</p> <p>This book examines the motivations for and the different algorithms of ATS. The author presents the recent state of the art before describing the main problems of ATS, as well as the difficulties faced and the solutions provided by the community. The book covers recent advances in ATS as well as current applications and trends. The approaches described are statistical, linguistic and symbolic, and several examples are included in order to clarify the theoretical concepts.</p>
<p>FOREWORD BY A. ZAMORA AND R. SALVADOR xi</p>
<p>FOREWORD BY H. SAGGION xv</p>
<p>NOTATION xvii</p>
<p>INTRODUCTION xix</p>
<p><b>PART 1. FOUNDATIONS 1</b></p>
<p><b>CHAPTER 1. WHY SUMMARIZE TEXTS? 3</b></p>
<p>1.1. The need for automatic summarization 3</p>
<p>1.2. Definitions of text summarization 5</p>
<p>1.3. Categorizing automatic summaries 10</p>
<p>1.4. Applications of automatic text summarization 13</p>
<p>1.5. About automatic text summarization 15</p>
<p>1.6. Conclusion 21</p>
<p><b>CHAPTER 2. AUTOMATIC TEXT SUMMARIZATION: SOME IMPORTANT CONCEPTS 23</b></p>
<p>2.1. Processes before the process 23</p>
<p>2.1.1. Sentence-term matrix: the vector space model (VSM) 26</p>
<p>2.2. Extraction, abstraction or compression? 28</p>
<p>2.3. Extraction-based summarization 30</p>
<p>2.3.1. Surface-level algorithms 31</p>
<p>2.3.2. Intermediate-level algorithms 33</p>
<p>2.3.3. Deep parsing algorithms 34</p>
<p>2.4. Abstract summarization 35</p>
<p>2.4.1. FRUMP 35</p>
<p>2.4.2. Information extraction and abstract generation 38</p>
<p>2.5. Sentence compression and fusion 38</p>
<p>2.5.1. Sentence compression 38</p>
<p>2.5.2. Multisentence fusion 39</p>
<p>2.6. The limits of extraction 39</p>
<p>2.6.1. Cohesion and coherence 40</p>
<p>2.6.2. The HexTAC experiment 42</p>
<p>2.7. The evolution of text summarization tasks 43</p>
<p>2.7.1. Traditional tasks 43</p>
<p>2.7.2. Current and future problems 45</p>
<p>2.8. Evaluating summaries 50</p>
<p>2.9. Conclusion 51</p>
<p><b>CHAPTER 3. SINGLE-DOCUMENT SUMMARIZATION 53</b></p>
<p>3.1. Historical approaches 53</p>
<p>3.1.1. Luhn’s Automatic Creation of Literature Abstracts 57</p>
<p>3.1.2. The Luhn algorithm 59</p>
<p>3.1.3. Edmundson’s linear combination 61</p>
<p>3.1.4. Extracts by elimination 64</p>
<p>3.2. Machine learning approaches 66</p>
<p>3.2.1. Machine learning parameters 66</p>
<p>3.3. State-of-the-art approaches 69</p>
<p>3.4. Latent semantic analysis 73</p>
<p>3.4.1. Singular value decomposition (SVD) 73</p>
<p>3.4.2. Sentence weighting by SVD 74</p>
<p>3.5. Graph-based approaches 76</p>
<p>3.5.1. PAGERANK and SNA algorithms 77</p>
<p>3.5.2. Graphs and automatic text summarization 78</p>
<p>3.5.3. Constructing the graph 79</p>
<p>3.5.4. Sentence weighting 80</p>
<p>3.6. DIVTEX: a summarizer based on the divergence of probability distribution 83</p>
<p>3.7. CORTEX 85</p>
<p>3.7.1. Frequential measures 86</p>
<p>3.7.2. Hamming measures 87</p>
<p>3.7.3. Mixed measures 88</p>
<p>3.7.4. Decision algorithm 89</p>
<p>3.8. ARTEX 90</p>
<p>3.9. ENERTEX 93</p>
<p>3.9.1. Spins and neural networks 93</p>
<p>3.9.2. The textual energy similarity measure 95</p>
<p>3.9.3. Summarization by extraction and textual energy 97</p>
<p>3.10. Approaches using rhetorical analysis 102</p>
<p>3.11. Lexical chains 107</p>
<p>3.12. Conclusion 107</p>
<p><b>CHAPTER 4. GUIDED MULTI-DOCUMENT SUMMARIZATION 109</b></p>
<p>4.1. Introduction 109</p>
<p>4.2. The problems of multidocument summarization 110</p>
<p>4.3. DUC/TAC & INEX Tweet Contextualization 112</p>
<p>4.4. The taxonomy of MDS methods 115</p>
<p>4.4.1. Structure based 115</p>
<p>4.4.2. Vector space model based 116</p>
<p>4.4.3. Graph based 117</p>
<p>4.5. Some multi-document summarization systems and algorithms 117</p>
<p>4.5.1. SUMMONS 118</p>
<p>4.5.2. Maximal marginal relevance 119</p>
<p>4.5.3. A multidocument biography summarization system 120</p>
<p>4.5.4. Multi-document ENERTEX 121</p>
<p>4.5.5. MEAD 123</p>
<p>4.5.6. CATS 126</p>
<p>4.5.7. SUMUM and SUMMA 128</p>
<p>4.5.8. NEO-CORTEX 131</p>
<p>4.6. Update summarization 134</p>
<p>4.6.1. Update summarization pilot task at DUC 2007 134</p>
<p>4.6.2. Update summarization task at TAC 2008 and 2009 135</p>
<p>4.6.3. A minimization-maximization approach 138</p>
<p>4.6.4. The ICSI system at TAC 2008 and 2009 142</p>
<p>4.6.5. The CBSEAS system at TAC 145</p>
<p>4.7. Multidocument summarization by polytopes 146</p>
<p>4.8. Redundancy 148</p>
<p>4.9. Conclusion 149</p>
<p><b>PART 2. EMERGING SYSTEMS 151</b></p>
<p><b>CHAPTER 5. MULTI AND CROSS-LINGUAL SUMMARIZATION 153</b></p>
<p>5.1. Multilingualism, the web and automatic summarization 153</p>
<p>5.2. Automatic multilingual summarization 156</p>
<p>5.3. MEAD 159</p>
<p>5.4. SUMMARIST 159</p>
<p>5.5. COLUMBIA NEWSBLASTER 161</p>
<p>5.6. NEWSEXPLORER 163</p>
<p>5.7. GOOGLE NEWS 166</p>
<p>5.8. CAPS 166</p>
<p>5.9. Automatic cross-lingual summarization 168</p>
<p>5.9.1. The quality of machine translation 169</p>
<p>5.9.2. A graph-based cross-lingual summarizer 172</p>
<p>5.10. Conclusion 177</p>
<p><b>CHAPTER 6. SOURCE AND DOMAIN-SPECIFIC SUMMARIZATION 179</b></p>
<p>6.1. Genre, specialized documents and automatic summarization 179</p>
<p>6.2. Automatic summarization and organic chemistry 183</p>
<p>6.2.1. YACHS2 183</p>
<p>6.3. Automatic summarization and biomedicine 189</p>
<p>6.3.1. SUMMTERM 189</p>
<p>6.3.2. A linguistic-statistical approach 196</p>
<p>6.4. Summarizing court decisions 201</p>
<p>6.5. Opinion summarization 204</p>
<p>6.5.1. CBSEAS at TAC 2008 opinion task 204</p>
<p>6.6. Web summarization 206</p>
<p>6.6.1. Web page summarization 206</p>
<p>6.6.2. OCELOT and the statistical gist 207</p>
<p>6.6.3. Multitweet summarization 211</p>
<p>6.6.4. Email summarization 215</p>
<p>6.7. Conclusion 216</p>
<p><b>CHAPTER 7. TEXT ABSTRACTING 219</b></p>
<p>7.1. Abstraction-based automatic summarization 219</p>
<p>7.2. Systems using natural language generation 220</p>
<p>7.3. An abstract generator using information extraction 222</p>
<p>7.4. Guided summarization and a fully abstractive approach 223</p>
<p>7.5. Abstraction-based summarization via conceptual graphs 226</p>
<p>7.6. Multisentence fusion 227</p>
<p>7.6.1. Multisentence fusion via graphs 228</p>
<p>7.6.2. Graphs and keyphrase extraction: the TAKAHÉ system 231</p>
<p>7.7. Sentence compression 232</p>
<p>7.7.1. Symbolic approaches 235</p>
<p>7.7.2. Statistical approaches 236</p>
<p>7.7.3. A statistical-linguistic approach 238</p>
<p>7.8. Conclusion 241</p>
<p><b>CHAPTER 8. EVALUATING DOCUMENT SUMMARIES 243</b></p>
<p>8.1. How can summaries be evaluated? 243</p>
<p>8.2. Extrinsic evaluations 245</p>
<p>8.3. Intrinsic evaluations 246</p>
<p>8.3.1. The baseline summary 247</p>
<p>8.4. TIPSTER SUMMAC evaluation campaigns 248</p>
<p>8.4.1. Ad hoc task 249</p>
<p>8.4.2. Categorization task 249</p>
<p>8.4.3. Question-answering task 250</p>
<p>8.5. NTCIR evaluation campaigns 250</p>
<p>8.6. DUC/TAC evaluation campaigns 251</p>
<p>8.6.1. Manual evaluations 252</p>
<p>8.7. CLEF-INEX evaluation campaigns 254</p>
<p>8.8. Semi-automatic methods for evaluating summaries 256</p>
<p>8.8.1. Level of granularity: the sentence 256</p>
<p>8.8.2. Level of granularity: words 257</p>
<p>8.9. Automatic evaluation via information theory 263</p>
<p>8.9.1. Divergence of probability distribution 265</p>
<p>8.9.2. FRESA 266</p>
<p>8.10. Conclusion 271</p>
<p>CONCLUSION 275</p>
<p>APPENDIX 1. INFORMATION RETRIEVAL, NLP AND ATS 281</p>
<p>APPENDIX 2. AUTOMATIC TEXT SUMMARIZATION RESOURCES 305</p>
<p>BIBLIOGRAPHY 309</p>
<p>INDEX 343</p>
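The chapter listing above names several extraction-based methods, from Luhn's frequency scoring to SVD- and graph-based sentence weighting. As a rough illustration of the simplest of these ideas, here is a minimal frequency-based extractive summarizer in the spirit of Luhn's approach; the function name, stop-word list and sentence splitter are illustrative choices for this sketch, not the book's implementation.

```python
import re
from collections import Counter

def summarize(text, n_sentences=2):
    """Score each sentence by the summed corpus frequency of its content
    words (a simplified, Luhn-style heuristic) and return the top-n
    sentences in their original order."""
    # Naive sentence splitting on terminal punctuation followed by space.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # A tiny stop-word list; a real system would use a fuller one.
    stop = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that", "this"}
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in stop]
    freq = Counter(words)

    def score(sentence):
        # Sentence score = sum of document frequencies of its content words.
        return sum(freq[w] for w in re.findall(r"[a-z]+", sentence.lower())
                   if w not in stop)

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # Re-emit the selected sentences in document order for readability.
    return " ".join(s for s in sentences if s in top)
```

Selecting sentences rather than generating new text is exactly the extraction/abstraction distinction discussed in Chapter 2: this sketch can only reuse the author's sentences, which is why cohesion and coherence become the limiting factors.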
<b>Juan-Manuel Torres-Moreno</b> is Associate Professor at the Université d'Avignon et des Pays de Vaucluse (UAPV) in France and head of the Natural Language Processing research team (NLP/TALNE) at the Laboratoire Informatique d'Avignon (LIA). His current research lies within the field of NLP, where he investigates techniques for ATS. His other research interests include sentence compression, information retrieval, machine learning and artificial consciousness.

You might also be interested in these products:

Computational Intelligence
by: Diego Andina, Duc Truong Pham
PDF ebook
96,29 €
Advances in Modeling Agricultural Systems
by: Petraq Papajorgji, Panos M. Pardalos
PDF ebook
149,79 €
From Combinatorics to Philosophy
by: Ernesto Damiani, Ottavio D'Antona, Vincenzo Marra, Fabrizio Palombi
PDF ebook
149,79 €