Methods for Text Summarization

Technology Methods

Extractive Methods

All extractive methods rely on the identification of features: characteristic words or phrases critical to the meaning of the text. Sentences are then scored against these features, and the highest-scoring ones are combined into a summary. Approaches vary in the features they choose to identify and in the scoring methods used to rank them.

Commonly used features include [2], [3]:

  • Content words: typically nouns that occur frequently in this particular text but rarely in others (tf-idf)

  • Title words, and those in headings and sub-headings

  • Sentence locations: first and last sentences of the first and last paragraphs

  • Proper nouns (this may involve Named Entity Recognition)

  • Uppercase, italic, underlined or bold words

  • Cue phrases: these contain special terms such as “in summary” or “in conclusion”. This can also include domain-specific terms of interest

  • Similarity between sentences, and between each sentence and the whole text; high similarity indicates repeated phrases

Which features are used depends on the domain, the scoring algorithm and the format of the input text. For example, scientific papers in one discipline tend to use predictable language that can inform cue phrases, whereas works of fiction are far less uniform. Identifying italic, bold or underlined terms requires knowledge of the document’s markup; this is available for web pages and some documents, but not necessarily for text extracted from a PDF or web form.

In the following subsections, we will describe some of the scoring algorithms used for identifying significant sentences.

Term Frequency-Inverse Document Frequency (tf-idf)

Term Frequency-Inverse Document Frequency (tf-idf) is one of the most common text scoring algorithms and is popular with search engines for ranking results. It assigns a higher score to terms that occur frequently within a text while occurring infrequently in other texts within the collection, identifying the terms most distinctive to a document [4]. Tf-idf is most commonly applied to single-word features, but it can be extended to phrases if they are normalized, i.e. similar phrases are converted to a standard form so that they can be treated as equivalent across the document corpus.
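As a concrete illustration, a minimal tf-idf scorer can be sketched in a few lines of Python. The corpus and tokenisation here are invented for the example; production systems typically add stemming, stop-word removal and idf smoothing:

```python
import math

# A toy corpus of tokenised "documents" (hypothetical example data).
corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "chased", "the", "cat"],
    ["quantum", "entanglement", "defies", "classical", "intuition"],
]

def tf_idf(term, doc, corpus):
    # Term frequency: how often the term occurs in this document.
    tf = doc.count(term) / len(doc)
    # Inverse document frequency: penalise terms common across the corpus.
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df)
    return tf * idf

doc = corpus[2]
scores = {t: tf_idf(t, doc, corpus) for t in set(doc)}
# Terms unique to this document outscore corpus-wide words like "the".
```

Distinctive terms such as "quantum" score higher than words like "the" that appear throughout the collection, which is exactly the property extractive summarizers exploit when ranking sentences.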

Graph Theory Representation

Documents can be represented as an undirected graph, with sentences as nodes and edges joining similar sentences (by cosine similarity, string similarity or another metric) [5]. The information gained by this approach is twofold: Firstly, clusters of connected nodes represent topics, and can be used either to summarize a particular topic, or the document as a whole if representative sentences are chosen from each cluster. Secondly, nodes with many inter-cluster connections can be considered relevant to multiple topics, and are therefore of greater use in a document summary. In contrast to tf-idf, this approach can be applied to sentences of any length and does not require pre-processing, as the similarity scoring makes normalization redundant. The popular summarization algorithm TextRank is based on these methods [6].
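A minimal sketch of the TextRank idea follows, using a simple Jaccard word-overlap similarity in place of the metrics discussed above; the sentences, damping factor and iteration count are illustrative, not a reference implementation:

```python
# Toy sentence graph in the spirit of TextRank (hypothetical example data).
sentences = [
    "The cat sat on the mat.",
    "A cat lay quietly on the mat.",
    "Stock markets fell sharply today.",
    "The mat was where the cat slept.",
]

def tokens(s):
    return {w.strip(".,").lower() for w in s.split()}

def jaccard(a, b):
    wa, wb = tokens(a), tokens(b)
    return len(wa & wb) / len(wa | wb)

n = len(sentences)
# Weighted adjacency matrix: an edge wherever two sentences share words.
w = [[jaccard(sentences[i], sentences[j]) if i != j else 0.0
      for j in range(n)] for i in range(n)]

# Power iteration of the PageRank-style recurrence used by TextRank.
d = 0.85  # damping factor
scores = [1.0 / n] * n
for _ in range(30):
    scores = [
        (1 - d) / n + d * sum(
            w[j][i] / sum(w[j]) * scores[j]
            for j in range(n) if sum(w[j]) > 0
        )
        for i in range(n)
    ]
```

The off-topic sentence ("Stock markets...") shares no words with the others, so it forms an isolated node and ends up with the lowest score; the well-connected sentences rank highest and would be extracted first for the summary.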

Concept-based Clustering

There are various topic and concept extraction tools such as WordNet [7], HowNet [8] and Lingo [9]. Topic clustering is more immediately useful for providing keywords representative of a text, e.g. for suggesting related topics in a search context. However, it can be used to process sections of a longer text, which are analyzed independently for focus and relevance to a particular topic and used to form the basis of a summary. In this manner, concepts instead of words can then be used in a scoring algorithm to produce the final summary [3], [10]. Topic-based approaches are conceptually similar to those used in the more advanced abstractive summarization methods, discussed in the next section.

Abstractive Methods

Abstractive approaches have seen a surge in research due to the development of deep learning techniques for Natural Language Processing (NLP). Summarization of this type offers improved readability and the potential for more efficient knowledge conveyance. NLP is most heavily employed for speech recognition and video captioning where there exists a 1:1 mapping between input and output signals, and therefore the requirement for conceptualizing the text is limited. For a summarization application, the text must not only be read, but be algorithmically understood and the most significant pieces of information interpreted into a coherent output.

Structure Analysis

Most abstractive techniques condense a text corpus into a uniform format based on the structure of input sentences. Sentence Fusion [11], developed using news articles, aims to combine several sentences discussing a common theme into one. A tree structure is built describing the sentences based on the dependencies between constituent words. The figure below is the dependency tree describing the sentence “The IDF spokeswoman did not confirm this, but said the Palestinians fired an antitank missile at a bulldozer on the site.”

The trees reveal branches through the text corpus with the highest inter-sentence connectivity, and suggest summarization pathways that contain as much of the original information as possible. Output sentences are then constructed using semantic rules. The transferability of this technique to domains other than news articles depends heavily on the focus of the input text, as effective summarization requires a strong common focus on the overarching theme.
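To make the tree structure concrete, the fragment below hand-builds a simplified, hypothetical dependency subtree for part of the quoted sentence. A real system would obtain the tree from a dependency parser, and surface word order is ignored here:

```python
class DepNode:
    """One word in a dependency tree, linked to its dependents."""
    def __init__(self, word, children=()):
        self.word = word
        self.children = list(children)

    def phrase(self):
        # Flatten the subtree back into its words (word order simplified).
        words = [self.word]
        for child in self.children:
            words.extend(child.phrase())
        return words

# Hand-built (hypothetical) fragment for "the Palestinians fired an
# antitank missile": the verb governs its subject and object subtrees.
tree = DepNode("fired", [
    DepNode("Palestinians", [DepNode("the")]),
    DepNode("missile", [DepNode("an"), DepNode("antitank")]),
])
```

Each subtree corresponds to a phrase, which is what lets Sentence Fusion align and splice material between trees built from different sentences on the same theme.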

Data Structure Translation

The identification of topic-related information is common to all summarization methods, and usually relies on techniques similar to those discussed previously (linguistic structure, common words, scoring based on position or markup qualities). The synthesis of new sentences depends largely on the choice of data structure for storing them, and several inventive solutions distinct from dependency trees have been proposed. GISTEXTER [12] builds a database of sentences by topic with a focus on distinctness, entries from which are then added to a summary. The lead-and-body-phrase method analyses the head and body of sentences separately, inserting or substituting phrases based on the volume of new information each successive sentence contributes to an existing stem. Opinosis [13] is a graph-based summarizer focused on identifying redundancy in the text corpus, similar to the extractive method discussed previously. It is regarded as a “shallow” abstractive method because it relies heavily on the input text’s similarity scores to identify both the phrases that contribute most to the main topics and the places where redundancy in the output can be minimised. For this reason the output is often close to the phrasing of the input, but with semantic enhancements for readability and interpretability.
