Aiimi Insight Engine

Websites


Introduction

Connect your Website source to Aiimi Insight Engine to make the most of your data. Once you have selected a Source System type, more detail will expand so you can customise it.

General

  1. Enter the URL of the website to crawl into Start Website URLs.

  2. Choose the credentials to use. These contain both the Username and Password used to connect to the website.

    • For support setting up credentials, see our guide on managing credentials.

  3. Enter any other page URLs that need indexing into Crawl pages with these URLs.

    • Any URLs you don't want to crawl should be added to Do not crawl pages with these URLs.

  4. Choose how many levels of links the crawl will follow within Distance from start URLs to crawl.

    • 0 will stay within the page it is on.

    • 1 will follow the links on the start page.

    • 2 will then follow the links on those pages.

    • -1 will follow links without limit.

  5. To exclude URLs from indexing while still crawling them, enter the URLs within Exclude these URLs from being indexed.

  6. To ensure only web pages are crawled, enter the allowed extensions in Valid web page extensions. This stops documents from being scanned as web pages.

  7. Enter the User Agent; if left blank, the default will be used.

  8. Enable Dynamic Expansion to expand placeholders in the start URLs.

  9. Check Enable regular expressions for inclusions and exclusions to treat the inclusion and exclusion URLs as RegEx patterns.
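
The steps above can be sketched as a depth-limited, breadth-first crawl with include/exclude URL rules. This is a minimal Python illustration, not the engine's implementation: the `get_links` callback stands in for the engine's page parser, and the product's actual precedence between inclusion and exclusion rules may differ.

```python
import re
from collections import deque

def should_crawl(url, include, exclude, use_regex=False):
    """Apply include/exclude URL rules, as substrings or as RegEx patterns."""
    match = (lambda p: re.search(p, url) is not None) if use_regex \
        else (lambda p: p in url)
    if any(match(p) for p in exclude):
        return False  # "Do not crawl pages with these URLs"
    return any(match(p) for p in include) if include else True

def crawl(start_urls, get_links, max_distance, include=(), exclude=()):
    """Breadth-first crawl; max_distance=-1 means unlimited depth."""
    seen = set(start_urls)
    queue = deque((url, 0) for url in start_urls)
    visited = []
    while queue:
        url, distance = queue.popleft()
        visited.append(url)
        # 0 stays on the start pages, 1 follows their links, and so on;
        # -1 disables the depth check entirely.
        if max_distance != -1 and distance >= max_distance:
            continue
        for link in get_links(url):
            if link not in seen and should_crawl(link, include, exclude):
                seen.add(link)
                queue.append((link, distance + 1))
    return visited
```

With a start page linking to a second page, which links to a third, a distance of 0 visits only the start page, 1 adds its direct links, and -1 keeps following links until none remain.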

Crawl Details

  1. Set a delay between page loads within Page crawl delay in milliseconds.

    • This lets you control the crawl rate, which helps prevent placing a denial-of-service-like load on the target site.

  2. Limit the number of pages crawled within Maximum pages to crawl.

    • Leaving this as -1 means there is no limit.

  3. Set the Page timeout in seconds. This will determine how long to wait before timing out a loading page.
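
Together, the three Crawl Details settings can be sketched as a simple fetch loop. The `fetch` callback below is a hypothetical stand-in for the engine's page loader; -1 disables the page cap, matching the description above.

```python
import time

def crawl_pages(urls, fetch, delay_ms=500, max_pages=-1, timeout_s=30):
    """Fetch pages politely: delay between loads, optional page cap."""
    fetched = []
    for i, url in enumerate(urls):
        if max_pages != -1 and i >= max_pages:
            break  # -1 means there is no limit
        if i > 0:
            time.sleep(delay_ms / 1000.0)  # throttle the crawl rate
        fetched.append(fetch(url, timeout=timeout_s))
    return fetched
```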

Created Date

  1. Determine the date you want to be used as the page's Created date.

    • Null (Left Blank)

    • Modified Date

    • Crawled Date

    • Meta Tags

    • RegEx

  2. If you have selected Meta Tags or RegEx, enter the name in Meta Tag/Regex.

    • Test a RegEx before using it. Any errors will appear in the log file.

  3. Enter the Date Format String as a .NET format string.

  4. If it cannot retrieve the specified field, you can set up a Feedback Mode.
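
The Meta Tags and RegEx options amount to pulling a raw date string from the page and parsing it with the format string. A sketch, with an illustrative tag name and pattern; note the product expects a .NET format string, whereas this Python sketch uses `strptime` directives as the nearest equivalent.

```python
import re
from datetime import datetime

def extract_page_date(html, meta_tag=None, regex=None, date_format="%Y-%m-%d"):
    """Pull a date string from a meta tag or a regex, then parse it."""
    raw = None
    if meta_tag:
        m = re.search(
            r'<meta\s+name="%s"\s+content="([^"]+)"' % re.escape(meta_tag), html)
        raw = m.group(1) if m else None
    elif regex:
        m = re.search(regex, html)
        raw = m.group(1) if m else None
    if raw is None:
        return None  # the engine would fall back to its Feedback Mode here
    return datetime.strptime(raw, date_format)
```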

Modified Date

  1. Determine the date you want to be used as the page's Modified date.

    • Null (Left Blank)

    • Modified Date

    • Crawled Date

    • Meta Tags

    • RegEx

  2. If you have selected Meta Tags or RegEx, enter the name in Meta Tag/Regex.

    • Test a RegEx before using it. Any errors will appear in the log file.

  3. Enter the Date Format String as a .NET format string.

  4. If it cannot retrieve the specified field, you can set up a Feedback Mode.

  5. Check Process Deletes to process deleted sites and pages during the crawl.

  6. Check Respect anchor titles for files to look for titles within files marked with anchors.

  7. Uncheck Respect Robots to ignore the site's robots.txt file and scan all pages.

    • If unchecked, proceed with caution; robots.txt files are created for a reason.

  8. Exclude text that appears between a site's A tags from the page content by entering a field name in the Metadata field for link inner text box.

  9. Exclude common link terms from your crawl by adding them to the Link inner text to exclude field.

    • For example: Click Here, Download, or Click the link.
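
Two of the settings above can be illustrated with short sketches: Respect Robots corresponds to the kind of check Python's standard robots.txt parser performs, and Link inner text to exclude is a simple filter over anchor text. The dictionary shape used for links is an assumption for illustration only.

```python
from urllib.robotparser import RobotFileParser

def allowed_by_robots(robots_txt, user_agent, url):
    """Return True when the robots.txt rules permit fetching the URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

def filter_link_text(links, excluded_terms):
    """Drop links whose anchor text is a generic term like 'Click Here'."""
    excluded = {term.casefold() for term in excluded_terms}
    return {text: url for text, url in links.items()
            if text.casefold() not in excluded}
```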

Meta tag mappings

  1. You can map the entity fields to meta tags from the web page.

    • Enter the full entity field in the left column. For example, entities.Websites.category.

    • Enter the meta tag name in the right column.

  2. You can map the metadata fields to meta tags from the web page.

    • Enter the full metadata field in the left column. For example, metadata.webtype.

    • Enter the meta tag name in the right column.
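
Conceptually, these mappings copy meta tag values from the crawled page onto the configured fields. A sketch, with the field names taken from the examples above:

```python
def apply_meta_tag_mappings(meta_tags, mappings):
    """Copy web-page meta tag values onto mapped entity/metadata fields.

    mappings: {target field (left column): meta tag name (right column)}
    """
    document = {}
    for field, tag_name in mappings.items():
        if tag_name in meta_tags:  # unmapped or absent tags are skipped
            document[field] = meta_tags[tag_name]
    return document
```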
