Fundamental to Aiimi Insight Engine are its enforced access control rules, which dictate who can see a piece of data or content.
Users are members of groups, which are usually synchronised from an Active Directory. Items of data and content carry a list of groups that have read access. When searching, users only see items that their group memberships give them permission to see.
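As a minimal sketch of this group-based trimming (the structures and names here are illustrative assumptions, not the product's actual API), a search result set can be thought of as an intersection check between the user's groups and each item's read-access list:

```python
def visible_items(items, user_groups):
    """Keep only the items whose read-access groups intersect the user's groups."""
    user_groups = set(user_groups)
    return [item for item in items if user_groups & set(item["read_groups"])]

# Hypothetical index entries, each carrying its read-access group list.
documents = [
    {"id": "doc-1", "read_groups": {"HR"}},
    {"id": "doc-2", "read_groups": {"Finance", "HR"}},
    {"id": "doc-3", "read_groups": {"Engineering"}},
]

# A user in the HR group sees doc-1 and doc-2, but not doc-3.
results = visible_items(documents, {"HR"})
```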
Progressive access tells a user that items exist for their search even though they cannot access them. They can then ask the owner for permission to see those items.
Privileged access gives a select set of users a controlled way to bypass permissions.
This could be for audit requirements, legal discovery, or analysis after a cyber attack.
Both progressive and privileged access are locked down to specific groups. They are not available to any user by default.
The use of progressive and privileged access is fully audited, so you can monitor and report on it.
Sources represent either different repositories or different areas of a repository. For example, we may have a source for SharePoint HR and a source for SharePoint Asset.
You can control which sources are visible for each app.
In the screenshot below we only surface our ‘Pension Documents’ in the PII and PCI apps.
It's important to note that even if a user has underlying permissions to see the pension documents, they will never see this content via the Search app.
You can see here we are unable to select the ‘Pension Documents’ source in the Search app.
Aiimi Insight Engine can automatically classify data and content. This can be a business classification, such as a type of accounts payable document (e.g. invoice, purchase order, goods receipt), or a security classification, such as public, internal, restricted, or top secret.
Using classifications, we can build additional controls such as who can see items, or where information can be sent. We can also use these classifications to help inform and automate information security policies.
Security classifications allow you to apply additional security on top of the standard access control lists. Data and content may have one or more security classifications applied, and users are granted the right to see one or more security classifications. For a user to access an item, they must have access via the standard access control list and hold all the security classifications that appear on that data or content.
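One reading of these rules, where classifications act as an extra gate on top of the group-based access control list, can be sketched as follows. This is an illustrative assumption about how the two layers combine, not the product's implementation:

```python
def can_access(user_groups, user_classifications, item_groups, item_classifications):
    """User needs an ACL match AND every security classification on the item."""
    has_acl_access = bool(set(user_groups) & set(item_groups))
    has_all_classifications = set(item_classifications) <= set(user_classifications)
    return has_acl_access and has_all_classifications
```

Under this model, a user in the right group still cannot open a "restricted" item unless they have also been granted the "restricted" classification.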
The Aiimi Insight Engine can calculate the potential risk of a piece of data or content. For more information on Risk Ratings see our enrichment guide for Setting Document Risk.
This is based on the item's PII data:
The number of people referenced
Amount of personal information
Visibility to your workforce
Frequency of use
Specific keywords that indicate risk
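The real scoring model is internal to the product; purely to illustrate the idea, a toy score might weight the factors above and bucket the result into a rating. All weights, thresholds, and names here are invented for the example:

```python
def risk_score(people_count, pii_fields, audience_size, open_count, risk_keywords):
    """Toy weighted sum over the risk factors; weights are illustrative only."""
    score = (
        2 * people_count    # number of people referenced
        + 3 * pii_fields    # amount of personal information
        + 1 * audience_size # visibility to the workforce
        + 1 * open_count    # frequency of use
        + 5 * risk_keywords # keywords that indicate risk
    )
    if score >= 50:
        return "high"
    if score >= 20:
        return "medium"
    return "low"
```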
Risk ratings are a key feature of our PII and DSAR solution. They can also restrict what people can see, in a similar way to classifications (using the security classifications mechanism).
You can redact information in your documents within a SAR or Collection, either by redacting specific categories such as PII or PCI, or by selecting the sections of content you want to redact.
Redacted items are only available on Aiimi Insight Engine; the source is not affected. For support setting up redaction, use our guide on redacting information.
The removal of PII and other sensitive keywords can be automated using the anonymisation enrichment step. For support creating an anonymiser see our guide on the enrichment anonymiser.
For example, during enrichment, if a 'high-risk' piece of content is found it can be automatically anonymised.
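To make the idea concrete, a very simplified anonymiser might substitute placeholders for recognised sensitive values. A real anonymisation step would rely on entity extraction rather than plain patterns; the patterns and labels below are assumptions for illustration only:

```python
import re

# Illustrative patterns only; not the product's detection logic.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "NINO": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),  # UK National Insurance number shape
}

def anonymise(text):
    """Replace each matched sensitive value with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```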
You can create a list of banned words. If any of these words appears in a document, the document is marked as sensitive. Sensitive documents do not appear in a user's search results, even if they match a search.
This feature is a useful safeguard against content that ends up in the wrong place. Typical banned words include terms such as P45, harassment, CV, disciplinary, and so on.
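Conceptually, the banned-words check is a whole-word match against the configured list. This sketch is an assumption about the mechanism, using the example words from above:

```python
import re

# Example banned-word list, matched case-insensitively as whole words.
BANNED_WORDS = {"p45", "harassment", "cv", "disciplinary"}

def is_sensitive(text):
    """Mark a document sensitive if any banned word appears as a whole word."""
    words = set(re.findall(r"[a-z0-9]+", text.lower()))
    return not BANNED_WORDS.isdisjoint(words)
```

Whole-word matching matters here: a document mentioning "cvs repository" should not be flagged just because it contains the letters "cv".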
Users may find results in their search that they consider sensitive. They can mark a result as sensitive so that it is temporarily hidden until it is investigated. This is a good way of crowdsourcing review and quickly removing items that need attention. Administrators can review files that have been marked as sensitive and reinstate them if they are deemed not to be a risk.
The actions of all users are audited and stored in an audit log. The log can be used to detect misuse, demonstrate the use and accuracy of the system, and support advanced recommendation algorithms. Auditing stores a user's activity: what they searched, what they opened, and whether they collaborated on someone's collection or DSAR.
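An audit record of this kind can be pictured as a timestamped event naming the user, the action, and its target. The field names and schema below are assumptions for illustration, not the product's actual log format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    """Illustrative audit record; field names are assumptions, not the real schema."""
    user: str
    action: str          # e.g. "search", "open", "collaborate"
    target: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

log = []
log.append(AuditEvent(user="jsmith", action="search", target="pension transfer"))
log.append(AuditEvent(user="jsmith", action="open", target="doc-42"))

# Reporting is then a matter of filtering events, e.g. all searches by a user.
searches = [e for e in log if e.action == "search"]
```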