From Access Governance to Access Vigilance

Governance is not enough

Having only a routine plan for access governance is no longer a security measure; it is a vulnerability. Attestation campaigns that run once or twice a year are the equivalent of scheduled inspections: they contain no ambiguity and offer absolute certainty, not to your advantage, but to your adversary's.

Running attestation campaigns yearly or half-yearly, or incorporating multiple levels of approvals during access provisioning — these approaches are no longer sufficient for an organization aiming to be a confident player in the market. Access Vigilance has become the need of the hour, especially for organizations that aspire to play a critical role in modern society.

A Plan for Access Vigilance May Include

1. Frequent Single-User Attestations Based on Logs (including authentication)

With the help of single sign-on (SSO), elevated authentication can quietly generate crucial audit logs without disrupting the user experience. Each time a sensitive or critical part of the system is accessed, a log can capture the who, when, and where. A machine learning model can then flag outlier activity based on variables like IP address, timezone, and time of access.

Rather than relying on multiple layers of approval for provisioning access, organizations can protect critical resources by monitoring elevated SSO logs and launching automated single-user attestations. Being vigilant means being safe.
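As one possible sketch of this idea (rule-based rather than a trained model, with made-up field names), flagging elevated-access log entries that come from an unseen IP address or fall outside normal working hours might look like:

```python
from datetime import datetime

def flag_outliers(logs, known_ips, work_hours=(7, 20)):
    """Flag elevated-access log entries that come from an IP the user has
    never used before, or that fall outside the user's working hours.
    Flagged entries become candidates for an automated single-user
    attestation."""
    flagged = []
    for entry in logs:
        hour = datetime.fromisoformat(entry["time"]).hour
        unseen_ip = entry["ip"] not in known_ips.get(entry["user"], set())
        off_hours = not (work_hours[0] <= hour < work_hours[1])
        if unseen_ip or off_hours:
            flagged.append(entry)
    return flagged

# Hypothetical log data for illustration
known = {"alice": {"10.0.0.5"}}
logs = [
    {"user": "alice", "ip": "10.0.0.5", "time": "2024-05-01T10:15:00"},
    {"user": "alice", "ip": "203.0.113.9", "time": "2024-05-01T03:40:00"},
]
print(flag_outliers(logs, known))  # only the second entry is flagged
```

A production setup would replace the fixed rules with a model scored on IP, timezone, and access-time features, but the pipeline shape (logs in, flagged entries out, attestations launched on the flags) stays the same.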

2. Frequent Self-Attestations on User Information

Attestation campaigns will consistently fail if identity information is not maintained and kept up to date in your governance tool. To ensure accuracy, frequent (preferably monthly) self-attestations should be enforced for key user attributes such as:

  • Manager correlation
  • Ownership of resources
  • Ownership of non-personal IDs
  • Department
  • Hierarchical position within the company

3. Random Short Attestation Campaigns

Organizations should execute multiple short attestation campaigns at random intervals involving a random selection of users and accesses. These campaigns must not be treated as mere functional checks. Their outcomes should be analyzed deeply to identify trends and minimize future rejections.

Each rejection by a reviewer must require a mandatory justification, for example: "Since when has this access been redundant?" Moreover, when a manager or resource owner changes, a fresh attestation campaign must be launched immediately, not delayed until the fiscal year-end, when it might be too late.
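The random-selection step can be sketched as follows (a simple illustration; the fraction parameters and data names are arbitrary):

```python
import random

def sample_campaign(users, accesses, user_frac=0.1, access_frac=0.2, seed=None):
    """Pick a random subset of users, and for each picked user a random
    subset of their accesses, as the scope of a short, unannounced
    attestation campaign."""
    rng = random.Random(seed)
    picked_users = rng.sample(users, max(1, int(len(users) * user_frac)))
    return {
        u: rng.sample(accesses[u], max(1, int(len(accesses[u]) * access_frac)))
        for u in picked_users
    }

# Hypothetical data for illustration
users = ["u1", "u2", "u3", "u4", "u5"]
accesses = {u: [f"{u}-role-{i}" for i in range(10)] for u in users}
print(sample_campaign(users, accesses, seed=42))
```

Leaving `seed` unset keeps the selection unpredictable, which is the point: neither reviewers nor adversaries should be able to anticipate who or what gets reviewed next.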

Final Thought

Be proactive while maintaining ambiguity: keep adversaries guessing.
Organizations with a vital role in public infrastructure, such as banking, healthcare, telecom, and e-commerce, must therefore adopt Access Vigilance, not just Access Governance. In today's volatile market, be vigilant to be confident, because cyber threats are real.

How to Fail an Attestation Campaign

And What You Can Do to Avoid It

Running an attestation campaign sounds straightforward—until it isn’t. Despite the tools and planning in place, many organizations still encounter avoidable failures. Here’s why.

Common Challenges in Attestation Campaigns:

  • Mismatch in entitlement names and descriptions
  • Outdated manager or owner details in the HR system
  • Identity Manager not syncing all records from target systems
  • Stale or inaccurate access profiles
  • Orphaned accounts—no correlation to HR or IGA records
  • Reviewers not receiving campaign-related emails

Root Causes:

– Lack of consistency in maintaining user data and system availability

– Perception of attestation as a one-time yearly event, rather than a continuous security practice

Let’s Analyze Further:

Data Management & Connectivity

Keeping user and access profile information accurate and up-to-date is not a one-time task—it’s a daily effort. No single system can provide all the data required to run an effective attestation campaign. Multiple systems and databases must work in unison.

Think of data synchronization as a spinning wheel: every time it stops, getting it going again takes as much effort as starting it did. To avoid that, maintaining data integrity and system availability should be top priorities.

Types of Information Required for an Effective Campaign

  1. User Details:

Name, display name, department, position, etc.
Sourced from HR systems or Identity Manager

  2. Unique Identifiers:

Employee ID, samAccountName, User ID, Email ID, etc.
Used for precise identity matching

  3. Manager Mapping:

Regular self-attestations should be conducted to keep manager relationships current.
Results should be sent to the HR system team for updates.

  4. Resource/Application Ownership:

A directory should exist that lists both technical and business owners for each application.
Periodic self-attestation helps keep this list updated.
Identity Managers can automate this process.

  5. User Access Profiles:

These are specific to each system or application.
It's critical to confirm that every account in a target system matches a real user.
Access mapping is only valid in real time; any syncing gap increases the risk of failure.

  6. Non-personal ID Ownership:

Ownership mapping must also stay current.
Identity Manager tools can greatly assist here.

  7. Critical Access Accounts:

Accounts with elevated privileges should be flagged and regularly reviewed.

Today’s IGA tools can handle most of these tasks, but onboarding every application into the Identity Manager is a long game. Until then, application teams must take ownership of maintaining data quality and mapping integrity.

Stop Treating Attestation as a Yearly Event

One big audit per year is not enough. Efforts must be made to ensure the data synchronization wheel never stops. This can be ensured by:
– Launching short attestation campaigns at random intervals throughout the year, with no fixed date and on any subset of users. Ambiguity about the date and the subset of access must be maintained to gauge preparedness.
– Using a centralized database with standardized access mappings for rapid onboarding and as a temporary workaround. While this increases short-term monitoring effort and maintenance cost, it significantly improves long-term efficiency and preparedness.

Final Thought

A proactive governance approach—not reactive patchwork—makes organizations more secure, resilient, and confident in today’s volatile business environment.

Guide to Bulk Application On-Boarding in SailPoint IdentityIQ (Part-III)

Comparison among three approaches

As we have learnt quite a lot about each of these approaches, we will now see how they compare on factors such as source configuration, schema generation, account and group management, aggregation, and provisioning.

Source Configuration
– Multiplexing: All the Multiplexed applications share a single physical source feed.
– Application Builder Task: Each application definition created is independent, so each can have a different source feed, but of the same type (e.g., JDBC, AD).
– Logiplexing: All the applications share the same physical source feed.

Schema Generation
– Multiplexing: By default, the schema is the same for every application, but it can be modified per application using the Proxy Generator Rule.
– Application Builder Task: The schema can be modified here as well. Application types with a fixed schema do not require the schema column to be mentioned.
– Logiplexing: Same as the Multiplexing approach.

Accounts & Groups
– Multiplexing: One account in the physical source can be assigned to only one Multiplexed application.
– Application Builder Task: The sources are considered different, so each IIQ account also requires a separate physical account.
– Logiplexing: One physical account can be attributed to multiple sub-applications.

Aggregation
– Multiplexing: Aggregating only the Multiplexing application is enough.
– Application Builder Task: Each application generated by the task requires its own aggregation task. These tasks can also be generated, along with the applications, by the same task.
– Logiplexing: Aggregating only the Master application is enough if it is configured in adapter mode. If the Logiplex connector is configured in classic mode, aggregation must run for both the Master and the Main application.

Provisioning
– Multiplexing: The Multiplexing application is used as the proxy. Each Multiplexed application gets its provisioning policy form copied, by default, from the Multiplexing application; to add a separate provisioning form, the Proxy Generator Rule should be utilized.
– Application Builder Task: Each generated application is independent, and its own provisioning rules and provisioning policy forms take care of its provisioning.
– Logiplexing: The behavior is exactly the same as the Multiplexing approach.

Roadblocks for Implementation
– Multiplexing: Writing the Customization Rule in Beanshell script.
– Application Builder Task: Managing schema and provisioning form XMLs in the CSV file.
– Logiplexing: Customization (Split) Rule development; in adapter mode, converting the Master application into the Logiplex connector definition through XML customization.

Example of Suitable Scenarios

Multiplexing Application Onboarding:
– Implementation of persona: the same organization has different domains in an Active Directory forest, and each of these domains is managed separately. To have access to the groups in one domain, an account must be present in that same domain.
– Data of multiple disconnected applications is being dumped into a single database, whether with different or the same schemas.
– Different applications with different endpoints but similar schemas.

Logiplex Connector:
– When a group in one application controls access in another, disconnected application, similar to the scenario for which the Logical application was used.

So, here we end. I am really glad that you reached the end and spent significant time reading the whole guide. I hope that time has not gone to waste, and if you feel the same, please give it a like. If you would like to share your thoughts, please go ahead. Here is my email ID where you can send me your messages – soudips.93@gmail.com .

Guide to Bulk Application On-Boarding in SailPoint IdentityIQ (Part-II)

Application Onboarding Task

SailPoint finally introduced a CSV-file-based rapid application onboarding task template, called the Application Builder task, with IIQ 7.3. In this approach, no additional Beanshell scripting is required. Instead, a CSV file is filled in with all the required configuration parameters.

Configuration

Let us see below the involved configuration items –

Task Template

A Task Template with the name Application Builder will be available out of the box from which new tasks can be created. Below mentioned operations can be performed using this task –

  • Create
  • Update
  • Read

Apart from creating multiple applications, this task is also useful for updating applications in bulk, for example during a server migration. The Read operation reads the attribute map of the existing application(s) and exports the data into a CSV, which can be used as a model, with updated contents, for the Create and Update operations.

This task provides additional options, such as –

  1. Preparing a CSV file as a reference template and saving it on the physical server
  2. Performing test connection for the applications created
  3. Creating aggregation task for the created applications and running them afterward as separate threads

Rule

Even though the Application Builder task is not of the Run Rule type, it refers to a rule available in the object editor, named Application Builder. This rule holds the core logic as a Beanshell script; based on project requirements, the behavior of the task can be modified.

Considerations

  1. At least one application of the desired type must be created to generate the template CSV file for that type
  2. To create applications with a different schema and provisioning policy form, a separate column needs to be included in the exported CSV file; otherwise, the p1 patch needs to be applied on IIQ 7.3

Logiplex Connector

The Logiplex connector is an upgraded form of the Logical connector, with a flavor of Multiplexing. The Logical connector has been available since the beginning, but it lacked the capability of automated generation, so each logical application definition had to be configured manually. Now, with Logiplex, made available through SSD 6.0, application definitions behaving like Logical applications can be created automatically. The onboarding process is very similar to the Multiplex application, hence the name.

Differences it has with Logical application are –

  1. Derived sub-accounts can be based upon only one tier application
  2. Application creation along with aggregation will be automated
  3. Logiplex sub-applications will also hold provisioning policy Form. Like Multiplexing, that Policy Form can be varied, if required, using another rule option called proxyGeneratorRule

Differences it has with Multiplexing are –

  1. Multiple ResourceObjects can be generated and returned from a single entry in source. That leads to multiple derived accounts, spread over multiple sub-applications, from one single account in the physical application

Configuration

Let us see below the items involved in the implementation of the Logiplex connector setup –

Master Application

Here, the tier application of the Logical application, i.e. the actual single source feed, is termed the Master application. Before setting up the Logiplex connector application definition, the Master application must be in a functioning condition.

Main Application

This is the application definition we will create; it will have the connector type Logiplex. The Master application name must be mentioned as part of the configuration. The Main application uses the connector information of the Master application to aggregate accounts from the physical source and performs all of the Logiplexing. As a configuration step, the provisioning forms need to be added manually through the XML editor by copying them from the Master application.

Sub-Applications

These are the applications that perform the logical grouping of the entitlements or accounts; they are derived by the Main application's aggregation task. A sub-application has the Main application defined as its proxy. These applications also copy the provisioning form and schema from the Main application. If required, sub-applications can be modified using the proxyGeneratorRule.

Logiplex Split Rule

This rule serves a function similar to the Customization Rule in Multiplexing. Instead of adding entries to the ResourceObject, here a HashMap is generated with sub-application names as its keys and the respective ResourceObjects as their values. The single ResourceObject prepared from the data pulled from the source feed is cloned and tweaked, as per requirement, for each of the sub-applications.

Input Arguments:
  • The ResourceObject and the name of the Main application
  • The HashMap to return
  • Logiplex Util – a utility class that comes along with the Logiplex connector

Output:
  • A HashMap of sub-application names and their respective ResourceObjects
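In IIQ the Split Rule is written in Beanshell; the clone-and-split idea can be sketched in Python, with plain dicts standing in for ResourceObjects and a made-up entitlement-to-sub-application mapping:

```python
import copy

def split_rule(resource_object, sub_app_of_entitlement):
    """Clone the single ResourceObject pulled from the source feed into one
    object per sub-application, keyed by sub-application name (a stand-in
    for the HashMap a real Logiplex Split Rule returns)."""
    result = {}
    for entitlement in resource_object["groups"]:
        sub_app = sub_app_of_entitlement[entitlement]
        # Create an empty-groups clone for this sub-application on first use
        clone = result.setdefault(
            sub_app, copy.deepcopy({**resource_object, "groups": []})
        )
        clone["groups"].append(entitlement)
    return result

# Hypothetical source record and mapping for illustration
ro = {"nativeIdentity": "jdoe", "groups": ["FIN_READ", "HR_READ"]}
mapping = {"FIN_READ": "Finance-App", "HR_READ": "HR-App"}
print(split_rule(ro, mapping))
```

One physical account thus produces one derived account per sub-application, which is exactly the behavior that distinguishes Logiplexing from Multiplexing.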

Modes of Logiplexing

Logiplexing can be implemented in two modes –

Classic Mode

The three-tier setup described above (Master application → Main application → Sub-applications) is the classic mode.

Adapter Mode

In this mode, we can do away with the Main application and make the Master application itself behave as the Main application as well. To implement this, a few manual changes to the XML structure are required. For critical applications, modifying the Master application into a Logiplex connector should be performed with utmost care. But if done correctly, adapter mode increases efficiency, especially for applications with a huge number of user accounts and groups.

Considerations before Implementation

The Logiplex connector does not come with the identityiq.war file; instead, it is shipped with the SSD package –

  1. In most companies, the IIQ project is managed through SSD. If the identityiq.war file is generated through SSD, the Logiplex connector is included in it automatically
  2. If SSD is not used, each of the files for the Logiplex connector, such as the XHTMLs, the class files, and the Connector Registry XML, must be collected from the SSD package and deployed into the running identityiq folder structure manually. The server must be restarted after deployment

In the next part we will compare all these three native approaches for rapid application on-boarding. To read please click here.

Guide to Bulk Application On-Boarding in SailPoint IdentityIQ (Part-I)

Application Onboarding in IAM

One of the main pillars of the identity and access management business is the application onboarding task. With automation being the trend, onboarding applications in bulk becomes a competitive edge in the market. Whether it is an implementation project or a managed-services project, a company with a large IT infrastructure always needs numerous applications onboarded, and these can run into the hundreds. The point of setting up an identity and access management system is to have a central account of all the access that members of the organization, including employees, contractors, vendors, customers, and others, have to its digital resources. Along with improving controls and audits, IAM tools also offer automation, by automating approvals and access provisioning, as an integral feature. Yet for a lot of organizations that remains an add-on.

Speed Matters

So the speed of onboarding the physical applications' data into the centralized IAM platform does become an important factor in decision making for business owners, because access control and management for applications that are yet to be onboarded will need a separate manual process until the integration completes. It creates an impact in every aspect, from choosing the product to choosing the vendors for implementing the platform or managing it through the following years. In this roller-coaster of a software industry, different technologies sprout every other day, making the new a legacy, or sometimes even irrelevant. So, in an ever-evolving infrastructure, an IAM platform with the right framework ready will help achieve the speed necessary to justify its purpose, and being able to generate that speed will hold an edge in winning the market.

Approaches in SailPoint IdentityIQ

One of the market-leading products for IAM, SailPoint IdentityIQ supports different solution approaches to application onboarding for different purposes. In this document we will learn about those approaches and their usages.

SailPoint IdentityIQ, which we will be referring to as IIQ throughout this document and the next ones, comes with 60+ connector types out of the box and three approaches for bulk application onboarding.

Multiplexing

Multiplexing is the oldest approach available for rapid application creation in IIQ. Instead of including a separate module, just an additional operation has been included in the aggregation engine of IIQ. IIQ provides many windows, in the form of Rules, for adding your own logic to customize the user data while performing data aggregation from the physical source. Two such rule types are the Build Map Rule and the Customization Rule. Either of these can be utilized to create the Multiplexed applications, as well as to assign the respective accounts to each of them, on the fly.

Architecture diagram

Configuration

Let us see below the involved IIQ API objects that facilitate this type of rapid application onboarding, known as Multiplexing.

Multiplexing Application

This is the base application definition, known as Multiplexing Application, which will be connected to the single physical source.

Multiplexed Application

Multiplexed applications are the application definitions representing the individual resource containers contained in a single source feed, which is represented in IIQ by the Multiplexing application definition. These applications have no aggregation or provisioning capability of their own. Instead, at the time of the Multiplexing application's aggregation, these applications are created (if not already present) and aggregated simultaneously. The Multiplexing application is used as a proxy by all the Multiplexed applications.

ResourceObject

After reading a data object from the source, IIQ converts it into a Java Map object. That Map is then transformed into an object type called ResourceObject before finally taking the shape of a schema object, such as an account or a group. The ResourceObject is a transient object, as it never gets saved into the database, but it holds all the user or group information pulled from the physical application.

Customization Rule

IIQ provides a window inside the aggregation task, in the form of the Customization or Build Map Rule, to include our own logic for customizing the user data before it is saved in the database. The Build Map Rule is available only for the JDBC connector type, whereas the Customization Rule is available for every type of OOTB IIQ connector, including JDBC.

In this Customization Rule, two more attributes are required to be included. As the ResourceObject has the shape of a Map, these attributes are included with the put method in key-value format:

  • IIQSourceApplication – the Multiplexing application name
  • IIQMultiplexIdentity – (optional) the native identity name, used for correlation

These attributes behave like flags. Once the aggregation engine finds them in the ResourceObject, it automatically creates the corresponding Multiplexed application, if not present, and adds this ResourceObject there as one of its schema objects.
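In IIQ this step is a Beanshell rule calling put on the ResourceObject; a minimal Python sketch of the same flagging logic (a dict stands in for the ResourceObject, and the app_name and user_id source columns are hypothetical):

```python
def tag_for_multiplexing(resource_object, container_attr="app_name"):
    """Simulate a Multiplexing Customization Rule: put the two flag
    attributes on the (Map-shaped) ResourceObject so the aggregation
    engine routes it to the right Multiplexed application."""
    resource_object["IIQSourceApplication"] = resource_object[container_attr]
    # Optional: native identity name used for account correlation
    resource_object["IIQMultiplexIdentity"] = resource_object["user_id"]
    return resource_object

# Hypothetical row read from the physical source feed
ro = {"user_id": "jdoe", "app_name": "Payroll-EU", "role": "viewer"}
print(tag_for_multiplexing(ro))
```

The key point is that the target Multiplexed application name is computed per record, from a column in the feed, rather than configured statically.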

Proxy Generator Rule

By default, the Multiplexed applications (and, likewise, Logiplex sub-applications) are generated by copying most of the features from the main application. But if any feature of an application, such as the schema or the provisioning policy, is supposed to be different, IIQ gives another window to overwrite the application-creation logic. There is no option to add this rule in the UI, so an additional line is required to be inserted into the application definition XML through the IIQ XML editor.

<entry key="proxyGeneratorRule" value="generateMultiplexApplicationsRule"/>

Two additional input arguments will be available in case of the Logiplex sub-applications.

  • sourceApplication (sailpoint.object.Application) – the main Logiplex (or Multiplex) application object
  • generatedAppName (java.lang.String) – the name for the new sub-application

Considerations

  1. One account in the physical application cannot be assigned to multiple Multiplexed applications, unlike with the Logical application. That means a user must have as many accounts in the physical application as the user has Multiplexed application accounts
  2. For the JDBC type data source, correlating attribute for each account must be unique

In the next part, we will discuss two more application onboarding approaches. To read the next part, click here.

Authentication Types in SailPoint IIQ

Inside the IT infrastructure of a company, IIQ, or any other IAM tool, plays a vital role and is considered one of the most critical applications. It holds massive user data along with the power to manage user access, so authentication to IIQ is recommended to be taken very seriously. IIQ supports different approaches for single sign-on and multi-factor authentication. We will discuss those approaches below, along with the steps for configuring them.

Types

  1. Native (basic) authentication
  2. Pass through authentication
  3. Native authentication with Multifactor (MFA)
  4. SSO with web agent
  5. SSO with SAML

Native (Basic) Authentication

In this authentication mode, the user's password is set by the IIQ System Admin, or the user can create their own password for logging into IIQ through Forgot Password.

SSO is not available in this method, and the password is saved in the IIQ spt_identity table.

Pass Through Authentication

In this mode of authentication, IIQ uses the account database of one of its managed target applications to authenticate users. That application is known as the pass-through application in IIQ terms. Users need to have an account in the selected target application, and that application needs to be configured in IIQ first.

SSO is not available in this method, and the password is not saved in the IIQ database.

Configuration

Global Settings → Login Configuration → Login Settings → Select application for Pass Through Authentication → Select required check-boxes → Save

Native authentication with Multifactor (MFA)

A separate MFA workflow module needs to be imported to enable MFA for IIQ login. MFA can work along with pass-through or basic authentication. In this mode as well, SSO is not available.

SSO with Web-Agent

In this mode of authentication, an HttpServletRequest will reach IIQ carrying authentication credentials. Rules need to be developed and set in IIQ to process the HttpServletRequest and perform the credential matching. After authentication, another rule is used to check for authorization.

Configuration

Rules

  1. SSOAuthentication – performs credential matching for authentication by processing the received HttpServletRequest object
  2. SSOValidation – checks for authorization
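A real SSOAuthentication rule receives the live HttpServletRequest; the matching logic can be roughly sketched as follows (in Python for brevity, with a plain dict standing in for the request headers, and SM_USER as a hypothetical agent-injected header name):

```python
def sso_authenticate(headers, identity_store, header="SM_USER"):
    """Sketch of an SSOAuthentication-style rule: read the user id that the
    web agent injected as a request header and match it to a known identity.
    Returns the matched identity name, or None (fall back to the login page)."""
    user = headers.get(header)
    if user and user.lower() in identity_store:
        return identity_store[user.lower()]
    return None

# Hypothetical identity store: agent user id -> IIQ identity name
store = {"jdoe": "John.Doe"}
print(sso_authenticate({"SM_USER": "JDOE"}, store))  # John.Doe
print(sso_authenticate({}, store))                   # None
```

The separate SSOValidation step would then run authorization checks (e.g. whether the matched identity may access IIQ at all) before the session is established.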

SSO with SAML

To set up SSO for IIQ login with SAML, the IdP and SP information needs to be configured, and the IdP certificate needs to be set as well.

Configuration

The IdP and SP configurations are performed through the UI. A correlation rule must also be created and set, which helps correlate the assertion attributes with an identity or its accounts.

Conclusion

In many organizations single sign-on is preferred, and if an access management tool already exists in the organization, it can be leveraged for implementing SSO. IIQ supports both web-agent-based and SAML-based SSO, but SAML is the more preferred approach for its ease of implementation and its security standards. If no access management tool can be leveraged, Active Directory can be utilized as a pass-through application, but in that case SSO is not available. Native authentication is normally the least preferred approach. Native authentication with MFA is recommended over plain native authentication for its higher level of security.

Types of Provisioning Supported by SailPoint IdentityIQ and How to Enable Them

The complexity of the IT infrastructure fairly impacts the implementation of identity and access management for any organization. In this digital age, managing access controls has become a challenge, to the extent that an IT infrastructure is no longer a single planet. SailPoint IdentityIQ, which I will refer to in this article as IIQ, comes with different options and tools that make it a flexible identity and access management solution. IIQ can perform direct as well as ticket-based provisioning of access to target applications. The most critical applications can stay connected to the centralized identity management tool in your organization, which helps with auditing and easy management of user certifications, without giving the tool the privilege to alter data inside them, which keeps those critical applications safer. IIQ can also manage applications from different domains, even from the cloud. We will go through all the types of provisioning supported by IIQ and how to implement them.

Direct Provisioning

Implementing direct provisioning is easy, and in most of the OOTB connectors it is pre-configured. Accounts or privileged entitlements will be provisioned in the target application by IIQ itself.

Configuration

FeatureString

In the application configuration XML, under the featureString tag, the PROVISIONING keyword is required to be included.
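As a hypothetical illustration (the application name and connector type are made up, and the exact attribute name, shown here as featuresString, should be verified against your IIQ version's exported XML), the keyword sits in the application definition roughly like this:

```xml
<!-- Illustrative only: "HR-Database" is a made-up application name -->
<Application name="HR-Database" type="JDBC"
             featuresString="PROVISIONING, ENABLE">
</Application>
```

Removing the keyword from this attribute is what later switches the application over to indirect (work item based) provisioning.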

Provisioning Policy Form

A form that holds the logic for all the account attribute values that will be provisioned. This is just like a normal IIQ form. The form can be hidden and all the values generated automatically, or, using the Review Required feature, it can be presented to help-desk personnel for a review before provisioning. Help-desk personnel can fill in any missing data as well.

Indirect Provisioning

Ticketing Based Provisioning

IIQ can integrate with various ticketing tools prevalent in the market. A few of the supported ticketing systems are mentioned below –

  1. ServiceNow
  2. HP Service Manager
  3. BMC Remedy Service Desk

Flow Diagram

Configuration Steps

  1. Import the below-mentioned XMLs, available with the IIQ package, into ServiceNow as Update Sets:
    • IdentityIQServiceNowServiceIntegrationModule.v2.1.3.xml
    • SailPointServiceRequestGenerator.v1.1.xml

The above XMLs are available at this location: identityiq-releaseVersion.zip\integration\servicenow\iiqIntegration-ServiceNow.zip\ServiceIntegrationModuleUpdateSet

  • Create the following ACL in Global scope (in ServiceNow) to view the application logs:
    Name: App Log Entry [syslog_app_scope]
    Type: Record
    Operation: Read
    Active: True
    Required Role: x_sap_iiq_sim.admin
  • The ServiceNow administrator must have the x_sap_iiq_sim.admin role
  • Update the IntegrationConfig XML, which is ServiceNowServiceIntegrationModule.xml, with the following:
    • Username
    • Password
    • Endpoint URL of ServiceNow
    • Application names in IIQ which will be part of this integration

Location of the XML file is $TOMCAT_HOME$/webapps/identityiq/WEB-INF/config.

  • Import the updated ServiceNowServiceIntegrationModule.xml into IIQ
  • To test, generate an access request on the target application mentioned in ServiceNowServiceIntegrationModule.xml and check whether a Request Item (RITM) record has been created in ServiceNow

Note: The configuration steps provided here implement the integration with only very basic requirements. SailPoint_Integration_Guide.pdf has the detailed steps and all the information required to customize the ServiceNow catalog items and variables.

Workitem Based Provisioning

This provisioning is very similar to ticket-based provisioning. Here, instead of a ticket, a work item is created in IIQ and routed to the correct team, so that the people on that team can refer to its data while manually provisioning the user access.

Configuration Steps

  1. Application configuration:
    • In the application configuration, remove the PROVISIONING keyword from the featureString tag
  2. Work item owner:
    • By default, the work item will be sent to the system administrator account, but the owner can be changed based on the client requirement
    • To add logic for the work item owner, clone and update this rule: Build Manual Action Approvals
    • In the Identity Request Provision workflow, under the step called Manual Actions, there is an approval configuration; its approval owner rule name needs to be updated with the newly created rule's name

These are the types of provisioning supported by IIQ. In the next article, we will discuss the different gateway tools shipped with IIQ that can be leveraged to overcome the challenge of managing digital identities in a distributed infrastructure.
