
Friday, October 27, 2023

SailPoint reverse tokenization challenge

SailPoint's original XML Exporter was released with the Standard Services Build (SSB) as Java source, so that users could customize it if needed.  I ran into several issues with the original code and published fixes for them on Compass.


That was in 2018.  After that I tackled reverse tokenization: the XML Exporter used a simple text replace, while the IIQDA used an XPath method.  I incorporated the XPath reverse tokenization into the XML Exporter Java source and deployed it to several clients.

SailPoint has since taken those concepts and built some of those features into its XML Exporter plugin.  At the same time, I developed my own plugin from my original code and have continued to expand it.

But at a particular client I realized that there are times when a simple-replace reverse tokenization is still needed.  It is needed in two places: 1) when Java code is tokenized, which XPath cannot reach, a simple substitution is required; and 2) in IT roles, inside the Profiles, there is no way to adequately describe every XML element's XPath in order to tokenize the entitlements inside those profiles.  This matters for roles that reference an LDAP domain, where you want the LDAP domain tokenized.  Hence my first challenge.

To accomplish this, I reactivated the simple reverse tokenization from the original code, which I had previously just coded around, and added a second file called the simple reverse tokenization file.  Reading in that file causes a simple replace operation to be applied to everything that is exported.

One challenge is that the original code expected the tokens to be described like this:

%%TOKEN%%=Pattern

This is backwards and prevents multiple patterns from reverse tokenizing to the same value, so I added the ability to specify the tokens in the correct order, like this:

Pattern1=%%TOKEN%%
Pattern2=%%TOKEN%%

This allows both patterns to create the same token, for example:

dc=example,dc=com=%%AD_DOMAIN%%
dc=test,dc=local=%%AD_DOMAIN%%

To solve this I wrote the following code:

/**
 * Comb through to see if there is a match
 */
private String combAllCasePatterns(String word, String token, String replaceIn) {
  log.debug("XML-400 Trying "+word+" on "+replaceIn);
  String replaceOut=replaceIn;
  word = word.toLowerCase();
  long combinations = 1L << word.length();
  for (long i = 0L; i < combinations; i++) {
    char[] result = word.toCharArray();
    for (int j = 0; j < word.length(); j++) {
      if (((i >> j) & 1) == 1 ) {
        result[j] = Character.toUpperCase(word.charAt(j));
      }
    }
    log.debug("XML-400 Trying combination "+i+" of "+combinations+" :"+new String(result));
    replaceOut=replaceIn.replace(new String(result), token);
    if(!replaceOut.equals(replaceIn)) return replaceOut;
  }
  return replaceOut;
}

Credit to "Finding all upper/lower case combinations of a word" on Code Review Stack Exchange for the start of the comb method.  That code actually wasn't 100% correct, but I got it to work.

But then here is the real challenge: what if the data looks like this:

<String>CN=Employee,OU=Example Users,DC=example,DC=com</String>

When you are doing an xml.replace("dc=example,dc=com","%%AD_DOMAIN%%") there is no way to do a case insensitive replace, unless you want to translate the search string to regex.  
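For reference, the regex route would look roughly like the sketch below (I did not go that way, but Pattern.quote plus the CASE_INSENSITIVE flag is how you would do it):

// Sketch of the regex alternative - requires java.util.regex.Pattern and java.util.regex.Matcher
String xml = "<String>CN=Employee,OU=Example Users,DC=example,DC=com</String>";
Pattern p = Pattern.compile(Pattern.quote("dc=example,dc=com"), Pattern.CASE_INSENSITIVE);
// quoteReplacement guards against $ or \ in the replacement being treated specially
String tokenized = p.matcher(xml).replaceAll(Matcher.quoteReplacement("%%AD_DOMAIN%%"));
// tokenized is now <String>CN=Employee,OU=Example Users,%%AD_DOMAIN%%</String>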

Sticking with a plain String.replace, in order to tokenize any capitalization of the key you literally have to try every combination of upper- and lower-case letters.

Do you see the issue here?  The longer the search string, the longer the computation - a 20-character value would take over a million iterations.  There is also another complication: there are often non-alphabetic characters in the search string.  In the example above, which has a 17-character string, only 14 of the characters are alphabetic.  If you strip out those 3 non-alphabetic characters before combing, you reduce the iteration count from 131,072 to 16,384.  Here is my logic to accomplish that:

  /**
   * Comb through to see if there is a match
   */
  private String combAllCasePatterns(String wordIn, String token, String replaceIn) {
    log.debug("XML-400 Trying "+wordIn+" on "+replaceIn);
    String replaceOut=replaceIn;
    String word=wordIn;
    int wordlen=word.length();
    log.debug("XML-401 word length is "+wordlen);
    byte[] wordchars=word.getBytes(StandardCharsets.UTF_8);
    byte[] packedchars=new byte[wordlen];
    boolean[] isalphachar=new boolean[wordlen];
    int packedlen=0;
    for(int ipack=0; ipack<wordlen; ++ipack) {
      byte chb=wordchars[ipack];
      if((chb>=65 && chb<=90) || (chb>=97 && chb<=122)) {
        packedchars[packedlen]=chb;
        isalphachar[ipack]=true;
        packedlen++;
      }
      else {
        isalphachar[ipack]=false;
      }
    }
    byte[] newpack=new byte[packedlen];
    for(int ipack=0; ipack<packedlen; ++ipack) {
      newpack[ipack]=packedchars[ipack];
    }
    word = new String(newpack, StandardCharsets.US_ASCII);
    log.debug("XML-402 word length after removing non-letters:"+packedlen);
    log.debug("XML-403 word after removing non-letters:"+word);
    word = word.toLowerCase();
    long combinations = 1L << word.length();
    for (long i = 0L; i < combinations; i++) {
      char[] result = word.toCharArray();
      for (int j = 0; j < word.length(); j++) {
        if (((i >> j) & 1) == 1 ) {
          result[j] = Character.toUpperCase(word.charAt(j));
        }
      }
      log.debug("XML-404 Trying combination "+i+" of "+combinations
        +" :"+new String(result));
      // Rebuild the word from the packed characters
      packedlen=0;
      for(int ipack=0; ipack<wordlen; ++ipack) {
        if(isalphachar[ipack]) {
          packedchars[ipack]=(byte)(result[packedlen]);
          packedlen++;
        }
        else {
          packedchars[ipack]=wordchars[ipack];
        }
      }
      log.debug("XML-405 Trying combination "+i+" of "+combinations
        +" :"+new String(packedchars,StandardCharsets.US_ASCII));
      replaceOut=replaceIn.replace(new String(packedchars,StandardCharsets.US_ASCII), token);
      // Stop on any replace
      if(!replaceOut.equals(replaceIn)) return replaceOut;
    }
    return replaceOut;
  }

This accomplishes the task.  Challenge solved.  To trigger the case-insensitive replace, I make the user add an extra % to the token, and I caution the user to use the smallest possible search string and to apply it only to IT roles (or whatever particular code you need it on), otherwise the computation time can be excessive.



Monday, May 8, 2023

SailPoint UIConfig changes that have helped me search in the Debug pages

Changes that have been helpful to me:

Helpful to allow searching in the Debug pages for ManagedAttributes by application, attribute, and value:

<entry key="debugManagedAttributeSearchProperties" value="id,application.name,attribute,value"/>

Helpful for searching AuditEvent objects:

<entry key="debugAuditEventSearchProperties" value="id,action,source,target"/>

Helpful for searching UIPreferences:

<entry key="debugUIPreferencesSearchProperties" value="id,owner.name"/>

Helpful for searching Syslog Events:

<entry key="debugSyslogEventSearchProperties" value="id,created,eventLevel,quickKey,message"/>

Helpful for searching IdentityEntitlement objects:

<entry key="debugIdentityEntitlementSearchProperties" value="identity.name,application.name,name,value,state,type"/>

Helpful for searching ProvisioningTransaction objects:

<entry key="debugProvisioningTransactionSearchProperties" value="id,identityName,applicationName,nativeIdentity"/>

Helpful for viewing details in the debug page

<entry key="debugApplicationGridColumns">
<value>
<List>
<ColumnConfig dataIndex="id" groupProperty="id" headerKey="Id" property="id" sortProperty="id" sortable="true" stateId="id" width="250"/>
<ColumnConfig dataIndex="name" groupProperty="name" headerKey="Name" property="name" sortProperty="name" sortable="true" stateId="name" width="450"/>
<ColumnConfig dataIndex="type" groupProperty="type" headerKey="Type" property="type" sortProperty="type" sortable="true" stateId="type" width="150"/>
<ColumnConfig dataIndex="created" dateStyle="short" groupProperty="created" headerKey="Created" property="created" sortProperty="created" sortable="true" stateId="created" width="150"/>
<ColumnConfig dataIndex="modified" dateStyle="short" groupProperty="modified" headerKey="Modified" property="modified" sortProperty="modified" sortable="true" stateId="modified" width="150"/>
</List>
</value>
</entry>
<entry key="debugApplicationSearchProperties" value="id,name,type"/>
<entry key="debugRuleGridColumns">
<value>
<List>
<ColumnConfig dataIndex="id" groupProperty="id" headerKey="Id" property="id" sortProperty="id" sortable="true" stateId="id" width="250"/>
<ColumnConfig dataIndex="name" groupProperty="name" headerKey="Name" property="name" sortProperty="name" sortable="true" stateId="name" width="450"/>
<ColumnConfig dataIndex="type" groupProperty="type" headerKey="Type" property="type" sortProperty="type" sortable="true" stateId="type" width="150"/>
<ColumnConfig dataIndex="created" dateStyle="short" groupProperty="created" headerKey="Created" property="created" sortProperty="created" sortable="true" stateId="created" width="150"/>
<ColumnConfig dataIndex="modified" dateStyle="short" groupProperty="modified" headerKey="Modified" property="modified" sortProperty="modified" sortable="true" stateId="modified" width="150"/>
</List>
</value>
</entry>
<entry key="debugRuleSearchProperties" value="id,name,type"/>

<entry key="debugTaskDefinitionGridColumns">
<value>
<List>
<ColumnConfig dataIndex="id" groupProperty="id" headerKey="Id" property="id" sortProperty="id" sortable="true" stateId="id" width="250"/>
<ColumnConfig dataIndex="name" groupProperty="name" headerKey="Name" property="name" sortProperty="name" sortable="true" stateId="name" width="450"/>
<ColumnConfig dataIndex="type" groupProperty="type" headerKey="Type" property="type" sortProperty="type" sortable="true" stateId="type" width="200"/>
<ColumnConfig dataIndex="created" dateStyle="short" groupProperty="created" headerKey="Created" property="created" sortProperty="created" sortable="true" stateId="created" width="125"/>
<ColumnConfig dataIndex="modified" dateStyle="short" groupProperty="modified" headerKey="Modified" property="modified" sortProperty="modified" sortable="true" stateId="modified" width="125"/>
</List>
</value>
</entry>
<entry key="debugTaskDefinitionSearchProperties" value="id,name,type"/>

<entry key="debugBundleGridColumns">
<value>
<List>
<ColumnConfig dataIndex="id" groupProperty="id" headerKey="Id" property="id" sortProperty="id" sortable="true" stateId="id" width="250"/>
<ColumnConfig dataIndex="name" groupProperty="name" headerKey="Name" property="name" sortProperty="name" sortable="true" stateId="name" width="450"/>
<ColumnConfig dataIndex="type" groupProperty="type" headerKey="Type" property="type" sortProperty="type" sortable="true" stateId="type" width="150"/>
<ColumnConfig dataIndex="created" dateStyle="short" groupProperty="created" headerKey="Created" property="created" sortProperty="created" sortable="true" stateId="created" width="150"/>
<ColumnConfig dataIndex="modified" dateStyle="short" groupProperty="modified" headerKey="Modified" property="modified" sortProperty="modified" sortable="true" stateId="modified" width="150"/>
</List>
</value>
</entry>
<entry key="debugBundleSearchProperties" value="id,name,type"/>




Tuesday, January 17, 2023

SailPoint migrating provisioning policies from inline to forms

This is more of a personal preference but there is a reason behind it.

I do not like inline provisioning policies; I prefer to keep them as separate Form objects.  My reasons revolve around the concept of modularity.  For the same reason, I prefer to use rules rather than scripts for the value section of a Field in a provisioning policy.  This does not necessarily mean I advocate using the SSF Field Value framework.  It's as simple as this: if you are using inline scripts and inline provisioning policies, a small change to a script, by the "butterfly effect", requires regression testing of every function of that application.  THIS IS REAL !!  Modularizing means a change in one module requires testing only the module that was affected.
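
As a hedged illustration of the modularity point (the field and rule names here are made up, and the exact serialization can vary slightly by version), a Field whose value comes from a rule instead of an inline script looks roughly like this:

<Field displayName="AD sAMAccountName" name="sAMAccountName" type="string">
  <RuleRef>
    <Reference class="sailpoint.object.Rule" name="FieldValue-AD-sAMAccountName"/>
  </RuleRef>
</Field>

Changing the sAMAccountName logic then means editing and regression testing one Rule object, not the whole application definition.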

Let's look at the first and most often needed model: Active Directory.
The provisioning polices on the Active Directory application are:

Account (for account creation)
Create Group (for group creation)
Update Group (for group update)

We'll start with the account create policy.

Start by creating a new Form object in the Form editor.  Do this by opening the Forms UI (Gear Icon -> Global Settings -> Forms).  Click Create Form and select Application Provisioning Policy Form as the form type.

Make the form name descriptive and compliant with your naming standards; it should include "Active Directory Account Create" or an abbreviation of that in the name.  Add a description and save it.

Next, open the debug pages, select and open the Active Directory application, and find the first Form object inside the ProvisioningForms tag.  Select all of the Section elements up to the end of that Form object and copy them to the clipboard.  Close the debug editor.  Then select and open the Form object you just created, and paste the copied data inside the Form object tags.  Example:

Copy from the first bolded line to the last bolded line (the first <Section> tag through the final </Section> closing tag):
  <ProvisioningForms>
    <Form name="Account" objectType="account" type="Create">
      <Attributes>
        <Map>
          <entry key="pageTitle" value="Account"/>
        </Map>
      </Attributes>
      <Section label="Account" name="Account">
        <Field displayName="con_prov_policy_ad_objecttype" name="objectType" postBack="true" reviewRequired="true" section="Account" type="string" value="User">
          <AllowedValuesDefinition>
            <Value>
              <List>

...

        <Field displayName="con_prov_policy_ad_msDSManagedPasswordInterval" helpKey="help_con_prov_policy_ad_msDSManagedPasswordInterval" name="msDS-ManagedPasswordInterval" reviewRequired="true" section="gmsa" type="string"/>
        <Field displayName="con_prov_policy_ad_msDSGroupMSAMembership" helpKey="help_con_prov_policy_ad_msDSGroupMSAMembership" multi="true" name="msDS-GroupMSAMembership" reviewRequired="true" section="gmsa" type="string"/>
        <Field displayName="con_prov_policy_ad_msDSAllowedToActOnBehalfOfOtherIdentity" helpKey="help_con_prov_policy_ad_msDSAllowedToActOnBehalfOfOtherIdentity" multi="true" name="msDS-AllowedToActOnBehalfOfOtherIdentity" reviewRequired="true" section="gmsa" type="string"/>
        <Field displayName="con_prov_policy_ad_ServicePrincipalNames" helpKey="help_con_prov_policy_ad_ServicePrincipalNames" multi="true" name="servicePrincipalName" reviewRequired="true" section="gmsa" type="string"/>
      </Section>
    </Form>
    <Form name="Create Group" objectType="group" type="Create">
      <Attributes>
        <Map>
          <entry key="pageTitle" value="Create Group"/>

Paste it into the new Form object at the spot marked HERE below.

<Form created="1674018819927" id="ac100a5085c013c08185c34c0f5704d7" name="Active Directory Account Create" type="Application">
  <Attributes>
    <Map>
      <entry key="pageTitle" value="Active Directory Account Create"/>
    </Map>
  </Attributes>

     HERE

  <Description>Form for creating an Active Directory account</Description>
</Form>

Save.
Now go back to the application in the UI.  Delete the Account policy and replace it with the new form.
 

Saturday, June 25, 2022

SailPoint proper way to construct a rolling log file

There is some confusion about how to set up the log4j2.properties file in order to log to a rolling log file.  Here is what the OOTB log4j2.properties file shows:

# Below is an example of how to create a logger that writes to a file.
# Uncomment the following five lines, then uncomment the 
# rootLogger.appenderRef.file.ref definition below
#appender.file.type=File
#appender.file.name=file
#appender.file.fileName=C:/Windows/Temp/sailpoint.log
#appender.file.layout.type=PatternLayout
#appender.file.layout.pattern=%d{ISO8601} %5p %t %c{4}:%L - %m%n

This setup does not roll.  The file also contains the following:

#appender.meter.type=RollingFile
#appender.meter.name=meter
#appender.meter.fileName=C:/Windows/Temp/meter.log
#appender.meter.filePattern=C:/Windows/Temp/meter-%d{yyyy-MM-dd}-%i.log.gz"
#appender.meter.layout.type=PatternLayout
#appender.meter.layout.pattern=%m%n
#appender.meter.policies.type=Policies
#appender.meter.policies.size.type=SizeBasedTriggeringPolicy
#appender.meter.policies.size.size=10MB
#appender.meter.strategy.type=DefaultRolloverStrategy
#appender.meter.strategy.max=5

The main issue with this is the use of the date and the gzip options.  It also doesn't have the proper pattern layout.

The best practice is something like this:

appender.rolling.type=RollingFile
appender.rolling.name=rolling
appender.rolling.fileName=D:/iiq83/logs/sailpoint.log
appender.rolling.filePattern=D:/iiq83/logs/sailpoint-%i.log
appender.rolling.layout.type=PatternLayout
appender.rolling.layout.pattern=%d{ISO8601} %5p %t %c{4}:%L - %m%n
appender.rolling.policies.type=Policies
appender.rolling.policies.size.type=SizeBasedTriggeringPolicy
appender.rolling.policies.size.size=20MB
appender.rolling.strategy.type=DefaultRolloverStrategy
appender.rolling.strategy.max=5

The word "rolling" can be substituted by any other name.  Be sure to leave out the date and the gzip references in the filePattern part.

Some advice: never use the time-based policy, and never use the startup policy.  With this method you can still archive files by their age.

To make sure the appender is actually written to, reference it from the root logger:

rootLogger.level=warn
rootLogger.appenderRef.stdout.ref=stdout
rootLogger.appenderRef.rolling.ref=rolling

And all logging should be structured like this:

logger.objexp.name=com.sailpoint.objectexporter.task
logger.objexp.level=trace
logger.objexp.appenderRef.rolling.ref=rolling
logger.objexp.additivity=false

The second portion of each of these keys (here "objexp") must be unique per logger.
The appender name you chose must appear in the third line in two places, as shown.
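
For example, a second logger block for a hypothetical custom package would look like this - note the unique second portion ("mylib") and the appender name appearing twice on the third line:

logger.mylib.name=com.acme.mylib.task
logger.mylib.level=debug
logger.mylib.appenderRef.rolling.ref=rolling
logger.mylib.additivity=false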


Wednesday, March 2, 2022

Programmatic options for Account Aggregation

When running an account aggregation programmatically you have to feed it a Map of argument values.  This listing shows the argument names and the UI checkboxes they correspond to (a short programmatic sketch follows the listing).


Checkbox | Code | Description
Select Applications to Scan | applications | List of application names, comma separated
Optionally select a rule .. | creationRule | Specify rule name
Refresh Assigned and Detected Roles | correlateEntitlements | Refresh roles on aggregation ??
Check Active Policies | checkPolicies | Run through policy checking on aggregation
Only create links if they.. | correlateOnly | Do not create uncorrelated identities
Refresh identity risk scorecards | refreshScorecard | Risk scorecards
Maintain identity histories | checkHistory | See history pages
Enable delta aggregation | deltaAggregation | If the connector supports this
Detect deleted accounts | checkDeleted | Detect deleted accounts - do not use with delta or targeted agg
Maximum deleted accounts | checkDeletedThreshold | If more than this, do not delete
Refresh assigned scope | correlateScope | Refresh scopes based on attributes
Disable auto creation of scopes | noAutoCreateScopes | If enabled, do not auto-create scopes
Disable optimization | noOptimizeReaggregation | Process every account - no optimization
Promote managed attributes | promoteManagedAttributes | Only use this if there is no group schema
Disable auto creation of apps | noAutoCreateApplications | If enabled, do not auto-create applications
Disable marking as needing refresh | noNeedsRefresh | Do not set needsRefresh to true
Enable partitioning | enablePartitioning | Enables partitioning if supported
Objects per partition | objectsPerPartion | If the connector supports partitioning
Loss limit | lossLimit | If the connector supports it
Terminate when maximum | haltOnMaxError | Self-explanatory
Maximum errors before | maxErrorThreshold | Should specify this
Sequential Execution | sequential | Terminate sequence on error
Actions to include | logAllowedActions | Comma-separated list; see options below

List of logAllowedActions options (I recommend never using this setting): 
  • CorrelateManual
  • CorrelateMaintain
  • CorrelateNewAccount
  • CorrelateReassign
  • Create
  • Ignore
  • Remove
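
For completeness, here is a hedged sketch of feeding such a Map to an aggregation from a rule (the application name and argument choices are illustrative, and this assumes the sailpoint.api.Aggregator constructor that takes a context and an Attributes map):

import sailpoint.api.Aggregator;
import sailpoint.object.Attributes;

Attributes args = new Attributes();
args.put("applications", "Active Directory");   // Select Applications to Scan
args.put("correlateEntitlements", true);        // Refresh Assigned and Detected Roles
args.put("checkDeleted", true);                 // Detect deleted accounts
args.put("checkDeletedThreshold", 100);         // Maximum deleted accounts

Aggregator agg = new Aggregator(context, args); // context is available in a rule
agg.execute();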

Wednesday, September 29, 2021

SailPoint various ways to construct identityName

 Construction of identity name

There are various methods for constructing identity name from an authoritative source.  Each method has its advantages and disadvantages.

The normal construction of an authoritative source is to have the employee number as the identity attribute and a first-last full name as the display attribute.  This is normal and very helpful for displaying the identity on the main identity warehouse page.  However, it has a side effect: the identity name of the user becomes their display name, because on account creation the display name informs the identity name.  Hence the issue.  Using the employee number as the display attribute has its own redundancy issues and should be discouraged.

Construction of display name

Some sources do not have a display name field; they might have only a first name and a last name field.  In this case you should construct the display name using a customization rule (a sketch follows the list below).

  • Define the Full Name or Display Name field in the schema
  • Write a customization rule that pulls the first and last names and returns the full or display name to the attribute.
  • Define the new field as the display attribute.
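
A hedged sketch of such a customization rule (the schema attributes "firstname" and "lastname" and the target field "displayName" are placeholders for whatever your source actually defines):

// Customization rule body - object is the ResourceObject being aggregated
String first = (String) object.getAttribute("firstname");
String last = (String) object.getAttribute("lastname");
if (first != null && last != null) {
  object.setAttribute("displayName", first.trim() + " " + last.trim());
}
return object;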
Manipulating the identity name

In order to have the employee number used as the identity name instead of the display name, you need to explicitly set the name in the Creation rule of the authoritative source application.  This is also where we typically set the initial password for the identity.  For instance, if the application's name for employee number is FILENUMBER (typically the value for WorkDay), the code would look like this:

 identity.setName(account.getAttribute("FILENUMBER"));

And in fact you could set the identity name to whatever you like here, keeping in mind that it must always be unique.  For instance, you could include the application name in the identity name:

 identity.setName(application.getName()+"-"+account.getAttribute("emplid"));

Or I have also seen:

  identity.setName(account.getAttribute(displayName)+"-"+account.getAttribute("emplid"));




Wednesday, September 22, 2021

SailPoint logging best practice

SailPoint recommends using log4j calls for logging.  This is in contrast to the occasional use of commons logging in some of their code.  Just for reference:

import org.apache.log4j.Logger;

Logger alog=Logger.getLogger("com.sailpoint.class.module");

The above creates a Logger instance directly from log4j (BEST)


import org.apache.commons.logging.Log;

import org.apache.commons.logging.LogFactory;

Log alog=LogFactory.getLog("com.sailpoint.class.module");

The second example uses commons logging; it works, but it is not best.  Commons logging delegates to log4j anyway, so it's a needless extra layer.


Providing proper construction, level, and content for log statements is important.  Why do we use logging instead of a debugger? Because for most web applications, you can't pause the program to inspect its variables.  Logging is the best option. 

Some people, generally those who don't understand log4j, will start with a statement like:

log.error("I am here");

and then later something like:

log.error("Now I am here");

Let's examine the errors in this method of troubleshooting:

1) Use of an OOTB log object.

2) Using error call for simple debugging.

3) General lack of information about "where" you are in the program.

4) Lack of any meaningful information in the log output.

In more detail:

1) Never use an OOTB log object; always create a new one.  I always call mine alog, but you could call yours blog or clog - go nuts.  I have seen plogger and others, but I don't like to type, so alog fits my style.  You could also overwrite log, but I don't recommend that.

2) Learn the log levels and their meaning.  TRACE is the most verbose and should be confined to inner loops in iterative code, that you would expect to print if there is a very insidious logic issue you are trying to track down.  DEBUG is best for debugging code and should be used for most log statements.  INFO is rarely used but could be used for method entry statements. WARN is the first level of statements that the OOTB log setup will always print, and should be a warning. ERROR should be used in catch blocks and anywhere a severe error is found.  Their respective methods, trace(), debug(), info(), warn(), and error() send this data to the logger.  You have to learn how to set the log levels in the log4j2.properties file (or log4j.properties for older versions).
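
As a reminder of the mechanics (the package name here is illustrative), turning DEBUG on for your own logger in log4j2.properties looks like this - the name must match the string you passed to Logger.getLogger():

logger.acmewdc.name=com.acme.workday.customization
logger.acmewdc.level=debug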

3) In the text of your log statement, provide a coded prefix or "tag" that indicates the initials of the client, a 3 or 4 character module code, and a unique number.  For example, for Acme's WorkDay customization rule, the prefix on the first log statement will be something like "ACM-WDC-001" and the final logging would be something like:

String fileNumber=(String)object.getAttribute("FILENUMBER");

alog.debug("ACM-WDC-001 Entered WorkDay Customization rule for user "+fileNumber);

Each different module has a unique 3 or 4 character module code and each log statement has a unique number, which does not have to be strictly sequential.

4) Notice above that the log statement contains information about what is happening, rather than just a dead line "I am here".  Always try to include data and don't forget to null check things before printing.

Use of these techniques also makes troubleshooting with your SIEM more effective.




Tuesday, January 12, 2021

SailPoint cron settings for different scenarios

Run every 5 minutes:

0 0/5 * 1/1 * ?

Run every 15 minutes, on the 5's (0:05, 0:20, 0:35, 0:50)

0 5/15 * 1/1 * ?


Run every hour at the top of the hour:

0 0 * * * ?

Run every hour at half past:

0 30 * * * ?


Run every 4 hours at top of the hour:

0 0 0/4 * * ?
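
These are Quartz cron expressions, so the six fields are seconds, minutes, hours, day-of-month, month, and day-of-week, and the trailing ? means no specific day-of-week.  If you want to sanity-check an expression before putting it on a schedule, the Quartz library that ships with IIQ can validate it - a small sketch:

import org.quartz.CronExpression;

boolean ok = CronExpression.isValidExpression("0 0/5 * 1/1 * ?");
System.out.println("valid=" + ok);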



Wednesday, December 30, 2020

SailPoint Identity attribute naming recommendations

 I have seen some very odd names for Identity attributes.

Just as a refresher, Identity attributes are defined in the ObjectConfig-Identity.xml file.

For example:

<ObjectAttribute displayName="Job Title" editMode="readOnly" name="jobTitle"/>

My example doesn't include any source or target definitions.

If you want the field to be searchable, you have two options.  The first option is to use one of the extendedNumber values.  If you just check the searchable box in the UI, SailPoint will assign the next available extendedNumber value.  This option is fraught with dangers.  The first danger is that OOTB only 10 extended attributes are defined in the IdentityExtended.hbm.xml file, so if you exceed 10 you will need to uncomment the lines for 11-20 and then create the database columns.  The second danger is that only 5 of the OOTB extended attributes have indexes defined, so any search on a non-indexed attribute will generate a table scan in the database, hurting performance.  You should define and create those indexes as early as possible in your installation process.

The second option is to use named columns.  This method is described in the hibernate file, and this is where this post becomes important.  My recommendation is to always use strict, concise camelCase for identity attribute names, which go in the ObjectConfig-Identity.xml and IdentityExtended.hbm.xml files.  Here are some naming schemes that have generated terrible results:

All caps like EMPLID

Trailing caps like personID

Leading caps like ADLoginName

Numbers like AS400Login

Pascal Case such as JobTitle

Repeated caps like autoFRActivate

Long long names like ADLastModifiedDatetime

Using underscores (snake case) like job_title

Database keywords or function names.  Here are some I have discovered:

  • position

Single lower-case values are FINE - emplid, title - although not very descriptive.

If you want to use "ID" in the description, use "Id" in the name such as personId

Keep it short, keep it simple.  Two words is best: jobTitle, departmentName, adLogin, adGuid, employeeId, etc.  Remember that Oracle 12c only allows a 30-character identifier.

When you deploy the hibernate file and then execute the iiq extendedSchema command, the extendedSchema job takes the camelCase and splits it into words, like this:

jobTitle       becomes job_title

This is done because databases don't normally care about case.  For this same reason, always make your index names match the underscore column name, not the camelCase.

<property name="jobTitle" type="string" length="450"
  access="sailpoint.persistence.ExtendedPropertyAccessor"
  index="spt_identity_job_title_ci"/>

NOT index="spt_identity_jobTitle_ci"/>

Don't try to create the database scripts on your own, you will likely make a mistake.






Wednesday, November 25, 2020

SailPoint why you should NEVER open any OOTB or cloned OOTB workflow in UI

Here's a good suggestion

Unless you authored a workflow in the Business Processes UI in SailPoint, don't open it in the UI.  This includes any OOTB workflow - you should NEVER modify any OOTB workflow.  It also includes any workflow that you cloned from OOTB.

How do you clone a workflow?  How do you clone any replicable object?  Open it in the Debug page, change the Name, and remove the created, id, and (if present) modified attributes from the opening tag (usually line 3 of the file).  Then search for and remove any created, id, or modified attributes anywhere else in the file.

Why don't you want to open an OOTB workflow in the UI?  Because the UI will reorganize the workflow in ways you don't want.  It will reformat the descriptions, it will add or remove arguments on steps, and you will no longer be able to make clean comparisons against the OOTB workflows.  It also adds the explicitTransitions attribute, which is not wanted.

Here's what it does to LCM Provisioning:

Reformats the Descriptions to remove line feeds - they are ugly

In the Initialize step, it adds the following arguments:

  • <Arg name="identityRequest">
  • <Arg name="asyncCacheRefresh">

It also confusingly shuffles the remaining arguments.

In the Create Ticket step, it adds the following arguments:

  • <Arg name="ticketProject">
  • <Arg name="ticketPlan">

In the Pre Split Approve step, it adds the following arguments:

  • <Arg name="dontUpgradePlan"/>
  • <Arg name="clearApprovalDecisions"/>
  • <Arg name="workItemDescription"/>

In the Approve and Provision Split step, it adds the following arguments:

  • <Arg name="clearApprovalDecisions"/>
  • <Arg name="requesterEmailTemplate"/>
  • <Arg name="approvalEmailTemplate"/>
  • <Arg name="endOnProvisioningForms"/>
  • <Arg name="enableRetryRequest"/>
  • <Arg name="endOnManualWorkItems"/>
  • <Arg name="userEmailTemplate"/>
  • <Arg name="batchRequestItemIId"/>

In the Approve and Provision step, it adds the following arguments:

  • <Arg name="approvalEmailTemplate"/>
  • <Arg name="batchRequestItemId"/>
  • <Arg name="clearApprovalDecisions"/>
  • <Arg name="enableRetryRequest"/>
  • <Arg name="endOnManualWorkItems"/>
  • <Arg name="endOnProvisioningForms"/>
  • <Arg name="requesterEmailTemplate"/>
  • <Arg name="userEmailTemplate"/>

In the Finalize step, it adds the following arguments:

  • <Arg name="autoVerifyIdentityRequest"/>
  • <Arg name="ticketDataGenerationRule"/>


    Monday, November 23, 2020

    SailPoint which files should go in your comparison folder

    In the ExportXML class you have an option to compare against existing data and create a merge file for certain files.  Here are the files you will need to place in the comparison folder and how to organize them.  You want to generate these files from a perfectly stock deployment so that the client's customizations are actually captured in the merge files.  If you don't, the merge file will only contain modifications made after the date you generated it.

    Create a folder; I normally call mine base (or Base for Windows-based installs).

     Next create the following folders and put the shown files into those folders:

    AuditConfig

        AuditConfig-AuditConfig.xml

    Configuration

        Configuration-ConnectorRegistry.xml

        Configuration-IdentityAIConfiguration.xml

        Configuration-IdentitySelectorConfiguration.xml

        Configuration-SystemConfiguration.xml

    ObjectConfig

        ObjectConfig-Bundle.xml

        ObjectConfig-Identity.xml

        ObjectConfig-Link.xml

        ObjectConfig-ManagedAttribute.xml

    UIConfig

        UIConfig-UIConfig.xml

     

    My normal ExportXML output pattern is $Class$-$Name$; that is why the files are named that way.

    IdentityAI is not found on lower versions of IIQ.

    Sunday, November 8, 2020

    SailPoint saving AD values as a secondary auth source

    This post is primarily so I remember how I handle every client's AD.  The point of saving AD values as Identity Attributes is twofold: first, to indicate whether the user has an AD account (this can be done for any target, but AD is the one almost everyone uses as their primary provisioning target).  Second, it gives you the OPTION of saving the values for all time, which allows you to ensure that no duplicates are created.

     I create 5 Identity Attributes:

    adLogin (AD Login) which derives from sAMAccountName

    adEmailAddress (AD Email Address) which derives from mail

    adDistinguishedName (AD Distinguished Name) which derives from distinguishedName

    adUserPrincipalName (AD User Principal Name) which derives from userPrincipalName

    adObjectGuid (AD Object Guid) which derives from objectguid

     Any or all of these can be backed up with a global rule I normally call IdentityAttribute-PersistOldValue whose source is literally return oldValue;
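
    A hedged sketch of that rule's XML (the rule type and signature can vary by version; the body really is that one line):

    <Rule language="beanshell" name="IdentityAttribute-PersistOldValue" type="IdentityAttribute">
      <Source>
        // keep whatever value was already stored on the identity
        return oldValue;
      </Source>
    </Rule>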

    You have to decide on your own which, if any, are searchable.  Make a value searchable if you plan to search on it using any search method.

    If you use the global rule then your values will not be removed if the user loses their AD Account.  Be careful and aware of this. 

     


    Friday, June 5, 2020

    Customizing SailPoint Task Definitions - Run with Response

    Subject: Batch Processing in SailPoint

    Regarding: Adding responses to a batch process using Run Rule

    Creating a TaskDefinition for running a rule is normally performed by the following:

    Setup -> Tasks
    New Task -> Run Rule

    Enter the details such as the task name and description, then select the rule to be executed.  Save and Run.


    Once you have done this, you will have a framework TaskDefinition with the following elements:

    <Attributes>
      <Map>
        <entry key="ruleName" value="the rule you chose"/>
      </Map>
    </Attributes>

    and also:

    <Parent>
      <Reference class="sailpoint.object.TaskDefinition" name="Run Rule"/>
    </Parent>

    You may want to output some results.  The issue with this is that the Run Rule normally does not have a section for outputs.

    To fix this you can add the following elements:

    <Signature>
      <Returns>
        <Argument name="totalCount" type="int">
          <Prompt>Total users processed</Prompt>
        </Argument>
        <Argument name="resultString" type="string">
          <Prompt>Results:</Prompt>
        </Argument>
      </Returns>
    </Signature>

    You also can add this to a clone of Run Rule and use that as a template for new rules.  But this does not populate the values.
     
    To populate the values, the following is needed in the rule:
     
     (imports)
    import sailpoint.tools.Message;
    import sailpoint.tools.Message.Type;
    import sailpoint.object.Attributes;
    import sailpoint.object.TaskResult;
    import sailpoint.object.TaskResult.CompletionStatus;

    variables:
    int resultCount=0;
    String resultString="";

    Set these values in your code.

    Then just before the return:

    if(taskResult!=void) {
      taskResult.addMessage(new Message(Message.Type.Info,"Completed Successfully", null));
      Attributes resultAttr=new Attributes();
      resultAttr.put("totalCount",new Integer(resultCount));
      resultAttr.put("resultString",resultString);
      taskResult.setAttributes(resultAttr);
      taskResult.setCompletionStatus(TaskResult.CompletionStatus.Success);
    }

    I use a StringBuffer instead of concatenating the resultString, and then set resultString to the toString() result of the StringBuffer.
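
    In other words, something along these lines (illustrative only - identityName is whatever you are iterating over):

    StringBuffer sb = new StringBuffer();
    // ... inside your processing loop ...
    sb.append("Processed ").append(identityName).append("\n");
    resultCount++;
    // ... after the loop, just before the taskResult block ...
    resultString = sb.toString();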


    Inputs:

    If you want to add inputs to the Run Rule task definition, you would need to start by pulling the signature inputs from Run Rule.  From there you can add fields as you would any TaskDefinition.

    For example:

    <Signature>
      <Inputs>
        <Argument helpKey="help_task_run_rule_rule" name="ruleName" required="true" type="Rule">
          <Prompt>label_rule</Prompt>
        </Argument>
        <Argument helpKey="help_task_run_rule_ruleconfig" name="ruleConfig" type="string">
          <Prompt>label_rule_config</Prompt>
        </Argument>
        <Argument helpKey="Enter action to be taken" name="action" type="string">
          <Prompt>Action</Prompt>
        </Argument>
      </Inputs>
    </Signature>

    In the rule you can use the following code to check for action:

    String actionStr=null;
    if(config.containsKey("action")) {
      actionStr=(String)config.get("action");
    }

    Then later you can check the value of actionStr.
    If nothing was entered, the key will not be present in config.



    Customizing SailPoint TaskDefinitions - OOTB method

    Subject: Batch processing in SailPoint

    Regarding: Using a rule to execute batch processing, customization of the inputs



    Creating a TaskDefinition for running a rule is normally performed by the following:

    Setup -> Tasks
    New Task -> Run Rule

    Enter the details such as the task name and description, then select the rule to be executed.  Save and Run.



    The normal method for making changes to the behavior of the rule is through the Rule Config line.  In this, the input data, separated by commas, is entered as:

    key,value,key,value,key,value,key,value

    Which translates into an input variable called config, which is passed into the rule as a Map.  You can verify the value exists by using this:

    if(config==void || config==null || config.isEmpty()) {
      // nothing was entered on the Rule Config line
      log.debug("No ruleConfig values were provided");
    }
    else {
      // extract the key/value pairs out of the map ("action" here is just an example key)
      String action = (String) config.get("action");
    }

    The problem with this method is that it is hard to train people how to use it.  See further posts for how to get around this.

    Thursday, May 7, 2020

    Making SailPoint TaskDefinition settings survive a deployment

    There is some discussion (maybe not disagreement) regarding which SailPoint object types to put into the Source Code Repository and manage via SSB.  I won't get into all of them right here, but one of the most difficult decisions is around TaskDefinition objects.

    The newest Object Exporter tool ExportXML.class does have a setting called "Strip environment-specific metadata" which removes the transient values of:

    TaskDefinition.runLengthAverage
    TaskDefinition.runLengthTotal
    TaskDefinition.runs

    from the XML it generates for TaskDefinition objects.  For many of us dealing with Tasks, there are some settings that we would like to manage in the UI and not be overwritten by a deployment.  I suggest that the following changes to the TaskDefinition XML files can help with Task Management.

    The following settings may or may not be in this class of "preserve with deploy":
    • resultAction (shown as Previous Result Action)
    • TaskSchedule.host (shown as Host)
    • taskCompletionEmailNotify (shown as Email Notification, a dropdown)
    • taskCompletionEmailRecipients (dropdown shown when the above is set)
    • taskCompletionEmailTemplate (shown when the above is set)
    • optional - Owner (not shown - defined on creation and only edited in the debug pages)
    How to do this: follow this process.

    Take a current export - using the Export Tool or use my KCSExportXML tool (available soon)

    Make the following edits:
    On the  DOCTYPE tag (line 2) change TaskDefinition to sailpoint
    Add 2 lines between the DOCTYPE tag and the TaskDefinition tag:
    <sailpoint>
      <ImportAction name="merge">
    Then indent everything below that down 4 spaces for proper XML indented format
    Then at the end add:
      </ImportAction>
    </sailpoint>

    Finally, delete the XML attributes or elements you do not want overwritten on a deployment; my list is above, but yours may be different.  You might include lists on sequential tasks, or on tasks with lists of Applications, etc.
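
    After those edits the file ends up shaped roughly like this (the task name and the attributes left in place are illustrative):

    <?xml version='1.0' encoding='UTF-8'?>
    <!DOCTYPE sailpoint PUBLIC "sailpoint.dtd" "sailpoint.dtd">
    <sailpoint>
      <ImportAction name="merge">
        <TaskDefinition name="Acme Nightly Cleanup">
          <!-- resultAction, taskCompletionEmail*, and the Owner reference were
               deleted so the values managed in the UI survive the deployment -->
          <Attributes>
            <Map>
              <entry key="ruleName" value="Acme Nightly Cleanup Rule"/>
            </Map>
          </Attributes>
          <Parent>
            <Reference class="sailpoint.object.TaskDefinition" name="Run Rule"/>
          </Parent>
        </TaskDefinition>
      </ImportAction>
    </sailpoint>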

    Message me for help.