
Thursday, December 10, 2015

Hand editing the User.xml file

Sometimes the User.xml file is corrupted or a field is missing from it.  You will see the field in the database, in the User Entity object in the System Administration Webapp (with a sandbox open), and maybe even in the metadata/iam-features-requestactions/model-data/ModifyUserDataset.xml and CreateUserDataset.xml files, but it is missing from the User.xml file.

One way to correct this issue is to hand edit the file.  When doing this, use Notepad++ (or XMLSpy) so you can see everything and make sure that the file stays valid.

Open the file and search for the UDF section:

Find the <entity-attributes> tag.  Inside this tag is a collection of <attribute name= tags.

You should (on Unix) grep for 'attribute name=' and check that you do not have multiple missing attributes, but for this example just concentrate on the one.
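For example, on Linux (the file name is whatever your exported file is called):

grep 'attribute name=' User.xml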

Copy any <attribute tag section, all the way through its closing </attribute> tag, and paste it below.  You will have the best luck copying an attribute that looks similar to the one that is missing.

Then do the following (a rough sketch of such a block appears after these steps):
Change the name in the initial tag, name="XXX", to the new field name.
Change the name where it is referenced below the <name>scim</name> tag.
Change the value below the <name>max-size</name> tag to the field width.  Check the database for this value.
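For orientation only, here is a very rough sketch of what such a block can end up looking like.  Only the pieces mentioned above are shown; the ... stands for everything else in the real block, the exact wrapping of the name/value pairs may differ in your file, and UDF_EXAMPLE and 100 are made-up values.  Always copy an existing block from your own file rather than typing this in.

<attribute name="UDF_EXAMPLE">
    ...
    <name>scim</name>
    <value>UDF_EXAMPLE</value>
    ...
    <name>max-size</name>
    <value>100</value>
    ...
</attribute>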

Next go to the <target-fields> tag and find a UDF.  Copy and paste a similar <field name= tag, then edit as follows:
Change the name of the initial tag name="xxx" to the new database table field name.  It should start with usr_udf.

Next find the <attribute-maps> tag and find an <attribute-map> tag to copy.  Copy it and then (see the sketch below):
Change the name of the entity-attribute to match.
Change the name of the target-field to match.
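Again purely as a hedged sketch (the ... hides everything not mentioned above, the names are made up, and whether entity-attribute and target-field appear as child elements or as attributes should match what is already in your file):

<target-fields>
    ...
    <field name="usr_udf_example" ... />
    ...
</target-fields>

<attribute-maps>
    <attribute-map>
        <entity-attribute>UDF_EXAMPLE</entity-attribute>
        <target-field>usr_udf_example</target-field>
    </attribute-map>
</attribute-maps>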

Next find the <metadata-attachment> tag.  Below this are all of your UDFs and several of the OOTB fields.  Make a spreadsheet of the <name> and <value> entries for everything with a category of categories.Basic User Information.
Compile this list and then sort by name.  Name is a number.  Look for a gap.  If there is a gap, I suggest your new entry fill that gap.  If there is no gap, add one to the last value and use that.  For example, if the existing names are 21, 22, 24, and 25, use 23; if they run 21 through 25 with no gap, use 26.  Copy a block from the Basic User Information section, rename the value, and use the number for the name.

Save the file, then import it using the procedure I have outlined in this blog (search on importMetadata).

Thursday, October 29, 2015

exportWeblogicMetadata obsolete

In OIM 11gR2 PS3, the weblogic.properties file states that the exportWeblogicMetadata.sh scripts are deprecated.  It recommends going to the documentation to find a better way to do this.

The documentation explains that you can use Enterprise Manager to perform targeted exports.  I have tried this since 2012 with very little success; I always seem to specify something incorrectly.

Here's another method from the same documentation.

My technique is as follows:

1) Create a new folder for exports.  It can be anywhere.  For the purposes of this blog, I will use the folder location /u01/app/mds/export
2) cd to the folder $MW_HOME/oracle_common/common/bin.  Alternatively, you could put this folder in your PATH.  Be sure NOT to use the wlst.sh that is in the wlserver folder.
3) Execute wlst.sh, either through ./wlst.sh or just wlst.sh if it is in your PATH:

wlst.sh
connect()
weblogic
<password>
t3://hostname:7001         or   t3://admin-vhn:7001  for a clustered install
exportMetadata(application='OIMMetadata',server='oim_server1',toLocation='/u01/app/mds/export')
disconnect()
exit()

It will export all of your data to that folder.  You can review all you want.
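If you do this often, the same commands can go into a small WLST script and be run non-interactively.  This is only a sketch: the script name is made up, and you may prefer to be prompted for the password rather than hard coding it.

./wlst.sh exportMDS.py

where exportMDS.py contains:

connect('weblogic', '<password>', 't3://hostname:7001')
exportMetadata(application='OIMMetadata', server='oim_server1', toLocation='/u01/app/mds/export')
disconnect()
exit()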

If you want to import, just reverse the process:

mkdir /u01/app/mds/import
Copy only the files that you want to change, keeping their folder structure.
Edit the files (or add files as you please)

wlst.sh
connect()
weblogic
<password>
t3://hostname:7001         or   t3://admin-vhn:7001  for a clustered install
importMetadata(application='OIMMetadata',server='oim_server1',fromLocation='/u01/app/mds/import')
disconnect()
exit()

Happy exporting



Saturday, September 5, 2015

Formatting dates in Linux for sensible log file names

I like to make my log filenames have date stamps and so in my shell scripts I typically will create a log file name at the start of the script:

# Create name for log file
timenow=`date +%Y%m%d-%H%M%S`
logfilename="utils-${timenow}.log"

and the time will now be included in the log filename.

You can remove the %S to take off the seconds if you want.
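As a quick illustration (some_command is a made-up placeholder), the variable is then used anywhere the script writes output, and a run on September 5, 2015 at 14:30:15 would log to utils-20150905-143015.log:

# Send output (and errors) to the date-stamped log
echo "Starting utility run" >> "${logfilename}"
some_command >> "${logfilename}" 2>&1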


Thursday, September 3, 2015

New environment changes to use UploadJars

When you set up a new environment, the UploadJars.sh script won't work until you set up your environment properly.  Here is what to do:

Make sure you already have set:

MW_HOME
JAVA_HOME
OIM_ORACLE_HOME

Then set the following environment variables:

APPSERVER_TYPE=wls
APP_SERVER=weblogic
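In a .bash_profile this might look like the sketch below (the first three paths are placeholders for your own installation):

export MW_HOME=/u01/oracle/products/middleware   # placeholder path
export JAVA_HOME=/usr/java/latest                # placeholder path
export OIM_ORACLE_HOME=$MW_HOME/Oracle_IDM1      # placeholder path
export APPSERVER_TYPE=wls
export APP_SERVER=weblogic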

Wednesday, August 19, 2015

OIM xelsysadm password has expired

Stop the presses!!  You are getting a message that the xelsysadm password has expired.

Stop what you are doing.  Go to the database and do the following:

http://keithsmithmsme.blogspot.com/2013/10/how-to-set-xelsysadm-password-to-never.html

Tuesday, July 7, 2015

Quick notes on OVD

Some quick notes regarding Oracle Virtual Directory (OVD):

A global plugin is NOT a plugin on the Local Store.  Many plugins have a disclaimer that they should not be deployed to an Adapter, such as UPNBind.  To define a Global Plugin you need to navigate to the Advanced Tab in ODSM and it is the second section on the left side.

If you are doing a non-join combine of two domains into one, do not use a common top level OU.  Define each domain to a unique OU in a common DC and then use the DC as the search base in anything searching for your users.
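As a sketch (all names here are made up), the tree ends up looking like this, with the common DC as the search base:

dc=example,dc=com                    <-- common DC, use this as the search base
    ou=DomainA,dc=example,dc=com     <-- users virtualized from the first domain
    ou=DomainB,dc=example,dc=com     <-- users virtualized from the second domain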

Monday, July 6, 2015

Sharing log files with users who are not in the oinstall group

Sometimes a client wants a user who is not the oracle user to be able to view log files for an Oracle application.  There are many ways to do this:

  1. Give the user sudo rights to the oracle user.
  2. Put the user in the oinstall group (assuming that was the default group used in the installation for the oracle user)
  3. Open up the umask to 0022 so that any user can read the files.
  4. Do the following:
First, you need to give read access to all of the folders in the chain.  Let's say you have a middleware home of:

/u01/oracle/products/middleware

and in there you have a domain home of

$MW_HOME/user_projects/domains/oim_domain

and in there you have a server

$DOMAIN_HOME/servers/oim_server1

In this case every folder between /u01 and oim_server1 would have to be granted 755 privileges.  It is easy enough to just go through and chmod each folder in order (see the sketch below) and then check as a user who has not been granted any of options 1-3.
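Here is a rough sketch of that pass using the example paths above (untested; adjust the list to your own folder chain):

# walk down the chain, granting read/execute on each directory
d=""
for part in u01 oracle products middleware user_projects domains oim_domain servers oim_server1; do
    d="$d/$part"
    chmod 755 "$d"
done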

Next, the umask in the .bash_profile has to be 0027 or more permissive for people in the correct group to be able to read the files.

To make this work here is what needs to happen:

As root, execute the command:
# groupadd oshare
(I made up that group name oshare but you can call it whatever you want).
# usermod -a -G oshare oracle
# usermod -a -G oshare username
(username is the user you want to share files with)
# cd <that oim_server1 folder>
# chown -R oracle:oshare logs
# chmod -R 2755 logs

That should do it.  I have not tested this.

To reverse this go back and perform:
# chown -R oracle:oinstall logs

If you want the user to be able to delete files and not just read them, change the 2755 above to 2775.
You will have to do this in any log folder you want to share.  I would not advise sharing any other folder.  This does include the ADR folders.



Wednesday, June 3, 2015

Fix to issue of not being able to write Lookup group name

There is a new bug 21171801 that I requested to be created.  This bug has likely been with OIM for some time and is in all versions I can find.

The bug was first reported in 2013.  Here is the issue: when attempting to update the group name of a lookup using the API, the write fails.  Very few people read or write lookups using the API.  I have written a scheduled task to back up and restore lookups.

The bug is in the name of the Field lookup called Lookup Definition.Group which translates to LKU_TYPE_GROUP.  If you query the LKU table for the field lookups you will see that each field lookup translates to a table field name in the database.  There is no LKU_TYPE_GROUP in the database, it is called LKU_GROUP.

Field lookups cannot be exported, imported, or modified in the Design Console.  The only fix to this is the following command executed as the OIM schema owner:

SQL> UPDATE LKU SET LKU_FIELD='LKU_GROUP' WHERE 
  2  LKU_TYPE_STRING_KEY='Lookup Definition.Group';
SQL> COMMIT;

I constructed this update query this way so that if someone accidentally forgets the second line, the statement is incomplete and nothing runs.  This change has no effect on imports, exports, or editing of the Lookups, including updating the lookup group name of any lookup.  This translation appears to only be used by the API and does not appear to be used by the Design Console or Nexaweb, both of which are supposedly connected via the EJBs directly to the database.

I will update this blog when a patch for this bug is released.



Wednesday, May 13, 2015

Adding the valueChangeListener to the modify user sandbox

A modify user sandbox will not have the valueChangeListener set properly for the new fields that were added, so the listeners need to be added by hand.  Here is what to do:

Make a copy of the sandbox.
Using 7-Zip, perform an Extract to: the folder name.
Navigate into the folder and notice the folders mdssys, oracle, pageDefs, and templates.
Find the file oracle\iam\ui\runtime\form\view\pages\mdssys\cust\site\site\userModifyForm.jsff.xml
Edit this with Notepad++

Look for fields that are missing this:

Review each UI component and verify it has the attribute

valueChangeListener="#{pageFlowScope.cartDetailStateBean.attributeValueChangedListener}"

You will find that the checkboxes are missing this.  It goes right after the value= attribute.  Some inputText fields may also be missing it.

Look for <af:inputText
Look for <af:selectBooleanCheckbox
Look for <af:selectOneChoice
Look for <af:inputDate
and make sure they all have that attribute.  A before/after sketch follows.
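Purely as an illustration (the id, value binding, and label here are made-up placeholders; keep whatever your file already has), a checkbox without and then with the listener looks roughly like this:

<af:selectBooleanCheckbox id="sbc1" value="#{bindings.ExampleFlag__c.inputValue}" label="Example Flag"/>

<af:selectBooleanCheckbox id="sbc1" value="#{bindings.ExampleFlag__c.inputValue}"
                          valueChangeListener="#{pageFlowScope.cartDetailStateBean.attributeValueChangedListener}"
                          label="Example Flag"/>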

When done, do the following:

In the folder containing the four folders mentioned above, select them all and right-click Send to > Compressed (zipped) folder.  The archive will be called templates.zip; just rename it and then use that sandbox for the procedure.

Go into the identity page and import the new file.  Activate it and then verify functionality before publishing it.  Be sure to sign out and close all windows and tabs of the browser.

Tuesday, April 21, 2015

What's in that Sandbox (UDF)

Opening up a sandbox that was exported after creating a UDF, you should find three folders:

mdssys
persdef
xliffBundles

In the mdssys folder you will find:

sandbox/active_mdsSandboxMetadata.xml, which contains the basic info on the sandbox, including the name you gave it when you created it, plus the date and time, which you should have put into the name of the sandbox, like this:

Client_yyyyMMdd_HHmm

In the persdef folder you will find:

oracle/iam/ui/common/model/user/entity/mdssys/cust/site/site/UserEO.xml.xml which contains the definition of the UserEO object.  Grep on "Attribute Name" to get a list of all of the UserEO elements in the sandbox.  Your new ones should be there.

oracle/iam/ui/common/model/user/view/mdssys/cust/site/site/UserVO.xml.xml  which contains the definition of the UserVO object.  Grep on "ViewAttribute Name" to get a list of all of the UserVO elements in the sandbox.  Your new ones should be there.

In the xliffBundles folder you will find oracle/iam/ui/runtime/BizEditorBundle.xlf which contains the ADF mappings for all of the UI components.  Grep on user.entity.userEO to see all of the UserEO elements.  Your new objects should be there.
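If you want a quick way to do those three checks from the root of the extracted sandbox, something along these lines works (paths exactly as listed above):

grep "Attribute Name" persdef/oracle/iam/ui/common/model/user/entity/mdssys/cust/site/site/UserEO.xml.xml
grep "ViewAttribute Name" persdef/oracle/iam/ui/common/model/user/view/mdssys/cust/site/site/UserVO.xml.xml
grep "user.entity.userEO" xliffBundles/oracle/iam/ui/runtime/BizEditorBundle.xlf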

Tuesday, April 14, 2015

ICF DBAT Connector Trusted Recon

I originally posted this on the Oracle Community site but I thought I would add it here:

--- START OF POST ---

My colleague and I have written a Database Tables ICF connector and this is an update regarding doing multiple trusted recons.

The data is written to a staging table as events rather than what you would normally think of as a summary table (USR is a good example of a summary table). There are about a dozen events. Only one event is valid to trigger a Create User task, carrying with it about 20 of the user's initial data fields, and so I program the Last Name, OIM Organization Name, OIM User Type, and OIM Employee Type into the attribute map when I pass it to the ResultsHandler.

For updates there are update-only recon profiles and update tasks. I originally did not believe that I would need to pass the Last Name, OIM Organization Name, OIM User Type, and OIM Employee Type parameters into the Resource Object since it only does updates. Since it never creates a user (no match = do nothing), there should be no need for these parameters in the Resource Object. But when I ran the recon I got an error: The profile might be corrupt and could possibly cause reconciliation failure:: xxxxx xxxxxx xxxxxx xxxxxx missing mappings for: [ACT_KEY, USR_LAST_NAME, USR_TYPE, USR_EMP_TYPE] and I got an error XL_SP_ReconBlkUsrRqdcValdnMtch while processing batch ID xxx One or more parameters passed as null.

The resolution is this: Any trusted recon must map to these four parameters even if they are not provided in the lookup or the scheduled job. For an update only job you just leave them blank.


--- END OF POST ---

Since I wrote this (in 2013) I have also started putting the three normally fixed values of Organization Name, Xellerate Type, and Role, spelled exactly that way, into the RO, the PD, and the Lookup.XX.UM.ReconAttrMap.Trusted.Defaults lookup, instead of trying to generate them inside the connector.  The lookup entry is only needed for a recon that can do a create.  Otherwise, as stated above, put them into the RO and PD and leave them off of the lookup.  Of course Last Name is normally provided in a lookup, but if not then use the same process: put Last Name into the RO and PD and leave it unmapped for updates.

The names OIM Organization Name, OIM User Type, and OIM Employee Type are not the real attribute names; they were just made up for this post.

Thursday, April 9, 2015

Constructing the Shuffle Algorithm

I recently needed to perform a shuffle operation on a set.  Shuffling seems very easy, but it actually can be difficult and fraught with errors.  Here's some info on how I went from the start to the finish of the development of my shuffle algorithm.

Like shuffling a deck of cards, a shuffle operation does not change the frequency of the values found in the set or list.  The shuffle operation can apply to a set or a list.  Recall that a set consists of only unique values (no repeats), while a list can contain duplicates.  No matter what the source, the algorithm is the same.

The algorithm requires a random number generator (I will use Java):

Random random=new Random(System.currentTimeMillis());
int ranvalue=0;
int ranloc=0;
for(ranloc=0; ranloc<1000; ranloc++) {
  ranvalue=random.nextInt(1000);
}

I always set up the generator as shown above: seed it with the current time, then "spin" it 1000 times before using it.

Here is the basic starting setup and algorithm.
List<String> sourceList = new ArrayList<String>();
// populate sourceList from the source
List<String> shuffleList = new ArrayList<String>();
// add the values from sourceList to shuffleList in a shuffled order

This method:
shuffleList.addAll(sourceList);
would not be a good idea; the data are copied in the same order.

Using this method:

int numSource=sourceList.size();
for(ranloc=0; ranloc<numSource; ranloc++) {
  ranvalue=random.nextInt(numSource);
  shuffleList.add(sourceList.get(ranvalue));
}

This method would randomize the values, but depending on the size of the source list there is a high likelihood of the shuffleList containing the source data in different frequencies, because there is a high probability of the random.nextInt call returning the same value in two calls. This method is flawed.

You could add a list of used values:

int numSource=sourceList.size();
List<Integer> usedLocations=new ArrayList<Integer>();
Integer usedLocation=null;
for(ranloc=0; ranloc<numSource; ranloc++) {
  ranvalue=random.nextInt(numSource);
  usedLocation=new Integer(ranvalue);
  while(usedLocations.contains(usedLocation)) {
    ranvalue=random.nextInt(numSource);
    usedLocation=new Integer(ranvalue);
  }
  shuffleList.add(sourceList.get(ranvalue));
  usedLocations.add(usedLocation);
}

This will work, but it can get into a very long spin of the while loop as the shuffleList fills up, hunting for the few locations that have not been used yet.

The better way is to use a "use and delete" method:

int numSource=sourceList.size();
List<Integer> unusedLocations=new ArrayList<Integer>();
Integer usedLocation=null;
int numLocations=numSource;
for(ranloc=0; ranloc<numSource; ranloc++) {
  usedLocation=new Integer(ranloc);
  unusedLocations.add(usedLocation);
}
for(ranloc=0; ranloc<numSource; ranloc++) {
  ranvalue=random.nextInt(numLocations);
  usedLocation=unusedLocations.get(ranvalue);
  ranvalue=usedLocation.intValue();
  shuffleList.add(sourceList.get(ranvalue));
  unusedLocations.remove(usedLocation);
  numLocations=unusedLocations.size();
}

This method works properly and has a good speed.
Happy Shuffling!

Friday, March 13, 2015

Utility classes available for Process Task Adapters

Some people are familiar with a set of classes that are intended for building Process Task Adapters.  These are the thortech classes and they work fine for a lot of generic functionality.  Since you cannot compile them yourself you may choose to write your own, but they do the job.

You will find them sandwiched between the com.sun and the java.applet packages:

com.thortech.xl.util.adapters.tcUtilBooleanOperations
com.thortech.xl.util.adapters.tcUtilDateOperations
com.thortech.xl.util.adapters.tcUtilHashTableOperations
com.thortech.xl.util.adapters.tcUtilJDBCClass
com.thortech.xl.util.adapters.tcUtilJDBCOperations
com.thortech.xl.util.adapters.tcUtilLDAPController
com.thortech.xl.util.adapters.tcUtilLDAPListener
com.thortech.xl.util.adapters.tcUtilLDAPOrganizationHierarchy
com.thortech.xl.util.adapters.tcUtilMathOperations
com.thortech.xl.util.adapters.tcUtilNumberOperations
com.thortech.xl.util.adapters.tcUtilPSTools
com.thortech.xl.util.adapters.tcUtilStringOperations
com.thortech.xl.util.adapters.tcUtilXellerateOperations

There's a lot to love so check them out.  I use the tcUtilStringOperations the most.

Tuesday, March 3, 2015

Plugins folder and JavaTasks folder - danger

This post is about the plugins and JavaTasks folders in OIM 11gR2 and later.

Most people know that you can easily deploy plugins to the plugins folder found in the $OIM_HOME folder.  It is a very simple way to be able to quickly test changes to your event handlers and scheduled tasks.

Some people know about the JavaTasks folder, which does not exist OOTB, but if it is created in the $OIM_HOME folder, any jar files placed in it are picked up just as if they had been uploaded.  It still normally requires a restart, but some people don't like the DeleteJars and UploadJars process.

So there is a downside to using these.  You cannot export Scheduled Jobs that were built with plugins folder based Scheduled Task plugins.  They don't show up in the export list.  And even if you register the plugin, when you export, the Job won't import.  I suspect similar issues will occur with PTA's built from the JavaTasks folder.

If you have built scheduled jobs from a plugins folder plugin, do this:

1) Screenshot the scheduled job so you remember what you had in it.  If any field goes past the end of the editor, just open the screen shot with paint and type in a text box with the data.  Save the screenshot.
2) Delete all scheduled jobs that were built from this plugin.
3) Remove the plugin from the plugins folder.  I like to just mv them to the plugin_utility folder.
4) Watch the oim_serverX-diagnostic.log for the line indicating the plugin has been removed from the cache.
5) Register the plugin
6) Create the scheduled jobs new.
7) Now you can export and import the jobs.  Make sure the scheduled task is also registered on the downstream environment.

One more thing:

Never use UpdateJars.sh - even if the file has the same name it sometimes fails to update the jar.  Always use DeleteJars.sh, then check the OIMHOME_JARS table, then use UploadJars.sh, and then check the table again.

Peace be with you.

Wednesday, February 18, 2015

OIM 11gR2 Connector Server logging mojo

The OOTB ConnectorServer.exe.Config file contains the following <listeners> tag:

<listeners>
  <remove name="Default" />
  <add name="myListener"
       type="System.Diagnostics.TextWriterTraceListener"
       initializeData="c:\connectorserver.log"
       traceOutputOptions="DateTime">
    <filter type="System.Diagnostics.EventTypeFilter"
            initializeData="Information"/>
  </add>
</listeners>

This produces a single file on the c: drive.  This file can, and does, grow with no way to roll it or otherwise start it over, except to stop the connector server, delete the file, and then restart the connector server.

Instead of the TextWriterTraceListener, another listener can be chosen.

Here is the other option:

<listeners>
  <remove name="Default" />
  <add name="FileLog"
       type="Microsoft.VisualBasic.Logging.FileLogTraceListener,Microsoft.VisualBasic,Version=8.0.0.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
       initializeData="FileLogWriter"
       traceOutputOptions="DateTime"
       BaseFileName="ConnectorServer"
       Location="Custom"
       MaxFileSize="600000000"
       CustomLocation="D:\Identity Connectors\Logs\"
       LogFileCreationSchedule="Daily">
    <filter type="System.Diagnostics.EventTypeFilter" initializeData="Information" />
  </add>
</listeners>


You will need to find a way to clean up the log files with an external process.  The FileLogTraceListener does not have any options for deleting logs.  See these links:

FileLogTraceListener
TraceOutputOptions Values

  

Getting java.util.logging to work with JUnit

In my development I normally create JUnit test cases, but I had trouble getting the java.util.logging statements to produce output in the code I was testing.  I discovered a couple of articles on the web and pieced them together into a solution.

As a reference, here is how I normally implement logging, with my OIM Flat File Connector XMLParser as the class:

import java.io.File;
import java.util.logging.*;

public class FlatFileXMLParser implements FlatFileParser {
    private static final Logger logger =
        Logger.getLogger(FlatFileXMLParser.class.getName());

    public void parse(File flatFile, FlatFileRecordHandler recordHandler,
                      ParserConfig config) throws Exception {
        String methodName="parse";
        logger.logp(Level.FINE, getClass().getName(), methodName,
            "FF-XMLP-001 entering");
        // ... rest of the parse method ...
    }
}
Some explanation:
1) I use java.util.logging and never log4j.
2) I use logp and never anything else.  One statement = commonality.
3) I define String methodName to provide the method name in every module.
4) I add tags so that I can grep on them.  Each statement gets a tag.
5) Increment the tag numbers within a method and skip to the next 100 for the next method.
6) Debug statements go at Level.FINE; use good judgement.

When I attempted to test these code modules I found that the logging was not being generated.  I found a great writeup and took most of this from it.  What I did was put the following into the JUnit class, not in the functional class:

public class AppTest extends TestCase {
  static {
    Logger rootLogger = Logger.getLogger("");
    System.setProperty("java.util.logging.SimpleFormatter.format",
        "[%1$tF %1$tr] [%2$s] %4$s: %5$s %n");
    for(Handler handler : rootLogger.getHandlers()) {
      handler.setLevel(Level.FINEST);
      handler.setFormatter(new SimpleFormatter());
    }
    rootLogger.setLevel(Level.FINEST);
  }


After this come the constructor and the test methods.  The logger will write the details to the console output and you can track your code.
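With that format string, the logp call from the parser example above produces a line that looks roughly like this (the timestamp and package name are made up):

[2015-02-18 09:15:02 AM] [com.example.FlatFileXMLParser parse] FINE: FF-XMLP-001 entering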

Good luck testing !!