Traditionally this would require an Iterator in the Feed section to do the initial reading of User accounts. Then in the Data Flow would come an IF-Branch to filter out managers. Under this Branch comes a Connector in Lookup mode to find the people who report to this manager. For each person returned by the search, another Lookup is required to retrieve info from the HR system. The traditional approach requires a good deal of Hook scripting, both to deal with the various lookup results (none found, and multiple found) and to handle the final search in the HR system.
Connector Loops make this much easier. Here is my AL to implement the logic described above:
Let's start with the innermost Connector Loop ('Get HR…'), which is used instead of a stand-alone Connector in Lookup mode. As a result, I don't have to script (or even enable) the On No Match and On Multiple Found Hooks. Using a Connector Loop in Lookup mode still gives me the option to code these Hooks if I choose, but even if I leave them disabled, the Loop will cycle once for each entry found. I may want to ensure that one and only one HR record was found for a managed person, but that is my decision, based on the quality of the data I'm reading and the requirements of my solution.
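If you do decide to enforce the one-and-only-one rule, the check itself is simple. This is just a sketch in plain JavaScript (not the TDI API): `requireSingleMatch` and its arguments are hypothetical names, and `entries` stands in for whatever result set your Lookup produced.

```javascript
// Sketch (not TDI API): the kind of check you might script in the Loop's
// Hooks if exactly one HR record per managed person is required.
function requireSingleMatch(entries, personName) {
  if (entries.length === 0) {
    throw new Error("No HR record found for " + personName);
  }
  if (entries.length > 1) {
    throw new Error(entries.length + " HR records found for " + personName);
  }
  return entries[0];
}
```

Whether a zero or multiple result is an error, a warning, or business as usual depends entirely on your data and your solution.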
The second point I wanted to make is that I set both Connector Loops (the 'FOR-EACH' components in the screenshot above) to Iterator mode instead of Lookup. Both modes do the same job, searching for data based on some filter, but they do it in different ways. One major difference is that Lookup mode caches the entries read, up to the number you set in the More... > Lookup Limit setting, while Iterator mode leaves the result set on the server and retrieves one entry at a time for each Get Next operation. Another important difference is that Lookup mode uses Link Criteria to control its search filter, whereas Iterator mode does its selection based on parameter settings. For example, an LDAP Connector uses its Search Filter parameter, whereas a JDBC Connector will use the SQL Select parameter if it is set, and otherwise construct a 'SELECT *' statement based on the Table Name parameter.
This last point leads me to another handy option for Connector Loops: the Connector Parameters tab.
This feature lets you use standard attribute mapping techniques to set the value of one or more parameters. In the example above, I map ldapSearchFilter to control which entries are selected by the Loop.
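The value you map in is just an LDAP filter string, typically built from Attributes in the Work Entry. Here is a minimal sketch of the kind of expression you might put in that attribute map, written as a stand-alone function so it runs outside TDI; the attribute name `managerDN` is a hypothetical example, not something from my AL.

```javascript
// Sketch: the sort of value you might compute in the Connector Parameters
// attribute map for ldapSearchFilter. In a real TDI map you would read the
// DN from the `work` Entry; here it is passed in as a plain argument.
function buildSearchFilter(managerDN) {
  return "(&(objectClass=person)(manager=" + managerDN + "))";
}
```

Because the map is re-evaluated each time the Loop selects entries, the filter tracks whichever manager the current AL cycle is processing.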
Fortunately, the Search Filter is refreshed from the parameter settings whenever entries are selected, as they are at the top of this Loop for each AL cycle. However, most parameters are only updated with the values in the Connection tab once, when the Connector initializes. If I needed to set a parameter that is not refreshed at selection time, for example the File Path of a File Connector, or to connect to a different database server, then I would need to configure the Loop to re-initialize as well as perform the selection.
The next thing I'd like to talk about is how you exit from Loops, as well as continuing to the next cycle of a Loop.
To leave a Loop, you make a system.exitBranch("Loop") call in script, documented here: exitBranch. There are two variants of the exitBranch() call: one with no arguments, which exits the current (innermost) Loop or Branch, and one with a String argument. If you use the predefined "Loop" argument, then it exits the innermost Loop, skipping out of any Branches that might be closer in the AL tree. As the docs state, you can also exit a named Branch or Loop, as well as exiting the Data Flow section or even the entire AL.
On the other hand, if you want to continue to the next cycle of a Loop, then you use the system.continueLoop() call, described here: continueLoop. Again you have two variants: one with no arguments that continues the innermost Loop, and one that lets you name the Loop to continue.
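If you are used to plain JavaScript, these calls behave much like labeled break and continue on nested loops. The sketch below is only an analogy, not the TDI API; the label name "outer" is mine.

```javascript
// Plain-JavaScript analogy (not TDI API): system.exitBranch("Loop") acts
// like a labeled break out of the enclosing loop, and system.continueLoop()
// like a labeled continue to its next cycle.
var visited = [];
outer:
for (var i = 0; i < 3; i++) {
  for (var j = 0; j < 3; j++) {
    if (j === 2) continue outer; // like system.continueLoop("outer")
    if (i === 2) break outer;    // like system.exitBranch("outer")
    visited.push(i + ":" + j);
  }
}
// visited is ["0:0", "0:1", "1:0", "1:1"]
```

Note how the break and continue skip straight past the inner loop, just as exitBranch("Loop") skips out of any Branches closer in the AL tree.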
Finally, note that the built-in looping performed by an AL - e.g. the Feed section looping iterated data into the Data Flow section - automatically empties out the Work Entry at the top of each cycle. A Connector Loop does not do this for you. If you want to handle this yourself then putting this snippet of code in the Before GetNext Hook of the Connector Loop (in Iterator mode) will do the trick:
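The Hook code amounts to a single call on the work Entry; I am assuming the Entry method removeAllAttributes() here. Inside a TDI Hook the server provides the `work` object for you, so the stub below exists only to let the snippet run stand-alone.

```javascript
// Stand-in for the Work Entry; in a real TDI Hook `work` is provided by
// the server and this stub is not needed.
var work = {
  attributes: { cn: "Ann", mail: "ann@example.com" },
  removeAllAttributes: function () { this.attributes = {}; }
};

// Before GetNext Hook: empty the Work Entry before the next entry is read.
work.removeAllAttributes();
```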
You could use the same code in a Hook like Before Lookup for Lookup mode.
Furthermore, I often save off the Work Entry prior to my Connector Loop, like this:
saveEntry = system.newEntry();
Then after the Loop completes, I can empty out the Work Entry once more and then merge back in the saveEntry Attributes.
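The whole save/clear/restore flow can be sketched with plain JavaScript objects. The TDI Entry API is different (system.newEntry() and friends), so this stand-alone version only illustrates the pattern: copy the Work Entry's Attributes aside before the Loop, empty the Work Entry afterwards, then merge the saved Attributes back in. The attribute names are hypothetical.

```javascript
// Sketch of the save/restore pattern using plain objects (not the TDI Entry API).
function copyAttributes(entry) {
  var copy = {};
  Object.keys(entry).forEach(function (name) { copy[name] = entry[name]; });
  return copy;
}

var work = { uid: "ann", title: "Manager" }; // stand-in for the Work Entry

var saveEntry = copyAttributes(work);        // before the Connector Loop

work.hrRecord = "from-hr-system";            // the Loop adds Attributes to work

// After the Loop: empty the Work Entry, then merge saveEntry back in.
Object.keys(work).forEach(function (name) { delete work[name]; });
Object.keys(saveEntry).forEach(function (name) { work[name] = saveEntry[name]; });
```

The net effect is that whatever the Loop scribbled into the Work Entry is discarded, and the Attributes you started with are restored.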
And if you are still confused, please leave me comments and questions and I'll do my best to answer them.
This might be a little off topic, but we are currently attempting the following:
We run TDI 7.1.1 and built an AssemblyLine to sync IBM Domino data with Active Directory. Everything is working OK, except that we cannot update the "Manager" field in AD with the value of Manager in Domino.
I believe this is due to Active Directory wanting the complete CN of the AD user/manager.
My question is, how can I get this to sync? I assume that I must make use of some Java and a Before Execute Hook to look up / modify the value, but I am not sure how / what to do.
So you need the full DN of the User entry in AD for the manager, which means you will either need to convert this user's entry from Domino, or search for it in AD to find the $dn you need. If you would like to continue this discussion, then I suggest we do so in the TDI group: https://groups.google.com/forum/#!forum/ibm.software.network.directory-integrator
See you in the forum :)
I have to develop an AL which will iterate over an LDIF file containing 300,000 user entries and check whether each user is present in LDAP. I need to add entries to a CSV file based on the search result: if the user is present in LDAP, add the entry to the CSV file as an active user; otherwise add it as a departed user. What is the best way to implement this solution? I have used a Connector Loop to read the LDIF file entries, added if/else conditions, and used a CSV File Connector for writing the data report, but the AL takes 2 hours to run. The requirement is that it should take at most 30 minutes to process the 300,000 entries and generate the report. Could you please suggest how I can optimize this solution for a shorter execution time?
Please let's continue this conversation in the forum: https://groups.google.com/forum/#!forum/ibm.software.network.directory-integrator