TDIing out loud, ok SDIing as well

Ramblings on the paradigm-shift that is TDI.

Thursday, November 7, 2013

Connector Loops and why you're gonna love them

I'm going to use an example where my AL needs to read User accounts and check if this user is a manager. If so, then it needs to search for people with the current user as manager. For each person found it will need to look up the HR information from another system.

Traditionally this would require an Iterator in the Feed section to do the initial reading of User accounts. Then in the Data Flow would be an IF-Branch to filter out managers. Under this Branch comes a Connector in Lookup mode to find the people who report to this manager. For each person returned by the search, another Lookup is required to retrieve info from the HR system. The traditional approach requires a good deal of Hook scripting - both to deal with the various lookup results (none found, and multiple found), and to handle the final search in the HR system.

Connector Loops make this much easier. Here is my AL implementing the logic described above:




Let's start with the innermost Connector Loop ('Get HR…') which is used instead of a stand-alone Connector in Lookup mode. As a result, I don't have to script (or even enable) the On No Match and On Multiple Found Hooks. Using a Connector Loop in Lookup mode still gives me the option to code these Hooks if I choose, but even if I leave them disabled the Loop will cycle once for each entry found. I may want to ensure that one and only one HR record was found for a managed person, but this is my decision based on the quality of the data I'm reading and the requirements of my solution.

The second point I wanted to make was that I set both Connector Loops (the 'FOR-EACH' components in the screenshot above) to Iterator mode instead of Lookup. Both modes do the same job - searching for data based on some filter - but they do it in different ways. One major difference is that Lookup mode caches the entries read, up to the number you set in the More... > Lookup Limit setting, while Iterator mode leaves the result set on the server and retrieves entries one at a time for each Get Next operation. Another important difference is that Lookup mode uses Link Criteria to control its search filter, whereas Iterator mode does its selection based on parameter settings. For example, an LDAP Connector uses its Search Filter parameter, whereas a JDBC Connector will use the SQL Select parameter if it is set, and otherwise construct a 'SELECT *' statement based on the Table Name parameter.

This last point leads me to another handy option for Connector Loops: the Connector Parameters tab.


This feature lets you use standard attribute mapping techniques to set the value of one or more parameters. In the above example I map the ldapSearchFilter parameter to control which entries are selected by the Loop.

Fortunately, the Search Filter is refreshed from the parameter settings whenever entries are selected, as they are at the top of this Loop for each AL cycle. However, most parameters are only read from the Connection tab once, when the Connector initializes. If I needed to set a parameter that is not refreshed for selection - for example the File Path of a File Connector, or connecting to a different database server - then I would need to configure the Loop to re-initialize as well as perform the selection.
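For reference, the assignment behind such a parameter mapping is ordinary attribute-map script. A sketch of what the ldapSearchFilter mapping might contain is shown below (TDI script, run by the AL's script engine; the manager attribute used to build the filter is hypothetical):

```javascript
// Advanced mapping script for the ldapSearchFilter parameter:
// select everyone whose manager is the current user.
// ret.value is how an advanced attribute map returns its result.
ret.value = "(manager=" + work.getString("$dn") + ")";
```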


The next thing I'd like to talk about is how you exit from Loops, as well as continuing to the next cycle of a Loop.

To leave a Loop you make a system.exitBranch("Loop") call in script, documented here: exitBranch. There are two variants of the exitBranch() call: one with no arguments that exits the current (innermost) Loop or Branch, and one with a String argument. If you use the predefined "Loop" argument then it exits the innermost Loop, skipping out of any Branches that might be closer in the AL tree. As the docs state, you can also exit a named Branch or Loop, as well as exiting the Data Flow section or even the entire AL.

On the other hand, if you want to continue to the next cycle of a Loop then you use the system.continueLoop() call, described here: continueLoop. Again you have two variants: one with no arguments that continues the innermost Loop, and one that lets you name the Loop to continue to.
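Putting these together, a Hook inside a Loop might look something like this (TDI script; the status attribute and the conditions tested are hypothetical, just to show the calls in context):

```javascript
// Inside a Hook of the inner Loop - system and work are
// provided by the TDI runtime
if (work.getString("status") == "inactive") {
   // skip this entry and continue with the next one found
   system.continueLoop();
}
if (work.getString("status") == "terminated") {
   // abandon the innermost Loop entirely
   system.exitBranch("Loop");
}
```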

Finally, note that the built-in looping performed by an AL - e.g. the Feed section looping iterated data into the Data Flow section - automatically empties out the Work Entry at the top of each cycle. A Connector Loop does not do this for you. If you want to handle this yourself then putting this snippet of code in the Before GetNext Hook of the Connector Loop (in Iterator mode) will do the trick:

work.removeAllAttributes();

You could use the same code in a Hook like Before Lookup for Lookup mode.

Furthermore, I often save off the work Entry prior to my Connector Loop, like this:

saveEntry = system.newEntry();
saveEntry.merge(work);

Then after the Loop completes, I can empty out the Work Entry once more and then merge back in the saveEntry Attributes.
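In other words, the full save-and-restore pattern looks like this (TDI script, combining the snippets above):

```javascript
// Before the Connector Loop: snapshot the Work Entry
saveEntry = system.newEntry();
saveEntry.merge(work);

// ... the Loop runs, freely overwriting work ...

// After the Loop: restore the snapshot
work.removeAllAttributes();
work.merge(saveEntry);
```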

And if you are still confused, please leave me comments and questions and I'll do my best to answer them.





Friday, November 1, 2013

My humble insights on TDI and source management

I've gotten this question a lot recently and so thought to share my thinking on the subject.

We here in the TDI team use RTC for shared projects. Due to technical difficulties getting TDI and RTC to run in the same Eclipse environment, I run them separately: TDI in my Windows image (Parallels) and RTC under OS X.

First step is to set up a workspace for RTC, which I call imaginatively enough 'workspace_RTC'. Here I copy in the project that I created locally - at least for projects where I do the initial development.

From RTC I import the project that was copied to the workspace_RTC directory. Only the AssemblyLines and Resources folders (and contents) are tagged as included in source management. Furthermore, under Resource > Properties you will only want to include the actual Property Stores designed for this solution.

Once this is done you can either delete the local copy of the project, or rename it. Then import the project in the workspace_RTC folder via the Existing Projects into Workspace option. Select the project folder and DO NOT select the option to Copy projects into workspace. That way the CE project is linked to the one in RTC.

Alongside the Project folder in workspace_RTC we usually have a second top-level folder where support files are kept and managed separately - i.e. solution deliverables. For example, property files, external scripts and stuff like externalized attribute maps. This folder also holds the 'compiled' Project Config xml. As you probably already know, every time you make a change and then Save or Run/Debug an AL in the CE, the config XML file in the Runtime-<project name> folder under your TDI workspace is updated. That's why we do NOT include the Runtime assets in the main RTC project. Otherwise all team members will constantly be (partially) updating this resource.

Instead, one person is given the task of refreshing his local project and then creating the compiled config XML file, which is then placed in the secondary folder. If this is my job, then I use the Project > Properties setting in my CE to Link to this file. That way, whenever I deliver changes, I also update the complete config XML.

After this, my day-to-day life consists of first retrieving any changes from RTC and then refreshing the project in my CE. After that I work on those items that are my responsibility and then deliver these to RTC. If others will be working on the same source files, you'll need to lock these resources until you're done with them.

Of course, if you are using one of the source management systems that can be plugged into a single Eclipse instance alongside TDI, life is a bit easier. Then you can use the Team option in the context menu for resources directly from the CE.

Wednesday, October 9, 2013

Dynamically changing Attribute Maps

One way to handle multiple Attribute Maps (either Input or Output) for a Connector during AL cycling is to do the mapping yourself via script. However, there are alternatives.

The easiest way to dynamically change a map is by defining multiple Attribute Maps under Resources in your Project and then swapping between these, for example in the Before Execute Hook of the Connector.

// first decide which map to use
// for example, "AttributeMaps/ComputerSystemMap"
// or just "ComputerSystemMap"
mapToUse = computeMapName(work)

if (mapToUse != null) {
   try {
      // true for Input Map, or false for Output
      thisConnector.useAttributeMap(mapToUse,true)
      return
   } catch (exc) {
      // report that the map is missing or invalid
   }
}


Another way is to use an externalized map file, which is a text file containing any number of lines. Each line represents a single mapping rule.


Mapping: sn=

Description: When the assignment is empty, simple mapping is used. In this example the attribute named 'sn' gets its value(s) from a like-named attribute in the source Entry. For an Input Map this would be the 'sn' attribute of the conn Entry. For all other map types (Output Maps or Attribute Map components) the source will be the work Entry.

Mapping: status='Updated'

Description: Anything entered after the equal sign is considered the assignment of the mapping rule. It must be a snippet of Javascript, unless the Text with Substitution flag {S}, described later in this post, is used. You can also reference variables or call scripted functions that are defined by the time the map rule is invoked. In this example the 'status' attribute gets the value specified by the Javascript snippet, resulting in the literal string value 'Updated'.

Mapping:

givenName=[
   first = work.FirstName
   last = work.LastName
   return first + " " + last
]

Description: Using square brackets as shown allows the Javascript assignment to stretch over multiple lines. In this example a full name value is returned by concatenating the FirstName and LastName attributes of the Work Entry.

In addition, flags can be specified to control the behavior of a mapping rule. These flags must appear in curly braces immediately after the name of the attribute being mapped. Valid flags are: A, M and S.

The A and M flags correspond to the Add and Mod columns in the Output map of a Connector in Update mode, and control whether this attribute is mapped during Add and Modify operations. The S flag denotes that the assignment uses the TDI Text with Substitution feature, which allows for literal text values that can include one or more substitution tokens.


Mapping: objectClass{A}=

Description: The 'A' and 'M' flags control whether this attribute is enabled for the Add and Modify operations of a Connector in Update mode. By default, an attribute is included for both Add and Modify operations unless only one flag is specified: A or M. The A flag in this example specifies that the 'objectClass' attribute should only be mapped for Add operations.

Mapping: mail{S}={work.uid}@acme.com

Description: The 'S' flag indicates that the mapping rule uses Text with Substitution. As a result, any curly braces found in the assignment are replaced with the value of the substitution token specified in the braces. In this example the value of the uid attribute in the Work Entry is substituted for the token {work.uid} and the literal string '@acme.com' is appended to it. The resulting string is returned as the value for 'mail'.

Note that multiple mapping flags can be combined by separating them with commas. For example: {M,S}.
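To make the substitution behavior concrete, here is a minimal sketch of it in plain JavaScript - runnable outside TDI. The token syntax {work.uid} is taken from the examples above; the scope object standing in for the Work Entry is hypothetical:

```javascript
// Minimal sketch of Text with Substitution: replace each {token}
// with the corresponding value found by walking a lookup object.
function substitute(template, scope) {
  return template.replace(/\{([^}]+)\}/g, function (match, token) {
    // token like "work.uid" -> follow the dotted path through scope
    var value = token.split(".").reduce(function (obj, key) {
      return obj == null ? undefined : obj[key];
    }, scope);
    // leave unknown tokens untouched
    return value == null ? match : String(value);
  });
}

// Hypothetical stand-in for the TDI Work Entry
var scope = { work: { uid: "jdoe" } };
console.log(substitute("{work.uid}@acme.com", scope)); // jdoe@acme.com
```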

Note also that in order for externalized map files to work correctly, you must apply Fixpack 3 to your TDI 7.1.1 installation.

By externalizing your map to a file, you can easily change the format of the file, or allow users of your solution to do so without having to fire up the CE. You can also swap which map file to use with the same code snippet shown above - just make sure that the mapToUse variable contains the path to a valid file on disk.
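If you ever need to process such files yourself, the single-line rule format described above is easy to parse. Here is a sketch in plain JavaScript (my own illustration, not TDI's parser; multi-line bracket rules are out of scope):

```javascript
// Sketch: parse one externalized map rule of the form
//   name{FLAGS}=assignment
// where the {FLAGS} part and the assignment are both optional.
// Multi-line [ ... ] assignments are not handled here.
function parseMapRule(line) {
  var m = line.match(/^([^{=]+)(?:\{([^}]*)\})?=(.*)$/);
  if (!m) return null;
  return {
    attribute: m[1].trim(),
    flags: m[2] ? m[2].split(",").map(function (f) { return f.trim(); }) : [],
    assignment: m[3] // empty string means simple (like-named) mapping
  };
}

console.log(parseMapRule("mail{S}={work.uid}@acme.com"));
// { attribute: 'mail', flags: ['S'], assignment: '{work.uid}@acme.com' }
```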


Monday, September 30, 2013

Using a Connector Loop to replace Connectors in Lookup Mode - and Why

Although our motto has been 'make no assumptions', TDI makes an assumption about Connectors in Lookup mode: one and only one entry will be matched successfully. Period. If none are found, you have to script (or at least enable) the On No Match Hook. If multiple are found, then the On Multiple Found Hook must be dealt with. And if you want to perform some operation repeatedly for all entries in the result set, you have to script your way through connector functions like getNextDuplicateEntry(). And you have to make sure the Lookup Limit is set high enough, since TDI will buffer this many entries in memory.



This gets simpler if you turn your Connector into a Connector Loop. In other words, add a Connector Loop and attach your Lookup Connector to it.

A Connector Loop will not cycle at all if the search returns no entries. Otherwise it cycles once for each entry found. You control what is found each time through the Link Criteria tab of the Loop. The Loop also gives you the option to Initialize each time it starts up in an AL cycle, performing the Lookup as well. Or you can set the Loop to just perform the Lookup; this latter option can save time by avoiding having to set up time-consuming connections each cycle.


You can still code the On No Match and On Multiple Found Hooks if you want, but these are not mandatory for a Lookup Loop.

The Loop supports both Lookup and Iterator modes if the attached Connector supports them. In order to perform specific actions on each entry returned (like deleting them) you need to add another Connector under the Loop and set the Link Criteria to search for a unique attribute - which should be one of those returned by the Loop Input Map.

The observant reader will have noted that the Lookup Limit setting is still an issue. You solve this by changing the Mode of the Loop from Lookup mode to Iterator, since Iterator mode does not buffer entries. Then you control the search by setting the relevant search filter parameter of the Connector via the Loop's Connector Parameters tab. So for LDAP you would map the ldapSearchFilter parameter like you would any attribute, for example using Javascript.


Or you could do this using literal Text with Substitution: uid={work.EmpId}.




For JDBC it could look like this:


Note that the names that show up in the Schema area of the Connector Parameters map are the internal names of the parameters. You see this if you click on the label for a parameter, or the pencil button out to the right of it.


Normally, a Connector gets the values from its Connection Form only when it initializes. After that point, changing a parameter value (which just changes the configuration) will have no effect. However, some Connectors refresh the value for their search criteria parameter, for example the LDAP and JDBC Connectors. As a result, you do not need to tell the Connector to re-initialize, but merely to perform the Select each time.

For those parameters that do require a re-initialization to pick up changes, just tell the Loop to Initialize each time.

It's as easy as spittin'.

Printing the stacktrace of an exception to the log

When you code your error Hooks then you generally don't get the exception stacktrace printed, as you do when the error halts your AL. Here is some handy code to do this yourself:

// Error Hook code - the error Entry will be available
var sw = new java.io.StringWriter();
var pw = new java.io.PrintWriter(sw);
error.exception.printStackTrace(pw);
pw.close();
task.logmsg("Stacktrace: \n" + sw)


As usual, the secret is in the Java.
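For comparison, plain JavaScript exceptions carry their own stack trace, so a Node.js analogue of the Hook above needs no Java at all (this is my own illustration, not TDI code - the risky function is hypothetical, and console.log stands in for task.logmsg):

```javascript
// Sketch: capture and log an exception's stack trace in plain JavaScript.
function risky() {
  throw new Error("something went wrong");
}

try {
  risky();
} catch (exc) {
  // exc.stack is the JavaScript counterpart of printStackTrace()
  console.log("Stacktrace:\n" + exc.stack);
}
```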

Friday, September 13, 2013

JSON and XML Tutorial - Part 1

If you're working with cloud services, or probably any kind of services, you're most likely working with JSON, XML (e.g. SOAP web services) or both. Although TDI provides specific components for handling web services, there is an easier approach to deal with both formats quickly and flexibly - and in the exact same way - if you're ready to do a little scripting. This is a quick tutorial to get you started with TDI's hierarchical Entry features - or hEntry for short.

As an experienced TDIer, you know that data in the system is carried around in Entry objects. Each Entry holds zero or more named Attributes. Each Attribute contains zero or more values. Each value is a specific Java object used to represent the actual data content of that Attribute. We'll call this a 'flat Entry', since it represents a one-dimensional list of Attributes - like a directory entry or a row in a database table.

As of version 7, the Entry has a hierarchical mode to represent a tree of data. hEntry mode is enabled automatically when you create an Entry based on hierarchical data, for example parsing XML or JSON. You can also explicitly call the enableDOM() method of an Entry, which makes the DOM Interface functionality active and available. Or you can just add hierarchical attributes to it.

hentry = system.newEntry() // hentry is still flat
hentry.root = null // now it has a single attribute
hentry.root.branch = null // it's become hierarchical now
hentry.root.branch.leaf = "Green" // add a node with a text value

The above script creates a tree of data in the form of an hEntry. You can get the XML representation like this:

task.logmsg(hentry.toXML())

Here is the output:

<root>
  <branch>
    <leaf>Green</leaf>
  </branch>
</root>

And produce the JSON version like this:

task.logmsg(hentry.toJSON())

Which gives you the following:

{"root":{"branch":{"leaf":"Green"}}}

At this point it might help to know a little about how DOM (the XML Document Object Model) works. Here is where I learned about it: http://www.w3schools.com/dom/

In short, the DOM Model describes how an XML document is organized into a tree structure, and the DOM Interface provides methods for reading, searching and manipulating the leaves and branches of this tree. JSON is another way of describing hierarchical data. Once you have converted hierarchical data (either XML or JSON) into an hEntry then you use the DOM Interface methods to work with the data.

The Entry object provides a couple of static methods for turning hierarchical data into an hEntry: fromJSON() and fromXML(). Since they are static methods, you can call them either on an existing Entry object - e.g. work.fromJSON() - or on the Class itself: com.ibm.di.entry.Entry.fromJSON(). Since you also have methods for turning an hEntry into an XML or JSON representation, this makes JSON to XML conversion as simple as:

xmlString = work.fromJSON(jsonString).toXML()

And note that the above snippet will not change the contents of work. We're just making use of the static methods.
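If you want a feel for what toXML() is doing under the covers, here is a rough sketch of the same JSON-to-XML idea in plain JavaScript (my own simplification - it ignores attributes, arrays, and escaping, which the real hEntry handles):

```javascript
// Sketch: convert a nested JS object (parsed JSON) into a simple
// XML string, roughly mirroring the hEntry example output above.
function toXML(obj) {
  return Object.keys(obj).map(function (tag) {
    var value = obj[tag];
    // recurse into nested objects, otherwise emit the text value
    var inner = (value !== null && typeof value === "object")
      ? toXML(value)
      : String(value);
    return "<" + tag + ">" + inner + "</" + tag + ">";
  }).join("");
}

var jsonString = '{"root":{"branch":{"leaf":"Green"}}}';
console.log(toXML(JSON.parse(jsonString)));
// <root><branch><leaf>Green</leaf></branch></root>
```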

So much for the theory. You can easily play with this using the Javascript View at the bottom of your TDI Workbench, or by firing up the Debugger. Stay tuned for the next part of this discussion.

Wednesday, June 12, 2013

Shortcuts to faster workflow

I'm an incremental developer - a habit formed by over a decade of TDI, as well as cutting my programming teeth on Turbo Pascal back in the day. To speed things up I have a number of keyboard shortcuts defined in TDI.

The main ones are:

  • Ctrl-R to run the current AL
  • Ctrl-D to debug it
  • Ctrl-DownArrow for Step Over
  • Ctrl-RightArrow for Step Into
  • Ctrl-Shift-DownArrow for Continue (to next breakpoint)
It's easy to set up. First click on the Window menu and select Preferences. Then type keys in the filter field and select 'Keys' from the filtered list.


This gives you the keyboard shortcut editor. Now type assemblyline in the search field at the top.


Choose Debug AssemblyLine from the list, making sure it's the one for TDI, and add the shortcut. Be sure to select In Dialogs and Windows. Now do the same for Run AssemblyLine.

Now change the filter to step and choose Step Into and Step Over (again, making sure they're for TDI operations). Finally filter on continue and bind that command.

After you press OK at the bottom of the editor dialog you are ready to quickly launch, debug and step your way to new solutions!