TDIing out loud, ok SDIing as well

Ramblings on the paradigm-shift that is TDI.

Tuesday, September 25, 2018

Thinking about agility

My favorite agile tool is TDI (and please forgive an old man for having trouble shifting to 'SDI'). Not only can I whip together integration service prototypes faster than anything I can do in Java (Spring), Python or Turbo Pascal, but I can also build Ops features this way: monitoring and remote control (pause, restart, failover/failback) for enabling auto-healing, or at least making the solution cheaper to own. As you know, the cost of building a service can quickly be dwarfed by the cost of owning that service over time.

This past year as a GBS consultant has made DevOps a focus of my job, and I have been applying modern tools and techniques to TDI. This includes using git for source management, which ties nicely into CI/CD pipelines. Jason of Adventures in TDI and TDI support fame helped me install EGit in my SDI CE on both Windows and Linux. Git was created by Linus Torvalds, who also gave us Linux, so it's really easy, fast and a de facto standard.

Of course, to build a DevOps pipeline I needed a 'TDI compiler' to turn the TDI workspace Project folder structure and files into the one big config XML file that the TDI Server needs. So I wrote a Python script for this. You run it from a command line like this:

    python configify.py -p <Project path>

With these optional parameters:

     -h            Help & usage instructions
     -v            Display version
     -c <path>     Write the config to the file specified by <path>
     -n <name>     Set the Solution Instance name (web API) to <name>
     -o            If present, overwrite the resulting config file

You can grab the script by installing the git command line and running the following in the directory where you want the files downloaded:

       git clone https://github.com/eddiehartman/configify.git
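To give a feel for what such a 'compiler' does, here is a minimal sketch in Python. It is not the actual configify.py internals — the folder-walking logic is simplified, and while MetamergeConfig is the familiar TDI config root element, the exact structure the Server expects is richer than this:

```python
# Sketch only: gather component XML files from a TDI/SDI project folder
# and wrap them into a single config document. The real script does much
# more (ordering, solution naming, validation); this just shows the idea.
import os
import xml.etree.ElementTree as ET

def build_config(project_path):
    # Hypothetical root element; the real config carries attributes too.
    root = ET.Element("MetamergeConfig")
    for dirpath, _dirnames, filenames in os.walk(project_path):
        for name in sorted(filenames):
            if name.endswith(".xml"):
                # Each workspace file becomes a child of the big config.
                component = ET.parse(os.path.join(dirpath, name)).getroot()
                root.append(component)
    return ET.tostring(root, encoding="unicode")
```

The real script also takes care of the Solution Instance name and overwrite handling described above; this sketch only covers the merge step.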

Once this was working I focused on Dockerizing a TDI solution. This was pretty simple to do: I first made a default TDI (SDI) base image, and now I can roll new Dockerfiles to create images based on specific solutions (configs). Then I pushed this up to a Kubernetes cluster and tested it. Pretty simple, and the answer to HA and scalability, if you're willing to design for componentization.
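A solution-specific Dockerfile can be as small as a few lines. This is an illustrative sketch only — the base image name, install path and port are assumptions about how you built your default SDI image, not an official layout:

```dockerfile
# Sketch: layer one solution config onto a pre-built SDI base image.
# 'mysdi-base' and the paths below are placeholders for your own setup.
FROM mysdi-base:latest
COPY MyProject.xml /opt/IBM/TDI/configs/
EXPOSE 1098
CMD ["/opt/IBM/TDI/ibmdisrv", "-c", "configs/MyProject.xml", "-d"]
```

Because only the config layer changes per solution, rebuilding an image for a new solution is cheap and fast.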

Now my next step is to tie this all together in a pipeline for continuous integration and delivery. I'm thinking of using Jenkins, since I have used it before and have lots of old Jenkinsfiles to copy/paste from. Once I have everything working I thought I might make a quick video.
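The pipeline shape I have in mind looks roughly like this. It's a hypothetical sketch, not a working Jenkinsfile — the stage names, image tag and deployment file are placeholders:

```groovy
// Sketch of a declarative Jenkins pipeline: compile the config,
// build the image, push to the cluster. All names are illustrative.
pipeline {
    agent any
    stages {
        stage('Compile config') {
            steps {
                sh 'python configify.py -p . -c MyProject.xml -o'
            }
        }
        stage('Build image') {
            steps {
                sh 'docker build -t mysdi-solution:${BUILD_NUMBER} .'
            }
        }
        stage('Deploy') {
            steps {
                sh 'kubectl apply -f deployment.yaml'
            }
        }
    }
}
```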

Sound like a plan? Interested? I know that TDI'ers are often engaged in quick one-off integrations, like security Adapters for ISIM or IGI or PIM. However, for those delivering services (infrastructure wiring) I would think DevOps is important. I look forward to hearing back from y'all :)
