
Thursday, October 27, 2016

WebSphere IIB or WMB Error Handling

Error handling
  1. Create a backout queue and set a backout threshold on the input queue. In many cases, a backout queue helps you know where to look for a failed message.
  2. Use a TryCatch block around the main processing of the flow.
  3. Use custom exception handling so that even minor failures can be analyzed and resolved effectively.
  4. Logging mechanism :- logging is invaluable for diagnosis, but it can also become a performance hit, so an efficient logging mechanism should be built into the flow.

Backout Queue :-
While the MQ administrator sets the backout parameters, they are really used by well-behaved WebSphere MQ applications to remove potential "poisoned" messages from the queue, or messages that cannot be processed due to other issues. This is important for both shared queues and private queues, though the symptoms may differ. The parameters are:
BOQNAME('PRT.REQUEST.BOQ')
Name of queue to which applications should write messages that have been backed out.
BOTHRESH (3)
Number of processing attempts for each message.
HARDENBO
Harden the backout counter to disk when syncpoint is done.
If a message on a private queue cannot be processed and a rollback is issued, that message goes back to the top of the queue. The next MQGET for that queue will pick the problem message up and try to process it again. If that attempt fails and the transaction is rolled back, the message once again goes to the top of the queue. WebSphere MQ maintains a backout count that can notify an application when it is looping on the same message. If the application ignores the backout counter, the first indication of a poison message is that the queue depth begins rising while the getting process continues to run.
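A well-behaved flow can check this counter itself before processing. The ESQL below is only a sketch, assuming a Compute node placed straight after the MQInput node: the threshold, queue name and terminal wiring are assumptions, the threshold should mirror the BOTHRESH value set by the administrator, and the node's Compute mode must include LocalEnvironment for the destination override to take effect.

CREATE COMPUTE MODULE CheckBackoutCount
	CREATE FUNCTION Main() RETURNS BOOLEAN
	BEGIN
		DECLARE backoutThreshold INTEGER 3;  -- assumed; keep in step with BOTHRESH on the input queue
		IF InputRoot.MQMD.BackoutCount >= backoutThreshold THEN
			-- Poison message: send it to the backout queue instead of reprocessing it
			SET OutputLocalEnvironment.Destination.MQ.DestinationData.queueName = 'PRT.REQUEST.BOQ';
			SET OutputRoot = InputRoot;
			PROPAGATE TO TERMINAL 'out1' DELETE NONE;  -- 'out1' wired to an MQOutput node in Destination List mode
			RETURN FALSE;
		END IF;
		SET OutputRoot = InputRoot;
		RETURN TRUE;  -- normal processing continues from 'out'
	END;
END MODULE;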
Limitations:-
  • Use one backout queue per application, not per queue. One backout queue can be used for multiple request queues.
  • Do not use the queue-manager-defined dead letter queue as the backout queue, because the contents of the backout queue are usually driven by the application. The dead letter queue should be the backout queue of last resort.
Exception Handling using the TryCatch Node
A TryCatch node does not process a message in any way, it represents only a decision point in a message flow. When the TryCatch node receives a message, it propagates it to the Try terminal. The broker passes control to the sequence of nodes that are connected to that terminal (the try flow).
If an exception is thrown in the try flow, the broker returns control to the TryCatch node. The node writes the current contents of the exception list tree to the local error log, then writes the information for the current exception to the exception list tree, overwriting the information that is stored there.
The node propagates the message to the sequence of nodes that are connected to the Catch terminal (the catch flow). The content of the message tree that is propagated is identical to the content that was propagated to the Try terminal, which is the content of the tree when the TryCatch node first received it. The node enhances the message tree with the new exception information that it wrote to the exception list tree. Any modifications or additions that the nodes in try flow made to the message tree are not present in the message tree that is propagated to the catch flow.
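As a minimal sketch of a catch flow, the Compute node below (module and output field names are assumptions) walks the exception list that is propagated with the message and copies the deepest error number and text into the output tree so they can be logged or returned to the caller:

CREATE COMPUTE MODULE LogCaughtException
	CREATE FUNCTION Main() RETURNS BOOLEAN
	BEGIN
		SET OutputRoot = InputRoot;
		-- Walk down the exception list; the most specific error is the deepest child
		DECLARE exRef REFERENCE TO InputExceptionList.*[1];
		DECLARE errorNumber INTEGER 0;
		DECLARE errorText CHARACTER '';
		WHILE LASTMOVE(exRef) DO
			IF exRef.Number IS NOT NULL THEN
				SET errorNumber = exRef.Number;
				SET errorText = COALESCE(exRef.Text, errorText);
			END IF;
			MOVE exRef LASTCHILD;
		END WHILE;
		SET OutputRoot.XMLNSC.Error.Code = errorNumber;        -- assumed output structure
		SET OutputRoot.XMLNSC.Error.Description = errorText;
		RETURN TRUE;
	END;
END MODULE;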
Limitations:-
  • Useful in small flows with limited complexity.
  • Too many TryCatch nodes in a large flow add complexity.
  • Too many TryCatch nodes in a large flow can also degrade performance.
Custom Exception handling
Exception handling for large projects or applications that consist of many flows can be customised.
The broker provides basic error handling for all message flows. If basic processing is not sufficient, and you want to take specific action in response to certain error conditions and situations, you can enhance your message flows to provide your own error handling.
For example, you might design a message flow that expects certain errors that you want to process in a particular way, or perhaps your flow updates a database and must roll back those updates if other processing does not complete successfully.
Because you can decide to handle different errors in different ways, there are no fixed procedures to describe. This section provides information about the principles of error handling and the options that are available; you must decide which combination of these options you need in each situation.
In such cases it is advisable to create a generic error-handling framework that can be reused throughout the flows.
For example, see the poison-message handler shown in the picture:-
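A common building block in such a framework is to raise user-defined exceptions with meaningful inserts, so that the shared handler receives consistent detail whichever flow the error came from. The snippet below is illustrative only: the field being validated and the insert texts are assumptions, while CATALOG 'BIPmsgs' MESSAGE 2951 is the standard user-exception message.

IF InputRoot.XMLNSC.Order.OrderId IS NULL THEN
	-- Raise a user exception; a TryCatch node or the input node's catch terminal routes it to the common handler
	THROW USER EXCEPTION CATALOG 'BIPmsgs' MESSAGE 2951
		VALUES ('Order validation failed', 'OrderId is missing from the input message');
END IF;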
Logging Mechanism:-

Logging is often the most effective way to analyze what happened in each scenario, so an effective logging mechanism should be present in the flow, not only for the happy path but also for analyzing even the smallest errors or loopholes in the flows.
One approach is to integrate logging into custom reusable components so that each input message is logged irrespective of success or failure (a database-logging sketch is included at the end of this section).
This can be extended to capture node names, timestamps and any other information required, according to severity.
Several ways of logging in IIB/WMB:-
  • Database Logging
  • Write to a file using IIB nodes
  • Java Plugin
Limitations:-
  1. Database logging can itself become a performance hit for the application, so the decision should depend on the size and frequency of the data. Database archiving can be implemented to mitigate this.
  2. Logging through a Java logging library such as log4j is an efficient way to maintain logging because it minimizes database hits, but the downside is that the log files keep piling up and need housekeeping.
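As an illustration of the database option, the ESQL below sketches a reusable logging procedure. The data source is whichever one is configured on the Compute node, and the table and column names (MSG_AUDIT and so on) as well as the logged field are assumptions.

CREATE PROCEDURE LogMessage(IN flowName CHARACTER, IN status CHARACTER, IN detail CHARACTER)
BEGIN
	-- Writes one audit row per message to the data source configured on the Compute node
	INSERT INTO Database.MSG_AUDIT (FLOW_NAME, STATUS, LOGGED_AT, DETAIL)
		VALUES (flowName, status, CURRENT_TIMESTAMP, detail);
END;

-- Typical call, made on both the success path and the error path:
CALL LogMessage(MessageFlowLabel, 'SUCCESS', InputRoot.XMLNSC.Order.OrderId);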

How to call a jar using ESQL and the Java Compute Node in IIB/WMB

Compute Node in IIB :-
The Compute node is contained in the Transformation drawer of the palette. The following can be achieved using the Node:-
  • Transformation of message
  • Routing
  • Build a new message
Java Compute Node in IIB
The JavaCompute node is contained in the Transformation drawer of the palette. The following can be achieved using the node (a minimal skeleton is sketched after this list):-
  • Use Java to examine an incoming message and, depending on its content, propagate it unchanged to one of the two output terminals of the node. The node behaves in a similar way to a Filter node, but uses Java instead of ESQL to determine which output terminal to use.
  • Use Java to change part of an incoming message and propagate the changed message to one of the output terminals.
  • Use Java to create and build a new output message that is independent of the input message.
  • Use Java to create a map in a global cache, and to add and retrieve data from that map. By storing data in the global cache, that data is available to other JavaCompute nodes or message flows.
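The skeleton below is a minimal, illustrative JavaCompute node, not a complete implementation: the class name and the /XMLNSC/Order/Priority path are assumptions. It inspects one element of the incoming message and, Filter-node style, propagates the message unchanged to either the out or the alternate terminal.

import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbElement;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbMessageAssembly;
import com.ibm.broker.plugin.MbOutputTerminal;

public class RouteOrder_JavaCompute extends MbJavaComputeNode {

	public void evaluate(MbMessageAssembly inAssembly) throws MbException {
		MbOutputTerminal out = getOutputTerminal("out");
		MbOutputTerminal alt = getOutputTerminal("alternate");

		// Look up an (assumed) element in the XMLNSC body of the incoming message
		MbElement priority = inAssembly.getMessage().getRootElement()
				.getFirstElementByPath("/XMLNSC/Order/Priority");

		if (priority != null && "HIGH".equals(String.valueOf(priority.getValue()))) {
			out.propagate(inAssembly);   // high-priority orders go to 'out'
		} else {
			alt.propagate(inAssembly);   // everything else goes to 'alternate'
		}
	}
}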
How to consume a jar using ESQL and the Java Compute Node
  1. Switch to the Java perspective.
  2. File => New => Java Project
  3. Add the jar to the project Libraries (external libraries).
  4. Add the JAR files to the following directory:
  • For Windows
workpath\shared-classes
  • For Linux, UNIX and z/OS
workpath/shared-classes
Example :-  C:\ProgramData\IBM\MQSI\shared-classes
Note : Jar files can also be placed in specific folders so that they are accessed only by a particular broker.
  5. Add a reference to the Java project in the Application.
  6. Restart the broker.
The above steps are common to both the ESQL (Compute node) and Java Compute Node approaches.
How to call the function from COMPUTE NODE :-
  1. In the Compute node, create a function that refers to the method in the jar:
CREATE FUNCTION Multiply(IN input1 INTEGER, IN input2 INTEGER) RETURNS INTEGER LANGUAGE JAVA EXTERNAL NAME "sample.calculate.Result.Multiply";
  2. Call the function from the Main function :-
SET Multresult = Multiply(2,3);

How to call the function from JAVA COMPUTE NODE :-
  1. Import the class (for example, the Result class in this case):
  import sample.calculate.Result;
  2. Create an object of the class and call the method:
Result obj1 = new Result();
int res = obj1.Multiply(2,3);
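For completeness, the Result class packaged in the jar might look like the sketch below. The package, class and method names follow the example above; note that a method called from ESQL via CREATE FUNCTION ... LANGUAGE JAVA must be public and static (it can still be invoked through an instance from a JavaCompute node, as shown above).

package sample.calculate;

public class Result {
	// public static so that the ESQL EXTERNAL NAME "sample.calculate.Result.Multiply" mapping can resolve it
	public static int Multiply(int input1, int input2) {
		return input1 * input2;
	}
}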

Job Scheduling in IIB9 using Timeout nodes



Requirement: Pick up files at a scheduled time and send out an email at the end.
Implementation using: IBM Integration Bus (IIB9)
Requirement analysis:
  1. The IIB process should transfer the files to a destination directory at 11:00 PM every day.
  2. An email needs to be sent to a user listing all the file names that were picked up on that particular day.
The image below shows the complete message flow.

Complete Message Flow

Configure the properties as shown in the images below.

User Defined Properties

We need to configure these properties so that values can be changed at run-time. Once created, we need to declare them in the compute node to use in the code as shown below.
DECLARE OutputDirectory EXTERNAL CHARACTER '';
DECLARE StartTime EXTERNAL CHARACTER '';
DECLARE Intervl EXTERNAL CHARACTER '';
DECLARE count EXTERNAL CHARACTER '';
DECLARE FilesName SHARED CHARACTER '';
Values for these properties are set in the BAR override file, for example:
au.com.LearnIIB.FileOperations.FileReadTimeoutControl#InputDirectory=C:\Users\nusa\IBM\InputDir
au.com.LearnIIB.FileOperations.FileReadTimeoutControl#OutputDirectory=C:\Users\nusa\IBM\OutputDir
au.com.LearnIIB.FileOperations.FileReadTimeoutControl#StartTime=23:00:00   (triggered at 11:00 PM daily)
au.com.LearnIIB.FileOperations.FileReadTimeoutControl#Intervl=30   (after StartTime, files are picked up every 30 seconds until the flow moves to No Match in the FileRead node)
au.com.LearnIIB.FileOperations.FileReadTimeoutControl#count=-1
The first node is used to trigger the controlled node and to control the number of instances and the interval. Its properties are shown below.

Automated Timeout Notification Properties

Here, as we want to trigger the flow once a day, we set the timeout interval to 24*60*60 = 86,400 seconds. Once this node is triggered, we set the properties for the controlled node in the next Compute node, as shown below.
SET OutputLocalEnvironment.TimeoutRequest.Action = 'SET';
SET OutputLocalEnvironment.TimeoutRequest.Identifier = 'ControlledTN';
SET OutputLocalEnvironment.TimeoutRequest.StartDate = CURRENT_DATE;
SET OutputLocalEnvironment.TimeoutRequest.StartTime = CAST(StartTime AS TIME);
SET OutputLocalEnvironment.TimeoutRequest.Interval = CAST(Intervl AS INTEGER);
SET OutputLocalEnvironment.TimeoutRequest.Count = CAST(count AS INTEGER);
SET OutputLocalEnvironment.TimeoutRequest.IgnoreMissed = FALSE;
Once the values are set for the controlled node, we use a TimeoutControl node whose unique identifier matches that of the controlled TimeoutNotification node, as shown below.

Start Job Properties

The image below shows the properties we need to set for the TimeoutNotification node that we are trying to control and schedule.

Controlled Timeout Notification Properties

Now, if everything is configured correctly, this node is triggered at the set time, the flow picks up the files and moves to the next Compute node, where we set the output file properties.
SET OutputLocalEnvironment.Destination.File.Directory = OutputDirectory;
SET FilesName = InputLocalEnvironment.File.Read.Name;
SET OutputLocalEnvironment.Destination.File.Name = InputLocalEnvironment.File.Read.Name;
PROPAGATE TO TERMINAL 'out' DELETE NONE;
Once all the files are picked up, the flow moves through the No Match terminal and on to setting the email properties. Here we also need to cancel the timeout request so that another instance is not created.
SET OutputLocalEnvironment.Variables.sendEmails.emailDetails.mailSuccess.bodyContentType = 'text/html';
SET OutputLocalEnvironment.Variables.sendEmails.emailDetails.mailSuccess.to = 'ToAddress';
SET OutputLocalEnvironment.Variables.sendEmails.emailDetails.mailSuccess.cc = 'CCAddress';
SET OutputLocalEnvironment.Variables.sendEmails.emailDetails.mailSuccess.from = 'FromAddress';
SET OutputLocalEnvironment.Variables.sendEmails.emailDetails.mailSuccess.subject = 'File transfer was successful';
SET OutputLocalEnvironment.Variables.sendEmails.emailDetails.mailSuccess.body = 'The following files were transferred successfully<br/>' || FilesName;
SET OutputLocalEnvironment.TimeoutRequest.Action = 'CANCEL';
SET OutputLocalEnvironment.TimeoutRequest.Identifier = 'ControlledTN';
Now we use a TimeoutControl node, with the same unique identifier as the controlled TimeoutNotification node, to cancel the scheduled job.

Stop Job Properties

How to set up WMB v8 Event Monitoring


In WebSphere Message Broker (WMB) version 8, there have been significant improvements in how easily you can configure your environment to start capturing monitoring events for your message flow transactions. In this post, we walk through the few steps needed to enable this for a simple flow using WMB v8 and WMQ v7. These steps let you capture transaction data as each message passes through your flow and send it to an MQ queue. The event data includes the payload and transaction details (message flow, execution group, timestamp, and other useful transaction information). What is not covered in this post is what you can do with the XML event data. From here, you can implement a message flow to process this transaction data; examples include parsing out the information important to you, logging to a file (for example, using Log4J or WMB logging), updating a database, and so on. This data can then be processed to provide overall processing statistics or general auditing. Without an expensive monitoring product you can capture start and end transactions and report on performance. We can cover this in another post, or offline if anyone wants to discuss it further.
Setting up and Enabling Monitoring in WMB v8 and WMQ v7
  • Create a WMQ v7 topic (BROKER_EVENTS_DEFAULT) to publish the broker monitoring events to. In this example the broker name is "wmb8_test" and the execution group is "default", so the topic string the broker uses is: $SYS/Broker/wmb8_test/Monitoring/default/#  (sample MQSC definitions appear after these steps).
  • Create a WMQ subscription (MONITOR.EVENTS.DEFAULT) using the above topic. NOTE: leave the topic string blank; it will be picked up from the publication. In this example, the destination queue manager is "wmb8_test" and the destination queue is "MONITOR.EVENTS.DEFAULT" (the queue these XML event messages will be sent to).
  • Enable monitoring on the message flow. In this example, I am sending the payload (everything under Root, which includes the properties, headers and payload).
  • Deploy the message flows.
  • Enable flow monitoring on the broker:
mqsichangeflowmonitoring wmb8_test -c active -e default -f monitor_example_flow
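For reference, the topic and subscription described above could be defined in runmqsc along the following lines; the object names come from this example, and the commands are a sketch rather than a verified script.

* runmqsc wmb8_test
DEFINE TOPIC(BROKER_EVENTS_DEFAULT) TOPICSTR('$SYS/Broker/wmb8_test/Monitoring/default/#')
DEFINE QLOCAL(MONITOR.EVENTS.DEFAULT)
DEFINE SUB(MONITOR.EVENTS.DEFAULT) TOPICOBJ(BROKER_EVENTS_DEFAULT) DEST(MONITOR.EVENTS.DEFAULT) DESTQMGR(wmb8_test)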
You can now send a message to your message flow and verify that monitoring events are arriving on the "MONITOR.EVENTS.DEFAULT" queue. Using a tool like RFHUTIL or WMB's dequeue facility, you can view the structure and contents of these messages. This will help you design a message flow or other interface for processing these events (for example, logging, updating a transaction database, responding to conditions, and so on).
Thank you,

Automation Build and Deploy in WMB Using Ant

Summary: This article shows a step-by-step approach for setting up an automated build and deploy framework in IBM® WebSphere® Message Broker (also known as IBM® Integration Bus) using Ant, Hudson and Subversion. It also includes the set of framework components for setting up the auto build and deploy for a sample WMB project.
Broker ARchive(BAR) file:
A Broker Archive (BAR) is a deployable container in a compressed file format which contains a single deployment descriptor (broker.xml), compiled message flows (*.cmf), message set dictionary files (*.xsdzip, *.dictionary), style sheets (*.xsl), XSLT files and JAR files. When you unzip the BAR file, the descriptor file can be found under the META-INF folder. The deployment descriptor holds the configurable properties of the flow and its nodes.
Sample contents of a deployment descriptor file(broker.xml):
<Broker>
<CompiledMessageFlow name="com.ibm.wmb.build.SampleMsgFlow1">
...

<ConfigurableProperty uri="com.ibm.wmb.build.SampleMsgFlow1#additionalInstances"/>
<ConfigurableProperty override="DB2_DSN_DEV" uri="com.ibm.wmb.build.SampleMsgFlow1#Compute.dataSource"/>
<ConfigurableProperty override="SAMPLE.DEV.IN" uri="com.ibm.wmb.build.SampleMsgFlow1#MQ_Input.queueName"/>
...
</CompiledMessageFlow>
</Broker>
Step 1: Get Source Code from SCM:
Whenever a build is scheduled or triggered in Hudson, or changes are detected in SCM, the latest (or desired) version of the project code is copied from the SCM server to a Broker toolkit workspace on the Hudson server. There are two ways in which the Broker source code can be imported to the Hudson server.
1. [Typical] The Hudson job retrieves the source code from Subversion (or any SCM) to the Hudson server as per the SCM configuration in the Hudson job, and all the resources are then copied to the Broker toolkit workspace using a shell script or an Ant task. This method is preferred if the SCM configuration is static and there are few build resources.
2. [Custom] An Ant script can be written to retrieve the source code from Subversion (or any SCM) directly into the Broker toolkit workspace. This method is preferred if the SCM configuration is dynamic and only selected resources are needed from among a large number of other projects (not required for the build) under the same parent. Ant supports various other SCMs, including PVCS, CVS, VSS and IBM® ClearCase.
The sample Ant target below checks out the code from Subversion directly into the Broker toolkit workspace (the second method).
Note: the svn Ant task requires the corresponding library JAR files to be loaded before execution.
Sample Ant target for importing Broker projects from Subversion:
${broker.build.projects} = SampleMsgFlowProject,SampleMsgSetProject
${source.location} = trunk or tag/... or branch/...
Ant Target:import.broker.projects
<target name="import.broker.projects" description="This ant target copies broker projects files from Subversion to broker toolkit workspace."><for list="${broker.build.projects}" param="var-broker-prj">
<sequential>
<svn javahl="false" svnkit="true" username="${svn.user}" password="${svn.password}">
<checkout url="${svn.url}/@{var-broker-prj}/${source.location}" destPath="${toolkit.workspace}/@{var-broker-prj}" force="true" depth="infinity" ignoreexternals="true"/>
</svn>
</sequential>
</for>
</target>
Step 2: Create BAR file(s):
The Message Broker toolkit installation supplies a set of workbench command-line tools that help in generating deployable BAR files and overriding the configuration inside a BAR file.
An Ant task can be configured to execute the Broker workbench command mqsicreatebar to create the Broker Archive (BAR) file from the Broker project resources (message flows, message sets and their references) retrieved into the local Broker toolkit workspace. Because this workbench command runs Eclipse in "headless" mode, it often takes noticeably longer than building from the Broker toolkit.
If multiple BAR files are to be created, the Ant script can be programmed to execute the mqsicreatebar command below in a sequential loop to avoid workspace lock issues. Please refer to the Download files for more details.
mqsicreatebar -data workspace -b barFileName [-cleanBuild] [-version versionId] [-esql21] [-p projectName1 [projectName2 ...]] -o filePath1 [filePath2 ...] [-skipWSErrorCheck] [-trace] [-v traceFilePath]
Parameters:
-data workspace (Required)
– The path of the Broker toolkit workspace in which your projects are created.
-b barFileName  (Required)
– The name of the BAR (compressed file format) archive file where the result is stored. The BAR file is replaced if it already exists and the META-INF/broker.xml file is created.
-cleanBuild (Optional)
– Refreshes the projects in the workspace and then invokes a clean build before new items are added to the BAR file.
-version versionId (Optional)
– Appends the _ (underscore) character and the value of versionId to the names of the compiled versions of the message flows (.cmf) files added to the BAR file, before the file extension.
-esql21 (Optional)
– Compile ESQL for brokers at Version 2.1 of the product.
-p projectName1 [projectName2 ...] (Optional)
– Projects containing files to include in the BAR file in a new workspace. A new workspace is a system folder without the .metadata folder. The projects defined must already exist in the folder defined in the -data parameter, and must include all projects, and their referenced projects, that a deployable resource defined in the -o parameter needs. The -p parameter is optional with an existing workspace, but you should use -p, together with a new workspace, in a build environment. If a project that you specify is part of your workspace but is currently closed, the command opens and builds the project so that the files in the project can be included in the BAR file.
-o filePath1 [filePath2 ...] (Required)
– The workspace-relative path (including the project) of a deployable file to add to the BAR file; for example, a .msgflow or messageSet.mset file. You can add more than one deployable file to this command by using the following format: -o filePath1 filePath2 ... filePath'n'.
Note: In Message Broker version 8, the MQSI runtime command mqsipackagebar is used to create the BAR files.
-skipWSErrorCheck (Optional) [From 7.0.0.3 only]
– Forces the BAR file compilation process to run, even if errors exist in the workspace.
-trace (Optional)
– Displays trace information for BAR file compilation. The -trace parameter writes trace information into the system output stream, in the language specified by the system locale.
-v traceFilePath (Optional)
– File name of the output log to which trace information is sent.
Sample Ant target for creating BAR file:
mqsicreatebar -cleanBuild -data ../workspace -b ../Sample.bar -p SampleProject -o com.ibm.poc.build.wmb.SampleMessageFlow.msgflow
${numberOfExecGrps} = 2
${exec_grp1.msg.flows} = SampleMsgFlowProject/com/ibm/wmb/build/SampleMsgFlow1.msgflow, SampleMsgFlowProject/com/ibm/wmb/build/SampleMsgFlow2.msgflow
${exec_grp1.msg.sets} = SampleMsgSetProject/SampleMsgSet/messageSet.mset
${exec_grp2.msg.flows} = SampleMsgFlowProject/com/ibm/wmb/build/SampleMsgFlow3.msgflow
${exec_grp2.msg.sets} = SampleMsgSetProject/SampleMsgSet/messageSet.mset
${version.number} = 1.1.6.2
Ant Target: mqsicreatebar.buildbar
<target name="mqsicreatebar.buildbar" description="This ant target creates the BAR files using mqsicreatebar with referencing all required projects.">
<propertyregex property="argln_broker_prj" input="${broker.build.projects}" regexp="\," replace=" " global="true" defaultValue="${broker.build.projects}"/>
<for list="${numberOfExecGrps}" param="var-exec-grp-num" delimiter="${line.separator}">
<sequential>
<propertyregex property="argln_eg@{var-exec-grp-num}_msgflow" input="${exec_grp@{var-exec-grp-num}.msg.flows}" regexp="\," replace=" " global="true" defaultvalue="${exec_grp@{var-exec-grp-num}.msg.flows}"/>
<propertyregex property="argln_eg@{var-exec-grp-num}_msgset" input="${exec_grp@{var-exec-grp-num}.msg.sets}" regexp="\," replace=" " global="true" defaultValue="${exec_grp@{var-exec-grp-num}.msg.sets}"/>
<exec executable="${toolkit.home}/mqsicreatebar" spawn="false" failonerror="true" vmlauncher="false">
<arg value="-cleanbuild"/>
<arg value="-data"/>
<arg value="${toolkit.workspace}"/>
<arg value="-b" />
<arg value="${exec_grp@{var-exec-grp-num}.bar.file.path}" />
<arg value="${version}"/>
<arg value="${version.number}"/>
<arg value="-p" />
<arg line="${argln_broker_prj}"/>
<arg value="-o"/>
<arg line="${argln_eg@{var-exec-grp-num}_msgflow}"/>
<arg line="${argln_eg@{var-exec-grp-num}_msgset}"/>
</exec>
</sequential>
</for>
</target>
Step 3: Apply BAR Overrides:
All configurable nodes in the message flow have properties that need to be changed for the target broker environment before deployment: for example, MQ/JMS node properties, database node properties, timeout node properties, HTTP/web service node properties, or any promoted message flow properties that change from environment to environment.
An Ant task can be configured to execute the Broker MQSI command mqsiapplybaroverride to apply the environment-specific configuration to the BAR file, either through a separate property file or manually on the command itself.
mqsiapplybaroverride -b barFile [-p overridesFile] [-m manualOverrides] [-o outputFile] [-v traceFile]
Parameters:
-b barFile (Required)
– The path to the BAR file (in compressed format) to which the override values apply. The path can be absolute or relative to the executable command.
-p overridesFile (Optional)
 – The path to one of the following resources:
  • A BAR file that contains the deployment descriptor that is used to apply overrides to the BAR file.
  • A properties file in which each line contains a property-name=override or current-property-value=new-property-value pair.
  • A deployment descriptor that is used to apply overrides to the BAR file.
-m manualOverrides (Optional)
 – A list of property-name=override pairs, current-property-value=override pairs, or a combination of the two, to be applied to the BAR file. The pairs in the list are separated by commas (,). On Windows, you must enclose the list in quotation marks (" "). If used with the overridesFile (-p) parameter, overrides specified by the manualOverrides (-m) parameter are applied after any overrides specified by the -p parameter.
-o outputFile (Optional)
– The name of the output BAR file to which the BAR file changes are to be made. If an output file is not specified, the input file is overwritten.
-v traceFile (Optional)
– Specifies that the internal trace is to be sent to the named file.
Note: Each override that is specified in a -p overrides file or a -m overrides list must conform to one of the following syntaxes:
1. FlowName#NodeName.PropertyName=NewPropertyValue (or FlowName#PropertyName=NewPropertyValue for message flow properties). This syntax overrides the value of the named node or flow property with NewPropertyValue.
2. OldPropertyValue=NewPropertyValue. This syntax does a global search and replace on the property value OldPropertyValue. It overrides the value fields of OldPropertyValue in the deployment descriptor with NewPropertyValue.
3. FlowName#NodeName.PropertyName (or FlowName#PropertyName for message flow properties). This syntax removes any override applied to the property of the supplied name.
Sample Ant target for applying overrides:
mqsiapplybaroverride -b ../SampleProject.bar -p ../Sample_BarOverride_EG1.property
Sample contents of a BAR Override property file (Sample_BarOverride_EG1.property):
com.ibm.wmb.build.SampleMsgFlow1#additionalInstances=7
com.ibm.wmb.build.SampleMsgFlow1#Compute.dataSource=DB2_DSN
com.ibm.wmb.build.SampleMsgFlow1#MQ_Output.queueName=SAMPLE.PROD.OUT
Ant Target: mqsiapplybaroverride.modifybar
<target name="mqsiapplybaroverride.modifybar" description="This ant target applies overrides to the generated bar file from properties file.">
<for list="${numberOfExecGrps}" param="var-exec-grp-num" delimiter="${line.separator}">
<sequential>
<exec executable="${toolkit.home}/mqsiapplybaroverride" spawn="false" failonerror="true">
<arg value="-b" />
<arg value="${exec_grp@{var-exec-grp-num}.bar.file.path}" />
<arg value="-p" />
<arg value="${exec_grp@{var-exec-grp-num}.property.file}" />
</exec>
</sequential>
</for>
</target>
Before applying overrides: (broker.xml inside BAR file)
<ConfigurableProperty uri="com.ibm.wmb.build.SampleMsgFlow1#additionalInstances"/>
<ConfigurableProperty override="DB2_DSN_DEV" uri="com.ibm.wmb.build.SampleMsgFlow1#Compute.dataSource"/>
<ConfigurableProperty override="SAMPLE.DEV.OUT" uri="com.ibm.wmb.build.SampleMsgFlow1#MQ_Output.queueName"/>
While applying the overrides:
BIP1137I: Applying overrides using toolkit mqsiapplybaroverride...
BIP1140I: Overriding property com.ibm.wmb.build.SampleMsgFlow1#additionalInstances with '7'...
BIP1140I: Overriding property com.ibm.wmb.build.SampleMsgFlow1#Compute.dataSource with 'DB2_DSN'...
BIP1140I: Overriding property com.ibm.wmb.build.SampleMsgFlow1#MQ_Output.queueName with 'SAMPLE.PROD.OUT'...
BIP1143I: Saving Bar file ../TEAM1/generated_bars/PROD/PRJ1_TEAM1_2013-08-27_02-41-21_19_EG1.bar...
BIP8071I: Successful command completion.
After applying overrides: (broker.xml inside BAR file)
<ConfigurableProperty override="7" uri="com.ibm.wmb.build.SampleMsgFlow1#additionalInstances"/>
<ConfigurableProperty override="DB2_DSN" uri="com.ibm.wmb.build.SampleMsgFlow1#Compute.dataSource"/>
<ConfigurableProperty override="SAMPLE.PROD.OUT" uri="com.ibm.wmb.build.SampleMsgFlow1#MQ_Output.queueName"/>
Step 4: FTP the BAR file(s) to the remote Broker server:
After the BAR files have been created and their configuration overridden, they need to be moved to the Broker server for deployment to the broker.
An Ant task can be configured to SFTP the BAR files after the successful execution of the above two steps. This step transfers the generated BAR file(s) from the Hudson server to the remote Broker server using SSH public key authentication, and returns the transfer status code to Hudson so that subsequent steps are not executed if the transfer fails.
Sample Ant target to ftp the BAR files to Broker server:
Ant Target: sftp.bar.to.broker.server
<target name="sftp.bar.to.broker.server" description="This ant target ftps the BAR file to remote Broker server.">
<for list="${numberOfExecGrps}" param="var-exec-grp-num" delimiter="${line.separator}">
<sequential>
<scp file="${exec_grp@{var-exec-grp-num}.bar.file.path}" todir="${USER}@${HOST}:${REMOTE_PATH}" keyfile="${user.home}/.ssh/id_rsa" failonerror="true"/>
</sequential>
</for>
</target>
Step 5: Deploy BAR file(s):
The Broker runtime installation provides a set of command-line tools that help in deploying BAR files to a broker and in managing (creating/deleting) brokers, execution groups, and so on. The Broker runtime command mqsideploy is used to make deployment requests of all types from a batch command script, without the need for manual interaction. The default operation is a delta or incremental deployment; specify -m to override the default and perform a complete deployment.
An Ant task can be configured to execute the MQSI runtime command mqsideploy, which picks up the BAR files from the directory and deploys them to the broker in sequence.
mqsideploy brokerSpec -e execGrpName ((-a barFileName [-m]) | -d resourcesToDelete) [-w timeoutSecs] [-v traceFileName]
Parameters:
brokerSpec (Required)
– You must specify at least one parameter to identify the target broker for this command, in one of the following forms:
  • brokerName : Name of a locally defined broker. You cannot use this option if the broker is on a remote computer.
  • -n brokerFileName : File containing the remote broker connection parameters (*.broker).
  • -i ipAddress -p port -q queueMgr : Host name, port and queue manager of a remote broker.
-e execGrpName (Optional)
– Name of the execution group to which to deploy.
-a barFileName (Optional)
– Name of the broker archive (BAR) file that is to be used for deployment of the message flow and other resources.
-m (Optional)
– Empties the execution group before deployment (full deployment).
-d resourcesToDelete (Optional)
– Deletes a colon-separated list of resources from the execution group.
-w timeoutSecs (Optional)
– Maximum number of seconds to wait for the broker to respond (default is 60).
-v traceFileName (Optional)
– Sends verbose internal trace to the specified file.
Sample Ant target for deploying BAR file:
mqsideploy BRK7000 -e POC -a ../Sample.bar -m -w 600
Ant Target: mqsideploy.deploybar
<target name="mqsideploy.deploybar" description="This ant target deploys the BAR file to broker server remotely.">
<for list="${numberOfExecGrps}" param="var-exec-grp-num" delimiter="${line.separator}">
<sequential>
<sshexec host="${HOST}" username="${USER}" keyfile="${user.home}/.ssh/id_rsa" command=". ~/.bash_profile; ${MQSI_HOME}/mqsideploy ${BROKER} -e ${EXEC_GRP@{var-exec-grp-num}} -a ${REMOTE_PATH}/${exec_grp@{var-exec-grp-num}.bar.file.name} ${deploy.mode} -w 600" failonerror="true"/>
</sequential>
</for>
</target>
Step 6: Subversion Tagging/Branching:
There are two ways in which a snapshot of the build (in this case, a tag or a branch) can be taken.
1. [Typical] Hudson provides a built-in configuration to take a snapshot of the resources once the build is complete. This option is preferred when the SCM location URL is explicitly configured in the Hudson job and the resources are static for all builds.
2. [Custom] An Ant target can be written to take a snapshot of the resources once the build is complete. This option is preferred when the SCM location URL is not explicitly configured in the Hudson job and the resources are not static for all builds.
Sample Ant target for creating Subversion Tagging/Branching:
The sample Ant target below tags the last successful/stable build in Subversion, at the tags level of each broker project involved in the build, with the current build ID and a comment, so that a higher-environment build can be configured to take its source from the tagged successful/stable build.
Ant Target: subversion.tagging.branching
<target name="subversion.tagging.branching" description="This target connects to svn and creates the snapshot(tag/branch) from the build code to the respective project.">
<for list="${broker.build.projects}" param="var-broker-prj">
<sequential>
<svn javahl="false" svnkit="true" username="${svn.username}" password="${svn.password}">
<copy srcUrl="${svn.url}/@{var-broker-prj}/${source.location}" destUrl="${svn.url}/@{var-broker-prj}/${snapshot.location}/last-successful/${env.SNAPSHOT_NAME}" message="Tagging Project @{var-broker-prj} with tag name ${env.SNAPSHOT_NAME} from ${source.location}."/>
</svn>
</sequential>
</for>
</target>
Auto Build Deploy Framework Install and Set up:
1. Hudson setup and configuration: Download the Hudson WAR file and install it on the platform (Windows or Linux). Create a Hudson job with a configuration similar to the sample config file in Downloads.
2. WMB Toolkit installation and folder structure: Install the WMB toolkit (plus fix pack) on the Hudson server and create a toolkit workspace.
3. Master Ant build project structure: Import the Broker build master project from Downloads into the Broker toolkit workspace and modify variables such as the SVN URL, broker server, user ID, password, and so on.
4. WMB project setup: Create a folder called "build" in the main message flow project for which you want to trigger the build, and add the property files and the build Ant file for this project.
Hudson Parameters for triggering the Broker Build/Deploy: