This article describes one way to merge multiple spooled files together - similar to the Output Schedule Control (OSC) function in InterForm400.
The principle of Output Schedule Control is to wait for many spooled files of various types to arrive on an output queue. The arrived spooled files are not processed directly, but instead merged into one large spooled file, which is then sorted by e.g. the customer number found on each page of the spooled files.
The purpose of this function is to match up various spooled files, e.g. invoices, order confirmations and other business documents, for each customer. This is typically used for OMR (Optical Mark Recognition) inserter machines, where the printout is prepared with a barcode or marks that indicate which pages go into the same envelope. In this way the company can save money on postage by putting multiple output pages for the same customer into the same envelope.
There can be multiple sort keys for the spooled files involved, and spooled file attributes can also be part of the sort sequence. In the example below we want to sort the pages by these three keys:
- Text found in line 1 position 1 to 47 (Most significant sort key).
- Text found in line 11 position 140 to 142.
- The form type of the spooled file from which the page comes (least significant sort key).
The processing is divided into three workflows:
- Output queue monitor: One workflow monitors an output queue for the input spooled files, prepares them for later processing, and calls the AccumulateSplfs workflow to merge them all into a single spooled file. Finally it holds the input spooled files, but deleting or moving them could also be considered.
- Accumulate Splfs: This workflow is called by the output queue monitor and performs the merge of all the spooled files.
- Scheduled processing: Another workflow wakes up at a fixed time on each workday. It sorts the contents of the merged spooled file and starts processing it.
An export with the workflows and used transformations is included.
Each of the workflows is described in detail below.
1. Output queue monitor
The Output queue monitor workflow is built as shown below:
Each of the workflow components is described below:
The input workflow type, From IBM i output queue, is used as the initial part of the workflow. It is set up as shown below:
In this case we expect the input spooled files on the output queue, qusrsys/inputqueue. You should consider selecting a code page for spooled files where the codepage parameter does not clearly indicate the code page used.
The next workflow component is Force content type. It is used so that the next transformation can be called for the input spooled file without any warnings. The component simply tells InterFormNG2 to consider the payload to be an XML file - even though it is a spooled file. At this stage the spooled file is already loaded and available in the payload as an XML file similar to this:
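As an illustration, such a payload could look like the sketch below. The spool/page/line structure and the formtype attribute match the transformations later in this article; the other attribute names and the line contents are purely hypothetical examples:

```xml
<!-- Hypothetical sketch of a spooled file converted to XML.
     Only the spool, page and line nodes and the formtype attribute
     are taken from the article; the remaining attributes and the
     text content are invented examples. -->
<spool formtype="INVOICE" jobname="QPRTJOB" splfname="QSYSPRT">
  <page>
    <line>ACME Corp                                 Customer 10010</line>
    <line>Invoice no. 4711</line>
  </page>
  <page>
    <line>ACME Corp                                 Customer 10010</line>
  </page>
</spool>
```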
So the spooled file attributes are saved as attributes of the spool node; inside it there is one page node per page in the spooled file, and each line node inside a page matches a line of the spooled file.
The next component is an XSL transformation. In this case it calls a transformation that copies the form type attribute from the header of the spooled file into an attribute with the same name and value on every page node of the same spooled file. That is done with a transformation named CopyFormtype.xsl with this content:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Identity transformation rule to copy everything unchanged -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
  <!-- Copy the 'formtype' attribute into each 'page' node -->
  <xsl:template match="page">
    <xsl:copy>
      <xsl:attribute name="formtype">
        <xsl:value-of select="../@formtype"/>
      </xsl:attribute>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>
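To illustrate the effect of the transformation, a document like this (the formtype value INVOICE is a hypothetical example):

```xml
<!-- Before CopyFormtype.xsl -->
<spool formtype="INVOICE">
  <page>
    <line>...</line>
  </page>
</spool>
```

would come out with the attribute copied onto each page node:

```xml
<!-- After CopyFormtype.xsl -->
<spool formtype="INVOICE">
  <page formtype="INVOICE">
    <line>...</line>
  </page>
</spool>
```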
Now we need to determine whether there already is a document into which we are merging the spooled files. This is determined with this condition:
The condition ng:resourceExist('document','/SEBSRTSTAT/MergedSplfs.splf') verifies whether the document MergedSplfs.splf exists in the SEBSRTSTAT folder.
If it exists, the current spooled file is added to this document.
If it does not exist (the otherwise branch), the current payload is saved as this document.
This covers the scenario where the document for merging spooled files already exists. Here the current spooled file is added to the existing document resource. This is handled in the other workflow, AccumulateSplfs:
Now the spooled file has been processed, and in this case we have decided to hold the current input spooled file. This is done with the workflow component, Post process spooled file:
This and the next workflow component handle the scenario where the current input spooled file is the first of the many spooled files to be merged together (as the document for merging them is missing). Since this spooled file is the first, we simply save it as the document into which the spooled files are merged:
Now the spooled file has been processed (for the otherwise branch) and in this case we have decided to hold the current input spooled file. This is done with the workflow component, Post process spooled file:
2. Accumulate Splfs
The AccumulateSplfs workflow is defined as follows:
The workflow components are described below:
The workflow input, From other workflow is defined as shown:
It is defined to accept XML as input, but since we consider the spooled files to be XML (which they actually are at this point), this is fine.
We intend to merge the current spooled file, which is saved in the payload, with the other merged spooled files currently stored in the document /SEBSRTSTAT/MergedSplfs.splf.
Before that is done, we load the merged spooled files into a new workflow variable called secondSPLF with the workflow component, Resource to workflow variable:
The XSL transformation called MergeXMLs.xsl handles the merge of the current spooled file in the payload with the already merged spooled files, which are currently stored in the workflow variable, secondSPLF. The transformation is set up as shown below:
<?xml version="1.0"?>
<xsl:stylesheet version="3.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    exclude-result-prefixes="xs">
  <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
  <!-- Parameter holding the second XML document as a string -->
  <xsl:param name="secondSPLF" as="xs:string"/>
  <xsl:template match="/spool">
    <!-- Copy the root element and its attributes -->
    <spool>
      <xsl:copy-of select="@*"/>
      <!-- Copy all page nodes from the primary document -->
      <xsl:copy-of select="page"/>
      <!-- Copy all page nodes from the second document -->
      <xsl:copy-of select="parse-xml($secondSPLF)/spool/page"/>
    </spool>
  </xsl:template>
</xsl:stylesheet>
We keep the spooled file attributes of the current spooled file and append all pages (page nodes) from the secondSPLF variable. Note that the parse-xml() function is an XPath 3.0 function, which is why the stylesheet declares version="3.0".
The last task of the workflow is to update the document resource with the current spooled file added. This is done with the workflow component, Save in resources, as follows:
3. Scheduled processing
The final workflow processes the merged spooled files and sorts the pages on the keys listed earlier. The workflow looks like this:
The components are:
The workflow input setup is as shown below:
With this setup the workflow starts each workday (Monday to Friday) at 6:00 in the morning.
The first workflow component is a choice element with this When branch:
This condition verifies whether one or more spooled files have been merged since the last run. If the file is missing, there is nothing to do.
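A natural way to express this check (assuming the same resource test as in the output queue monitor) is a When condition along these lines:

```
ng:resourceExist('document','/SEBSRTSTAT/MergedSplfs.splf')
```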
In the workflow component, Resource to payload we load the contents of the accumulated document (/SEBSRTSTAT/MergedSplfs.splf) as shown below:
In that way the contents are available for the next workflow component.
The next task is to sort all of the accumulated pages on the keys previously mentioned. This is done with the transformation, SortPages.xsl, which is called with the XSL transformation workflow component:
The SortPages.xsl transformation contains the lines below, which sort the page nodes on the keys:
1. Line 1 position 1 to 47.
2. Line 11 position 140 to 142.
3. The formtype of the original spooled file.
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="3.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes" encoding="UTF-8"/>
  <xsl:template match="/spool">
    <spool>
      <xsl:copy-of select="@*"/>
      <xsl:for-each select="page">
        <xsl:sort select="substring(line[1], 1, 47)"/>
        <xsl:sort select="substring(line[11], 140, 3)" data-type="number"/>
        <xsl:sort select="@formtype"/>
        <xsl:copy>
          <xsl:copy-of select="@*"/>
          <xsl:copy-of select="line"/>
        </xsl:copy>
      </xsl:for-each>
    </spool>
  </xsl:template>
</xsl:stylesheet>
In this article we do not go into the actual splitting and processing of the sorted spooled file; here we simply save the resulting spooled file for verification. That is done with the workflow component, To filesystem, as below:
Now that we have processed the merged spooled files, we can delete the document into which all the spooled files were added. That is done with the workflow component, Delete resource, as below:
One thing to consider for the solution above: if new spooled files are processed while the scheduled job is running, such spooled files might be missing from the result. One way around that is to ensure that the scheduled job is not running while a spooled file is processed, which could be verified in the output queue monitor. The scheduled job could, for example, create a resource object as its first step and delete it when the processing is done. The output queue monitor could then wait in a delayed repeat loop for the object to be deleted before processing the spooled file.
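As a sketch of this locking idea: the scheduled workflow could create a document resource, e.g. /SEBSRTSTAT/lock.lck (a hypothetical name), as its first step and delete it as its last step. The output queue monitor could then wait in its delayed repeat loop on a condition along these lines before processing a spooled file:

```
not(ng:resourceExist('document','/SEBSRTSTAT/lock.lck'))
```

This is only a sketch; the exact expression depends on what the workflow condition editor supports.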