The processing of spooled files on an output queue is single threaded, so if you are processing larger spooled files (which take longer to process and print), then smaller spooled files on the same output queue may have to wait until the larger spooled file is done. To avoid such a bottleneck you can consider a setup where the spooled files are processed in parallel. There are two options:
1. Move larger spooled files to another output queue so that they do not block the smaller ones. This setup is the easiest one to set up and understand.
2. Save larger spooled files as .splf files, e.g. into a monitored folder, and let another workflow monitor that folder and process the files. This setup is more complicated and has some limitations, but on the other hand it can be set up to run truly multi-threaded.
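The head-of-line blocking problem, and why a second queue solves it, can be illustrated with a minimal Python sketch. This is not Interform workflow code; the job names and durations are made-up examples, and each `queue.Queue` drained by a single thread stands in for a single-threaded output queue processor:

```python
import queue
import threading
import time

def worker(q, completed):
    # Drain the queue in FIFO order, simulating a single-threaded
    # output queue processor; record each job as it finishes.
    while True:
        item = q.get()
        if item is None:          # sentinel: no more work
            break
        name, duration = item
        time.sleep(duration)      # simulated processing/print time
        completed.append((name, time.monotonic()))

def run(split_large):
    # One large job and five small ones. With split_large=False all
    # jobs share one queue; with split_large=True the large job goes
    # to a separate queue with its own processor.
    small_q, large_q = queue.Queue(), queue.Queue()
    completed = []
    jobs = [("LARGE", 0.5)] + [(f"SMALL{i}", 0.01) for i in range(5)]
    for name, dur in jobs:
        target = large_q if (split_large and name == "LARGE") else small_q
        target.put((name, dur))
    threads = [threading.Thread(target=worker, args=(q, completed))
               for q in (small_q, large_q)]
    for t in threads:
        t.start()
    small_q.put(None)
    large_q.put(None)
    for t in threads:
        t.join()
    # Return job names in completion order.
    return [name for name, _ in sorted(completed, key=lambda c: c[1])]

print(run(split_large=False))  # LARGE finishes first; the small jobs wait
print(run(split_large=True))   # the small jobs finish before LARGE
```

With a single queue the small jobs queue up behind the large one; with a dedicated queue for large jobs the small jobs are processed in parallel and finish first.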
One way to prevent larger spooled files (which take longer to process and print) from blocking smaller spooled files is to move the larger spooled files to another output queue. They can then be processed on that output queue while the current output queue processor continues with the smaller spooled files.
Here is how that can be implemented:
Each element is covered below:
The three components are set up as shown below:
If you intend to run the same sub-workflow components in both workflows, then you can consider building a common sub-workflow and calling it from both main workflows.
The setup above is normally the recommended one, but there is an alternative if you have many large spooled files and you either do not want the extra output queue or want a truly multi-threaded alternative. Most of the setup is the same as in the example above, so only the differences are covered below. Here the main workflow looks like this:
The differences are seen in the branch for the large spooled files:
$interform.input.spooled.splf
|| '_'
|| $interform.input.spooled.jobName
|| '_'
|| $interform.input.spooled.user
|| '_'
|| $interform.input.spooled.jobNbr
|| '_'
|| $interform.input.spooled.splfNbr
|| '.splf'
The || operator concatenates strings, so this expression extracts these attributes from the input spooled file and joins them with an underscore between each, and the file is saved with the extension .splf. The attributes/variables are covered here in the manual.
Here the spooled file identification is saved in the file name as the spooled file name, job name, user profile, job number, and spooled file number, but in principle you should also consider including a timestamp to guarantee a unique file name.
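The effect of the expression can be sketched in Python. This is only an illustration of the resulting file name, not Interform code; the attribute values (`INVOICE`, `QPRTJOB`, `JSMITH`, ...) are hypothetical examples:

```python
from datetime import datetime

def splf_filename(splf, job_name, user, job_nbr, splf_nbr, timestamp=None):
    """Mirror the workflow expression: join the spooled file
    attributes with underscores and append the .splf extension.
    An optional timestamp can be added to guarantee uniqueness."""
    parts = [splf, job_name, user, job_nbr, splf_nbr]
    if timestamp:
        parts.append(timestamp)
    return "_".join(parts) + ".splf"

print(splf_filename("INVOICE", "QPRTJOB", "JSMITH", "123456", "000001"))
# INVOICE_QPRTJOB_JSMITH_123456_000001.splf

# With a timestamp appended, two spooled files that otherwise share
# the same identification no longer collide:
print(splf_filename("INVOICE", "QPRTJOB", "JSMITH", "123456", "000001",
                    timestamp=datetime.now().strftime("%Y%m%d%H%M%S")))
```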
The large spooled files are then handled in another workflow, which monitors the folder into which the .splf files are saved. In this example that is done as shown below:
For this last solution you should be aware of these limitations:
1) The status and attributes of the large input spooled file cannot be set to reflect the result of the processing, whereas the first solution can reflect that: in solution 1 the spooled file is only held after it has been successfully processed.
2) In the second solution the second workflow (which monitors the folder) cannot use the predefined workflow variables for the spooled file, but it can find the same information in the spooled file attributes.
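Since the spooled file identification was encoded into the file name by the first workflow, the second workflow can recover it from there. A minimal Python sketch of that idea, assuming the naming convention above (this is an illustration, not Interform code):

```python
def parse_splf_name(filename):
    """Recover the spooled file identification encoded in the file
    name: spooled file name, job name, user, job number, spooled
    file number. Note: this simple split breaks if any attribute
    itself contains an underscore, which is one reason to treat
    the file name only as a fallback for the real attributes."""
    stem = filename.rsplit(".splf", 1)[0]
    splf, job_name, user, job_nbr, splf_nbr = stem.split("_")
    return {"splf": splf, "jobName": job_name, "user": user,
            "jobNbr": job_nbr, "splfNbr": splf_nbr}

print(parse_splf_name("INVOICE_QPRTJOB_JSMITH_123456_000001.splf"))
# {'splf': 'INVOICE', 'jobName': 'QPRTJOB', 'user': 'JSMITH',
#  'jobNbr': '123456', 'splfNbr': '000001'}
```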