Use this scenario to understand how to use parallel queue processing for Workload Management (WLM) to handle multiple queues within the same device pool simultaneously.

About this scenario

Jake, an automation administrator, wants to configure a queue that uses a select group of runners from a device pool. This queue is assigned the highest priority level (1), while other queues are given lower priority levels. For example, if a device pool contains 10 runners and 5 of them are allocated to the highest-priority queue, deployments will begin on those 5 runners. The high-priority queue can then process its tasks without being overwhelmed by other queues that carry numerous tasks, and deployments remain adaptable even when a high-priority queue is part of the same device pool. Meanwhile, when there is a high volume of tasks, the other queues will start deployments simultaneously on the remaining runners.
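
To make the allocation concrete, here is a minimal Python sketch of the split described above. It is an illustration of the idea only, not product code; the runner names and the allocate helper are hypothetical.

  # Illustrative sketch only (not product code): reserving part of a
  # 10-runner device pool for the highest-priority queue.
  RUNNERS = [f"runner-{i}" for i in range(1, 11)]

  def allocate(runners, reserved_for_p1):
      """Reserve a fixed slice for the priority-1 queue; the rest is shared."""
      return runners[:reserved_for_p1], runners[reserved_for_p1:]

  p1_runners, shared_runners = allocate(RUNNERS, reserved_for_p1=5)
  print("Priority-1 queue deploys on:", p1_runners)   # 5 dedicated runners
  print("Other queues share:", shared_runners)        # remaining 5 runners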

Parallel queue processing in workload management is a technique that optimizes the execution of tasks by distributing them across multiple resources simultaneously. This approach is particularly beneficial in environments where tasks must be processed efficiently and quickly, such as data centers, cloud computing, and high-performance computing systems. The walkthrough and use cases that follow elaborate on the concept and its implications.
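
As a conceptual illustration (a simplified model, not WLM internals), the following Python sketch shows work items from two queues being processed at the same time by a shared pool of workers:

  # Conceptual sketch: two queues of work items processed in parallel by a
  # shared pool of workers. Queue names and timings are illustrative.
  from concurrent.futures import ThreadPoolExecutor
  import time

  def process(item):
      time.sleep(0.1)                    # stand-in for real work
      return f"done: {item}"

  queue_a = [f"A-{i}" for i in range(5)]
  queue_b = [f"B-{i}" for i in range(5)]

  with ThreadPoolExecutor(max_workers=4) as pool:   # 4 concurrent "runners"
      results = list(pool.map(process, queue_a + queue_b))
  print(results)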

Scenario walkthrough

Jake wants to execute a parallel queue processing scenario in an existing WLM environment. Because parallel queue processing executes tasks on the default devices associated with the run-as users, Jake performs the following steps:
  1. Select the appropriate WLM bot.
  2. Select the queue.
  3. Select Default device as the deployment mode. Here, Jake selects run-as users that have default devices.
  4. Click Run with queue to start the processing.
After selecting all these details, work item processing starts automatically. Importantly, there is no requirement to select a device pool, even if a device pool-based deployment option is available.
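
Expressed as code, the same steps might look like the following sketch. Note that deploy_with_queue, its parameters, and the names shown are hypothetical placeholders for illustration, not a documented API:

  # Hypothetical sketch of the walkthrough steps; deploy_with_queue and its
  # parameters are placeholders, not a documented API.
  def deploy_with_queue(bot, queue, deployment_mode, run_as_users):
      """Stand-in for Run with queue: work item processing starts
      automatically once the bot, queue, and run-as users are chosen."""
      assert deployment_mode == "default-device"    # no device pool required
      print(f"Deploying {bot} against {queue} on the "
            f"default devices of {run_as_users}")

  deploy_with_queue(
      bot="WLM-bot",                      # step 1: the WLM bot
      queue="orders-queue",               # step 2: the queue
      deployment_mode="default-device",   # step 3: Default device mode
      run_as_users=["u1", "u2", "u3"],    # run-as users with default devices
  )                                       # step 4: Run with queue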

Scenario deployment use cases

The following use cases show how parallel queue processing works for WLM automations with various users, shared resources, device pool mode, and the Run with queue option.

Two WLM automations, both parallel and having unique users
Consider Automation A-1 with u1, u2, and u3 users and Automation A-2 with u4 and u5 users.

Result: Both automations will run in parallel because they share no users.

Two WLM automations, both parallel but with shared resources
Consider Automation A-1 with u1, u2, and u3 users and Automation A-2 with u2, u3, and u4 users.

Result: The shared users u2 and u3 will process Automation A-1 first, because it was created earlier, and then Automation A-2.
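
The contrast between these two cases can be modeled in a few lines of Python. This is a simplified model of the behavior, not WLM internals: each user works through the automations it belongs to in creation order.

  # Simplified model (not WLM internals): a shared user serves the automation
  # created first before moving on to a later one.
  def run_order(automations):
      """automations: list of (name, users) in creation order."""
      schedule = {}
      for name, users in automations:
          for u in users:
              schedule.setdefault(u, []).append(name)
      return schedule

  # Unique users: the two automations run fully in parallel.
  print(run_order([("A-1", ["u1", "u2", "u3"]), ("A-2", ["u4", "u5"])]))

  # Shared users: u2 and u3 process A-1 first (created earlier), then A-2.
  print(run_order([("A-1", ["u1", "u2", "u3"]), ("A-2", ["u2", "u3", "u4"])]))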

Four WLM automations, two in device-pool and two in parallel mode with priority order
Consider:
  • Automation A-1 with u1, u2, u3, u4, u5 (device pool D1 - priority mode P1) users.
  • Automation A-2 with u1, u2, u3, u4, u5 (device pool D1 - priority mode P2) users.
  • Automation A-3 with u1, u2, u3 (parallel mode) users.
  • Automation A-4 with u4, u5 (parallel mode) users.
Result:
  • First, Automation A-1 will be executed, followed by Automation A-2.
  • Then execution will move to Automation A-3 and A-4; the order depends on user availability.
Note: If Automation A-1 or A-2 receives new work items, WLM will wait until Automation A-3 and A-4 are completed. Afterward, it will return to Automation A-1 or A-2.
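
Under the same simplified model, the priority order can be sketched as follows: the pool drains in priority order (P1 before P2) on all five users, and only then do the parallel-mode automations get their users back.

  # Simplified model (not WLM internals): device-pool automations drain in
  # priority order before parallel-mode automations use the shared users.
  device_pool = [("A-2", 2), ("A-1", 1)]             # (name, priority)
  parallel = [("A-3", ["u1", "u2", "u3"]), ("A-4", ["u4", "u5"])]

  for name, prio in sorted(device_pool, key=lambda a: a[1]):
      print(f"{name} runs on u1..u5 (priority {prio})")   # A-1, then A-2

  # A-3 and A-4 have disjoint user sets, so they can run at the same time
  # once their users are free. If A-1 or A-2 receives new work items now,
  # it waits until A-3 and A-4 complete.
  for name, users in parallel:
      print(f"{name} runs on {users} when those users are available")
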
Four WLM automations, two in device-pool and two in parallel mode with round robin order
Consider:
  • Automation A-1 with u1, u2, u3, u4, u5 (device pool D1 - round-robin mode) users.
  • Automation A-2 with u1, u2, u3, u4, u5 (device pool D1 - round-robin mode) users.
  • Automation A-3 with u1, u2, u3 (parallel mode) users.
  • Automation A-4 with u4, u5 (parallel mode) users.
Result:
  • First, Automation A-1 will be executed, followed by Automation A-2.
  • Then execution will move to Automation A-3 and A-4; the order depends on user availability.
Note: If Automation A-1 or A-2 receives new work items, WLM will wait until Automation A-3 and A-4 are completed. Afterward, it will return to Automation A-1 or A-2.
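
Round-robin mode changes only how work items are spread across the pool's users. Here is a minimal sketch of round-robin dispatch, again as a simplified illustration rather than product code.

  # Simplified sketch of round-robin dispatch: work items cycle through the
  # device pool's users in a fixed order.
  from itertools import cycle

  users = ["u1", "u2", "u3", "u4", "u5"]
  work_items = [f"wi-{i}" for i in range(1, 8)]

  for item, user in zip(work_items, cycle(users)):
      print(f"{item} -> {user}")     # wi-6 wraps around to u1, wi-7 to u2
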
Four WLM automations, two in device-pool and two in parallel mode
Consider:
  • Automation A-1 with u1, u2, u3 (parallel mode) users.
  • Automation A-2 with u4, u5 (parallel mode) users.
  • Automation A-3 with u1, u2, u3, u4, u5 (device pool D1 - round-robin mode) users.
  • Automation A-4 with u1, u2, u3, u4, u5 (device pool D1 - round-robin mode) users.
Result:
  • First, Automation A-1 will be executed with parallel queue processing, followed by Automation A-2.
  • Then execution will move to Automation A-3 and A-4; the order depends on user availability.
Note: If Automation A-1 or A-2 receives new work items, WLM will wait until Automation A-3 and A-4 are completed. Afterward, it will return to Automation A-1 or A-2.
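
The common thread across these three cases is creation-order execution with no preemption, as the notes describe. A minimal sketch of that rule under the same simplified model:

  # Simplified model (not WLM internals): automations already running are
  # never preempted; new work items for an earlier automation wait.
  from collections import deque

  running = deque(["A-3", "A-4"])    # currently executing, in order
  new_work = ["A-1"]                 # A-1 receives fresh work items now

  while running:
      print(f"{running.popleft()} runs to completion")

  for name in new_work:              # only afterwards does WLM return here
      print(f"{name} resumes with its new work items")
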
Parallel queue with Run now option
Consider Automation A-1 with u1, u2, and u3 users (parallel queue deployment). If you select the Run now option on u2, the parallel queue deployment will execute first, and then the bot started with Run now will be processed.
Parallel queue with schedule
Consider Automation A-1 with u1, u2, and u3 users (parallel queue deployment). If you trigger a schedule on u2, the parallel queue deployment will execute first, and then the scheduled bot will be processed.
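
Both cases follow the same precedence rule: the in-flight parallel queue deployment on a user completes before a Run now or scheduled bot gets that user. A minimal sketch, assuming a simplified per-user task list:

  # Simplified sketch of per-user precedence (not WLM internals): the active
  # parallel queue deployment on u2 finishes before the ad hoc run starts.
  per_user_tasks = {"u2": ["parallel-queue-deployment"]}

  def trigger(user, task):
      """A Run now click or a schedule firing queues behind active work."""
      per_user_tasks.setdefault(user, []).append(task)

  trigger("u2", "run-now-bot")       # or a scheduled bot: same ordering
  for task in per_user_tasks["u2"]:
      print(f"u2 executes {task}")   # queue deployment first, then the bot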

Scenario summary and benefits

Jake can now use parallel queue processing for effective workload management.

Parallel queue processing is a powerful technique in workload management that enhances system performance by efficiently utilizing resources and reducing processing times. While it offers significant benefits, it also requires careful planning and management to address the associated complexities and ensure optimal operation. By understanding and implementing effective parallel processing strategies, organizations can improve their ability to handle large and complex workloads.

Benefits of parallel queue processing:
  • Increased efficiency: By processing multiple tasks simultaneously, systems can achieve higher throughput and faster completion times.
  • Reduced latency: Tasks are processed as soon as resources become available, minimizing wait times and improving response times.
  • Flexibility: Systems can adapt to varying workloads by dynamically allocating resources based on current demand and task priority.