
Implementation Guide for Designing Intelligent Parallel Workflows in PARSL for Multi-Tool AI Agent Execution

In this tutorial, we use Parsl and leverage its parallel execution capabilities to run multiple compute tasks from a standalone Python application. We configure a local ThreadPoolExecutor for concurrency, define dedicated tools such as Fibonacci calculation, prime counting, keyword extraction, and mock API calls, and coordinate them by mapping user goals to tool calls through a lightweight planner. The outputs of all tasks are then summarized with a small Hugging Face text-generation model to produce a coherent, human-readable summary. Check out the full code here.

!pip install -q parsl transformers accelerate


import math, json, time, random
from typing import List, Dict, Any
import parsl
from parsl.config import Config
from parsl.executors import ThreadPoolExecutor
from parsl import python_app


parsl.load(Config(executors=[ThreadPoolExecutor(label="local", max_threads=8)]))

We first install the required libraries and import all the modules needed for the workflow. We then configure Parsl with a local ThreadPoolExecutor so tasks can run concurrently, and load this configuration so that Python apps execute in parallel.
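Conceptually, Parsl's ThreadPoolExecutor behaves much like Python's standard-library thread pool: each app call returns a future immediately, and `.result()` blocks only until the work finishes. A stdlib-only sketch of that pattern (this is an analogy, not Parsl itself):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Each submit() returns a future right away; .result() blocks until the
# worker thread finishes -- the same pattern Parsl apps follow.
def slow_square(x: int) -> int:
    time.sleep(0.01)  # stand-in for real work
    return x * x

with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(slow_square, n) for n in range(5)]
    results = [f.result() for f in futures]

print(results)  # [0, 1, 4, 9, 16]
```

Because all five submissions are in flight before the first `.result()` call, total wall time approaches that of a single task rather than their sum.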

@python_app
def calc_fibonacci(n: int) -> Dict[str, Any]:
   def fib(k):
       a, b = 0, 1
       for _ in range(k): a, b = b, a + b
       return a
   t0 = time.time(); val = fib(n); dt = time.time() - t0
   return {"task": "fibonacci", "n": n, "value": val, "secs": round(dt, 4)}
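The planner and run_agent below also call a count_primes app, but its definition did not make it into this excerpt. A minimal sieve-based reconstruction, consistent with the "count" and "limit" fields the bullet-builder reads; shown undecorated so it runs standalone, whereas in the workflow it would carry @python_app like the other tools:

```python
import math, time
from typing import Any, Dict

# Sketch of the prime-counting tool referenced by plan() and run_agent()
# but missing from this excerpt; it returns the fields the summary step
# consumes. In the workflow it would carry the @python_app decorator.
def count_primes(limit: int) -> Dict[str, Any]:
    t0 = time.time()
    sieve = bytearray([1]) * (limit + 1)   # sieve of Eratosthenes
    sieve[0:2] = b"\x00\x00"               # 0 and 1 are not prime
    for i in range(2, math.isqrt(limit) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return {"task": "count_primes", "limit": limit,
            "count": sum(sieve), "secs": round(time.time() - t0, 4)}
```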


@python_app
def extract_keywords(text: str, k: int = 8) -> Dict[str, Any]:
   import re, collections
   words = [w.lower() for w in re.findall(r"[a-zA-Z][a-zA-Z0-9-]+", text)]
   stop = set("the a an and or to of is are was were be been in on for with as by from at this that it its if then else not no".split())
   cand = [w for w in words if w not in stop and len(w) > 3]
   freq = collections.Counter(cand)
   scored = sorted(freq.items(), key=lambda x: (x[1], len(x[0])), reverse=True)[:k]
   return {"task":"keywords","keywords":[w for w,_ in scored]}


@python_app
def simulate_tool(name: str, payload: Dict[str, Any]) -> Dict[str, Any]:
   time.sleep(0.3 + random.random()*0.5)
   return {"task": name, "payload": payload, "status": "ok", "timestamp": time.time()}

We define four Parsl @python_app functions that run asynchronously as part of the agent workflow: a Fibonacci calculator, a prime-counting routine, a keyword extractor for text processing, and a simulation tool that mocks external API calls with random delays. These modular apps let us perform varied computations in parallel, forming the building blocks of our multi-tool AI agent.

def tiny_llm_summary(bullets: List[str]) -> str:
   from transformers import pipeline
   gen = pipeline("text-generation", model="sshleifer/tiny-gpt2")
   prompt = "Summarize these agent results clearly:\n- " + "\n- ".join(bullets) + "\nConclusion:"
   out = gen(prompt, max_length=160, do_sample=False)[0]["generated_text"]
   return out.split("Conclusion:", 1)[-1].strip()

We implement a tiny_llm_summary function that uses the lightweight sshleifer/tiny-gpt2 model, via the Hugging Face pipeline API, to generate a concise summary of the agent results. It formats the collected task outputs as bullet points, appends a "Conclusion:" cue to the prompt, and extracts only the text generated after that cue for a clean, readable summary.
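Since the tiny-gpt2 pipeline requires a model download, a hypothetical wrapper (not part of the original notebook) can fall back to a plain bullet join, so the agent still returns a summary when transformers or the network is unavailable:

```python
from typing import List

# Hypothetical wrapper (not in the original notebook): try the tiny-gpt2
# pipeline, and fall back to a plain bullet join when transformers or the
# model download is unavailable, so the agent still produces a summary.
def safe_summary(bullets: List[str]) -> str:
    try:
        from transformers import pipeline
        gen = pipeline("text-generation", model="sshleifer/tiny-gpt2")
        prompt = ("Summarize these agent results clearly:\n- "
                  + "\n- ".join(bullets) + "\nConclusion:")
        out = gen(prompt, max_length=160, do_sample=False)[0]["generated_text"]
        return out.split("Conclusion:", 1)[-1].strip()
    except Exception:
        return "Agent results: " + " ".join(bullets)
```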

def plan(user_goal: str) -> List[Dict[str, Any]]:
   intents = []
   if "fibonacci" in user_goal.lower():
       intents.append({"tool":"calc_fibonacci", "args":{"n":35}})
   if "primes" in user_goal.lower():
       intents.append({"tool":"count_primes", "args":{"limit":100_000}})
   intents += [
       {"tool":"simulate_tool", "args":{"name":"vector_db_search","payload":{"q":user_goal}}},
       {"tool":"simulate_tool", "args":{"name":"metrics_fetch","payload":{"kpi":"latency_ms"}}},
       {"tool":"extract_keywords", "args":{"text":user_goal}}
   ]
   return intents

We define the plan function to map the user's goal into a structured list of tool calls. It checks whether the goal text triggers a specific compute task such as "fibonacci" or "primes", then appends default actions, a mock vector-database query, a metrics fetch, and keyword extraction, to form an execution blueprint for the agent.
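To see the routing in action, the same matching logic can be exercised on its own (duplicated here, without the Parsl context, purely as a sanity check):

```python
from typing import Any, Dict, List

# Identical routing rule to plan() above, reproduced so the branching
# can be exercised standalone.
def plan(user_goal: str) -> List[Dict[str, Any]]:
    intents = []
    if "fibonacci" in user_goal.lower():
        intents.append({"tool": "calc_fibonacci", "args": {"n": 35}})
    if "primes" in user_goal.lower():
        intents.append({"tool": "count_primes", "args": {"limit": 100_000}})
    intents += [
        {"tool": "simulate_tool", "args": {"name": "vector_db_search", "payload": {"q": user_goal}}},
        {"tool": "simulate_tool", "args": {"name": "metrics_fetch", "payload": {"kpi": "latency_ms"}}},
        {"tool": "extract_keywords", "args": {"text": user_goal}},
    ]
    return intents

# A goal mentioning both triggers yields 5 tool calls; a generic goal
# falls through to the 3 default actions.
print(len(plan("fibonacci and primes benchmark")))  # 5
print(len(plan("summarize latency trends")))        # 3
```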

def run_agent(user_goal: str) -> Dict[str, Any]:
   tasks = plan(user_goal)
   futures = []
   for t in tasks:
       if t["tool"]=="calc_fibonacci": futures.append(calc_fibonacci(**t["args"]))
       elif t["tool"]=="count_primes": futures.append(count_primes(**t["args"]))
       elif t["tool"]=="extract_keywords": futures.append(extract_keywords(**t["args"]))
       elif t["tool"]=="simulate_tool": futures.append(simulate_tool(**t["args"]))
   raw = [f.result() for f in futures]


   bullets = []
   for r in raw:
       if r["task"]=="fibonacci":
           bullets.append(f"Fibonacci({r['n']}) = {r['value']} computed in {r['secs']}s.")
       elif r["task"]=="count_primes":
           bullets.append(f"{r['count']} primes found ≤ {r['limit']}.")
       elif r["task"]=="keywords":
           bullets.append("Top keywords: " + ", ".join(r["keywords"]))
       else:
           bullets.append(f"Tool {r['task']} responded with status={r['status']}.")


   narrative = tiny_llm_summary(bullets)
   return {"goal": user_goal, "bullets": bullets, "summary": narrative, "raw": raw}

In the run_agent function, we execute the complete agent workflow: we build a task plan for the user's goal, then dispatch each tool as a Parsl app so they run in parallel. Once all futures complete, we convert the results into clear bullet points and feed them to tiny_llm_summary to produce a concise narrative. The function returns a structured dictionary containing the original goal, the bullet points, the LLM-generated summary, and the raw tool outputs.
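The if/elif chain in run_agent can also be written as a dispatch table, a common refactor as the tool set grows. A stdlib-only sketch with simplified stand-in tools (the real workflow would map names to the Parsl apps defined above):

```python
from typing import Any, Callable, Dict

# Simplified stand-in tools for illustration only.
def calc_fibonacci(n: int) -> Dict[str, Any]:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return {"task": "fibonacci", "n": n, "value": a}

def extract_keywords(text: str) -> Dict[str, Any]:
    return {"task": "keywords", "keywords": text.split()[:3]}

# Name -> callable table replaces the if/elif chain.
TOOLS: Dict[str, Callable[..., Dict[str, Any]]] = {
    "calc_fibonacci": calc_fibonacci,
    "extract_keywords": extract_keywords,
}

def dispatch(intent: Dict[str, Any]) -> Dict[str, Any]:
    return TOOLS[intent["tool"]](**intent["args"])

print(dispatch({"tool": "calc_fibonacci", "args": {"n": 10}})["value"])  # 55
```

Registering a new tool then becomes a one-line addition to TOOLS rather than another elif branch.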

if __name__ == "__main__":
   goal = ("Analyze fibonacci(35) performance, count primes under 100k, "
           "and prepare a concise executive summary highlighting insights for planning.")
   result = run_agent(goal)
   print("\n=== Agent Bullets ===")
   for b in result["bullets"]: print("•", b)
   print("\n=== LLM Summary ===\n", result["summary"])
   print("\n=== Raw JSON ===\n", json.dumps(result["raw"], indent=2)[:800], "...")

In the main execution block, we define an example goal combining the Fibonacci computation, prime counting, and summary generation. We run the agent on this goal, print the generated bullets, display the LLM-produced summary, and preview the raw JSON output to verify both the human-readable and structured results.

In summary, this implementation illustrates how Parsl's asynchronous app model coordinates diverse workloads in parallel, allowing an AI agent to combine numerical analysis, text processing, and simulated external services in a unified pipeline. By integrating a small LLM at the final stage, we transform structured results into natural language, showing how parallel computing and compact AI models can be combined to build responsive, scalable agents suitable for real-time or large-scale tasks.




Asif Razzaq is the CEO of Marktechpost Media Inc. A visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent effort is the launch of Marktechpost, an AI media platform offering in-depth coverage of machine learning and deep learning news that is both technically sound and accessible to a wide audience. The platform receives over 2 million views per month, demonstrating its popularity among readers.
