How to build a fully functional computer-using agent that uses local AI models to think, plan, and perform virtual operations

In this tutorial, we build from scratch a high-level computer-use agent that can reason, plan, and execute virtual actions using a local open-weight model. We create a miniature simulated desktop, equip it with a tool interface, and design an intelligent agent that can analyze its environment, decide on actions such as clicking or typing, and execute them step by step. Finally, we see how the agent interprets goals such as opening an email or taking a note, showing how local language models can mimic interactive reasoning and task execution. Check out the full code here.

!pip install -q transformers accelerate sentencepiece nest_asyncio
import torch, asyncio, uuid
from transformers import pipeline
import nest_asyncio
nest_asyncio.apply()

We set up our environment by installing core libraries such as Transformers, Accelerate, and nest_asyncio, which let us run local models and asynchronous tasks seamlessly in Colab. We prepare the runtime so that the subsequent components of the agent can work without external dependencies. Check out the full code here.
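
As a quick, optional sanity check (this snippet is illustrative and not part of the original walkthrough), we can confirm that nest_asyncio lets us drive coroutines inside Colab's already-running event loop and that PyTorch sees the accelerator:

import torch, asyncio
import nest_asyncio
nest_asyncio.apply()

async def _ping():
    # Trivial coroutine used only to verify nested event-loop execution.
    await asyncio.sleep(0)
    return "async OK"

print(asyncio.get_event_loop().run_until_complete(_ping()))
print("CUDA available:", torch.cuda.is_available())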

class LocalLLM:
    def __init__(self, model_name="google/flan-t5-small", max_new_tokens=128):
        # Load a small instruction-tuned model locally; use the GPU if one is available.
        self.pipe = pipeline("text2text-generation", model=model_name, device=0 if torch.cuda.is_available() else -1)
        self.max_new_tokens = max_new_tokens
    def generate(self, prompt: str) -> str:
        # Greedy (deterministic) decoding keeps the agent's action format stable.
        out = self.pipe(prompt, max_new_tokens=self.max_new_tokens, do_sample=False)[0]["generated_text"]
        return out.strip()


class VirtualComputer:
    def __init__(self):
        # Simulated apps: the browser holds a URL, notes holds text, mail holds inbox subjects.
        self.apps = {"browser": "https://example.com", "notes": "", "mail": ["Welcome to CUA", "Invoice #221", "Weekly Report"]}
        self.focus = "browser"
        self.screen = "Browser open at https://example.com\nAddress bar focused."
        self.action_log = []
    def screenshot(self):
        # Return a text rendering of the current screen state.
        return f"FOCUS:{self.focus}\nSCREEN:\n{self.screen}\nAPPS:{list(self.apps.keys())}"
    def click(self, target:str):
        if target in self.apps:
            self.focus = target
            if target=="browser":
                self.screen = f"Browser tab: {self.apps['browser']}\nAddress bar focused."
            elif target=="notes":
                self.screen = f"Notes App\nCurrent notes:\n{self.apps['notes']}"
            elif target=="mail":
                inbox = "\n".join(f"- {s}" for s in self.apps['mail'])
                self.screen = f"Mail App Inbox:\n{inbox}\n(Read-only preview)"
        else:
            self.screen += f"\nClicked '{target}'."
        self.action_log.append({"type":"click","target":target})
    def type(self, text:str):
        if self.focus=="browser":
            self.apps["browser"] = text
            self.screen = f"Browser tab now at {text}\nPage headline: Example Domain"
        elif self.focus=="notes":
            self.apps["notes"] += ("\n"+text)
            self.screen = f"Notes App\nCurrent notes:\n{self.apps['notes']}"
        else:
            self.screen += f"\nTyped '{text}' but no editable field."
        self.action_log.append({"type":"type","text":text})

We define the core components: a lightweight local model and a virtual computer. We use Flan-T5 as the inference engine and create a simulated desktop that opens applications, renders screens, and responds to clicks and typing. Check out the full code here.
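
Before wiring these classes into the agent, we can poke the simulated desktop directly. The short check below is illustrative (it assumes the VirtualComputer definition above has been run) and shows how focus, typing, and the action log behave:

demo_computer = VirtualComputer()
demo_computer.click("notes")                 # focus the notes app
demo_computer.type("Buy milk")               # text lands in the focused app
print(demo_computer.screenshot())            # FOCUS:notes plus the current notes
print(demo_computer.action_log)              # the click and type actions, in order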

class ComputerTool:
    def __init__(self, computer:VirtualComputer):
        self.computer = computer
    def run(self, command:str, argument:str=""):
        # Dispatch a high-level command to the virtual computer and return a structured result.
        if command=="click":
            self.computer.click(argument)
            return {"status":"completed","result":f"clicked {argument}"}
        if command=="type":
            self.computer.type(argument)
            return {"status":"completed","result":f"typed {argument}"}
        if command=="screenshot":
            snap = self.computer.screenshot()
            return {"status":"completed","result":snap}
        return {"status":"error","result":f"unknown command {command}"}

We introduce the ComputerTool interface, which acts as the bridge between the agent's reasoning and the virtual desktop. We expose high-level actions such as click, type, and screenshot so the agent can interact with the environment in a structured way. Check out the full code here.
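
The tool always returns a small dictionary with a status and a result, which is what the agent inspects later. A minimal, illustrative exercise of that contract (assuming the classes above are defined) looks like this:

tool_demo = ComputerTool(VirtualComputer())
print(tool_demo.run("click", "mail"))        # {'status': 'completed', 'result': 'clicked mail'}
print(tool_demo.run("screenshot"))           # {'status': 'completed', 'result': '<text screenshot>'}
print(tool_demo.run("scroll", "down"))       # {'status': 'error', 'result': 'unknown command scroll'}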

class ComputerAgent:
    def __init__(self, llm:LocalLLM, tool:ComputerTool, max_trajectory_budget:float=5.0):
        self.llm = llm
        self.tool = tool
        self.max_trajectory_budget = max_trajectory_budget
    async def run(self, messages):
        user_goal = messages[-1]["content"]
        steps_remaining = int(self.max_trajectory_budget)
        output_events = []
        total_prompt_tokens = 0
        total_completion_tokens = 0
        while steps_remaining>0:
            # Show the model the goal plus the current screen and ask for one action per step.
            screen = self.tool.computer.screenshot()
            prompt = (
                "You are a computer-use agent.\n"
                f"User goal: {user_goal}\n"
                f"Current screen:\n{screen}\n\n"
                "Think step-by-step.\n"
                "Reply with: ACTION <click|type|screenshot> ARG <argument> THEN <message to user>.\n"
            )
            thought = self.llm.generate(prompt)
            total_prompt_tokens += len(prompt.split())
            total_completion_tokens += len(thought.split())
            # Parse the reply; fall back to a screenshot if the format is not followed.
            action="screenshot"; arg=""; assistant_msg="Working..."
            for line in thought.splitlines():
                if line.strip().startswith("ACTION "):
                    after = line.split("ACTION ",1)[1]
                    action = after.split()[0].strip()
                if "ARG " in line:
                    part = line.split("ARG ",1)[1]
                    if " THEN " in part:
                        arg = part.split(" THEN ")[0].strip()
                    else:
                        arg = part.strip()
                if "THEN " in line:
                    assistant_msg = line.split("THEN ",1)[1].strip()
            output_events.append({"summary":[{"text":assistant_msg,"type":"summary_text"}],"type":"reasoning"})
            call_id = "call_"+uuid.uuid4().hex[:16]
            tool_res = self.tool.run(action, arg)
            output_events.append({"action":{"type":action,"text":arg},"call_id":call_id,"status":tool_res["status"],"type":"computer_call"})
            snap = self.tool.computer.screenshot()
            output_events.append({"type":"computer_call_output","call_id":call_id,"output":{"type":"input_image","image_url":snap}})
            output_events.append({"type":"message","role":"assistant","content":[{"type":"output_text","text":assistant_msg}]})
            if "done" in assistant_msg.lower() or "here is" in assistant_msg.lower():
                break
            steps_remaining -= 1
        usage = {"prompt_tokens": total_prompt_tokens,"completion_tokens": total_completion_tokens,"total_tokens": total_prompt_tokens + total_completion_tokens,"response_cost": 0.0}
        yield {"output": output_events, "usage": usage}

We build ComputerAgent, which serves as the intelligent controller of the system. We program it to reason about the goal, decide which action to take, execute that action through the tool interface, and record each interaction as a step in its decision-making trajectory. Check out the full code here.
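
The agent expects the model to answer in a single line of the form ACTION <command> ARG <argument> THEN <message>. The snippet below replays the same parsing rules used inside run() on a hypothetical model reply, so we can see exactly what the loop extracts:

thought = "ACTION click ARG mail THEN Opening the mail app to read the inbox."   # hypothetical model reply
action, arg, assistant_msg = "screenshot", "", "Working..."                       # defaults if parsing fails
for line in thought.splitlines():
    if line.strip().startswith("ACTION "):
        action = line.split("ACTION ", 1)[1].split()[0].strip()
    if "ARG " in line:
        part = line.split("ARG ", 1)[1]
        arg = part.split(" THEN ")[0].strip() if " THEN " in part else part.strip()
    if "THEN " in line:
        assistant_msg = line.split("THEN ", 1)[1].strip()
print(action, "|", arg, "|", assistant_msg)   # click | mail | Opening the mail app to read the inbox.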

async def main_demo():
    computer = VirtualComputer()
    tool = ComputerTool(computer)
    llm = LocalLLM()
    agent = ComputerAgent(llm, tool, max_trajectory_budget=4)
    messages=[{"role":"user","content":"Open mail, read inbox subjects, and summarize."}]
    async for result in agent.run(messages):
        print("==== STREAM RESULT ====")
        for event in result["output"]:
            if event["type"]=="computer_call":
                a = event.get("action",{})
                print(f"[TOOL CALL] {a.get('type')} -> {a.get('text')} [{event.get('status')}]")
            if event["type"]=="computer_call_output":
                snap = event["output"]["image_url"]
                print("SCREEN AFTER ACTION:\n", snap[:400],"...\n")
            if event["type"]=="message":
                print("ASSISTANT:", event["content"][0]["text"], "\n")
        print("USAGE:", result["usage"])


# nest_asyncio lets us drive the coroutine on Colab's existing event loop.
loop = asyncio.get_event_loop()
loop.run_until_complete(main_demo())

We put everything together by running a demo in which the agent interprets the user's request and carries it out on the virtual computer. We watch it generate reasoning, execute commands, update the virtual screen, and work toward its goal in a clear, step-by-step manner.
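
Because the goal is just the last user message, swapping in a different task only requires a new message list. For example, an assumed variation (not part of the original run) that exercises the notes app instead of mail:

async def note_demo():
    # Same loop, different goal: write a note instead of reading mail.
    agent = ComputerAgent(LocalLLM(), ComputerTool(VirtualComputer()), max_trajectory_budget=3)
    messages = [{"role": "user", "content": "Open notes and write 'Ship the report by Friday'."}]
    async for result in agent.run(messages):
        for event in result["output"]:
            if event["type"] == "message":
                print("ASSISTANT:", event["content"][0]["text"])

asyncio.get_event_loop().run_until_complete(note_demo())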

In summary, we capture the essence of a computer-using agent capable of autonomous reasoning and interaction. We've seen how a local language model like Flan-T5 can convincingly simulate desktop-level automation in a safe, text-based sandbox. This project helps us understand the architecture behind intelligent computer-use agents, linking natural-language reasoning to virtual tool control, and it provides a solid foundation for extending these capabilities to real-world, multimodal, and secure automation systems.


Check out the full code here. Feel free to visit our GitHub page for tutorials, code, and notebooks. Also, follow us on Twitter, join our 100k+ ML SubReddit, and subscribe to our newsletter. Using Telegram? You can now join us there as well.


Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for the benefit of society. His most recent endeavor is the launch of Marktechpost, an artificial intelligence media platform that stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easy to understand for a broad audience. The platform draws more than 2 million monthly views, a sign of its popularity with readers.

