Build a secure and memory-enabled Cipher workflow for AI agents with dynamic LLM selection and API integration
In this tutorial, we walk through a compact but fully functional Cipher-based workflow. We first securely capture the Gemini API key in the Colab UI without exposing it in code. We then implement dynamic LLM selection that automatically switches between OpenAI, Gemini, or Anthropic depending on which API keys are available. The setup phase ensures that Node.js and the Cipher CLI are installed, after which we enable a memory agent with long-term recall by programmatically generating the cipher.yml configuration. We create helper functions to run Cipher commands directly from Python, store key project decisions as persistent memories, retrieve them on demand, and finally run Cipher in API mode for external integration. Check out the full code here.
import os, getpass
os.environ["GEMINI_API_KEY"] = getpass.getpass("Enter your Gemini API key: ").strip()
import subprocess, tempfile, pathlib, textwrap, time, requests, shlex
def choose_llm():
    if os.getenv("OPENAI_API_KEY"):
        return "openai", "gpt-4o-mini", "OPENAI_API_KEY"
    if os.getenv("GEMINI_API_KEY"):
        return "gemini", "gemini-2.5-flash", "GEMINI_API_KEY"
    if os.getenv("ANTHROPIC_API_KEY"):
        return "anthropic", "claude-3-5-haiku-20241022", "ANTHROPIC_API_KEY"
    raise RuntimeError("Set one API key before running.")
We first use getpass to safely enter the Gemini API key so that it stays hidden in the Colab UI. We then define a choose_llm() function that checks our environment variables and automatically selects the appropriate LLM provider, model, and key based on whichever is available.
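To make the fallback order easy to verify in isolation, here is a standalone, testable variant of the same logic. The only change from the tutorial's version is an injectable environment dictionary (the `env` parameter is our addition for demonstration purposes):

```python
# Standalone sketch of the provider-fallback logic: OpenAI is preferred,
# then Gemini, then Anthropic. Passing a plain dict lets us exercise the
# branches without touching the real environment.
import os

def choose_llm(env=os.environ):
    if env.get("OPENAI_API_KEY"):
        return "openai", "gpt-4o-mini", "OPENAI_API_KEY"
    if env.get("GEMINI_API_KEY"):
        return "gemini", "gemini-2.5-flash", "GEMINI_API_KEY"
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic", "claude-3-5-haiku-20241022", "ANTHROPIC_API_KEY"
    raise RuntimeError("Set one API key before running.")

# With only a Gemini key present, the Gemini branch wins:
print(choose_llm({"GEMINI_API_KEY": "dummy"}))
# ('gemini', 'gemini-2.5-flash', 'GEMINI_API_KEY')
```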
def run(cmd, check=True, env=None):
    print("▸", cmd)
    p = subprocess.run(cmd, shell=True, text=True, capture_output=True, env=env)
    if p.stdout: print(p.stdout)
    if p.stderr: print(p.stderr)
    if check and p.returncode != 0:
        raise RuntimeError(f"Command failed: {cmd}")
    return p
We create a run() helper that executes shell commands, prints stdout and stderr for visibility, and raises when check is enabled and the command fails, making our workflow execution more transparent and reliable.
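A quick sanity check illustrates both behaviors: a passing command returns its completed process, and a failing command raises when check is left enabled:

```python
# Demo of the run() helper from the tutorial: success returns the
# CompletedProcess, while a non-zero exit raises RuntimeError.
import subprocess

def run(cmd, check=True, env=None):
    print("▸", cmd)
    p = subprocess.run(cmd, shell=True, text=True, capture_output=True, env=env)
    if p.stdout: print(p.stdout)
    if p.stderr: print(p.stderr)
    if check and p.returncode != 0:
        raise RuntimeError(f"Command failed: {cmd}")
    return p

p = run("echo hello")        # succeeds and echoes "hello"
assert p.returncode == 0

try:
    run("false")             # exits non-zero, so this raises
except RuntimeError as e:
    print("caught:", e)
```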
def ensure_node_and_cipher():
    run("sudo apt-get update -y && sudo apt-get install -y nodejs npm", check=False)
    run("npm install -g @byterover/cipher")
We define ensure_node_and_cipher() to install Node.js, npm, and the Cipher CLI globally, ensuring that our environment has all the necessary dependencies before running any Cipher commands.
def write_cipher_yml(workdir, provider, model, key_env):
    cfg = """
llm:
  provider: {provider}
  model: {model}
  apiKey: ${key_env}
systemPrompt:
  enabled: true
  content: |
    You are an AI programming assistant with long-term memory of prior decisions.
embedding:
  disabled: true
mcpServers:
  filesystem:
    type: stdio
    command: npx
    args: ['-y','@modelcontextprotocol/server-filesystem','.']
""".format(provider=provider, model=model, key_env=key_env)
    (workdir / "memAgent").mkdir(parents=True, exist_ok=True)
    (workdir / "memAgent" / "cipher.yml").write_text(cfg.strip() + "\n")
We implement write_cipher_yml() to generate the cipher.yml configuration file in a memAgent folder, set the selected LLM provider, model, and API key, enable a system prompt with long-term memory, and register the filesystem MCP server for file operations.
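For reference, when the Gemini branch is selected, the rendered memAgent/cipher.yml comes out as below. Note that str.format() only substitutes the `{...}` placeholders, so `${key_env}` becomes the literal `$GEMINI_API_KEY`, which Cipher expands from the environment at runtime:

```yaml
llm:
  provider: gemini
  model: gemini-2.5-flash
  apiKey: $GEMINI_API_KEY
systemPrompt:
  enabled: true
  content: |
    You are an AI programming assistant with long-term memory of prior decisions.
embedding:
  disabled: true
mcpServers:
  filesystem:
    type: stdio
    command: npx
    args: ['-y','@modelcontextprotocol/server-filesystem','.']
```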
def cipher_once(text, env=None, cwd=None):
    cmd = f'cipher {shlex.quote(text)}'
    p = subprocess.run(cmd, shell=True, text=True, capture_output=True, env=env, cwd=cwd)
    print("Cipher says:\n", p.stdout or p.stderr)
    return p.stdout.strip() or p.stderr.strip()
We define cipher_once() to run Cipher with the provided text, capture and display its output, and return the response, allowing us to interact with Cipher programmatically from Python.
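The key detail in cipher_once() is shlex.quote(), which keeps arbitrary memory text (including quotes and shell metacharacters) from being mangled before it reaches Cipher. A small sketch of just the command-building step (build_cipher_cmd is our illustrative name, not part of the tutorial):

```python
# shlex.quote() wraps the text so the shell passes it to Cipher as a
# single, untouched argument.
import shlex

def build_cipher_cmd(text):
    return f"cipher {shlex.quote(text)}"

print(build_cipher_cmd("Store decision: use pydantic; don't skip tests"))
# The apostrophe and spaces survive the shell intact.
```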
def start_api(env, cwd):
    proc = subprocess.Popen("cipher --mode api", shell=True, env=env, cwd=cwd,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
    for _ in range(30):
        try:
            # Port 3000 is assumed here; adjust if your Cipher API server is configured differently.
            r = requests.get("http://localhost:3000/health", timeout=2)
            if r.ok:
                print("API /health:", r.text)
                break
        except requests.RequestException:
            pass
        time.sleep(1)
    return proc
We create start_api() to launch Cipher in API mode as a subprocess and then repeatedly poll its /health endpoint until it responds, ensuring that the API server is ready before proceeding.
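The readiness loop is a general pattern worth factoring out. Here is a self-contained, testable version (wait_until_healthy is our illustrative name; the URL and port are whatever your server actually exposes):

```python
# Poll a health URL until it answers OK or the retry budget runs out.
# Returns True on success, False if the deadline passes.
import time
import requests

def wait_until_healthy(url, retries=30, delay=1.0):
    for _ in range(retries):
        try:
            r = requests.get(url, timeout=2)
            if r.ok:
                return True
        except requests.RequestException:
            pass
        time.sleep(delay)
    return False
```

In the tutorial's flow this would be called as `wait_until_healthy("http://localhost:3000/health")` right after launching the subprocess.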
def main():
    provider, model, key_env = choose_llm()
    ensure_node_and_cipher()
    workdir = pathlib.Path(tempfile.mkdtemp(prefix="cipher_demo_"))
    write_cipher_yml(workdir, provider, model, key_env)
    env = os.environ.copy()
    cipher_once("Store decision: use pydantic for config validation; pytest fixtures for testing.", env, str(workdir))
    cipher_once("Remember: follow conventional commits; enforce black + isort in CI.", env, str(workdir))
    cipher_once("What did we standardize for config validation and Python formatting?", env, str(workdir))
    api_proc = start_api(env, str(workdir))
    time.sleep(3)
    api_proc.terminate()

if __name__ == "__main__":
    main()
In main(), we select the LLM provider, install the dependencies, and create a temporary working directory with the cipher.yml configuration. We then store key project decisions in Cipher's memory, query them back, and finally start the Cipher API server before shutting it down, demonstrating both CLI- and API-based interaction.
In short, we now have a working Cipher environment that safely manages API keys, automatically selects the appropriate LLM provider, and fully configures a memory-enabled agent via Python automation. Our implementation includes decision logging, memory retrieval, and a live API endpoint, all arranged in a notebook-friendly workflow. This makes the setup reusable in other AI-assisted development pipelines, allowing us to programmatically store and query project knowledge while keeping the environment lightweight and easy to redeploy.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent venture is the launch of Marktechpost, an artificial intelligence media platform known for in-depth coverage of machine learning and deep learning news that is both technically sound and understandable to a broad audience. The platform draws over 2 million views per month, reflecting its popularity among readers.