A Coding Guide to Building a Production-Ready Python SDK with Rate Limiting, In-Memory Caching, and Authentication

In this tutorial, we walk through building a robust, production-ready Python SDK. We first show how to install and configure the required asynchronous HTTP libraries (aiohttp, nest-asyncio). We then implement the core components, including structured response objects, token-bucket rate limiting, in-memory caching with TTL, and a clean, dataclass-driven design. We wrap these pieces in an AdvancedSDK class that supports async context management, automatic wait-and-retry behavior under rate limits, JSON/auth header injection, and convenient HTTP-verb methods. Along the way, a demo harness against JSONPlaceholder illustrates caching efficiency, batch fetching with rate limiting, and error handling, and we show how to extend the SDK for custom configuration with a fluent builder pattern.
!pip install aiohttp nest-asyncio
import asyncio
import aiohttp
import time
import json
from typing import Dict, List, Optional, Any, Union
from dataclasses import dataclass, asdict
from datetime import datetime, timedelta
import hashlib
import logging
We set up the asynchronous runtime by installing aiohttp and nest-asyncio, then importing asyncio and aiohttp along with utilities for timing, JSON handling, dataclass modeling, caching (via hashlib and datetime), and structured logging. Running !pip install aiohttp nest-asyncio first ensures the notebook can nest event loops seamlessly in Colab, enabling a smooth workflow of asynchronous HTTP requests and rate limiting.
@dataclass
class APIResponse:
    """Structured response object"""
    data: Any
    status_code: int
    headers: Dict[str, str]
    timestamp: datetime

    def to_dict(self) -> Dict:
        return asdict(self)
The APIResponse dataclass encapsulates HTTP response details, payload (data), status code, headers, and request timestamp, in a single typed object. The to_dict() helper converts instances into plain dictionaries for easy logging, serialization, or downstream processing.
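To see the response object in isolation, here is a quick hedged sketch; the payload and status code are invented purely for illustration:

# Illustrative only: a hand-built APIResponse flattened for logging.
sample = APIResponse(
    data={"id": 1, "title": "hello"},
    status_code=200,
    headers={"Content-Type": "application/json"},
    timestamp=datetime.now(),
)
print(json.dumps(sample.to_dict(), default=str, indent=2))  # default=str serializes the datetime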
class RateLimiter:
    """Token bucket rate limiter"""
    def __init__(self, max_calls: int = 100, time_window: int = 60):
        self.max_calls = max_calls
        self.time_window = time_window
        self.calls = []

    def can_proceed(self) -> bool:
        now = time.time()
        # Drop timestamps that have aged out of the rolling window
        self.calls = [call_time for call_time in self.calls if now - call_time < self.time_window]
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False

    def wait_time(self) -> float:
        if not self.calls:
            return 0
        return max(0, self.time_window - (time.time() - self.calls[0]))
The RateLimiter class enforces a simple sliding-window policy: it tracks the timestamps of recent calls and allows at most max_calls within the rolling time_window. Once the limit is reached, can_proceed() returns False, and wait_time() computes how long to pause before the next request is allowed.
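As a quick sanity check, the limiter can be exercised on its own; in this hedged sketch the limit of 3 calls per 5 seconds is an arbitrary choice:

# Standalone sketch: 3 calls per 5 seconds, chosen only for demonstration.
limiter = RateLimiter(max_calls=3, time_window=5)
for i in range(5):
    if limiter.can_proceed():
        print(f"call {i}: allowed")
    else:
        print(f"call {i}: blocked, retry in {limiter.wait_time():.2f}s")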
class Cache:
    """Simple in-memory cache with TTL"""
    def __init__(self, default_ttl: int = 300):
        self.cache = {}
        self.default_ttl = default_ttl

    def _generate_key(self, method: str, url: str, params: Dict = None) -> str:
        key_data = f"{method}:{url}:{json.dumps(params or {}, sort_keys=True)}"
        return hashlib.md5(key_data.encode()).hexdigest()

    def get(self, method: str, url: str, params: Dict = None) -> Optional[APIResponse]:
        key = self._generate_key(method, url, params)
        if key in self.cache:
            response, expiry = self.cache[key]
            if datetime.now() < expiry:
                return response
            del self.cache[key]  # evict stale entry
        return None

    def set(self, method: str, url: str, response: APIResponse, params: Dict = None, ttl: int = None):
        key = self._generate_key(method, url, params)
        expiry = datetime.now() + timedelta(seconds=ttl or self.default_ttl)
        self.cache[key] = (response, expiry)
The Cache class provides a lightweight in-memory TTL cache for API responses by hashing each request signature (method, URL, params) into a unique key. It returns cached responses while they are still valid and automatically evicts stale entries once they expire.
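A short hedged sketch of the cache on its own; the 2-second TTL and the /posts/1 key are illustrative values chosen to make expiry observable:

# Illustrative only: a 2-second TTL so the eviction is easy to watch.
cache = Cache(default_ttl=2)
resp = APIResponse(data={"ok": True}, status_code=200, headers={}, timestamp=datetime.now())
cache.set("GET", "/posts/1", resp)
print(cache.get("GET", "/posts/1") is not None)  # True: entry is still fresh
time.sleep(2.1)
print(cache.get("GET", "/posts/1") is None)      # True: entry has expired and been evicted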
class AdvancedSDK:
    """Advanced SDK with modern Python patterns"""
    def __init__(self, base_url: str, api_key: str = None, rate_limit: int = 100):
        self.base_url = base_url.rstrip('/')
        self.api_key = api_key
        self.session = None
        self.rate_limiter = RateLimiter(max_calls=rate_limit)
        self.cache = Cache()
        self.logger = self._setup_logger()

    def _setup_logger(self) -> logging.Logger:
        logger = logging.getLogger(f"SDK-{id(self)}")
        if not logger.handlers:
            handler = logging.StreamHandler()
            formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
            handler.setFormatter(formatter)
            logger.addHandler(handler)
            logger.setLevel(logging.INFO)
        return logger
    async def __aenter__(self):
        """Async context manager entry"""
        self.session = aiohttp.ClientSession()
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        """Async context manager exit"""
        if self.session:
            await self.session.close()

    def _get_headers(self) -> Dict[str, str]:
        headers = {'Content-Type': 'application/json'}
        if self.api_key:
            headers['Authorization'] = f'Bearer {self.api_key}'
        return headers
    async def _make_request(self, method: str, endpoint: str, params: Dict = None,
                            data: Dict = None, use_cache: bool = True) -> APIResponse:
        """Core request method with rate limiting and caching"""
        if use_cache and method.upper() == 'GET':
            cached = self.cache.get(method, endpoint, params)
            if cached:
                self.logger.info(f"Cache hit for {method} {endpoint}")
                return cached
        if not self.rate_limiter.can_proceed():
            wait_time = self.rate_limiter.wait_time()
            self.logger.warning(f"Rate limit hit, waiting {wait_time:.2f}s")
            await asyncio.sleep(wait_time)
        url = f"{self.base_url}/{endpoint.lstrip('/')}"
        try:
            async with self.session.request(
                method=method.upper(),
                url=url,
                params=params,
                json=data,
                headers=self._get_headers()
            ) as resp:
                response_data = await resp.json() if resp.content_type == 'application/json' else await resp.text()
                api_response = APIResponse(
                    data=response_data,
                    status_code=resp.status,
                    headers=dict(resp.headers),
                    timestamp=datetime.now()
                )
                if use_cache and method.upper() == 'GET' and 200 <= resp.status < 300:
                    self.cache.set(method, endpoint, api_response, params)
                return api_response
        except Exception as e:
            self.logger.error(f"Request failed: {str(e)}")
            raise

    async def get(self, endpoint: str, params: Dict = None, use_cache: bool = True) -> APIResponse:
        return await self._make_request('GET', endpoint, params=params, use_cache=use_cache)

    async def post(self, endpoint: str, data: Dict = None) -> APIResponse:
        return await self._make_request('POST', endpoint, data=data, use_cache=False)

    async def put(self, endpoint: str, data: Dict = None) -> APIResponse:
        return await self._make_request('PUT', endpoint, data=data, use_cache=False)

    async def delete(self, endpoint: str) -> APIResponse:
        return await self._make_request('DELETE', endpoint, use_cache=False)
The AdvancedSDK class ties everything together into a clean, async-first client: it manages the aiohttp session through an async context manager, injects JSON and auth headers, and coordinates the rate limiter and cache. Its _make_request method centralizes the GET/POST/PUT/DELETE logic, handling cache lookups, rate-limit waits, error logging, and response wrapping, while the thin get/post/put/delete helpers give callers ergonomic, high-level methods.
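One detail worth noting: callers can bypass the cache per request via the use_cache flag. A minimal hedged sketch, assuming the same JSONPlaceholder base URL used in the demo below:

# Hedged sketch: use_cache=False forces a fresh network read.
async def fresh_read():
    async with AdvancedSDK("https://jsonplaceholder.typicode.com") as sdk:
        live = await sdk.get("/posts/1", use_cache=False)  # skips cache lookup and storage
        print(live.status_code, live.timestamp)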
async def demo_sdk():
    """Demonstrate SDK capabilities"""
    print("🚀 Advanced SDK Demo")
    print("=" * 50)
    async with AdvancedSDK("https://jsonplaceholder.typicode.com") as sdk:
        print("\n📥 Testing GET request with caching...")
        response1 = await sdk.get("/posts/1")
        print(f"First request - Status: {response1.status_code}")
        print(f"Title: {response1.data.get('title', 'N/A')}")
        response2 = await sdk.get("/posts/1")
        print(f"Second request (cached) - Status: {response2.status_code}")

        print("\n📤 Testing POST request...")
        new_post = {
            "title": "Advanced SDK Tutorial",
            "body": "This SDK demonstrates modern Python patterns",
            "userId": 1
        }
        post_response = await sdk.post("/posts", data=new_post)
        print(f"POST Status: {post_response.status_code}")
        print(f"Created post ID: {post_response.data.get('id', 'N/A')}")

        print("\n⚡ Testing batch requests with rate limiting...")
        tasks = []
        for i in range(1, 6):
            tasks.append(sdk.get(f"/posts/{i}"))
        results = await asyncio.gather(*tasks)
        print(f"Batch completed: {len(results)} requests")
        for i, result in enumerate(results, 1):
            print(f"   Post {i}: {result.data.get('title', 'N/A')[:30]}...")

        print("\n❌ Testing error handling...")
        try:
            error_response = await sdk.get("/posts/999999")
            print(f"Error response status: {error_response.status_code}")
        except Exception as e:
            print(f"Handled error: {type(e).__name__}")

        print("\n✅ Demo completed successfully!")
async def run_demo():
    """Colab-friendly demo runner"""
    await demo_sdk()
The demo_sdk coroutine exercises the SDK's core features, issuing cached GET requests, executing a POST, running a batch of rate-limited GETs, and handling errors, against the JSONPlaceholder API, printing status codes and sample data to illustrate each capability. The run_demo helper ensures the demonstration runs smoothly inside the existing event loop of Colab notebooks.
import nest_asyncio
nest_asyncio.apply()

if __name__ == "__main__":
    try:
        asyncio.run(demo_sdk())
    except RuntimeError:
        loop = asyncio.get_event_loop()
        loop.run_until_complete(demo_sdk())
class SDKBuilder:
    """Builder pattern for SDK configuration"""
    def __init__(self, base_url: str):
        self.base_url = base_url
        self.config = {}

    def with_auth(self, api_key: str):
        self.config['api_key'] = api_key
        return self

    def with_rate_limit(self, calls_per_minute: int):
        self.config['rate_limit'] = calls_per_minute
        return self

    def build(self) -> AdvancedSDK:
        return AdvancedSDK(self.base_url, **self.config)
Finally, we apply nest_asyncio to support nested event loops in Colab, then run the demo via asyncio.run, falling back to manual loop execution if a loop is already running. We also introduce an SDKBuilder class that implements a fluent builder pattern for easily configuring and instantiating AdvancedSDK with custom authentication and rate-limit settings.
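For completeness, here is a minimal hedged sketch of the builder in action; the base URL, the "demo-key" token, and the limit of 50 calls per minute are placeholder values:

# Hedged sketch: placeholder credentials and limits, chained fluently.
async def builder_demo():
    sdk = (SDKBuilder("https://jsonplaceholder.typicode.com")
           .with_auth("demo-key")
           .with_rate_limit(50)
           .build())
    async with sdk:
        response = await sdk.get("/users/1")
        print(f"Builder-configured SDK - Status: {response.status_code}")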
In short, this SDK tutorial provides a scalable foundation for any RESTful integration, combining modern Python idioms (dataclasses, async/await, context managers) with practical tooling (rate limiter, cache, structured logging). By adapting the pattern shown here, especially the separation of concerns between request orchestration, caching, and response modeling, teams can accelerate the development of new API clients while ensuring predictability, observability, and resilience.
Sana Hassan, a consulting intern at Marktechpost and a dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.
