JSON Prompting for LLMs: A Practical Guide with Python Coding Examples
JSON prompting is a technique that uses JavaScript Object Notation (JSON) to structure instructions for AI models, making prompts clear, explicit, and machine-readable. Where traditional free-text prompts can leave room for ambiguity and misinterpretation, JSON prompts organize the request into key-value pairs, arrays, and nested objects, turning fuzzy requests into precise blueprints for the model to follow. This approach can greatly improve consistency and accuracy by letting users specify task types, topics, audiences, output formats, and other parameters in an organized way, especially for complex or repetitive tasks. As AI systems increasingly rely on predictable, structured inputs for real-world workflows, JSON prompting has become a preferred strategy for producing clearer and more reliable results with major LLMs, including GPT-4, Claude, and Gemini.
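To make that concrete, a request such as "summarize this email for the executive team in three bullet points" can be expressed as explicit fields instead of a sentence. The field names below are purely illustrative, not a fixed standard:
import json

# Illustrative only: the same request expressed as explicit fields rather than free text
prompt_fields = {
    "task": "summarize",
    "topic": "quarterly marketing email",
    "audience": "executive team",
    "output_format": "3 bullet points",
}
print(json.dumps(prompt_fields, indent=2))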
In this tutorial, we’ll dig into the power of JSON prompting and why it can change the way you interact with AI models.
We’ll walk through the benefits of using JSON prompts with coding examples, moving from simple text prompts to structured JSON prompts and comparing their outputs. By the end, you’ll see clearly how structured prompts bring precision, consistency, and scalability to your workflow, whether you’re generating summaries, extracting data, or building advanced AI pipelines. The complete code for this tutorial is available in the accompanying notebook.

Install dependencies
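If the openai Python package is not already available in your environment, install it first (the command below assumes a notebook such as Colab; drop the leading ! in a regular shell):
!pip install openai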
import os
from getpass import getpass
os.environ["OPENAI_API_KEY"] = getpass('Enter OpenAI API Key: ')
To obtain an OpenAI API key, visit the API keys page in your OpenAI platform account and generate a new key. If you are a new user, you may need to add billing details and make a minimum payment of $5 to activate API access.
from openai import OpenAI
client = OpenAI()  # reads OPENAI_API_KEY from the environment variable set above
Structured prompts ensure consistency
Using structured prompts, such as JSON-based formats, forces you to think in terms of fields and values, which is a real advantage when working with LLMs.
By defining a fixed structure, you eliminate ambiguity and guesswork, ensuring that each response follows a predictable pattern.
Here is a simple example:
Summarize the following email and list the action items clearly.
Email:
Hi team, let's finalize the marketing plan by Tuesday. Alice, prepare the draft; Bob, handle the design.
We will send this prompt to the LLM in two ways and then compare the output of the freeform prompt with that of the structured (JSON-based) prompt to observe the differences in clarity and consistency.
Freeform Prompt
prompt_text = """
Summarize the following email and list the action items clearly.
Email:
Hi team, let's finalize the marketing plan by Tuesday. Alice, prepare the draft; Bob, handle the design.
"""
response_text = client.chat.completions.create(
model="gpt-5",
messages=[{"role": "user", "content": prompt_text}]
)
text_output = response_text.choices[0].message.content
print(text_output)
Summary:
The team needs to finalize the marketing plan by Tuesday. Alice will prepare the draft, and Bob will handle the design.
Action items:
- Alice: Prepare the draft of the marketing plan by Tuesday.
- Bob: Handle the design by Tuesday.
- Team: Finalize the marketing plan by Tuesday.
JSON Prompt
prompt_json = """
Summarize the following email and return the output strictly in JSON format:
{
"summary": "short summary of the email",
"action_items": ["task 1", "task 2", "task 3"],
"priority": "low | medium | high"
}
Email:
Hi team, let's finalize the marketing plan by Tuesday. Alice, prepare the draft; Bob, handle the design.
"""
response_json = client.chat.completions.create(
model="gpt-5",
messages=[
{"role": "system", "content": "You are a precise assistant that always replies in valid JSON."},
{"role": "user", "content": prompt_json}
]
)
json_output = response_json.choices[0].message.content
print(json_output)
{
"summary": "Finalize the marketing plan by Tuesday; Alice to draft and Bob to handle design.",
"action_items": [
"Alice: prepare the draft",
"Bob: handle the design",
"Team: finalize the marketing plan by Tuesday"
],
"priority": "medium"
}
In this example, using a structured JSON prompt results in clear, concise output that is easy to parse and evaluate. By defining fields such as "summary", "action_items", and "priority", the LLM response becomes more consistent and actionable. Rather than generating free-flowing text (which may vary from run to run), the model provides a predictable structure that removes ambiguity. This approach not only improves the readability and reliability of the response, but also makes it easier to integrate the output into downstream workflows such as project trackers, dashboards, or automated email handlers, as shown in the sketch below.
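Because the reply is valid JSON, it can be loaded directly into Python and handed to other tools. Here is a minimal sketch, assuming json_output holds the response string from above; in practice you may want to wrap json.loads in a try/except, since a model can occasionally return malformed JSON or wrap it in extra text:
import json

# Minimal sketch: parse the model's JSON reply and route the fields downstream
parsed = json.loads(json_output)

print("Summary:", parsed["summary"])
for item in parsed["action_items"]:
    print("TODO:", item)

# Route by priority (values match those requested in the prompt)
if parsed["priority"] == "high":
    print("Escalate: high-priority email")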
Users can control the output
When you structure prompts in JSON, you remove ambiguity from both the instruction and the output. In this example, asking for a market summary, sentiment, opportunities, risks, and a confidence score in plain text may produce inconsistent formats. By framing the request in JSON, with well-defined fields such as "summary", "sentiment", "opportunities", "risks", and "confidence_score", the response becomes predictable, machine-friendly, and easy to parse. This consistency ensures that whether you are generating content, analyzing reports, or extracting insights, your workflow remains streamlined and reliable, with clean, structured results every time and no surprises.
Freeform Prompt
plain_text_prompt = """
Analyze the following market update:
Market Text:
Tesla's Q2 earnings beat expectations due to higher Model Y sales, but rising competition from BYD is a risk.
Apple reported steady revenue growth driven by iPhone sales, but services revenue slightly declined.
Amazon's AWS division continues to dominate cloud computing, though regulatory scrutiny in Europe is increasing.
Generate:
- A 2-line market summary
- Sentiment for each company (positive, negative, neutral)
- Key growth opportunities and risks
- A confidence score from 0 to 10
"""
response_plain = client.chat.completions.create(
model="gpt-5",
messages=[{"role": "user", "content": plain_text_prompt}]
)
plain_output = response_plain.choices[0].message.content
print(plain_output)
Market summary:
- Earnings updates skew constructive: Tesla beat on Q2 with strong Model Y, Apple grew on iPhone, and AWS remains the cloud leader.
- Offsetting risks include BYD pressure on Tesla, Apple's services dip, and rising European scrutiny on Amazon.
Sentiment:
- Tesla: Positive
- Apple: Neutral
- Amazon: Positive
Key growth opportunities and risks:
- Tesla
- Opportunities: Sustain Model Y momentum; scale production and pricing to drive volume.
- Risks: Intensifying competition from BYD could pressure share and margins.
- Apple
- Opportunities: Monetize large iPhone base; re-accelerate services via bundles and ecosystem engagement.
- Risks: Services softness; dependence on iPhone for top-line growth.
- Amazon (AWS)
- Opportunities: Leverage leadership to win more enterprise/AI workloads and multi-year commitments.
- Risks: European regulatory scrutiny may lead to fines, compliance costs, or contract/pricing constraints.
Confidence score: 7/10
JSON Prompt
json_prompt = """
Analyze the following market update and return the response in this JSON format:
{
"summary": "2-line market overview",
"companies": [
{
"name": "string",
"sentiment": "positive | negative | neutral",
"opportunities": ["list of opportunities"],
"risks": ["list of risks"]
}
],
"confidence_score": "integer (0-10)"
}
Market Text:
Tesla's Q2 earnings beat expectations due to higher Model Y sales, but rising competition from BYD is a risk.
Apple reported steady revenue growth driven by iPhone sales, but services revenue slightly declined.
Amazon's AWS division continues to dominate cloud computing, though regulatory scrutiny in Europe is increasing.
"""
response_json = client.chat.completions.create(
model="gpt-5",
messages=[
{"role": "system", "content": "You are a precise assistant that always outputs valid JSON."},
{"role": "user", "content": json_prompt}
]
)
json_output = response_json.choices[0].message.content
print(json_output)
{
"summary": "Markets saw mixed corporate updates: Tesla beat expectations on strong Model Y sales and AWS maintained cloud leadership.nHowever, Apple's growth was tempered by softer services revenue while Tesla and AWS face competition and regulatory risks.",
"companies": [
{
"name": "Tesla",
"sentiment": "positive",
"opportunities": [
"Leverage strong Model Y demand to drive revenue and scale production",
"Sustain earnings momentum from better-than-expected Q2 results"
],
"risks": [
"Intensifying competition from BYD",
"Potential price pressure impacting margins"
]
},
{
"name": "Apple",
"sentiment": "neutral",
"opportunities": [
"Build on steady iPhone-driven revenue growth",
"Revitalize Services to reaccelerate growth"
],
"risks": [
"Slight decline in services revenue",
"Reliance on iPhone as the primary growth driver"
]
},
{
"name": "Amazon (AWS)",
"sentiment": "positive",
"opportunities": [
"Capitalize on cloud leadership to win new enterprise workloads",
"Expand higher-margin managed services and deepen customer spend"
],
"risks": [
"Increasing regulatory scrutiny in Europe",
"Potential compliance costs or operational restrictions"
]
}
],
"confidence_score": 8
}
The freeform prompt produces a useful summary, but its lack of structure gives the model too much freedom, making programmatic parsing or integration into a workflow more difficult.
By contrast, the JSON prompt gives users complete control over the output format, ensuring clean, readable results with dedicated fields for the summary, sentiment, opportunities, risks, and confidence score. This structured approach not only simplifies downstream processing for dashboards, automated alerts, or data pipelines, but also guarantees consistency across responses. With pre-defined fields, users effectively guide the model to deliver exactly what they need, reducing ambiguity and improving reliability. The sketch below shows how such a response can feed a simple alerting step.
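As a minimal sketch of that kind of downstream step (the confidence threshold and alert logic below are illustrative assumptions, not part of the original prompt), the structured market analysis can be parsed and turned into simple alerts:
import json

# Minimal sketch: turn the structured market analysis into a simple alert feed.
# Assumes json_output holds the JSON string returned by the market-analysis call above.
report = json.loads(json_output)

# Only act on the analysis if the model reports reasonable confidence (threshold is illustrative)
if int(report["confidence_score"]) >= 7:
    for company in report["companies"]:
        if company["sentiment"] == "negative":
            print(f"ALERT: {company['name']} flagged negative. Risks: {', '.join(company['risks'])}")
        else:
            print(f"{company['name']}: {company['sentiment']} ({len(company['opportunities'])} opportunities noted)")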
Reusable JSON prompt templates unlock scalability, speed, and clean handoffs
By pre-defining structured fields, teams can generate consistent, machine-readable outputs that plug directly into APIs, databases, or applications without manual formatting. This standardization not only accelerates workflows but also ensures reliable, repeatable results, enabling seamless collaboration and automation across projects. A reusable template can look like the sketch below.
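As a minimal sketch of a reusable template (the helper name, schema string, and example text below are illustrative assumptions, and the code reuses the client created earlier), one small function can wrap any input in the same JSON instruction and return the parsed result:
import json

def run_json_prompt(text, schema_description, model="gpt-5"):
    """Reusable helper: send any text with a fixed JSON schema description and parse the reply."""
    prompt = f"""
Analyze the following text and return the output strictly in this JSON format:
{schema_description}

Text:
{text}
"""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a precise assistant that always replies in valid JSON."},
            {"role": "user", "content": prompt},
        ],
    )
    # A try/except around json.loads is advisable in production in case of malformed output
    return json.loads(response.choices[0].message.content)

# Example usage with the email-summary schema from earlier
email_schema = """
{
  "summary": "short summary of the text",
  "action_items": ["task 1", "task 2"],
  "priority": "low | medium | high"
}
"""
result = run_json_prompt("Hi team, let's finalize the marketing plan by Tuesday.", email_schema)
print(result["summary"], "| priority:", result["priority"])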



I am a civil engineering graduate (2022) from Jamia Millia Islamia, New Delhi, and I am very interested in data science, especially neural networks and their applications in various fields.