In this tutorial, we will explore the new features introduced in OpenAI's latest model, GPT-5. The update brings several powerful capabilities, including the verbosity parameter, freeform function calling, context-free grammar (CFG) support, and minimal reasoning. We will see how they work in practice and how to use them. The complete code is available here.
Install the library
!pip install pandas openai
To obtain an OpenAI API key, visit the OpenAI platform and generate a new key. If you are a new user, you may need to add billing details and pay a minimum of $5 to activate API access.
import os
from getpass import getpass
os.environ['OPENAI_API_KEY'] = getpass('Enter OpenAI API Key: ')
Verbosity parameter
The verbosity parameter lets you control how detailed the model's replies are without changing your prompts.
- Low → concise and clear, minimal extra text.
- Medium (default) → balanced detail and clarity.
- High → very detailed, ideal for explanations, audits, or teaching.
from openai import OpenAI
import pandas as pd
from IPython.display import display

client = OpenAI()

question = "Write a poem about a detective and his first solve"

data = []

for verbosity in ["low", "medium", "high"]:
    response = client.responses.create(
        model="gpt-5-mini",
        input=question,
        text={"verbosity": verbosity}
    )

    # Extract text
    output_text = ""
    for item in response.output:
        if hasattr(item, "content"):
            for content in item.content:
                if hasattr(content, "text"):
                    output_text += content.text

    usage = response.usage
    data.append({
        "Verbosity": verbosity,
        "Sample Output": output_text,
        "Output Tokens": usage.output_tokens
    })

# Create DataFrame
df = pd.DataFrame(data)

# Display nicely with centered headers
pd.set_option('display.max_colwidth', None)
styled_df = df.style.set_table_styles(
    [
        {'selector': 'th', 'props': [('text-align', 'center')]},  # Center column headers
        {'selector': 'td', 'props': [('text-align', 'left')]}     # Left-align table cells
    ]
)
display(styled_df)
Output tokens scale roughly linearly with verbosity: low (731) → medium (1017) → high (1263).
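The rough linearity can be seen by comparing the step sizes between the token counts quoted above:

```python
# Token counts quoted above for each verbosity level
tokens = {"low": 731, "medium": 1017, "high": 1263}

# Increase per verbosity step: roughly constant, hence "approximately linear"
counts = list(tokens.values())
deltas = [b - a for a, b in zip(counts, counts[1:])]
print(deltas)  # [286, 246]
```

Each step up in verbosity adds a similar number of output tokens, though the exact counts will vary from run to run.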
Freeform function calling
Freeform function calling lets GPT-5 send raw text payloads, such as Python scripts, SQL queries, or shell commands, directly to your tool, without the JSON wrapping used in GPT-4.
This makes it easier to connect GPT-5 to external runtimes such as:
- Code sandboxes (Python, C++, Java, etc.)
- SQL databases (outputting raw SQL directly)
- Shell environments (outputting runnable Bash)
- Config generators
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5-mini",
    input="Please use the code_exec tool to calculate the cube of the number of vowels in the word 'pineapple'",
    text={"format": {"type": "text"}},
    tools=[
        {
            "type": "custom",
            "name": "code_exec",
            "description": "Executes arbitrary python code",
        }
    ]
)

print(response.output[1].input)
This output shows the raw Python code generated by GPT-5, which counts the vowels in the word 'pineapple', calculates the cube of that count, and prints both values. Instead of returning a structured JSON object (as GPT-4 typically does for tool calls), GPT-5 emits plain executable code, so the result can be fed directly into a Python runtime without extra parsing.
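Feeding that code into a runtime can be as simple as the sketch below. It uses a hardcoded string standing in for `response.output[1].input` (an assumption, not real model output) and runs it with `exec`, capturing stdout; note that `exec` is not a real sandbox, so untrusted model output should run in a container or restricted runtime.

```python
import contextlib
import io

# Stand-in for the raw Python that GPT-5 might return via the code_exec tool
raw_code = (
    "word = 'pineapple'\n"
    "vowels = sum(ch in 'aeiou' for ch in word)\n"
    "print(vowels)\n"
    "print(vowels ** 3)\n"
)

def run_in_sandbox(code: str) -> str:
    """Execute a code string and return whatever it printed.
    WARNING: exec() offers no isolation; this is only a demo runner."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {})  # fresh, empty globals for the snippet
    return buffer.getvalue()

print(run_in_sandbox(raw_code))  # prints 4 then 64 ('pineapple' has 4 vowels)
```

In a full loop you would send the captured output back to the model as the tool result so it can continue the conversation.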
Context-Free Grammar (CFG)
A context-free grammar (CFG) is a set of production rules that define the valid strings of a language. Each rule rewrites a non-terminal symbol into terminals and/or other non-terminals, independent of the surrounding context.
CFGs are useful when you want to strictly constrain the model's output so it always follows the syntax of a programming language, data format, or other structured text, for example, guaranteeing that generated SQL, JSON, or code is always syntactically valid.
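To make "production rules" concrete, here is a toy grammar for email-like strings, expanded by a tiny recursive deriver. This is only an illustration of how a CFG generates valid strings; it is not the grammar format the API accepts, and the rule names and vocabulary are made up.

```python
import random

# A toy context-free grammar: each non-terminal maps to a list of
# productions; symbols not present as keys are terminals.
GRAMMAR = {
    "EMAIL": [("LOCAL", "@", "DOMAIN")],
    "LOCAL": [("NAME",), ("NAME", ".", "NAME")],
    "DOMAIN": [("NAME", ".", "TLD")],
    "NAME": [("john",), ("doe",), ("example",)],
    "TLD": [("com",), ("org",)],
}

def derive(symbol: str) -> str:
    """Expand a symbol: pick one production for a non-terminal and
    recursively derive its parts; emit terminals as-is."""
    if symbol not in GRAMMAR:
        return symbol
    production = random.choice(GRAMMAR[symbol])
    return "".join(derive(part) for part in production)

print(derive("EMAIL"))  # a randomly derived address, e.g. doe@example.com
```

Because every derivation follows the production rules, every generated string is a syntactically valid email; a grammar-constrained decoder applies the same idea at token level.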
For comparison, we will run the same prompt with the same CFG against GPT-4 and GPT-5 to see how well each model complies with the syntax rules, and how their accuracy and speed differ.
from openai import OpenAI
import re

client = OpenAI()

email_regex = r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$"

prompt = "Give me a valid email address for John Doe. It can be a dummy email"

# No grammar constraints -- model might give prose or invalid format
response = client.responses.create(
    model="gpt-4o",  # or earlier
    input=prompt
)

output = response.output_text.strip()
print("GPT Output:", output)
print("Valid?", bool(re.match(email_regex, output)))
from openai import OpenAI

client = OpenAI()

email_regex = r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$"

prompt = "Give me a valid email address for John Doe. It can be a dummy email"

response = client.responses.create(
    model="gpt-5",  # grammar-constrained model
    input=prompt,
    text={"format": {"type": "text"}},
    tools=[
        {
            "type": "custom",
            "name": "email_grammar",
            "description": "Outputs a valid email address.",
            "format": {
                "type": "grammar",
                "syntax": "regex",
                "definition": email_regex
            }
        }
    ],
    parallel_tool_calls=False
)

print("GPT-5 Output:", response.output[1].input)
This example shows how closely GPT-5 sticks to the specified format when a context-free grammar is applied.
Given the same syntax rules, GPT-4 wraps the email address in extra text ("Of course, here is the test email you can use for John Doe: [email protected]"), which invalidates it under the strict format requirement.
GPT-5, however, outputs exactly [email protected], matching the syntax and passing validation. This shows how much GPT-5's ability to comply with CFG constraints has improved.
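Strict full-string matching is why the prose-wrapped answer fails even though it contains a valid address. A search-based fallback can still recover the address from unconstrained output; the sketch below uses a made-up example string rather than real model output.

```python
import re

# No anchors: we want to find an email anywhere in the string
email_regex = r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"

# Hypothetical prose of the kind an unconstrained model may return
gpt4_style_output = (
    "Of course, here is a test email you can use for John Doe: "
    "john.doe@example.com"
)

# Strict validation fails: the string as a whole is not an email
print(bool(re.fullmatch(email_regex, gpt4_style_output)))  # False

# A search-based fallback still recovers the address
match = re.search(email_regex, gpt4_style_output)
print(match.group(0))  # john.doe@example.com
```

Grammar constraints remove the need for this kind of post-processing entirely, which is the practical advantage of the CFG approach.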
Minimal reasoning
Minimal reasoning mode runs GPT-5 with very few reasoning tokens, reducing latency and speeding up time-to-first-token.
It is ideal for deterministic, lightweight tasks such as:
- Data extraction
- Formatting
- Short rewrites
- Simple classification
Because the model skips most intermediate reasoning steps, responses are fast and concise. If not specified, reasoning effort defaults to medium.
import time
from openai import OpenAI

client = OpenAI()

prompt = "Classify the given number as odd or even. Return one word only."

start_time = time.time()  # Start timer

response = client.responses.create(
    model="gpt-5",
    input=[
        {"role": "developer", "content": prompt},
        {"role": "user", "content": "57"}
    ],
    reasoning={
        "effort": "minimal"  # Faster time-to-first-token
    },
)

latency = time.time() - start_time  # End timer

# Extract model's text output
output_text = ""
for item in response.output:
    if hasattr(item, "content"):
        for content in item.content:
            if hasattr(content, "text"):
                output_text += content.text

print("--------------------------------")
print("Output:", output_text)
print(f"Latency: {latency:.3f} seconds")

I am a Civil Engineering graduate (2022) from Jamia Millia Islamia, New Delhi, and I am deeply interested in data science, especially neural networks and their applications in various fields.