gpt-4-turbo Code Examples

Complete API code examples in Python, Node.js, and cURL. Copy and use instantly.

1. Basic Chat

Simple chat completion example

Python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_GETGOAPI_KEY",
    base_url="https://api.getgoapi.com/v1"
)

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"}
    ]
)

print(response.choices[0].message.content)
Node.js
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_GETGOAPI_KEY',
  baseURL: 'https://api.getgoapi.com/v1'
});

async function main() {
  const response = await client.chat.completions.create({
    model: 'gpt-4-turbo',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'What is the capital of France?' }
    ]
  });

  console.log(response.choices[0].message.content);
}

main();
cURL
curl https://api.getgoapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_GETGOAPI_KEY" \
  -d '{
    "model": "gpt-4-turbo",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
2. Streaming Output

Real-time streaming responses

Python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_GETGOAPI_KEY",
    base_url="https://api.getgoapi.com/v1"
)

stream = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Write a short story"}],
    stream=True  # enable streaming output
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end='')
Node.js
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_GETGOAPI_KEY',
  baseURL: 'https://api.getgoapi.com/v1'
});

async function main() {
  const stream = await client.chat.completions.create({
    model: 'gpt-4-turbo',
    messages: [{ role: 'user', content: 'Write a short story' }],
    stream: true  // enable streaming output
  });

  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }
}

main();
cURL
curl https://api.getgoapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_GETGOAPI_KEY" \
  -d '{
    "model": "gpt-4-turbo",
    "messages": [{"role": "user", "content": "Write a short story"}],
    "stream": true
  }'
3. Temperature Control

Adjust output randomness

Python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_GETGOAPI_KEY",
    base_url="https://api.getgoapi.com/v1"
)

# temperature=0: deterministic output (good for coding and math)
response_deterministic = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Write a Python function to sort a list"}],
    temperature=0
)

# temperature=1: creative output (good for writing and brainstorming)
response_creative = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Write a creative story"}],
    temperature=1
)
Node.js
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_GETGOAPI_KEY',
  baseURL: 'https://api.getgoapi.com/v1'
});

async function main() {
  // temperature=0: deterministic output
  const deterministic = await client.chat.completions.create({
    model: 'gpt-4-turbo',
    messages: [{ role: 'user', content: 'Write a Python function to sort a list' }],
    temperature: 0
  });

  // temperature=1: creative output
  const creative = await client.chat.completions.create({
    model: 'gpt-4-turbo',
    messages: [{ role: 'user', content: 'Write a creative story' }],
    temperature: 1
  });
}

main();
cURL
curl https://api.getgoapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_GETGOAPI_KEY" \
  -d '{
    "model": "gpt-4-turbo",
    "messages": [{"role": "user", "content": "Write a Python function"}],
    "temperature": 0
  }'
4. Limit Output Length

Control generated token count

Python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_GETGOAPI_KEY",
    base_url="https://api.getgoapi.com/v1"
)

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
    max_tokens=100  # limit the output to 100 tokens
)

print(response.choices[0].message.content)
print(f"Used tokens: {response.usage.total_tokens}")
Node.js
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_GETGOAPI_KEY',
  baseURL: 'https://api.getgoapi.com/v1'
});

async function main() {
  const response = await client.chat.completions.create({
    model: 'gpt-4-turbo',
    messages: [{ role: 'user', content: 'Explain quantum computing' }],
    max_tokens: 100  // limit the output to 100 tokens
  });

  console.log(response.choices[0].message.content);
  console.log(`Used tokens: ${response.usage.total_tokens}`);
}

main();
cURL
curl https://api.getgoapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_GETGOAPI_KEY" \
  -d '{
    "model": "gpt-4-turbo",
    "messages": [{"role": "user", "content": "Explain quantum computing"}],
    "max_tokens": 100
  }'
5. System Prompt

Customize AI behavior and role

Python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_GETGOAPI_KEY",
    base_url="https://api.getgoapi.com/v1"
)

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {
            "role": "system",
            "content": "You are a professional Python developer. Always provide code with comments and best practices."
        },
        {
            "role": "user",
            "content": "Write a function to calculate Fibonacci sequence"
        }
    ]
)

print(response.choices[0].message.content)
Node.js
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_GETGOAPI_KEY',
  baseURL: 'https://api.getgoapi.com/v1'
});

async function main() {
  const response = await client.chat.completions.create({
    model: 'gpt-4-turbo',
    messages: [
      {
        role: 'system',
        content: 'You are a professional Python developer. Always provide code with comments.'
      },
      {
        role: 'user',
        content: 'Write a function to calculate Fibonacci sequence'
      }
    ]
  });

  console.log(response.choices[0].message.content);
}

main();
cURL
curl https://api.getgoapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_GETGOAPI_KEY" \
  -d '{
    "model": "gpt-4-turbo",
    "messages": [
      {
        "role": "system",
        "content": "You are a professional Python developer."
      },
      {
        "role": "user",
        "content": "Write a function to calculate Fibonacci sequence"
      }
    ]
  }'

Best Practices

Secure Your API Key

Never hardcode API keys. Use environment variables or secret management services.
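As a minimal sketch, the key can be read from an environment variable instead of being written into the source. The variable name GETGOAPI_KEY below is an assumption for illustration, not a fixed convention:

```python
import os

# Read the API key from the environment instead of hardcoding it.
# GETGOAPI_KEY is an assumed variable name; set it in your shell first:
#   export GETGOAPI_KEY="sk-..."
api_key = os.environ.get("GETGOAPI_KEY", "")
if not api_key:
    print("Warning: GETGOAPI_KEY is not set")

# Then pass it to the client exactly as in the examples above:
# client = OpenAI(api_key=api_key, base_url="https://api.getgoapi.com/v1")
```

The same pattern works in Node.js via process.env and in shell scripts via "$GETGOAPI_KEY" in the Authorization header.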

Error Handling

Always wrap API calls in try-catch blocks for graceful error handling.
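One way to sketch this in Python is a small wrapper that retries transient failures with exponential backoff. Here call_model is a hypothetical stand-in for any of the client.chat.completions.create(...) calls above, and the simulated flaky function only demonstrates the retry flow:

```python
import time

# Hedged sketch: wrap an API call in try/except and retry transient errors.
def call_with_retries(call_model, max_attempts=3, base_delay=0.1):
    for attempt in range(1, max_attempts + 1):
        try:
            return call_model()
        except (TimeoutError, ConnectionError) as exc:
            if attempt == max_attempts:
                raise  # give up after the last attempt
            # Exponential backoff before retrying: 0.1s, 0.2s, ...
            time.sleep(base_delay * 2 ** (attempt - 1))

# Demo: a fake call that fails twice with a network error, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient network error")
    return "ok"

print(call_with_retries(flaky))  # succeeds on the third attempt
```

In real code you would also catch the SDK's own exception types (e.g. rate-limit errors) and log enough context to debug failed requests.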

Control Token Usage

Use max_tokens to limit output length and control costs.

Optimize Prompts

Clear and concise prompts yield better responses and save tokens.

Need More Help?

Check the full documentation or contact our support team.