Overview
To start using GetGoAPI, create an account and get your API key. Then explore our API reference for more details.
Quick Start
Get started with GetGoAPI using the OpenAI SDK or by calling the API directly.
Using the OpenAI SDK
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.getgoapi.com/v1",
    api_key="<API_KEY>",
)

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "What is the meaning of life?"}
    ]
)

print(completion.choices[0].message.content)
```

Using the GetGoAPI API Directly
```python
import requests
import json

response = requests.post(
    url="https://api.getgoapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <API_KEY>",
        "Content-Type": "application/json"
    },
    data=json.dumps({
        "model": "gpt-4o",
        "messages": [
            {"role": "user", "content": "What is the meaning of life?"}
        ]
    })
)

print(response.json()["choices"][0]["message"]["content"])
```

Streaming Responses
Enable real-time token delivery by setting `stream: true` in your request.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.getgoapi.com/v1",
    api_key="<API_KEY>",
)

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```

Authentication
GetGoAPI uses API keys to authenticate requests. Include your API key in the Authorization header as a Bearer token.
Keep Your API Key Secure
Never share your API key or commit it to version control. Use environment variables to store your key securely.
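As a minimal sketch of this advice, you can load the key from the environment and fail fast when it is missing. The variable name `GETGOAPI_KEY` follows the convention used later in this document; any name works as long as it matches your shell configuration:

```python
import os

def load_api_key(var: str = "GETGOAPI_KEY") -> str:
    # Fail fast with a clear message instead of sending requests with an empty key.
    key = os.getenv(var)
    if not key:
        raise RuntimeError(f"Environment variable {var} is not set")
    return key
```

The returned string can then be passed as `api_key` to the OpenAI client or used in an `Authorization` header.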
```shell
curl https://api.getgoapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

API Reference
GetGoAPI implements the OpenAI API specification, making it easy to switch from OpenAI to GetGoAPI.
Chat Completions
Create a chat completion with any supported model.
`POST /v1/chat/completions`
The most popular endpoint for conversational AI.
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model ID (e.g., `"gpt-4o"`, `"claude-3-5-sonnet"`) |
| `messages` | array | Yes | Array of message objects with `role` and `content` |
| `temperature` | number | No | Sampling temperature (0-2). Default: 1 |
| `max_tokens` | integer | No | Maximum tokens to generate |
| `stream` | boolean | No | Enable streaming responses. Default: false |
| `top_p` | number | No | Nucleus sampling (0-1). Default: 1 |
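Putting the table together, a request body that uses only the documented parameters might look like this (the values are illustrative, not recommendations):

```python
# Illustrative request payload built from the documented parameters.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],  # required
    "temperature": 0.7,  # optional, 0-2 (default 1)
    "max_tokens": 100,   # optional
    "stream": False,     # optional (default false)
    "top_p": 1,          # optional, 0-1 (default 1)
}

# Sanity checks matching the documented ranges.
assert 0 <= payload["temperature"] <= 2
assert 0 <= payload["top_p"] <= 1
```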
Response Format
```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Hello! How can I help you today?"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
```

Completions

`POST /v1/completions`
Create a text completion.

Embeddings

`POST /v1/embeddings`
Create embeddings for text.
Example: Create Embeddings
```python
from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="The quick brown fox jumps over the lazy dog"
)

embedding = response.data[0].embedding
print(f"Embedding dimension: {len(embedding)}")
print(f"First 5 values: {embedding[:5]}")
```

Images

`POST /v1/images/generations`
Generate images from text prompts.
Example: Generate Image
```python
from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

response = client.images.generate(
    model="dall-e-3",
    prompt="A futuristic city with flying cars at sunset",
    size="1024x1024",
    quality="standard",
    n=1
)

image_url = response.data[0].url
print(f"Generated image: {image_url}")
```

Audio

`POST /v1/audio/transcriptions`
Transcribe audio to text.

Models

`GET /v1/models`
List all available models.
Code Examples
Python
```python
from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

# Chat completion
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    temperature=0.7,
    max_tokens=500
)

print(response.choices[0].message.content)
```

Node.js
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-getgoapi-key',
  baseURL: 'https://api.getgoapi.com/v1'
});

async function main() {
  const response = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Explain quantum computing in simple terms.' }
    ],
    temperature: 0.7,
    max_tokens: 500
  });
  console.log(response.choices[0].message.content);
}

main();
```

cURL
```shell
curl https://api.getgoapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    "temperature": 0.7,
    "max_tokens": 500
  }'
```

Error Handling
GetGoAPI uses standard HTTP response codes to indicate success or failure.
| Code | Status | Description |
|---|---|---|
| 200 | OK | Request succeeded |
| 400 | Bad Request | Invalid request parameters |
| 401 | Unauthorized | Invalid or missing API key |
| 429 | Too Many Requests | Rate limit exceeded |
| 500 | Internal Server Error | Server error, please retry |
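The codes above split cleanly into transient failures (worth retrying) and caller errors (not worth retrying). A small helper, sketched here and not part of any SDK, can encode that decision:

```python
def should_retry(status_code: int) -> bool:
    # Rate limits (429) and server errors (5xx) are transient; retrying can succeed.
    # Client errors such as 400 and 401 will fail the same way on every retry.
    return status_code == 429 or 500 <= status_code < 600
```

Pair this with the exponential-backoff examples shown elsewhere in this document.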
Response Format
Understanding the structure of API responses helps you extract the information you need.
Chat Completion Response
A typical chat completion response contains the following fields:
| Field | Type | Description |
|---|---|---|
| `id` | string | Unique identifier for the completion |
| `object` | string | Object type (e.g., `"chat.completion"`) |
| `created` | integer | Unix timestamp of creation time |
| `model` | string | Model used for the completion |
| `choices` | array | Array of completion choices |
| `usage` | object | Token usage information |
Choices Object
Each choice in the choices array contains:
| Field | Type | Description |
|---|---|---|
| `index` | integer | Index of this choice |
| `message` | object | The generated message (`role` + `content`) |
| `finish_reason` | string | Reason for completion (`"stop"`, `"length"`, `"content_filter"`) |
Usage Object
The usage object provides token consumption details:
| Field | Type | Description |
|---|---|---|
| `prompt_tokens` | integer | Tokens in the prompt |
| `completion_tokens` | integer | Tokens in the completion |
| `total_tokens` | integer | Total tokens used (prompt + completion) |
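Using the usage values from the sample response earlier in this document, the relationship between the three fields is easy to verify:

```python
# Usage object from the sample chat completion response above.
usage = {"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21}

# total_tokens is the sum of prompt and completion tokens.
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```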
Example: Parsing Response
```python
from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Access response fields
print(f"ID: {response.id}")
print(f"Model: {response.model}")
print(f"Content: {response.choices[0].message.content}")
print(f"Finish Reason: {response.choices[0].finish_reason}")
print(f"Tokens Used: {response.usage.total_tokens}")
```

Best Practices
Use Environment Variables
Store your API key in environment variables, never hardcode it in your source code.
```shell
# .env file
GETGOAPI_KEY=your-api-key-here
```

```python
# In your code
import os

api_key = os.getenv('GETGOAPI_KEY')
```

Implement Retry Logic
Handle transient errors by implementing exponential backoff retry logic.
```python
import time
from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

def make_request_with_retry(max_retries=3):
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(...)
            return response
        except Exception:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)  # Exponential backoff
            else:
                raise
```

Frequently Asked Questions
How do I get started with GetGoAPI?
Create an account at api.getgoapi.com/console, get your API key, and start making requests. You can use the OpenAI SDK or any HTTP client.
What models are supported?
GetGoAPI supports 158+ AI models from leading providers including OpenAI (GPT-4o, GPT-4 Turbo), Anthropic (Claude 3.5 Sonnet), Google (Gemini 2.0 Flash), and many more. Visit our models page for the complete list.
Is GetGoAPI compatible with OpenAI SDK?
Yes! GetGoAPI implements the OpenAI API specification. Simply change the `base_url` to `https://api.getgoapi.com/v1` and use your GetGoAPI key. No other code changes are needed.
How is billing calculated?
Billing is based on token usage. Each model has different pricing for prompt tokens and completion tokens. You can view detailed usage in your dashboard and check our pricing page for model-specific rates.
What are the rate limits?
Rate limits vary by plan and model. Free tier users have lower limits, while paid users get higher limits. If you hit a rate limit, you'll receive a 429 error. Implement exponential backoff to handle this gracefully.
Do you log my prompts and responses?
No. We do not log your prompts or completions. We only store basic request metadata (timestamps, model used, token counts) for billing purposes. Your data privacy is our priority.
Can I use streaming with all models?
Most chat models support streaming. Set `stream: true` in your request to enable real-time token delivery. Some specialized models may not support streaming; check the model documentation.
How do I handle errors?
GetGoAPI uses standard HTTP status codes. Implement try-catch blocks and check for specific error codes (400, 401, 429, 500). For transient errors (429, 500), use exponential backoff retry logic.
Rate Limits
GetGoAPI implements rate limits to ensure fair usage and system stability. Rate limits vary by plan and model.
Rate Limit Headers
Every API response includes rate limit information in the headers:
- `X-RateLimit-Limit`: Maximum requests allowed
- `X-RateLimit-Remaining`: Requests remaining in the current window
- `X-RateLimit-Reset`: Time when the limit resets (Unix timestamp)
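Because `X-RateLimit-Reset` is a Unix timestamp, you can compute how long to wait before the window reopens. A sketch, where `headers` is any mapping of response headers (e.g., `response.headers` from the `requests` library):

```python
import time

def seconds_until_reset(headers: dict) -> float:
    # X-RateLimit-Reset is a Unix timestamp of when the rate-limit window resets.
    reset = int(headers.get("X-RateLimit-Reset", "0"))
    return max(0.0, reset - time.time())
```

Sleeping for this duration (instead of a guessed backoff) avoids retrying before the limit actually resets.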
Handling Rate Limit Errors (429)
When you exceed the rate limit, you'll receive a 429 status code. Implement exponential backoff:
```python
import time
from openai import OpenAI, RateLimitError

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

def make_request_with_backoff(max_retries=5):
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": "Hello!"}]
            )
            return response
        except RateLimitError:
            if attempt < max_retries - 1:
                wait_time = (2 ** attempt) + 1  # Exponential backoff
                print(f"Rate limit hit. Waiting {wait_time}s...")
                time.sleep(wait_time)
            else:
                raise

response = make_request_with_backoff()
print(response.choices[0].message.content)
```

Supported Models
GetGoAPI supports 158+ AI models from leading providers. View the complete list on our models page.
OpenAI
76 models available
- GPT-4o
- GPT-4 Turbo
- GPT-3.5 Turbo
Anthropic
22 models available
- Claude 3.5 Sonnet
- Claude 3 Opus
- Claude 3 Haiku
Google
60 models available
- Gemini 2.0 Flash
- Gemini 1.5 Pro
- Gemini 1.5 Flash