GetGoAPI Documentation
Everything you need to integrate AI models into your applications. Simple, fast, and powerful.
Quick Start
Get started with GetGoAPI in just a few minutes. Follow these simple steps to make your first API call.
Step 1: Get Your API Key
Sign up for a free account and get $10 in API credits instantly.
Step 2: Install SDK (Optional)
GetGoAPI is compatible with the OpenAI SDK, so you can use the official OpenAI client library.
pip install openai
Step 3: Make Your First Request
Replace the base URL and use your GetGoAPI key.
from openai import OpenAI
client = OpenAI(
api_key="your-getgoapi-key",
base_url="https://api.getgoapi.com/v1"
)
response = client.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "user", "content": "Hello!"}
]
)
print(response.choices[0].message.content)
Authentication
GetGoAPI uses API keys to authenticate requests. Include your API key in the Authorization header.
Keep Your API Key Secure
Never share your API key or commit it to version control. Use environment variables to store your key securely.
curl https://api.getgoapi.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "gpt-4o",
"messages": [{"role": "user", "content": "Hello!"}]
}'
API Endpoints
/v1/chat/completions
Create a chat completion with any supported model
/v1/completions
Create a text completion
/v1/embeddings
Create embeddings for text
/v1/images/generations
Generate images from text prompts
/v1/audio/transcriptions
Transcribe audio to text
/v1/models
List all available models
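Every endpoint above hangs off the same base URL and uses the same auth header. As a minimal sketch of how a raw HTTP request to any of them is assembled (`build_request` is an illustrative helper, not part of any SDK):

```python
BASE_URL = "https://api.getgoapi.com/v1"

def build_request(path: str, api_key: str) -> dict:
    """Assemble the URL and headers shared by every GetGoAPI endpoint."""
    return {
        "url": BASE_URL + path,
        "headers": {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    }

req = build_request("/models", "YOUR_API_KEY")
print(req["url"])  # https://api.getgoapi.com/v1/models
```

Pass the resulting URL and headers to any HTTP client; only the JSON body differs per endpoint.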
Code Examples
Python
from openai import OpenAI
client = OpenAI(
api_key="your-getgoapi-key",
base_url="https://api.getgoapi.com/v1"
)
# Chat Completion
response = client.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Explain quantum computing in simple terms."}
],
temperature=0.7,
max_tokens=500
)
print(response.choices[0].message.content)
Node.js
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: 'your-getgoapi-key',
baseURL: 'https://api.getgoapi.com/v1'
});
async function main() {
const response = await client.chat.completions.create({
model: 'gpt-4o',
messages: [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'Explain quantum computing in simple terms.' }
],
temperature: 0.7,
max_tokens: 500
});
console.log(response.choices[0].message.content);
}
main();
cURL
curl https://api.getgoapi.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "gpt-4o",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Explain quantum computing in simple terms."
}
],
"temperature": 0.7,
"max_tokens": 500
}'
Error Handling
GetGoAPI uses standard HTTP response codes to indicate success or failure.
200 OK
Request succeeded
400 Bad Request
Invalid request parameters
401 Unauthorized
Invalid or missing API key
429 Too Many Requests
Rate limit exceeded
500 Internal Server Error
Server error, please retry
Error Response Example
{
"error": {
"message": "Invalid API key provided",
"type": "invalid_request_error",
"code": "invalid_api_key"
}
}
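Because error bodies follow the structure above, a client can branch on the `code` field. A small sketch parsing the example payload (the branching logic is illustrative, not prescribed by the API):

```python
import json

# The error body shown above, as returned with a 401 response
error_body = """
{
  "error": {
    "message": "Invalid API key provided",
    "type": "invalid_request_error",
    "code": "invalid_api_key"
  }
}
"""

err = json.loads(error_body)["error"]
if err["code"] == "invalid_api_key":
    # Don't retry auth errors; surface them to the user instead
    print(f"Auth problem: {err['message']}")
```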
Best Practices
Use Environment Variables
Store your API key in environment variables, never hardcode it in your source code.
# .env file
GETGOAPI_KEY=your-api-key-here
# In your code
import os
api_key = os.getenv('GETGOAPI_KEY')
Implement Retry Logic
Handle transient errors by implementing exponential backoff retry logic.
import time
from openai import OpenAI, RateLimitError, APIConnectionError

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

def make_request_with_retry(messages, max_retries=3):
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(
                model="gpt-4o",
                messages=messages,
            )
        except (RateLimitError, APIConnectionError):
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)  # Exponential backoff: 1s, 2s, 4s
            else:
                raise
Monitor Usage
Track your API usage and costs through the dashboard to avoid unexpected charges.
# Check your usage at:
# https://dashboard.getgoapi.com/usage
# Set up billing alerts to get notified
# when you reach certain thresholds
Optimize Token Usage
Use appropriate max_tokens values and system prompts to control costs.
response = client.chat.completions.create(
model="gpt-4o",
messages=[...],
max_tokens=500, # Limit response length
temperature=0.7, # Control randomness
top_p=0.9 # Nucleus sampling
)
Supported Models
GetGoAPI supports 158+ AI models from leading providers. View the complete list on our models page.