

GetGoAPI Documentation

Everything you need to integrate AI models into your applications. Simple, fast, and powerful.

Quick Start

Get started with GetGoAPI in just a few minutes. Follow these simple steps to make your first API call.

Step 1: Get Your API Key

Sign up for a free account and get $10 in API credits instantly.


Step 2: Install SDK (Optional)

GetGoAPI is compatible with the OpenAI SDK, so you can use the official OpenAI library.

pip install openai

Step 3: Make Your First Request

Point the client at the GetGoAPI base URL and use your GetGoAPI key.

from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)

Authentication

GetGoAPI uses API keys to authenticate requests. Include your API key in the Authorization header as a Bearer token.

Keep Your API Key Secure

Never share your API key or commit it to version control. Use environment variables to store your key securely.

curl https://api.getgoapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
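
If you call the API without the SDK, the same Bearer header works from any HTTP client. A minimal Python sketch, assuming the requests package and a key stored in the GETGOAPI_KEY environment variable:

import os
import requests

# Read the key from an environment variable rather than hardcoding it
api_key = os.getenv("GETGOAPI_KEY")

response = requests.post(
    "https://api.getgoapi.com/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    },
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])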

API Endpoints

POST /v1/chat/completions (Most Popular)
Create a chat completion with any supported model

POST /v1/completions
Create a text completion

POST /v1/embeddings
Create embeddings for text

POST /v1/images/generations
Generate images from text prompts

POST /v1/audio/transcriptions
Transcribe audio to text

GET /v1/models
List all available models
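
With the OpenAI SDK, each of these endpoints maps to a client method. A minimal sketch for listing models and creating embeddings; the embedding model id below is only an example, so confirm the exact ids your account offers with GET /v1/models:

from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

# GET /v1/models: list all available models
models = client.models.list()
for model in models.data:
    print(model.id)

# POST /v1/embeddings: create embeddings for text
# ("text-embedding-3-small" is an example id; check the models list first)
embedding = client.embeddings.create(
    model="text-embedding-3-small",
    input="The quick brown fox"
)
print(len(embedding.data[0].embedding))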

Code Examples

Python

from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

# Chat Completion
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    temperature=0.7,
    max_tokens=500
)

print(response.choices[0].message.content)

Node.js

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-getgoapi-key',
  baseURL: 'https://api.getgoapi.com/v1'
});

async function main() {
  const response = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Explain quantum computing in simple terms.' }
    ],
    temperature: 0.7,
    max_tokens: 500
  });

  console.log(response.choices[0].message.content);
}

main();

cURL

curl https://api.getgoapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Explain quantum computing in simple terms."
      }
    ],
    "temperature": 0.7,
    "max_tokens": 500
  }'

Error Handling

GetGoAPI uses standard HTTP response codes to indicate success or failure.

200 OK: Request succeeded
400 Bad Request: Invalid request parameters
401 Unauthorized: Invalid or missing API key
429 Too Many Requests: Rate limit exceeded
500 Internal Server Error: Server error, please retry

Error Response Example

{
  "error": {
    "message": "Invalid API key provided",
    "type": "invalid_request_error",
    "code": "invalid_api_key"
  }
}
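
With the OpenAI Python SDK, these status codes surface as typed exceptions. A minimal sketch, assuming openai>=1.0 (the exception names come from the SDK, not from GetGoAPI):

import openai
from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

try:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response.choices[0].message.content)
except openai.AuthenticationError as e:  # 401: invalid or missing API key
    print(f"Check your API key: {e}")
except openai.RateLimitError as e:       # 429: rate limit exceeded
    print(f"Rate limited, retry later: {e}")
except openai.BadRequestError as e:      # 400: invalid request parameters
    print(f"Fix the request: {e}")
except openai.APIStatusError as e:       # any other non-2xx response, e.g. 500
    print(f"Server returned {e.status_code}: {e}")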

Best Practices

Use Environment Variables

Store your API key in environment variables; never hardcode it in your source code.

# .env file
GETGOAPI_KEY=your-api-key-here

# In your code (export the variable in your shell,
# or load the .env file first with a package such as python-dotenv)
import os
api_key = os.getenv('GETGOAPI_KEY')

Implement Retry Logic

Handle transient errors by implementing exponential backoff retry logic.

import time
from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

def make_request_with_retry(messages, max_retries=3):
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(
                model="gpt-4o",
                messages=messages
            )
        except Exception:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)  # Exponential backoff: 1s, 2s, 4s, ...
            else:
                raise
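
For example, using the helper defined above:

response = make_request_with_retry([{"role": "user", "content": "Hello!"}])
print(response.choices[0].message.content)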

Monitor Usage

Track your API usage and costs through the dashboard to avoid unexpected charges.

# Check your usage at:
# https://dashboard.getgoapi.com/usage

# Set up billing alerts to get notified
# when you reach certain thresholds
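
Each completion response also carries a usage object, so you can log per-request token consumption alongside the dashboard numbers. A minimal sketch, reusing the client configured in the Quick Start; the field names follow the OpenAI response format:

# client is the OpenAI client configured in the Quick Start example
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Tokens consumed by this request, which is what billing is based on
usage = response.usage
print(f"prompt: {usage.prompt_tokens}, "
      f"completion: {usage.completion_tokens}, "
      f"total: {usage.total_tokens}")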

Optimize Token Usage

Set an appropriate max_tokens value and keep system prompts concise to control costs.

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[...],
    max_tokens=500,  # Limit response length
    temperature=0.7,  # Control randomness
    top_p=0.9  # Nucleus sampling
)
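
To estimate prompt size before sending a request, you can count tokens locally. A minimal sketch, assuming the separate tiktoken package (pip install tiktoken) in a version recent enough to know the gpt-4o encoding:

import tiktoken

# Look up the tokenizer used by gpt-4o (o200k_base in recent tiktoken releases)
encoding = tiktoken.encoding_for_model("gpt-4o")

prompt = "Explain quantum computing in simple terms."
num_tokens = len(encoding.encode(prompt))
print(f"Prompt uses {num_tokens} tokens")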

Supported Models

GetGoAPI supports 158+ AI models from leading providers. View the complete list on our models page.

OpenAI

76 models available

  • GPT-4o
  • GPT-4 Turbo
  • GPT-3.5 Turbo
  • DALL-E 3

Anthropic

22 models available

  • Claude 3.5 Sonnet
  • Claude 3 Opus
  • Claude 3 Haiku

Google

60 models available

  • Gemini 2.0 Flash
  • Gemini 1.5 Pro
  • Gemini 1.5 Flash
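
Because every provider is served through the same chat completions endpoint, switching models is just a matter of changing the model id. A minimal sketch; the non-OpenAI ids below are placeholders, so confirm the exact strings with GET /v1/models or the models page:

# client is the OpenAI client configured in the Quick Start example
for model_id in ["gpt-4o", "claude-3-5-sonnet", "gemini-1.5-pro"]:
    response = client.chat.completions.create(
        model=model_id,
        messages=[{"role": "user", "content": "Say hello in one sentence."}]
    )
    print(model_id, "->", response.choices[0].message.content)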

Need Help?

Can't find what you're looking for? We're here to help!