
Getting Started

Welcome to GetGoAPI

GetGoAPI provides a unified API that gives you access to 158+ AI models through a single endpoint. Get started with just a few lines of code.

Overview

The first step to start using GetGoAPI is to create an account and get your API key. After that, feel free to explore our API reference for more details.

Quick Start

Get started with GetGoAPI using the OpenAI SDK or by calling the API directly.

Using the OpenAI SDK

from openai import OpenAI

client = OpenAI(
  base_url="https://api.getgoapi.com/v1",
  api_key="<API_KEY>",
)

completion = client.chat.completions.create(
  model="gpt-4o",
  messages=[
    {
      "role": "user",
      "content": "What is the meaning of life?"
    }
  ]
)

print(completion.choices[0].message.content)

Using the GetGoAPI API Directly

import requests

response = requests.post(
  url="https://api.getgoapi.com/v1/chat/completions",
  headers={
    "Authorization": "Bearer <API_KEY>",
    "Content-Type": "application/json"
  },
  json={
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": "What is the meaning of life?"
      }
    ]
  }
)
print(response.json()["choices"][0]["message"]["content"])

Streaming Responses

Enable real-time token delivery by setting stream: true in your request.

from openai import OpenAI

client = OpenAI(
  base_url="https://api.getgoapi.com/v1",
  api_key="<API_KEY>",
)

stream = client.chat.completions.create(
  model="gpt-4o",
  messages=[{"role": "user", "content": "Tell me a story"}],
  stream=True
)

for chunk in stream:
  if chunk.choices[0].delta.content is not None:
    print(chunk.choices[0].delta.content, end="")

Authentication

GetGoAPI uses API keys to authenticate requests. Include your API key in the Authorization header as a Bearer token.

Keep Your API Key Secure

Never share your API key or commit it to version control. Use environment variables to store your key securely.

curl https://api.getgoapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

API Reference

GetGoAPI implements the OpenAI API specification, making it easy to switch from OpenAI to GetGoAPI.

Chat Completions

Create a chat completion with any supported model.

POST /v1/chat/completions

Most popular endpoint for conversational AI

Request Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| model | string | Yes | Model ID (e.g., "gpt-4o", "claude-3-5-sonnet") |
| messages | array | Yes | Array of message objects with role and content |
| temperature | number | No | Sampling temperature (0-2). Default: 1 |
| max_tokens | integer | No | Maximum tokens to generate |
| stream | boolean | No | Enable streaming responses. Default: false |
| top_p | number | No | Nucleus sampling (0-1). Default: 1 |

Response Format

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Hello! How can I help you today?"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}

Completions

POST /v1/completions

Create a text completion
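Text completions take a raw prompt string instead of a messages array. The sketch below uses the OpenAI SDK's `client.completions.create` method; the model name `gpt-3.5-turbo-instruct` is an assumption here — substitute any completion-capable model from the models page.

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

# Legacy-style text completion: a raw prompt, no messages array
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # assumed model; check the models page
    prompt="Write a haiku about the ocean:",
    max_tokens=60
)

# Text completions return .text on each choice, not .message.content
print(response.choices[0].text)
```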

Embeddings

POST /v1/embeddings

Create embeddings for text

Example: Create Embeddings

from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="The quick brown fox jumps over the lazy dog"
)

embedding = response.data[0].embedding
print(f"Embedding dimension: {len(embedding)}")
print(f"First 5 values: {embedding[:5]}")

Images

POST /v1/images/generations

Generate images from text prompts

Example: Generate Image

from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

response = client.images.generate(
    model="dall-e-3",
    prompt="A futuristic city with flying cars at sunset",
    size="1024x1024",
    quality="standard",
    n=1
)

image_url = response.data[0].url
print(f"Generated image: {image_url}")

Audio

POST /v1/audio/transcriptions

Transcribe audio to text
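A minimal transcription sketch using the OpenAI SDK's `client.audio.transcriptions.create` method. The model name `whisper-1` and the file path `meeting.mp3` are assumptions for illustration — use a transcription model available through GetGoAPI and your own audio file.

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

# Open the audio file in binary mode and upload it for transcription
with open("meeting.mp3", "rb") as audio_file:  # illustrative path
    transcript = client.audio.transcriptions.create(
        model="whisper-1",  # assumed model; check the models page
        file=audio_file
    )

print(transcript.text)
```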

Models

GET /v1/models

List all available models
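You can enumerate the available models programmatically with the SDK's `client.models.list` method, which is useful for checking whether a model ID is valid before sending requests.

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

# Fetch the model catalog and print each model ID
models = client.models.list()
for model in models.data:
    print(model.id)
```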

Code Examples

Python

from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

# Chat Completion
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    temperature=0.7,
    max_tokens=500
)

print(response.choices[0].message.content)

Node.js

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-getgoapi-key',
  baseURL: 'https://api.getgoapi.com/v1'
});

async function main() {
  const response = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Explain quantum computing in simple terms.' }
    ],
    temperature: 0.7,
    max_tokens: 500
  });

  console.log(response.choices[0].message.content);
}

main();

cURL

curl https://api.getgoapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Explain quantum computing in simple terms."
      }
    ],
    "temperature": 0.7,
    "max_tokens": 500
  }'

Error Handling

GetGoAPI uses standard HTTP response codes to indicate success or failure.

| Code | Status | Description |
|------|--------|-------------|
| 200 | OK | Request succeeded |
| 400 | Bad Request | Invalid request parameters |
| 401 | Unauthorized | Invalid or missing API key |
| 429 | Too Many Requests | Rate limit exceeded |
| 500 | Internal Server Error | Server error, please retry |
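Only some of these codes are worth retrying: 429 and 500 are transient, while 400 and 401 indicate a problem with the request itself that a retry won't fix. A small helper (a sketch, not part of any SDK) makes that decision explicit:

```python
# Transient errors are worth retrying with backoff; client errors are not.
RETRYABLE_STATUS_CODES = {429, 500}

def should_retry(status_code: int) -> bool:
    """Return True if the request may succeed on a retry."""
    return status_code in RETRYABLE_STATUS_CODES

print(should_retry(429))  # True: rate limit, back off and retry
print(should_retry(401))  # False: fix the API key instead
```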

Response Format

Understanding the structure of API responses helps you extract the information you need.

Chat Completion Response

A typical chat completion response contains the following fields:

| Field | Type | Description |
|-------|------|-------------|
| id | string | Unique identifier for the completion |
| object | string | Object type (e.g., "chat.completion") |
| created | integer | Unix timestamp of creation time |
| model | string | Model used for completion |
| choices | array | Array of completion choices |
| usage | object | Token usage information |

Choices Object

Each choice in the choices array contains:

| Field | Type | Description |
|-------|------|-------------|
| index | integer | Index of this choice |
| message | object | The generated message (role + content) |
| finish_reason | string | Reason for completion ("stop", "length", "content_filter") |

Usage Object

The usage object provides token consumption details:

| Field | Type | Description |
|-------|------|-------------|
| prompt_tokens | integer | Tokens in the prompt |
| completion_tokens | integer | Tokens in the completion |
| total_tokens | integer | Total tokens used (prompt + completion) |

Example: Parsing Response

from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Access response fields
print(f"ID: {response.id}")
print(f"Model: {response.model}")
print(f"Content: {response.choices[0].message.content}")
print(f"Finish Reason: {response.choices[0].finish_reason}")
print(f"Tokens Used: {response.usage.total_tokens}")

Best Practices

Use Environment Variables

Store your API key in environment variables, never hardcode it in your source code.

# .env file
GETGOAPI_KEY=your-api-key-here

# In your code
import os
api_key = os.getenv('GETGOAPI_KEY')

Implement Retry Logic

Handle transient errors by implementing exponential backoff retry logic.

import time
from openai import OpenAI

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

def make_request_with_retry(max_retries=3):
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": "Hello!"}]
            )
            return response
        except Exception:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)  # Exponential backoff: 1s, 2s, 4s
            else:
                raise

Frequently Asked Questions

How do I get started with GetGoAPI?

Create an account at api.getgoapi.com/console, get your API key, and start making requests. You can use the OpenAI SDK or any HTTP client.

What models are supported?

GetGoAPI supports 158+ AI models from leading providers including OpenAI (GPT-4o, GPT-4 Turbo), Anthropic (Claude 3.5 Sonnet), Google (Gemini 2.0 Flash), and many more. Visit our models page for the complete list.

Is GetGoAPI compatible with OpenAI SDK?

Yes! GetGoAPI implements the OpenAI API specification. Simply change the base_url to https://api.getgoapi.com/v1 and use your GetGoAPI key. No other code changes are needed.

How is billing calculated?

Billing is based on token usage. Each model has different pricing for prompt tokens and completion tokens. You can view detailed usage in your dashboard and check our pricing page for model-specific rates.

What are the rate limits?

Rate limits vary by plan and model. Free tier users have lower limits, while paid users get higher limits. If you hit a rate limit, you'll receive a 429 error. Implement exponential backoff to handle this gracefully.

Do you log my prompts and responses?

No. We do not log your prompts or completions. We only store basic request metadata (timestamps, model used, token counts) for billing purposes. Your data privacy is our priority.

Can I use streaming with all models?

Most chat models support streaming. Set stream: true in your request to enable real-time token delivery. Some specialized models may not support streaming; check the model documentation.

How do I handle errors?

GetGoAPI uses standard HTTP status codes. Implement try-catch blocks and check for specific error codes (400, 401, 429, 500). For transient errors (429, 500), use exponential backoff retry logic.

Rate Limits

GetGoAPI implements rate limits to ensure fair usage and system stability. Rate limits vary by plan and model.

Rate Limit Headers

Every API response includes rate limit information in the headers:

  • X-RateLimit-Limit - Maximum requests allowed
  • X-RateLimit-Remaining - Requests remaining in current window
  • X-RateLimit-Reset - Time when the limit resets (Unix timestamp)
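Reading these headers lets you throttle proactively instead of waiting for a 429. The snippet below parses them from a sample headers dict (the values are illustrative); with the requests library the same keys would be read from `response.headers`.

```python
# Sample rate-limit headers as they might appear on a response (values illustrative)
headers = {
    "X-RateLimit-Limit": "60",
    "X-RateLimit-Remaining": "12",
    "X-RateLimit-Reset": "1677652288",
}

# Header values arrive as strings; convert before doing arithmetic
limit = int(headers["X-RateLimit-Limit"])
remaining = int(headers["X-RateLimit-Remaining"])
reset_at = int(headers["X-RateLimit-Reset"])  # Unix timestamp

print(f"{remaining}/{limit} requests left; window resets at {reset_at}")
```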

Handling Rate Limit Errors (429)

When you exceed the rate limit, you'll receive a 429 status code. Implement exponential backoff:

import time
from openai import OpenAI, RateLimitError

client = OpenAI(
    api_key="your-getgoapi-key",
    base_url="https://api.getgoapi.com/v1"
)

def make_request_with_backoff(max_retries=5):
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": "Hello!"}]
            )
            return response
        except RateLimitError as e:
            if attempt < max_retries - 1:
                wait_time = (2 ** attempt) + 1  # Exponential backoff
                print(f"Rate limit hit. Waiting {wait_time}s...")
                time.sleep(wait_time)
            else:
                raise e

response = make_request_with_backoff()
print(response.choices[0].message.content)

Supported Models

GetGoAPI supports 158+ AI models from leading providers. View the complete list on our models page.

OpenAI

76 models available

  • GPT-4o
  • GPT-4 Turbo
  • GPT-3.5 Turbo

Anthropic

22 models available

  • Claude 3.5 Sonnet
  • Claude 3 Opus
  • Claude 3 Haiku

Google

60 models available

  • Gemini 2.0 Flash
  • Gemini 1.5 Pro
  • Gemini 1.5 Flash

Need Help?

Can't find what you're looking for? We're here to help!