datablocks API Documentation

Welcome to the datablocks API documentation. Get started building with our powerful API.

Introduction

datablocks is a modern API management platform that provides efficient access to LLM services with optimized performance and cost management. Our platform offers:

  • Pre-computed context blocks (datablocks) for faster processing
  • OpenAI-compatible API endpoints
  • Flexible pricing based on actual usage
  • Support for multiple LLM models
  • File upload and management
  • Fine-tuning capabilities

Available Models

Model    Context Window    Best For                                 Speed
qwen     32K tokens        General purpose, coding, multilingual    Fast
llama    8K tokens         Conversation, instruction following      Very Fast

Pricing

Model    Input                Output               Datablock
qwen     $0.60 / 1M tokens    $1.20 / 1M tokens    $0.10 / 1M tokens
llama    $0.40 / 1M tokens    $0.80 / 1M tokens    $0.05 / 1M tokens

Datablocks save costs: with datablocks, you pay the datablock rate instead of the input rate for pre-computed context. At the rates above, that cuts the cost of the reused context by roughly 83% on qwen and 88% on llama for repeated queries.
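As a back-of-the-envelope illustration of the rates above (the workload numbers below are made up for the example, not part of the pricing), the following snippet compares billing a shared 100K-token context at the input rate on every request versus billing it once-per-request at the datablock rate:

    # Rough cost comparison for reusing a 100K-token context across 1,000 requests.
    # Rates come from the pricing table above; the token counts are illustrative only.

    CONTEXT_TOKENS = 100_000   # size of the shared, pre-computed context
    REQUESTS = 1_000           # how many requests reuse that context

    RATES = {                  # USD per 1M tokens
        "qwen":  {"input": 0.60, "datablock": 0.10},
        "llama": {"input": 0.40, "datablock": 0.05},
    }

    for model, rate in RATES.items():
        total_millions = CONTEXT_TOKENS * REQUESTS / 1_000_000  # tokens in millions
        as_input = total_millions * rate["input"]
        as_datablock = total_millions * rate["datablock"]
        savings = 1 - as_datablock / as_input
        print(f"{model}: ${as_input:.2f} at the input rate vs ${as_datablock:.2f} "
              f"at the datablock rate ({savings:.0%} saved on the context portion)")

For this workload the script prints about $60 vs $10 for qwen and $40 vs $5 for llama, which is where the savings figures above come from.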

Getting Started

  1. Create an account

     Sign up for a datablocks account to get started.

  2. Generate an API key

     Go to your dashboard and create a new API key.

  3. Make your first request

     Use your API key to make requests to our endpoints:

     curl /api/v1/chat/completions \
       -H "Content-Type: application/json" \
       -H "Authorization: Bearer YOUR_API_KEY" \
       -d '{"model": "qwen", "messages": [{"role": "user", "content": "Hello!"}]}'
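The same first request can be made from Python. The sketch below uses the standard requests library and mirrors the curl command above; the base URL is a placeholder you should replace with your actual datablocks endpoint:

    import requests

    # Placeholder host: substitute the actual datablocks API endpoint for your account.
    BASE_URL = "https://YOUR_DATABLOCKS_HOST"

    response = requests.post(
        f"{BASE_URL}/api/v1/chat/completions",
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR_API_KEY",
        },
        json={
            "model": "qwen",
            "messages": [{"role": "user", "content": "Hello!"}],
        },
    )
    print(response.json())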

Key Features

⚡ Pre-computed Context

datablocks contain pre-processed content that eliminates the need for repeated context preparation, dramatically reducing response times and costs.

🔄 OpenAI Compatible

Drop-in replacement for OpenAI's API. Use existing SDKs and tools without modification.
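For example, because the endpoints follow the OpenAI chat-completions format, the official OpenAI Python SDK can be pointed at datablocks simply by overriding the base URL. This is a minimal sketch assuming that compatibility; the base_url value is a placeholder for your actual datablocks endpoint:

    from openai import OpenAI

    # Reuse the standard OpenAI client against the datablocks API.
    # The base_url below is a placeholder; replace it with your datablocks endpoint.
    client = OpenAI(
        api_key="YOUR_API_KEY",
        base_url="https://YOUR_DATABLOCKS_HOST/api/v1",
    )

    completion = client.chat.completions.create(
        model="qwen",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(completion.choices[0].message.content)

No other changes to existing OpenAI-based code are needed beyond the API key and base URL.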

💰 Cost Effective

Pay only for what you use. No hidden fees, no minimum commitments. Scale from zero to millions of requests.

🎯 Fine-tuning

Customize models for your specific use case with our fine-tuning capabilities.

Next Steps