December 11, 2025  ·  9 min read

How to Design an API That Developers Want to Use

Featured image — API design whiteboard sketch

APIs are products. The consumers are developers, and developers — like all users — form opinions fast. If your API feels awkward in the first ten minutes, most developers will write code that works around the awkwardness rather than asking you to change anything. Then you're stuck supporting the awkward version forever because people have built on it.

The best API designs aren't clever. They're obvious in retrospect — you look at them and think "yes, that's exactly how this should work." Getting there requires thinking about the consumer's mental model before you think about your data model.

Start With Workflows, Not Resources

Most API design starts with the data model and derives endpoints from it. You have users, orders, and products, so you build /users, /orders, /products. Clean, RESTful, logical.

The problem is that developers don't think about resources — they think about tasks. "I need to process a return." "I need to add a user to a team." "I need to send an invoice." If completing a task requires six API calls in a specific order, and the order isn't obvious, and getting it wrong produces cryptic error messages, your API will generate support tickets regardless of how clean the resource model is.

Before designing any endpoint, write out the five most common things a developer will need to do with your API. For each workflow, describe the sequence of calls. If any workflow requires more than three calls to complete, ask whether you can reduce that number. Sometimes you can't — some operations genuinely require multiple steps. But often, with a task-oriented design, you can collapse a four-call workflow into one or two calls that correspond more directly to what the developer is trying to accomplish.
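Here's a sketch of the difference for a "process a return" workflow. The endpoint paths are hypothetical, and a tiny stub stands in for a real HTTP client — the point is the call count, not the transport:

```python
class StubClient:
    """Minimal in-memory stand-in for an HTTP client, for illustration."""
    def __init__(self):
        self.calls = []

    def get(self, path):
        self.calls.append(("GET", path))
        return {"id": path.rsplit("/", 1)[-1]}

    def post(self, path, json=None):
        self.calls.append(("POST", path))
        return {"id": "rma_1", **(json or {})}

def process_return_resource_style(client, order_id, item_id):
    # Four calls, in an order the developer has to discover from the docs.
    client.get(f"/orders/{order_id}")
    rma = client.post("/rmas", json={"order_id": order_id, "item_id": item_id})
    client.post(f"/rmas/{rma['id']}/approve")
    return client.post("/refunds", json={"rma_id": rma["id"]})

def process_return_task_style(client, order_id, item_id):
    # One call that names the task the developer is actually performing.
    return client.post(f"/orders/{order_id}/returns", json={"item_id": item_id})

resource_client, task_client = StubClient(), StubClient()
process_return_resource_style(resource_client, "ord_42", "item_7")
process_return_task_style(task_client, "ord_42", "item_7")
print(len(resource_client.calls), len(task_client.calls))  # → 4 1
```

The task-style endpoint isn't less RESTful — it just models the return itself as the resource, rather than forcing the consumer to orchestrate the resources behind it.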

Consistency Over Cleverness

Consistency is more important than any individual design decision. An API where everything behaves predictably — where the pagination pattern is the same on every collection, where errors always have the same shape, where authentication always works the same way — is dramatically easier to use than one with clever endpoint-specific optimizations.

The practical consequence: decide your conventions before you start, write them down, and enforce them in code review. Conventions that matter: pagination (cursor-based or offset?), error format (what fields does an error object always have?), field naming (camelCase or snake_case?), datetime format (ISO 8601, always), and null handling (do you return null fields or omit them?).

The null handling one causes more bugs than it should. Omitting null fields means consumer code needs to handle field absence, not just null values. Including null fields means consumers always know what fields to expect. Neither is wrong, but you must be consistent, and you must document it.
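To make the consumer-side difference concrete, here's a small sketch comparing the two serializations of the same record (field names are illustrative):

```python
import json

# Two serializations of the same record: one omits the null field, one keeps it.
user_omitting = json.loads('{"id": "u_1", "name": "Ada"}')
user_with_nulls = json.loads('{"id": "u_1", "name": "Ada", "nickname": null}')

# With .get(), an absent field and an explicit null read identically...
assert user_omitting.get("nickname") is None
assert user_with_nulls["nickname"] is None

# ...but a consumer that checks membership gets different answers
# for the same logical record, which is where the bugs creep in:
assert ("nickname" in user_omitting) != ("nickname" in user_with_nulls)
```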

Error Messages That Actually Help

The gap between "what the error says" and "what the developer needs to know" is where API design fails most visibly. A 400 with a body of {"error": "validation_error"} is worse than useless. It tells the developer something went wrong (they knew that) without telling them what, where, or how to fix it.

Useful error messages include: the field that failed validation, the value that was rejected (within reason — don't echo back passwords or tokens), the constraint that was violated, and often a link to documentation. This sounds like a lot, but it serializes into a JSON structure you define once and reuse:

{
  "error": {
    "code": "validation_error",
    "message": "Request validation failed",
    "details": [
      {
        "field": "amount",
        "value": -50,
        "constraint": "must_be_positive",
        "docs": "https://docs.apiforgehq.com/errors/validation_error"
      }
    ]
  }
}

The docs field in particular is underused. A URL that takes you directly to documentation for the specific error saves the developer a search step. That's worth a few minutes of setup time in your error serialization layer.
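A serialization layer producing the shape above can be quite small. This is a sketch — the function and helper names are mine, not from any particular framework, and the redaction list is an illustrative placeholder:

```python
import json

DOCS_BASE = "https://docs.apiforgehq.com/errors"  # base URL from the example above
REDACTED_FIELDS = {"password", "token", "api_key"}  # never echo these back

def validation_error(failures):
    """Build the error body; failures is a list of (field, value, constraint)."""
    return {
        "error": {
            "code": "validation_error",
            "message": "Request validation failed",
            "details": [
                {
                    "field": field,
                    # Echo the rejected value, but never secrets.
                    "value": "[redacted]" if field in REDACTED_FIELDS else value,
                    "constraint": constraint,
                    "docs": f"{DOCS_BASE}/validation_error",
                }
                for field, value, constraint in failures
            ],
        }
    }

body = validation_error([("amount", -50, "must_be_positive")])
print(json.dumps(body, indent=2))
```

Define it once, route every validation failure through it, and the error shape stays consistent across all endpoints.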

Idempotency Is Table Stakes

Any operation that creates resources or makes state changes needs an idempotency key mechanism. Networks are unreliable. Clients retry. Without idempotency, a client that sends a payment request, times out waiting for a response, and retries may end up creating two charges. That's not a theoretical edge case — it's a production incident waiting to happen.

The implementation is straightforward: accept an Idempotency-Key header. Store the key and result for some window (24-48 hours is common). If you see the same key again, return the stored result without re-executing the operation; if the same key arrives with a different request payload, reject it rather than replay, since that signals a client bug. For the consumer, this means they can retry safely without needing to check whether the first request succeeded.

Make the idempotency key requirement explicit in your documentation and make the error clear when clients forget: idempotency_key_required is a meaningful error code. 400 Bad Request with no body is not.
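A minimal in-memory sketch of the mechanism, using the error code above. A real implementation would back this with a shared store (Redis, for instance) with a TTL rather than a process-local dict:

```python
import time

WINDOW_SECONDS = 24 * 3600  # 24h retention, the low end of the common range
_seen = {}  # idempotency key -> (stored_at, response)

def handle_charge(headers, execute):
    key = headers.get("Idempotency-Key")
    if key is None:
        # A meaningful error code, not a bare 400 with no body.
        return 400, {"error": {"code": "idempotency_key_required"}}
    entry = _seen.get(key)
    if entry and time.time() - entry[0] < WINDOW_SECONDS:
        return 200, entry[1]  # replay the stored result; do not re-execute
    response = execute()
    _seen[key] = (time.time(), response)
    return 201, response

charges = []
def create_charge():
    charges.append({"id": f"ch_{len(charges) + 1}", "amount": 50})
    return charges[-1]

first = handle_charge({"Idempotency-Key": "abc"}, create_charge)
retry = handle_charge({"Idempotency-Key": "abc"}, create_charge)
assert first[1] == retry[1] and len(charges) == 1  # one charge, not two
```

Returning 201 on first execution and 200 on replay is one reasonable convention; the important part is that the retry never executes the charge twice.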

Pagination That Developers Don't Have to Think About

Offset pagination (?page=3&page_size=20) feels familiar but has a correctness problem: if records are added or deleted between page requests, pages shift. Page 3 when you first requested page 1 may not be the same as page 3 when you request it after two inserts. For most use cases this doesn't matter. For some — streaming all records for a sync operation, for instance — it produces duplicates or gaps.

Cursor-based pagination doesn't have this problem. You return an opaque cursor with each response; clients send the cursor back to get the next page. The cursor encodes a position in a stable ordering (usually a timestamp plus an ID), so new records don't affect already-paginated results.

The trade-off: cursor pagination can't jump to arbitrary pages ("give me page 50 of 200"). For most API use cases, that's fine — developers are usually iterating through data, not navigating to specific pages. If your consumers genuinely need random-access pagination, offset is appropriate. Otherwise, cursor pagination is more correct.
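Here's a toy sketch of the cursor mechanics. The base64-of-JSON encoding is illustrative (to clients the cursor is just an opaque string), and the records are plain dicts standing in for database rows:

```python
import base64
import json

def encode_cursor(created_at, record_id):
    # Encode a position in the stable (created_at, id) ordering.
    raw = json.dumps({"t": created_at, "id": record_id}).encode()
    return base64.urlsafe_b64encode(raw).decode()

def decode_cursor(cursor):
    data = json.loads(base64.urlsafe_b64decode(cursor))
    return data["t"], data["id"]

def page(records, cursor=None, size=2):
    # records must already be sorted by (created_at, id).
    if cursor:
        after = decode_cursor(cursor)
        records = [r for r in records if (r["created_at"], r["id"]) > after]
    chunk = records[:size]
    next_cursor = encode_cursor(chunk[-1]["created_at"], chunk[-1]["id"]) if chunk else None
    return chunk, next_cursor

rows = [{"created_at": t, "id": f"r{t}"} for t in (1, 2, 3, 4)]
first_page, cur = page(rows)
# A record inserted before the cursor position does not shift later pages:
rows.insert(0, {"created_at": 0, "id": "r0"})
second_page, _ = page(rows, cursor=cur)
assert [r["id"] for r in second_page] == ["r3", "r4"]  # no duplicates, no gaps
```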

Documentation That Ships With the Code

Documentation that lives separately from the code drifts from the code. The documentation says the response includes a user_id field; the code was refactored to return userId six months ago; nobody noticed. This is a trust erosion problem — developers who find one discrepancy stop trusting the docs entirely and resort to reverse-engineering requests from working examples.

OpenAPI specs generated from code annotations (or even better, from your actual response schemas) solve this. The spec is always in sync because it's derived from the running code. Your documentation site regenerates from the spec. Your SDK generation runs from the spec. You write the annotation once; it propagates everywhere.

This isn't free — annotation discipline is real maintenance — but the alternative is documentation debt that compounds every sprint.
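As a toy illustration of the "derive the spec from the code" idea: the response schema below is defined once, and both the serializer and the OpenAPI component are generated from it, so they can't drift apart. Real projects would use a framework for this (FastAPI, for instance, derives its OpenAPI spec from type annotations) rather than hand-rolling it:

```python
# The single source of truth: field name -> OpenAPI type. Illustrative only.
USER_SCHEMA = {"id": "string", "email": "string", "created_at": "string"}

def serialize_user(user):
    # The serializer emits exactly the fields the schema declares,
    # silently dropping anything internal.
    return {field: user[field] for field in USER_SCHEMA}

def openapi_component(name, schema):
    # The documented shape is derived from the same definition.
    return {name: {"type": "object",
                   "properties": {f: {"type": t} for f, t in schema.items()}}}

component = openapi_component("User", USER_SCHEMA)
user = serialize_user({"id": "u_1", "email": "a@b.co",
                       "created_at": "2025-12-11T00:00:00Z", "internal_flag": True})
assert set(user) == set(component["User"]["properties"])  # cannot drift
```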

See your API the way developers see it

APIForge gives you a live view of how your endpoints behave, which parameters are being used, and where errors are concentrated. Design with real feedback, not guesswork.

Start Free