HyperRoute

Incremental Delivery

HyperRoute supports @defer and @stream — two GraphQL directives that let you send critical data immediately while loading slower or less important data in the background. This dramatically improves perceived performance for complex queries.


@defer — Deferred Fragments

@defer tells the router to return a fragment's data later, without blocking the initial response.

Usage

query ProductPage($id: ID!) {
  product(id: $id) {
    name
    price
    inStock

    ... on Product @defer(label: "reviews") {
      reviews {
        rating
        text
        author { name }
      }
    }

    ... on Product @defer(label: "recommendations") {
      recommendations {
        name
        price
      }
    }
  }
}

Response Flow

The router streams the results as a single multipart/mixed HTTP response. The client receives each payload as it becomes available:

Initial response (fast — ~5ms):

{
  "data": {
    "product": {
      "name": "Widget Pro",
      "price": 29.99,
      "inStock": true
    }
  },
  "hasNext": true
}

Deferred chunk 1 (reviews — ~200ms):

{
  "label": "reviews",
  "path": ["product"],
  "data": {
    "reviews": [
      { "rating": 5, "text": "Excellent!", "author": { "name": "Alice" } }
    ]
  },
  "hasNext": true
}

Deferred chunk 2 (recommendations — ~500ms):

{
  "label": "recommendations",
  "path": ["product"],
  "data": {
    "recommendations": [
      { "name": "Widget Plus", "price": 49.99 }
    ]
  },
  "hasNext": false
}
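Taken together, the payloads above reconstruct the full result: each deferred chunk names the object it belongs to via its path. A minimal sketch of that merge, assuming chunks arrive in order and every path already exists in the initial data:

```javascript
// Fold one incremental payload into the accumulated result.
// `chunk.path` names the object to merge into; `chunk.data` holds the new fields.
function applyChunk(result, chunk) {
  // Walk from the result root down to the object the path points at...
  const target = chunk.path.reduce((obj, key) => obj[key], result.data);
  // ...and merge the deferred fields into it.
  Object.assign(target, chunk.data);
  return result;
}

const initial = {
  data: { product: { name: "Widget Pro", price: 29.99, inStock: true } },
  hasNext: true,
};

const reviewsChunk = {
  label: "reviews",
  path: ["product"],
  data: { reviews: [{ rating: 5, text: "Excellent!", author: { name: "Alice" } }] },
  hasNext: true,
};

applyChunk(initial, reviewsChunk);
console.log(initial.data.product.reviews.length); // 1
```

After both deferred chunks are applied, the client holds the same object a non-incremental query would have returned, just assembled over time.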

@stream — Streamed Lists

@stream delivers a list's items progressively. Instead of waiting for the whole list to resolve, the client receives the first items immediately and the rest as they become available.

Usage

query Feed {
  feed @stream(initialCount: 5) {
    id
    title
    content
    author { name avatar }
  }
}

Response Flow

Initial response (first 5 items):

{
  "data": {
    "feed": [
      { "id": "1", "title": "First post", "content": "...", "author": { "name": "Alice" } },
      { "id": "2", "title": "Second post", "content": "...", "author": { "name": "Bob" } },
      { "id": "3", "title": "Third post", "content": "...", "author": { "name": "Carol" } },
      { "id": "4", "title": "Fourth post", "content": "...", "author": { "name": "Dave" } },
      { "id": "5", "title": "Fifth post", "content": "...", "author": { "name": "Eve" } }
    ]
  },
  "hasNext": true
}

Streamed chunks (remaining items arrive progressively):

{
  "items": [
    { "id": "6", "title": "Sixth post", "content": "...", "author": { "name": "Frank" } }
  ],
  "path": ["feed"],
  "hasNext": true
}
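Unlike deferred chunks, streamed chunks carry an items array and a path that points at the list itself, so the client appends rather than merges. A minimal sketch, assuming chunks arrive in order:

```javascript
// Append a streamed chunk's items to the list its path names.
function applyStreamChunk(result, chunk) {
  // Walk from the result root to the target list...
  const list = chunk.path.reduce((obj, key) => obj[key], result.data);
  // ...and append the newly resolved items in order.
  list.push(...chunk.items);
  return result;
}

const result = {
  data: { feed: [{ id: "1", title: "First post" }, { id: "2", title: "Second post" }] },
  hasNext: true,
};

applyStreamChunk(result, {
  items: [{ id: "3", title: "Third post" }],
  path: ["feed"],
  hasNext: true,
});

console.log(result.data.feed.length); // 3
```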

Client Integration

Vanilla JavaScript

const response = await fetch('/graphql', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Accept': 'multipart/mixed'
  },
  body: JSON.stringify({ query: `...` })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();
let result = {};

// Simplified: a real client must also strip the multipart boundary
// lines and buffer partial JSON across reads.
while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  const chunk = decoder.decode(value, { stream: true });
  const parsed = JSON.parse(chunk);

  if (parsed.path) {
    // Deferred/streamed chunk: merge it at the path it targets
    mergeAtPath(result, parsed.path, parsed.data ?? parsed.items);
  } else if (parsed.data) {
    // Initial payload
    result = { ...result, ...parsed.data };
  }

  render(result); // Re-render with each chunk
}
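The mergeAtPath helper is left undefined in the snippet above. One possible sketch that handles both directive shapes, assuming the path always addresses a node that exists in the result:

```javascript
// Merge an incremental payload into `result` at `path`.
// Arrays come from @stream (append items); objects come from @defer (merge fields).
function mergeAtPath(result, path, payload) {
  const target = path.reduce((obj, key) => obj[key], result);
  if (Array.isArray(payload)) {
    target.push(...payload);        // @stream: append new list items
  } else {
    Object.assign(target, payload); // @defer: merge fields into the object
  }
}

// Deferred fragment: fields merged into an object
const product = { product: { name: "Widget Pro" } };
mergeAtPath(product, ["product"], { reviews: [{ rating: 5 }] });

// Streamed list: items appended to an array
const feed = { feed: [{ id: "1" }] };
mergeAtPath(feed, ["feed"], [{ id: "2" }]);
```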

React (with Apollo Client)

Apollo Client 3.7+ supports @defer natively:

import { gql, useQuery } from '@apollo/client';

const PRODUCT_QUERY = gql`
  query ProductPage($id: ID!) {
    product(id: $id) {
      name
      price
      ... on Product @defer(label: "reviews") {
        reviews { rating text }
      }
    }
  }
`;

function ProductPage({ id }) {
  const { data, loading } = useQuery(PRODUCT_QUERY, { variables: { id } });

  if (loading) return <Skeleton />;

  return (
    <div>
      <h1>{data.product.name}</h1>
      <p>${data.product.price}</p>

      {data.product.reviews ? (
        <ReviewList reviews={data.product.reviews} />
      ) : (
        <ReviewSkeleton /> // Shows while @defer loads
      )}
    </div>
  );
}

Configuration

streaming:
  defer:
    enabled: true
    max_deferred_fragments: 10    # Max deferred fragments per query

  stream:
    enabled: true
    default_initial_count: 10     # Default items before streaming
    max_chunk_size: 100           # Max items per streamed chunk

When to Use @defer

| Scenario | Use @defer? |
|---|---|
| Slow field (reviews, recommendations) | ✅ Yes |
| Data not needed for initial render | ✅ Yes |
| Optional content that can load progressively | ✅ Yes |
| Critical data needed for layout | ❌ No |
| Small, fast fields | ❌ No (overhead isn't worth it) |

Next Steps