Imagine playing a fully functional game of Tetris without leaving your ChatGPT conversation: rotating pieces, clearing lines, competing on leaderboards – all within the chat interface you already use every day.
With the Vercel ChatGPT Apps SDK and the Model Context Protocol (MCP), you can embed rich, interactive applications directly into ChatGPT.
In this tutorial, you'll build a production-ready Tetris game that lives inside ChatGPT, complete with user authentication, real-time leaderboards, and a replay system.
What You'll Build
By the end of this tutorial, you'll have a full-stack application with:
Core gameplay: A fully playable Tetris game with smooth animations, keyboard and touch controls, real-time scoring, and level progression.
User Features: OAuth authentication via Kinde, persistent user profiles, public and private game modes, and a replay system that records every move.
Social and competitive feel: A global leaderboard, replay viewer, and ChatGPT integration that lets you start games, check scores, and review replays through natural conversation.
Technical architecture: Next.js 16 frontend with React 19 and Shadcn UI, Convex real-time database, MCP integration for ChatGPT tool registration, and production deployment on Vercel.
The user experience looks like this: say "Start a new Tetris game" in ChatGPT, an embedded game widget appears, and you play using arrow keys or on-screen controls. When the game ends, ChatGPT updates your score. Then you can ask "Show me the top 10 players" and the leaderboard appears — all in a single conversational flow.
Why This Matters
The ChatGPT Apps SDK represents a fundamental shift in how we think about AI applications. Instead of building separate interfaces or forcing users to navigate between ChatGPT and your app, you bring your application into the conversation.
This means zero learning curve (users already know ChatGPT), contextual AI intelligence, reduced friction (no app downloads or extra accounts), and access to 800M weekly active users.
Prerequisites
Required skills
To follow along, you'll need JavaScript/TypeScript fundamentals (ES6+, async/await), React basics (components, hooks, props), basic Next.js familiarity, and command-line comfort.
Required accounts and tools
Node.js 20+ and pnpm 10+
ChatGPT Plus subscription ($20/month as of January 2026)
Vercel account (free tier works)
Convex account (free tier: 1GB storage, 1M function calls/month)
Kinde account (free tier: 10,500 monthly active users)
Tech Stack Overview
Let's quickly go over the tools we'll be using and why, so you're familiar with the tech stack.
Frontend Tools
Next.js 16 + React 19 + Shadcn UI: App Router, serverless API routes, optimized builds via Turbopack, and React 19's improved rendering.
Backend Tools
Convex: Replaces traditional databases and ORMs with a TypeScript-native real-time database. Here's why it matters:
// Traditional approach:
const game = await db.query("SELECT * FROM games WHERE id = ?", [gameId]);
game.score += 100;
await db.query("UPDATE games SET score = ? WHERE id = ?", [game.score, gameId]);
// Manually notify clients via WebSockets...

// Convex approach:
import { mutation } from "./_generated/server";
import { v } from "convex/values";

export const updateScore = mutation({
  args: { gameId: v.id("games"), points: v.number() },
  handler: async (ctx, args) => {
    const game = await ctx.db.get(args.gameId);
    if (!game) throw new Error("Game not found");
    await ctx.db.patch(args.gameId, { score: game.score + args.points });
    // All subscribed clients automatically receive the update
  },
});
Authentication – Kinde OAuth 2.0: Handles secure JWT token generation, user profile management, and multiple providers. Simpler than Auth0, with a more generous free tier (10,500 MAU vs. 7,500).
Deployment – Vercel: Zero-config Next.js deployment, automatic HTTPS, global CDN, and preview deployments for every PR.
ChatGPT Integration – MCP (Model Context Protocol): The bridge between ChatGPT and your application. Defines tools ChatGPT can call, resources it can read, and suggested phrases for users. The mcp-handler npm package handles protocol negotiation, message parsing, OAuth token extraction, and CORS headers:
import { createMcpHandler } from "mcp-handler";

// createMcpHandler takes a setup callback that receives an McpServer instance;
// additional option arguments configure server capabilities and routing.
const handler = createMcpHandler(async (server) => {
  // Register all your tools here
});

export { handler as GET, handler as POST };
Understanding the Architecture
Before writing a line of code, you need a mental model of how all the pieces fit together. This section moves from high-level system design down to specific component interactions.
The Vercel ChatGPT App Template Foundation
The Vercel ChatGPT Apps template solves several hard problems out of the box: a pre-configured Next.js 16 setup with Turbopack, an integrated MCP protocol handler, OAuth discovery endpoints, CORS configuration for ChatGPT's iframe security model, and a widget rendering system.
The most important configuration file is next.config.ts. ChatGPT renders your app in an iframe from a different origin, so without permissive CORS headers, browsers block the cross-origin requests your game needs:
// next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  async headers() {
    return [
      {
        source: "/:path*",
        headers: [
          { key: "Access-Control-Allow-Origin", value: "*" },
          { key: "Access-Control-Allow-Methods", value: "GET,POST,PUT,DELETE,OPTIONS" },
          { key: "Access-Control-Allow-Headers", value: "*" },
        ],
      },
    ];
  },
};

export default nextConfig;
The template also provides a baseURL helper that's critical for OAuth redirects:
// lib/baseURL.ts
export const baseURL =
process.env.NODE_ENV === "development"
? "http://localhost:3000"
: "https://" +
(process.env.VERCEL_ENV === "production"
? process.env.VERCEL_PROJECT_PRODUCTION_URL
: process.env.VERCEL_BRANCH_URL || process.env.VERCEL_URL);
This ensures OAuth callbacks work in local development, preview deployments, and production without hardcoding URLs.
Three-Tier Architecture
Our Tetris application uses a classic three-tier architecture with modern serverless components:
┌──────────────────────────────────────────────────────┐
│ TIER 1: PRESENTATION │
│ Chat Interface ◄──────── Widget (iframe) │
│ (natural language) (React events) │
└────────────────────────┬─────────────────────────────┘
│ MCP (JSON-RPC) / HTTP
┌────────────────────────▼─────────────────────────────┐
│ TIER 2: APPLICATION │
│ MCP Route │ API Routes │ React Pages │
│ /app/mcp │ /app/api/* │ /app/tetris/* │
└────────────────────────┬─────────────────────────────┘
│ Convex SDK
┌────────────────────────▼─────────────────────────────┐
│ TIER 3: DATA │
│ Users │ Games │ Replays │ Leaderboard │
│ Convex Functions (mutations & queries) │
└──────────────────────────────────────────────────────┘
Authentication flows horizontally across all tiers. The key insight: ChatGPT handles steps 1–5 of the OAuth flow automatically (redirect to Kinde, user authenticates, redirect back with code, exchange for token). Your Next.js app only handles steps 6–10: validation and user management.
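The part your app owns begins with pulling the bearer token out of the incoming request. A minimal sketch (extractBearerToken is an illustrative helper, not part of the template):

```typescript
// Illustrative helper (not from the template): extract the bearer token from
// the Authorization header before handing it to your Kinde validation code.
function extractBearerToken(authorization: string | null): string | null {
  if (!authorization) return null;
  const match = authorization.match(/^Bearer\s+(.+)$/i);
  return match ? match[1].trim() : null;
}
```

The validated token's claims then drive the user-management steps covered later in this tutorial.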
Component Relationships and Data Flow
Here's the file structure you'll build:
/app
/mcp/.well-known/oauth-protected-resource/route.ts
/mcp/route.ts # MCP server + tool registration
/api/create-game/route.ts # HTTP endpoint for game creation
/lib/kinde.ts # Kinde token validation
/lib/mcpAuth.ts # MCP-specific auth helpers
/tetris/play/page.tsx # Main game interface
/tetris/leaderboard/page.tsx # Top scores table
/tetris/replays/page.tsx # Replay browser
/components/tetris/
GameBoard.tsx # Core game logic + rendering
Leaderboard.tsx # Leaderboard table component
ReplayViewer.tsx # Replay playback component
/convex/
schema.ts # Database table definitions
games.ts # Game CRUD operations
users.ts # User management
replays.ts # Replay storage + retrieval
leaderboards.ts # Top scores queries
The most important relationship is GameBoard.tsx to Convex. When a game starts, the MCP route creates a game in Convex and returns a widget URI. ChatGPT renders GameBoard.tsx in an iframe, which loads game state from Convex via useQuery.
The user plays locally (no round-trips for each move), actions are recorded in a ref, and when the game ends, everything syncs to Convex in a single mutation.
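The recording side of that pattern looks roughly like this (a plain-TypeScript sketch of what the ref holds; the real component wraps this in useRef and sends the result through a useMutation call to finishGame):

```typescript
// Hypothetical sketch: the action log GameBoard keeps locally during play.
// t is milliseconds since game start; a is a short action code.
type ReplayAction = { t: number; a: string };

class ActionRecorder {
  private actions: ReplayAction[] = [];
  private startedAt = Date.now();

  // Called on every move — no network round-trip here.
  record(action: string): void {
    this.actions.push({ t: Date.now() - this.startedAt, a: action });
  }

  // Called once at game over: everything the finishGame mutation needs.
  flush(): { replayActions: ReplayAction[]; durationMs: number } {
    const durationMs = this.actions.length
      ? this.actions[this.actions.length - 1].t
      : 0;
    return { replayActions: this.actions, durationMs };
  }
}
```

Keeping moves out of the database until the end is what makes gameplay feel instant: the only write that matters is the final one.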
Leaderboard real-time updates demonstrate why Convex shines:
// components/tetris/Leaderboard.tsx
export default function Leaderboard() {
// Re-runs automatically whenever leaderboard data changes
const entries = useQuery(api.leaderboards.listTop, { limit: 20 }) || [];
const userIds = entries.map((e: any) => e.userId).filter(Boolean);
const users = useQuery(
api.users.getMultipleById,
userIds.length > 0 ? { userIds } : "skip"
);
const userMap = new Map();
if (users) {
users.forEach((user: any) => {
if (user) userMap.set(user._id, user);
});
}
return (
<div className="max-w-2xl mx-auto p-4">
<h2 className="text-2xl font-bold mb-4">Leaderboard</h2>
<ol className="list-decimal pl-6 space-y-2">
{entries.map((e: any, idx: number) => {
const user = userMap.get(e.userId);
const displayName = user
? (user.displayName || `${user.firstName || ''} ${user.lastName || ''}`.trim() || user.email)
: 'Anonymous';
return (
<li key={e._id} className="flex justify-between">
<div>{displayName}</div>
<div>{e.score}</div>
</li>
);
})}
</ol>
</div>
);
}
When someone finishes a game elsewhere, their GameBoard calls finishGame, Convex updates the leaderboards table, and your Leaderboard component re-renders automatically with new data. No polling, no WebSockets, no manual refresh code.
Key Concepts
MCP (Model Context Protocol)
This is a JSON-RPC protocol that defines how AI assistants interact with external applications. It has three primitives:
Tools: Functions the AI can call (for example, start_game, finish_game)
Resources: Data the AI can read (for example, leaderboard entries as widgets)
Prompts: Suggested phrases for users
When you register your app, ChatGPT fetches your MCP manifest and discovers available tools. When a user types "start a game," ChatGPT's language model matches the intent to start_game and calls it automatically.
The complete tool execution lifecycle is:
The user types natural language
ChatGPT calls your MCP endpoint via POST with the tool name and arguments
Your handler validates OAuth, creates a game in Convex, and returns the widget HTML
ChatGPT renders the widget in the chat interface
The user plays, and the widget calls finish_game when done
ChatGPT displays the results in chat
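Steps 2 and 3 of the lifecycle travel as JSON-RPC messages. The shapes below are illustrative (the argument values and result text are examples, not a spec):

```typescript
// Illustrative JSON-RPC shapes for one tools/call round trip.
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "start_game", arguments: { public: true } },
};

// The server answers with content blocks; widget-bearing responses also carry
// metadata so ChatGPT knows to render an iframe instead of plain text.
const toolCallResult = {
  jsonrpc: "2.0",
  id: 1,
  result: { content: [{ type: "text", text: "Game started!" }] },
};
```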
Widgets
These are full HTML documents rendered in sandboxed iframes. Your Next.js pages become widgets via the getAppsSdkCompatibleHtml helper. The text/html+skybridge MIME type tells ChatGPT to render the content as an interactive widget rather than plain text.
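The resource payload for a widget looks roughly like this (a hedged sketch of the response shape; widgetResource is an illustrative helper, not a template export):

```typescript
// Illustrative helper: wrap rendered page HTML in an MCP resource response.
// The text/html+skybridge MIME type is what tells ChatGPT the content is an
// interactive widget rather than plain text.
function widgetResource(uri: string, html: string) {
  return {
    contents: [{ uri, mimeType: "text/html+skybridge", text: html }],
  };
}

const res = widgetResource(
  "ui://widget/tetris-play.html",
  "<!doctype html><div id='root'></div>"
);
```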
Tool registration
This uses Zod schemas for input validation and security schemes to control authentication requirements.
registerTool takes three things: the tool name, a configuration object describing the tool's inputs, security requirements, and widget metadata, and an async handler function that runs when ChatGPT calls the tool.
The inputSchema uses Zod to define and validate what arguments ChatGPT can pass in. The securitySchemes array controls whether authentication is required. The _meta field tells ChatGPT to render the tool's response as an interactive widget rather than plain text.
s.registerTool(
"start_game",
{
securitySchemes: [
{ type: "noauth" }, // Allow anonymous play
{ type: "oauth2", scopes: ["profile"] },
],
title: "Start Game",
description:
"Start a new Tetris game (creates a game record and opens the play widget). User will be associated if authenticated.",
inputSchema: {
public: z
.boolean()
.optional()
.describe("Whether the game is public for spectating"),
seed: z
.number()
.optional()
.describe("Optional seed for deterministic play"),
},
_meta: widgetMeta(playWidget),
},
async (args: any, context: any) => { /* handler */ }
);
Multiple security schemes use OR logic: users can play anonymously or authenticated. A single scheme makes authentication required.
A Complete Request Flow
To make the architecture concrete, here's a full trace when an authenticated user starts a public game:
The user says "Start a public Tetris game"
ChatGPT parses intent, checks for existing OAuth token
ChatGPT POSTs to /mcp with Authorization: Bearer <token> and { "method": "tools/call", "params": { "name": "start_game", "arguments": { "public": true } } }
The Next.js MCP route extracts and validates the token via Kinde JWKS
Next.js calls api.users.upsertLinkedAccount in Convex, receives a userId
Next.js calls api.games.createGame in Convex with the userId, receives a gameId
Next.js renders widget HTML for /tetris/play?gameId=<gameId>
The widget HTML is returned to ChatGPT in the MCP response
ChatGPT renders the widget; GameBoard.tsx mounts and fetches game state
The user plays, the score updates locally, and actions are recorded in a ref
The game ends. GameBoard calls api.games.finishGame, which atomically creates the replay, updates the leaderboard, and updates user stats.
Convex reactivity pushes leaderboard updates to all subscribed clients
GameBoard calls useSendMessage to post results back to the chat thread
Project Setup & Initial Configuration
By the end of this section, you'll have a running Next.js app, a connected Convex database, Kinde OAuth configured, and all environment variables in place.
Install Required Tools
Verify your Node.js version first:
node --version # Should be v20.x or higher
pnpm --version # Should be v10.x or higher
If pnpm isn't installed:
npm install -g pnpm
Clone the Vercel ChatGPT Apps Template
pnpm create next-app@latest tetris-chatgpt-app \
--example https://github.com/vercel-labs/chatgpt-apps-sdk-nextjs-starter
cd tetris-chatgpt-app
pnpm install
Install Project Dependencies
Add all packages needed for the complete application:
# Real-time database
pnpm add convex@^1.29.3
# MCP handler
pnpm add mcp-handler@^1.0.2
# Kinde OAuth token validation
pnpm add jose@^6.1.3
# Radix UI primitives (used by Shadcn)
pnpm add @radix-ui/react-dialog@^1.1.15 \
@radix-ui/react-label@^2.1.8 \
@radix-ui/react-select@^2.2.6 \
@radix-ui/react-slot@^1.2.4 \
@radix-ui/react-tabs@^1.1.13
# Utilities and UI
pnpm add class-variance-authority@^0.7.1 \
clsx@^2.1.1 \
lucide-react@^0.555.0 \
sonner@^2.0.7 \
tailwind-merge@^3.4.0 \
zod@3.24.2 \
next-themes@^0.4.6 \
@modelcontextprotocol/sdk@^1.20.0
# Dev dependencies
pnpm add -D @tailwindcss/postcss@^4 tailwindcss@^4 tw-animate-css@^1.4.0
Set Up Convex
pnpm add -g convex
pnpm convex dev
The interactive prompt will ask you to create a new project. After setup, Convex creates a convex/ directory, a .env.local file with your deployment URL, and starts watching for schema changes. Keep this terminal running throughout development.
Add the Convex React provider. Create app/providers.tsx:
"use client";
import { ReactNode } from "react";
import { ConvexProvider, ConvexReactClient } from "convex/react";
const convex = new ConvexReactClient(process.env.NEXT_PUBLIC_CONVEX_URL!);
export function ConvexClientProvider({ children }: { children: ReactNode }) {
return <ConvexProvider client={convex}>{children}</ConvexProvider>;
}
Then wrap your app in app/layout.tsx:
import { ConvexClientProvider } from "./providers";
export default function RootLayout({ children }: { children: React.ReactNode }) {
return (
<html lang="en">
<body
className={`${geistSans.variable} ${geistMono.variable} antialiased`}
>
<ConvexClientProvider>{children}</ConvexClientProvider>
</body>
</html>
);
}
Set Up Kinde OAuth
Go to kinde.com, create an account, then create a new back-end web application named "Tetris ChatGPT App."
In your application's settings, add these allowed callback URLs:
http://localhost:3000/api/auth/callback
https://chatgpt.com/connector_platform_oauth_redirect
And these allowed logout redirect URLs:
https://tetris-chatgpt-app.vercel.app
https://chatgpt.com
Copy your Domain, Client ID, and Client Secret from the application details page.
Now create app/lib/kinde.ts:
import { createRemoteJWKSet, jwtVerify } from 'jose';
const KINDE_ISSUER = process.env.KINDE_ISSUER!;
const MCP_AUDIENCE = process.env.MCP_AUDIENCE!;
let cachedJWKS: ReturnType<typeof createRemoteJWKSet> | null = null;
async function getJWKS() {
if (!cachedJWKS) {
cachedJWKS = createRemoteJWKSet(
new URL(`${KINDE_ISSUER}/.well-known/jwks.json`)
);
}
return cachedJWKS;
}
export async function validateKindeToken(token: string) {
if (!token) throw new Error('No token provided');
const JWKS = await getJWKS();
const { payload } = await jwtVerify(token, JWKS, {
issuer: KINDE_ISSUER,
audience: MCP_AUDIENCE,
});
return payload as {
sub: string;
email?: string;
given_name?: string;
family_name?: string;
picture?: string;
exp: number;
iat: number;
};
}
export async function getKindeUserProfile(token: string) {
const payload = await validateKindeToken(token);
return {
id: payload.sub,
email: payload.email,
name: [payload.given_name, payload.family_name].filter(Boolean).join(' ') || 'Anonymous',
picture: payload.picture,
};
}
Create the OAuth discovery endpoint at app/mcp/.well-known/oauth-protected-resource/route.ts:
import { NextResponse } from 'next/server';
export async function GET() {
return NextResponse.json({
resource: process.env.MCP_RESOURCE!,
authorization_servers: [process.env.KINDE_ISSUER!],
scopes_supported: ['openid', 'profile', 'email'],
bearer_methods_supported: ['header'],
});
}
This endpoint tells ChatGPT where to send users to authenticate. Without it, ChatGPT can't discover your OAuth configuration.
Create app/lib/utils.ts:
import { clsx, type ClassValue } from "clsx";
import { twMerge } from "tailwind-merge";
export function cn(...inputs: ClassValue[]) {
return twMerge(clsx(inputs));
}
Environment Variables
Your complete .env.local should look like this:
# Convex (auto-generated by pnpm convex dev)
CONVEX_DEPLOYMENT=dev:your-deployment-name
NEXT_PUBLIC_CONVEX_URL=https://your-deployment.convex.cloud
# Kinde OAuth
KINDE_ISSUER=https://yourcompany.kinde.com
KINDE_CLIENT_ID=your-client-id
KINDE_CLIENT_SECRET=your-client-secret
# MCP settings (update after deploying to Vercel)
MCP_AUDIENCE=http://localhost:3000/mcp
MCP_RESOURCE=http://localhost:3000
MCP_DOC_URL=http://localhost:3000/mcp-docs
Make sure .env.local is in your .gitignore (it should be there by default in the template).
Verify Everything Works
Open three terminals:
# Terminal 1
pnpm convex dev
# Terminal 2
pnpm dev
# Terminal 3 — test the OAuth discovery endpoint
curl http://localhost:3000/mcp/.well-known/oauth-protected-resource
# Expected: { "resource": "...", "authorization_servers": [...], ... }
Open http://localhost:3000 in your browser. If it loads without console errors and the OAuth endpoint returns JSON, your setup is complete.
Common issues:
"Cannot find module 'convex/react'"– runpnpm installand restart the dev server"NEXT_PUBLIC_CONVEX_URL is not defined"– runpnpm convex devto regenerate.env.local"Failed to verify token signature"– ensureKINDE_ISSUERhas no trailing slash
Set Up Vercel
Install the CLI and link your project so deployment is one command away later:
pnpm add -g vercel
vercel login
vercel link
Add your environment variables to Vercel now:
vercel env add NEXT_PUBLIC_CONVEX_URL
vercel env add KINDE_ISSUER
vercel env add KINDE_CLIENT_ID
vercel env add KINDE_CLIENT_SECRET
You'll update the MCP-specific variables after your first deployment once you have a production URL.
Setting Up the Convex Backend
This section builds the complete data layer: schema, mutations, queries, and real-time reactivity. By the end, you'll have a fully functional backend that automatically pushes updates to every connected client the moment data changes — no polling, no manual refresh logic.
Database Schema Design
The schema is the contract for everything your app stores. Take a moment to read through the table definitions before pasting; understanding what each table stores and why will make the mutation code much easier to follow. Open convex/schema.ts and replace its contents with the full schema:
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";
export default defineSchema({
users: defineTable({
email: v.string(),
displayName: v.optional(v.string()),
firstName: v.optional(v.string()),
lastName: v.optional(v.string()),
imageUrl: v.optional(v.string()),
imageStorageId: v.optional(v.id("_storage")),
updatedAt: v.number(),
}).index("by_email", ["email"]),
linkedAccounts: defineTable({
provider: v.string(),
subject: v.string(),
userId: v.id("users"),
profile: v.optional(v.any()), // arbitrary provider profile data (email, name, picture, ...)
updatedAt: v.number(),
}).index("by_provider_subject", ["provider", "subject"]).index("by_user", ["userId"]),
games: defineTable({
userId: v.optional(v.id("users")),
status: v.union(
v.literal("active"),
v.literal("paused"),
v.literal("finished"),
v.literal("abandoned")
),
score: v.number(),
level: v.number(),
linesCleared: v.number(),
board: v.array(v.number()),
currentPiece: v.optional(
v.object({ type: v.string(), rotation: v.number(), x: v.number(), y: v.number() })
),
nextQueue: v.optional(v.array(v.string())),
holdPiece: v.optional(v.string()),
seed: v.optional(v.number()),
replayId: v.optional(v.id("replays")),
public: v.optional(v.boolean()),
updatedAt: v.number(),
})
.index("by_user", ["userId"])
.index("by_status", ["status"])
.index("by_score", ["score"]),
replays: defineTable({
gameId: v.id("games"),
userId: v.optional(v.id("users")),
actions: v.array(v.object({ t: v.number(), a: v.string(), p: v.optional(v.object({})) })),
durationMs: v.number(),
}).index("by_game", ["gameId"]),
leaderboards: defineTable({
userId: v.id("users"),
score: v.number(),
level: v.number(),
linesCleared: v.number(),
}).index("by_score", ["score"]),
});
A few design decisions worth calling out:
linkedAccounts is a separate table, so one user can authenticate via multiple OAuth providers without duplicating their profile. The provider + subject pair (for example, "kinde" + "kinde|2151678548") uniquely identifies an OAuth identity. The userId field is a foreign key pointing to the single users row that represents the human behind potentially many auth accounts.
The replays.actions array uses a compact format — { t, a } stands for timestamp and action code — so an entire game fits in roughly 50KB of JSON rather than megabytes of full board snapshots taken at every tick.
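For a sense of scale, here are a few compact records (the action codes are illustrative, not a fixed vocabulary):

```typescript
// Illustrative replay records: t is milliseconds since game start,
// a is a short action code.
const actions = [
  { t: 0, a: "spawn" },
  { t: 480, a: "left" },
  { t: 930, a: "rotate" },
  { t: 1210, a: "harddrop" },
];

// Each record serializes to roughly 20-25 bytes, so even a long game of a few
// thousand actions stays comfortably within the ~50KB figure above.
const bytesPerAction = JSON.stringify(actions).length / actions.length;
```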
Indexes are not optional. Without by_email on users, finding a returning user requires a full table scan that grows linearly with your user count. With the index, lookups are O(log n) regardless of scale. Every table that will be queried by a specific field needs an index on that field.
User Management
Create convex/users.ts. This file is the backbone of your identity system — it handles looking users up, creating new ones, and most importantly, linking OAuth provider identities to your own user records.
import { query, mutation } from "./_generated/server";
import { v } from "convex/values";
export const getByEmail = query({
args: { email: v.string() },
handler: async (ctx, { email }) => {
return await ctx.db
.query("users")
.withIndex("by_email", (q) => q.eq("email", email))
.first();
},
});
export const getById = query({
args: { userId: v.id("users") },
handler: async (ctx, { userId }) => {
return await ctx.db.get(userId);
},
});
export const getMultipleById = query({
args: { userIds: v.array(v.id("users")) },
handler: async (ctx, { userIds }) => {
const users = await Promise.all(
userIds.map(id => ctx.db.get(id))
);
return users;
},
});
export const createOrUpdate = mutation({
args: {
email: v.string(),
displayName: v.optional(v.string()),
firstName: v.optional(v.string()),
lastName: v.optional(v.string()),
imageUrl: v.optional(v.string()),
},
handler: async (ctx, args) => {
const now = Date.now();
const existingUser = await ctx.db
.query("users")
.withIndex("by_email", (q) => q.eq("email", args.email))
.first();
if (existingUser) {
// Update existing user — only overwrite fields that are actually provided,
// so a partial update won't clear data you didn't intend to touch.
return await ctx.db.patch(existingUser._id, {
displayName: args.displayName ?? existingUser.displayName,
firstName: args.firstName ?? existingUser.firstName,
lastName: args.lastName ?? existingUser.lastName,
imageUrl: args.imageUrl ?? existingUser.imageUrl,
updatedAt: now,
});
}
// First time we've seen this email — create a new user record
return await ctx.db.insert("users", {
email: args.email,
displayName: args.displayName,
firstName: args.firstName,
lastName: args.lastName,
imageUrl: args.imageUrl,
updatedAt: now,
});
},
});
export const patchProfile = mutation({
args: {
userId: v.id("users"),
displayName: v.optional(v.string()),
firstName: v.optional(v.string()),
lastName: v.optional(v.string()),
imageUrl: v.optional(v.string()),
imageStorageId: v.optional(v.id("_storage")),
},
handler: async (ctx, args) => {
// Build the patch object dynamically so we only write fields that were passed in
const patch: Record<string, any> = { updatedAt: Date.now() };
if (args.displayName !== undefined) patch.displayName = args.displayName;
if (args.firstName !== undefined) patch.firstName = args.firstName;
if (args.lastName !== undefined) patch.lastName = args.lastName;
if (args.imageUrl !== undefined) patch.imageUrl = args.imageUrl;
if (args.imageStorageId !== undefined) patch.imageStorageId = args.imageStorageId;
return await ctx.db.patch(args.userId, patch);
},
});
export const upsertLinkedAccount = mutation({
args: {
provider: v.string(),
subject: v.string(),
profile: v.optional(v.any()), // must accept arbitrary provider profile fields
},
handler: async (ctx, { provider, subject, profile }) => {
const now = Date.now();
// Step 1: Check if we've seen this exact OAuth identity before
const linked = await ctx.db
.query("linkedAccounts")
.withIndex("by_provider_subject", (q) => q.eq("provider", provider).eq("subject", subject))
.first();
if (linked) {
// Already linked — just refresh the cached profile data and return the existing userId
await ctx.db.patch(linked._id, { profile: profile ?? linked.profile, updatedAt: now });
return linked.userId;
}
// Step 2: New OAuth identity — try to find an existing Convex user by email
// so we don't create a duplicate account if this person signed up a different way
let user = null;
const email = profile && (profile as any).email;
if (email) {
user = await ctx.db
.query("users")
.withIndex("by_email", (q) => q.eq("email", email))
.first();
}
// Step 3: If no match by email, create a brand new user record
if (!user) {
const created = await ctx.db.insert("users", {
email: email ?? `${provider}:${subject}`,
displayName: profile && (profile as any).name,
imageUrl: profile && (profile as any).picture,
updatedAt: now,
});
user = created;
}
const userId = typeof user === "string" ? user : user._id;
// Step 4: Record the link between this OAuth identity and the Convex user
await ctx.db.insert("linkedAccounts", {
provider,
subject,
userId: userId,
profile: profile ?? {},
updatedAt: now,
});
return userId;
},
});
upsertLinkedAccount is the most important mutation in this file — it's the OAuth entry point for your entire app. Every time a user authenticates via Kinde, you call this function and it runs through four steps:
Look up the OAuth identity (provider + subject) in linkedAccounts
If found, update the cached profile and return the existing userId (same user, no new records)
If not found, try to match by email in case this user already signed up a different way
If still no match, create a new user and link the OAuth identity to it
This design means a user who authenticates with Google and later with GitHub ends up with one users record and two linkedAccounts rows, both pointing to the same Convex user ID.
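Concretely, the end state for such a user looks like this (illustrative records; the IDs and provider names are made up):

```typescript
// One human, two OAuth identities, one Convex user record.
const user = { _id: "user_1", email: "ada@example.com" };

const linkedAccounts = [
  { provider: "google", subject: "google|123", userId: "user_1" },
  { provider: "github", subject: "github|456", userId: "user_1" },
];

// Every linked account points back at the same user.
const allLinked = linkedAccounts.every((a) => a.userId === user._id);
```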
Game Management
Create convex/games.ts. This file manages the full lifecycle of a game — creation, incremental state updates, and the final atomic write when a game ends.
import { query, mutation } from "./_generated/server";
import { v } from "convex/values";
export const createGame = mutation({
args: {
userId: v.optional(v.id("users")),
public: v.optional(v.boolean()),
seed: v.optional(v.number()),
board: v.optional(v.array(v.number())),
currentPiece: v.optional(v.object({ type: v.string(), rotation: v.number(), x: v.number(), y: v.number() })),
nextQueue: v.optional(v.array(v.string())),
holdPiece: v.optional(v.string()),
},
handler: async (ctx, args) => {
const now = Date.now();
const inserted = await ctx.db.insert("games", {
userId: args.userId,
status: "active",
score: 0,
level: 1,
linesCleared: 0,
board: args.board ?? [],
currentPiece: args.currentPiece,
nextQueue: args.nextQueue ?? [],
holdPiece: args.holdPiece,
seed: args.seed,
replayId: undefined,
public: args.public ?? false,
updatedAt: now,
});
return inserted;
},
});
export const getGame = query({
args: { gameId: v.id("games") },
handler: async (ctx, { gameId }) => {
return await ctx.db.get(gameId);
},
});
export const listByUser = query({
args: { userId: v.id("users"), status: v.optional(v.string()) },
handler: async (ctx, { userId, status }) => {
const q = ctx.db.query("games").withIndex("by_user", (q) => q.eq("userId", userId));
const all = await q.collect();
if (status) return all.filter((g: any) => g.status === status);
return all;
},
});
export const patchGame = mutation({
args: {
gameId: v.id("games"),
status: v.optional(v.union(v.literal("active"), v.literal("paused"), v.literal("finished"), v.literal("abandoned"))),
score: v.optional(v.number()),
level: v.optional(v.number()),
linesCleared: v.optional(v.number()),
board: v.optional(v.array(v.number())),
currentPiece: v.optional(v.object({ type: v.string(), rotation: v.number(), x: v.number(), y: v.number() })),
nextQueue: v.optional(v.array(v.string())),
holdPiece: v.optional(v.string()),
seed: v.optional(v.number()),
replayId: v.optional(v.id("replays")),
public: v.optional(v.boolean()),
},
handler: async (ctx, args) => {
// Build a patch object with only the fields that were explicitly provided.
// This prevents an accidental undefined from overwriting real data.
const patch: Record<string, any> = { updatedAt: Date.now() };
if (args.status !== undefined) patch.status = args.status;
if (args.score !== undefined) patch.score = args.score;
if (args.level !== undefined) patch.level = args.level;
if (args.linesCleared !== undefined) patch.linesCleared = args.linesCleared;
if (args.board !== undefined) patch.board = args.board;
if (args.currentPiece !== undefined) patch.currentPiece = args.currentPiece;
if (args.nextQueue !== undefined) patch.nextQueue = args.nextQueue;
if (args.holdPiece !== undefined) patch.holdPiece = args.holdPiece;
if (args.seed !== undefined) patch.seed = args.seed;
if (args.replayId !== undefined) patch.replayId = args.replayId;
if (args.public !== undefined) patch.public = args.public;
return await ctx.db.patch(args.gameId, patch);
},
});
export const setStatus = mutation({
args: { gameId: v.id("games"), status: v.union(v.literal("active"), v.literal("paused"), v.literal("finished"), v.literal("abandoned")) },
handler: async (ctx, { gameId, status }) => {
return await ctx.db.patch(gameId, { status, updatedAt: Date.now() });
},
});
export const finishGame = mutation({
args: {
gameId: v.id("games"),
score: v.number(),
level: v.number(),
linesCleared: v.number(),
replayActions: v.optional(v.array(v.object({ t: v.number(), a: v.string(), p: v.optional(v.object({})) }))),
durationMs: v.optional(v.number()),
userId: v.optional(v.id("users")),
},
handler: async (ctx, { gameId, score, level, linesCleared, replayActions, durationMs, userId }) => {
const now = Date.now();
const game = await ctx.db.get(gameId);
if (!game) throw new Error("Game not found");
// Use the userId passed in, or fall back to the one stored on the game record
const finalUserId = userId ?? game.userId;
let replayId = game.replayId ?? undefined;
// Save the replay if any actions were recorded during the game
if (replayActions && replayActions.length > 0) {
const insertedReplay = await ctx.db.insert("replays", {
gameId,
userId: finalUserId,
actions: replayActions,
durationMs: durationMs ?? 0,
});
replayId = insertedReplay;
}
// Mark the game finished with final stats
await ctx.db.patch(gameId, {
userId: finalUserId,
status: "finished",
score,
level,
linesCleared,
replayId,
updatedAt: now,
});
// Only add a leaderboard entry for authenticated users — anonymous games
// are saved but don't appear in the public rankings
if (finalUserId) {
await ctx.db.insert("leaderboards", {
userId: finalUserId,
score,
level,
linesCleared,
});
}
return await ctx.db.get(gameId);
},
});
export const deleteGame = mutation({
args: { gameId: v.id("games") },
handler: async (ctx, { gameId }) => {
return await ctx.db.delete(gameId);
},
});
export const listPublicFinishedGames = query({
args: { limit: v.optional(v.number()) },
handler: async (ctx, { limit }) => {
const finished = await ctx.db.query("games").withIndex("by_status", (q) => q.eq("status", "finished")).collect();
const publicOnes = (finished as any[]).filter((g) => g.public === true);
if (limit) return publicOnes.slice(0, limit);
return publicOnes;
},
});
export const getTopLeaderboard = query({
  args: { limit: v.optional(v.number()) },
  handler: async (ctx, { limit }) => {
    // The by_score index keeps entries sorted, so order("desc") walks it from
    // the highest score down; no need to load and re-sort in application code.
    const q = ctx.db.query("leaderboards").withIndex("by_score").order("desc");
    return limit ? await q.take(limit) : await q.collect();
  },
});
finishGame is the most important mutation in the entire app. All of these writes — creating the replay, updating the game's status, and inserting the leaderboard entry — happen inside a single Convex mutation, which means they run in a single transaction.
Either all of them succeed or none of them commit. You'll never end up with a finished game that has no leaderboard entry, or a leaderboard entry pointing to a game that was never marked finished.
Replay and Leaderboard Functions
Create convex/replays.ts. The key query to understand here is getRecentReplaysWithDetails, which joins replay records with their related game and user data in one call. Convex doesn't have SQL-style JOINs, so the idiomatic pattern is to fetch the related IDs in one query and then resolve them with Promise.all.
import { query, mutation } from "./_generated/server";
import { v } from "convex/values";
export const createReplay = mutation({
args: {
gameId: v.id("games"),
userId: v.optional(v.id("users")),
actions: v.array(v.object({ t: v.number(), a: v.string(), p: v.optional(v.object({})) })),
durationMs: v.number(),
},
handler: async (ctx, { gameId, userId, actions, durationMs }) => {
return await ctx.db.insert("replays", {
gameId,
userId,
actions,
durationMs,
});
},
});
export const getReplay = query({
args: { replayId: v.id("replays") },
handler: async (ctx, { replayId }) => {
const replay = await ctx.db.get(replayId);
if (!replay) return null;
// Eagerly load the related game and user so the viewer component
// gets everything it needs in one round trip
const game = await ctx.db.get(replay.gameId);
const user = replay.userId ? await ctx.db.get(replay.userId) : null;
return { ...replay, game, user };
},
});
export const listByGame = query({
args: { gameId: v.id("games") },
handler: async (ctx, { gameId }) => {
return await ctx.db
.query("replays")
.withIndex("by_game", (q) => q.eq("gameId", gameId))
.collect();
},
});
export const listByUser = query({
args: { userId: v.id("users") },
handler: async (ctx, { userId }) => {
return await ctx.db
.query("replays")
.filter((q) => q.eq(q.field("userId"), userId))
.collect();
},
});
export const patchReplay = mutation({
args: {
replayId: v.id("replays"),
actions: v.optional(v.array(v.object({ t: v.number(), a: v.string(), p: v.optional(v.object({})) }))),
durationMs: v.optional(v.number()),
},
handler: async (ctx, { replayId, actions, durationMs }) => {
const patch: Record<string, any> = {};
if (actions !== undefined) patch.actions = actions;
if (durationMs !== undefined) patch.durationMs = durationMs;
return await ctx.db.patch(replayId, patch);
},
});
export const deleteReplay = mutation({
args: { replayId: v.id("replays") },
handler: async (ctx, { replayId }) => {
return await ctx.db.delete(replayId);
},
});
export const getRecentReplays = query({
args: { limit: v.optional(v.number()) },
handler: async (ctx, { limit }) => {
return await ctx.db
.query("replays")
.order("desc")
.take(limit ?? 10);
},
});
export const getRecentReplaysWithDetails = query({
args: { limit: v.optional(v.number()) },
handler: async (ctx, { limit }) => {
const replays = await ctx.db
.query("replays")
.order("desc")
.take(limit ?? 10);
// For each replay, fetch the related game and user records in parallel.
// This is Convex's pattern for relational data — Promise.all keeps it
// efficient by firing all lookups concurrently rather than one at a time.
const withDetails = await Promise.all(
replays.map(async (replay) => {
const game = replay.gameId ? await ctx.db.get(replay.gameId) : null;
const user = replay.userId ? await ctx.db.get(replay.userId) : null;
return {
...replay,
game,
user,
};
})
);
return withDetails;
},
});
export const getTopReplays = query({
args: { limit: v.optional(v.number()) },
handler: async (ctx, { limit }) => {
// Fetch all replays, then look up each game's score for sorting.
// For large datasets you'd want a denormalized score field on the replay
// itself, but at this scale this approach is simple and readable.
const replays = await ctx.db.query("replays").collect();
const withScores = await Promise.all(
replays.map(async (replay) => {
const game = await ctx.db.get(replay.gameId);
return {
...replay,
game,
score: game?.score ?? 0,
};
})
);
const sorted = withScores.sort((a, b) => b.score - a.score);
return limit ? sorted.slice(0, limit) : sorted;
},
});
Now create convex/leaderboards.ts. A leaderboard entry is a denormalized snapshot — a point-in-time record of one game result. We don't update entries in place; every finished game creates a new row. That keeps writes simple and makes querying the historical record straightforward.
import { query, mutation } from "./_generated/server";
import { v } from "convex/values";
export const insertScore = mutation({
args: {
userId: v.id("users"),
score: v.number(),
level: v.number(),
linesCleared: v.number(),
},
handler: async (ctx, { userId, score, level, linesCleared }) => {
return await ctx.db.insert("leaderboards", {
userId,
score,
level,
linesCleared,
});
},
});
export const getEntry = query({
args: { entryId: v.id("leaderboards") },
handler: async (ctx, { entryId }) => {
return await ctx.db.get(entryId);
},
});
export const listTop = query({
  args: { limit: v.optional(v.number()) },
  handler: async (ctx, { limit }) => {
    // The by_score index stores entries in score order, so order("desc") reads
    // the highest scores first and take() stops early instead of loading and
    // sorting the whole table in application code.
    const q = ctx.db.query("leaderboards").withIndex("by_score").order("desc");
    return limit ? await q.take(limit) : await q.collect();
  },
});
export const listByUser = query({
args: { userId: v.id("users") },
handler: async (ctx, { userId }) => {
return await ctx.db.query("leaderboards").filter((q) => q.eq(q.field("userId"), userId)).collect();
},
});
export const patchEntry = mutation({
args: {
entryId: v.id("leaderboards"),
score: v.optional(v.number()),
level: v.optional(v.number()),
linesCleared: v.optional(v.number()),
},
handler: async (ctx, { entryId, score, level, linesCleared }) => {
const patch: Record<string, any> = {};
if (score !== undefined) patch.score = score;
if (level !== undefined) patch.level = level;
if (linesCleared !== undefined) patch.linesCleared = linesCleared;
return await ctx.db.patch(entryId, patch);
},
});
export const deleteEntry = mutation({
args: { entryId: v.id("leaderboards") },
handler: async (ctx, { entryId }) => {
return await ctx.db.delete(entryId);
},
});
export const pruneOldEntries = mutation({
args: { maxAgeMs: v.number() },
handler: async (ctx, { maxAgeMs }) => {
const cutoff = Date.now() - maxAgeMs;
const all = await ctx.db.query("leaderboards").collect();
// Delete entries whose _creationTime falls before the cutoff
const toDelete = (all as any[]).filter((e) => (e._creationTime || 0) < cutoff);
for (const e of toDelete) await ctx.db.delete(e._id);
return toDelete.length;
},
});
Understanding Convex Reactivity
Create a quick test in app/page.tsx to see real-time updates in action:
"use client";
import { useQuery } from "convex/react";
import { api } from "@/convex/_generated/api";
export default function Home() {
const leaderboard = useQuery(api.leaderboards.listTop, { limit: 5 });
return (
<ul>
{leaderboard?.map((entry, idx) => (
<li key={entry._id}>
#{idx + 1} — {entry.score.toLocaleString()} points (level {entry.level})
</li>
))}
</ul>
);
}
Open this page in two browser windows. When you add a new score via the Convex dashboard, both windows update within 50ms without any polling code. This is how your leaderboard will work in production.
When you call useQuery, Convex executes the query function on the server, returns the current results to your component, and automatically subscribes the client to any future changes in the tables that query touched.
When a mutation writes to one of those tables, Convex re-runs the query, diffs the results against what the client already has, and pushes only what changed. You write zero synchronization code — the subscription is established automatically and cleaned up when the component unmounts.
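It helps to see the subscribe / re-run / diff-push cycle as plain code. The sketch below is a toy model, not Convex's actual implementation: a store that re-runs every subscribed query after each write and notifies subscribers only when the serialized result has changed.

```typescript
// Toy model of the subscribe / re-run / diff-push cycle. Illustrative only;
// this is not how Convex is implemented internally.
type Listener = (result: unknown) => void;

class ReactiveStore<Row> {
  private rows: Row[] = [];
  private subs: { run: (rows: Row[]) => unknown; last: string; notify: Listener }[] = [];

  // Like useQuery: run the query now, then re-run it after every write.
  subscribe(run: (rows: Row[]) => unknown, notify: Listener) {
    const last = JSON.stringify(run(this.rows));
    this.subs.push({ run, last, notify });
    notify(JSON.parse(last));
  }

  // Like a mutation: write, re-run each subscribed query, push only real changes.
  insert(row: Row) {
    this.rows.push(row);
    for (const sub of this.subs) {
      const next = JSON.stringify(sub.run(this.rows));
      if (next !== sub.last) {     // diff step: identical results are not pushed
        sub.last = next;
        sub.notify(JSON.parse(next));
      }
    }
  }
}

const store = new ReactiveStore<{ score: number }>();
const pushes: number[][] = [];
store.subscribe(
  (rows) => rows.map((r) => r.score).sort((a, b) => b - a).slice(0, 2),
  (top) => pushes.push(top as number[])
);
store.insert({ score: 100 });
store.insert({ score: 300 });
store.insert({ score: 50 }); // top-2 unchanged, so no push for this write
```

The third insert produces no notification because the top-2 result is identical, which is exactly the behavior that keeps the real leaderboard widget from re-rendering on irrelevant writes.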
Building the Tetris Game Engine
The game engine lives entirely in components/tetris/GameBoard.tsx. This section walks through every part of it: constants, utility functions, state management, the game loop, and MCP integration.
Constants and Piece Definitions
const WIDTH = 10;
const HEIGHT = 20;
const PIECES: Record<string, number[][]> = {
I: [[1, 1, 1, 1]],
O: [[1, 1], [1, 1]],
T: [[0, 1, 0], [1, 1, 1]],
S: [[0, 1, 1], [1, 1, 0]],
Z: [[1, 1, 0], [0, 1, 1]],
J: [[1, 0, 0], [1, 1, 1]],
L: [[0, 0, 1], [1, 1, 1]],
};
const PIECE_COLORS: Record<string, string> = {
I: "#00f0f0",
O: "#f0f000",
T: "#a000f0",
S: "#00f000",
Z: "#f00000",
J: "#0000f0",
L: "#f0a000",
};
const PIECE_TYPES = Object.keys(PIECES);
Each piece is stored as the smallest bounding box containing its shape, where 1 is a filled cell and 0 is transparent. This keeps rotation math simple and collision detection fast. The colors follow The Tetris Company's official guidelines, making the game instantly recognizable.
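As a quick sanity check on this representation, every definition should be a rectangular matrix with no fully-empty row or column, and exactly four filled cells. A small standalone sketch (constants re-declared locally):

```typescript
// Minimal bounding-box check for the piece definitions (re-declared locally).
const PIECES: Record<string, number[][]> = {
  I: [[1, 1, 1, 1]],
  O: [[1, 1], [1, 1]],
  T: [[0, 1, 0], [1, 1, 1]],
  S: [[0, 1, 1], [1, 1, 0]],
  Z: [[1, 1, 0], [0, 1, 1]],
  J: [[1, 0, 0], [1, 1, 1]],
  L: [[0, 0, 1], [1, 1, 1]],
};

function isMinimalBoundingBox(shape: number[][]): boolean {
  const w = shape[0].length;
  const rectangular = shape.every((row) => row.length === w);
  // Minimal: no fully-empty row and no fully-empty column.
  const noEmptyRow = shape.every((row) => row.some((c) => c === 1));
  const noEmptyCol = Array.from({ length: w }, (_, c) =>
    shape.some((row) => row[c] === 1)
  ).every(Boolean);
  return rectangular && noEmptyRow && noEmptyCol;
}

const allMinimal = Object.values(PIECES).every(isMinimalBoundingBox);
// Every tetromino covers exactly four cells.
const allFourCells = Object.values(PIECES).every(
  (shape) => shape.flat().reduce((a, b) => a + b, 0) === 4
);
```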
Core Utility Functions
These three functions are used throughout the engine. They are separated here because each one is independently testable and reused in multiple places.
Empty board
function emptyBoard() {
return Array.from({ length: HEIGHT }, () =>
Array.from({ length: WIDTH }, () => 0)
);
}
Rotation (90 degrees clockwise)
function rotate(shape: number[][]) {
const h = shape.length;
const w = shape[0].length;
const out = Array.from({ length: w }, () =>
Array.from({ length: h }, () => 0)
);
for (let r = 0; r < h; r++) {
for (let c = 0; c < w; c++) {
out[c][h - 1 - r] = shape[r][c];
}
}
return out;
}
The transformation out[c][h - 1 - r] = shape[r][c] transposes the matrix and reverses rows in one pass. For the T-piece, [0,1,0] / [1,1,1] becomes [1,0] / [1,1] / [1,0].
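You can verify the transformation directly by re-declaring rotate in isolation; the T-piece rotates as described, and four rotations return the original shape:

```typescript
// 90° clockwise rotation, as defined in the engine above.
function rotate(shape: number[][]): number[][] {
  const h = shape.length;
  const w = shape[0].length;
  const out = Array.from({ length: w }, () => Array.from({ length: h }, () => 0));
  for (let r = 0; r < h; r++) {
    for (let c = 0; c < w; c++) {
      out[c][h - 1 - r] = shape[r][c];
    }
  }
  return out;
}

const T = [[0, 1, 0], [1, 1, 1]];
const once = rotate(T);                         // [[1,0],[1,1],[1,0]]
const fourTimes = rotate(rotate(rotate(once))); // a full 360° cycle is the identity
```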
Collision detection
This is the engine's most critical function; it runs up to 60 times per second:
function canPlace(board: number[][], shape: number[][], x: number, y: number) {
for (let r = 0; r < shape.length; r++) {
for (let c = 0; c < shape[0].length; c++) {
if (!shape[r][c]) continue; // Skip empty cells immediately
const br = y + r;
const bc = x + c;
if (bc < 0 || bc >= WIDTH || br < 0 || br >= HEIGHT) return false;
if (board[br][bc]) return false;
}
}
return true;
}
The early continue on empty cells is the key optimization. Most cells in a piece's bounding box are empty, so this skips the majority of iterations.
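To see the rules in action, here is canPlace exercised standalone on a shrunken 4×4 board (the constants are reduced for readability; the logic is identical):

```typescript
const WIDTH = 4;
const HEIGHT = 4;

function canPlace(board: number[][], shape: number[][], x: number, y: number): boolean {
  for (let r = 0; r < shape.length; r++) {
    for (let c = 0; c < shape[0].length; c++) {
      if (!shape[r][c]) continue;            // empty cells never collide
      const br = y + r;
      const bc = x + c;
      if (bc < 0 || bc >= WIDTH || br < 0 || br >= HEIGHT) return false;
      if (board[br][bc]) return false;
    }
  }
  return true;
}

const board = [
  [0, 0, 0, 0],
  [0, 0, 0, 0],
  [0, 0, 0, 0],
  [1, 1, 0, 0],                              // two locked cells bottom-left
];
const O = [[1, 1], [1, 1]];

const fitsTopLeft = canPlace(board, O, 0, 0);   // true: area is empty
const offRightEdge = canPlace(board, O, 3, 0);  // false: column 4 is out of bounds
const onLockedCells = canPlace(board, O, 0, 2); // false: overlaps the locked row
```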
React State Management
export default function GameBoard() {
const [board, setBoard] = useState<number[][]>(emptyBoard());
const [current, setCurrent] = useState<{
type: string; shape: number[][]; x: number; y: number;
} | null>(null);
const [next, setNext] = useState<string>(
() => PIECE_TYPES[Math.floor(Math.random() * PIECE_TYPES.length)]
);
const [running, setRunning] = useState(false);
const [score, setScore] = useState(0);
const [lines, setLines] = useState(0);
const [level, setLevel] = useState(1);
const [musicEnabled, setMusicEnabled] = useState(true);
const [clearingRows, setClearingRows] = useState<number[]>([]);
// Refs for values that shouldn't trigger re-renders
const actionsRef = useRef<any[]>([]);
const tickRef = useRef<number | null>(null);
const bgMusicRef = useRef<HTMLAudioElement | null>(null);
const clearSoundRef = useRef<HTMLAudioElement | null>(null);
// MCP integration hooks from Vercel template
const callTool = useCallTool();
const sendMessage = useSendMessage();
const [gameId, setGameId] = useState<Id<"games"> | null>(null);
const startTimeRef = useRef<number | null>(null);
}
The distinction between useState and useRef matters for performance. actionsRef grows with every keypress; if it were state, every move would trigger a re-render and cause lag. tickRef holds the interval ID, which only needs to exist for cleanup. bgMusicRef holds an Audio element that never appears in the UI. None of these need to cause re-renders, so all three are refs.
Audio System
useEffect(() => {
bgMusicRef.current = new Audio(
"https://cdn.freesound.org/previews/612/612091_3283808-lq.mp3"
);
bgMusicRef.current.loop = true;
bgMusicRef.current.volume = 0.3;
clearSoundRef.current = new Audio(
"https://cdn.freesound.org/previews/341/341695_5858296-lq.mp3"
);
clearSoundRef.current.volume = 0.5;
return () => {
bgMusicRef.current?.pause();
bgMusicRef.current = null;
clearSoundRef.current = null;
};
}, []);
useEffect(() => {
if (running && musicEnabled && bgMusicRef.current?.paused) {
bgMusicRef.current.currentTime = 0;
bgMusicRef.current.play().catch((e) => console.log("Music play failed:", e));
} else if (!running || !musicEnabled) {
bgMusicRef.current?.pause();
}
}, [running, musicEnabled]);
The .catch() on .play() is essential. Browsers block autoplay audio until the user has interacted with the page; without it, you'd get uncaught promise rejections on every game start.
Game Loop
useEffect(() => {
if (running) {
const interval = Math.max(200, 1000 - (level - 1) * 100);
tickRef.current = window.setInterval(() => {
if (current) move(0, 1);
}, interval);
return () => { if (tickRef.current) clearInterval(tickRef.current); };
} else {
if (tickRef.current) clearInterval(tickRef.current);
}
}, [running, current, level]);
The gravity speed formula produces this curve:
Level 1: 1000ms per drop (relaxed)
Level 5: 600ms per drop
Level 10: 200ms per drop (capped, very fast)
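The curve is easy to tabulate by extracting the formula into a helper (a quick standalone check, not part of the component):

```typescript
// Gravity interval per level, same formula as the game loop effect above.
const gravityMs = (level: number) => Math.max(200, 1000 - (level - 1) * 100);

const curve = [1, 5, 9, 10, 20].map((lvl) => [lvl, gravityMs(lvl)]);
// The 200ms floor is first reached at level 9 and holds for every level after.
```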
The dependency array includes current because the interval's callback closes over it. When a new piece spawns, current changes, the effect re-runs, and the interval restarts with the correct piece reference. Without this, the interval would hold a stale closure and pieces would behave incorrectly.
Keyboard Controls
useEffect(() => {
const handler = (e: KeyboardEvent) => {
if (!running) return;
if (e.key === "ArrowLeft") { e.preventDefault(); move(-1, 0, "L"); }
if (e.key === "ArrowRight") { e.preventDefault(); move(1, 0, "R"); }
if (e.key === "ArrowDown") { e.preventDefault(); move(0, 1, "D"); }
if (e.key === " " || e.key === "ArrowUp") {
e.preventDefault();
rotateCurrent("ROT");
}
};
window.addEventListener("keydown", handler);
return () => window.removeEventListener("keydown", handler);
}, [running, current]);
e.preventDefault() on arrow keys prevents the browser from scrolling the page while the game is active.
Piece Spawning
function spawnNext(boardParam?: number[][]) {
const currentBoard = boardParam ?? board;
const type = next;
const shape = PIECES[type].map((r) => [...r]); // Deep copy
const x = Math.floor((WIDTH - shape[0].length) / 2);
const y = 0;
if (!canPlace(currentBoard, shape, x, y)) {
finish(); // Top of board is blocked — game over
return;
}
setCurrent({ type, shape, x, y });
setNext(PIECE_TYPES[Math.floor(Math.random() * PIECE_TYPES.length)]);
}
The boardParam parameter exists because of a React state timing issue. After clearing lines, setBoard(newBoard) schedules a state update, but the next render hasn't happened yet. If you called spawnNext() without the parameter, it would read the stale board state and spawn the piece on the old board. Passing newBoard directly bypasses this:
setTimeout(() => {
setBoard(newBoard);
setCurrent(null);
setClearingRows([]);
setTimeout(() => spawnNext(newBoard), 50); // Pass new board explicitly
}, 300);
Merging Piece with Board
function mergeCurrentToBoard(brd: number[][], cur: any) {
const copy = brd.map((r) => [...r]);
if (!cur) return copy;
for (let r = 0; r < cur.shape.length; r++) {
for (let c = 0; c < cur.shape[0].length; c++) {
if (cur.shape[r][c]) {
const rr = cur.y + r;
const cc = cur.x + c;
if (rr >= 0 && rr < HEIGHT && cc >= 0 && cc < WIDTH) {
copy[rr][cc] = PIECE_TYPES.indexOf(cur.type) + 10;
}
}
}
}
return copy;
}
The +10 offset encodes falling pieces differently from locked pieces:
0 = empty cell
1-7 = locked piece (I=1, O=2, T=3, ...)
10-16 = currently falling piece (I=10, O=11, T=12, ...)
This lets the renderer apply different styles to falling vs. locked pieces, so you could add opacity, glow, or borders to the active piece without touching locked cells.
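The round trip between the two encodings is one arithmetic step each way, shown here standalone:

```typescript
const PIECE_TYPES = ["I", "O", "T", "S", "Z", "J", "L"];

// Falling encoding used by mergeCurrentToBoard: index + 10.
const encodeFalling = (type: string) => PIECE_TYPES.indexOf(type) + 10;
// Locking conversion used in move(): 10..16 becomes 1..7, other cells pass through.
const lockCell = (v: number) => (v >= 10 ? v - 9 : v);

const fallingT = encodeFalling("T"); // T is index 2, so it encodes to 12
const lockedT = lockCell(fallingT);  // 3, matching the locked range (I=1, O=2, T=3)
```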
Line Clearing
function clearLines(brd: number[][]) {
let cleared = 0;
const out: number[][] = [];
for (let r = 0; r < HEIGHT; r++) {
if (brd[r].every((v) => v !== 0)) {
cleared++;
} else {
out.push(brd[r]);
}
}
while (out.length < HEIGHT)
out.unshift(Array.from({ length: WIDTH }, () => 0));
return { board: out, cleared };
}
Full rows are filtered out, then empty rows are added at the top with unshift (not push), because gravity pulls pieces down, so new empty space must appear at the top.
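Here is clearLines exercised on a shrunken 4×4 board (constants reduced for readability; the logic is identical to the engine's):

```typescript
const WIDTH = 4;
const HEIGHT = 4;

function clearLines(brd: number[][]) {
  let cleared = 0;
  const out: number[][] = [];
  for (let r = 0; r < HEIGHT; r++) {
    if (brd[r].every((v) => v !== 0)) {
      cleared++;                 // full row: drop it entirely
    } else {
      out.push(brd[r]);          // partial row: keep it, preserving order
    }
  }
  while (out.length < HEIGHT)
    out.unshift(Array.from({ length: WIDTH }, () => 0));  // refill at the top
  return { board: out, cleared };
}

const result = clearLines([
  [0, 0, 0, 0],
  [1, 1, 1, 1],   // full: cleared
  [1, 0, 1, 0],
  [1, 1, 1, 1],   // full: cleared
]);
// result.cleared is 2, and the partial row [1,0,1,0] slides to the bottom
```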
The Movement Function
This is where collision detection, locking, line clearing, and spawning all connect:
function move(dx: number, dy: number, actionCode?: string) {
if (!current) return;
const nx = current.x + dx;
const ny = current.y + dy;
if (canPlace(board, current.shape, nx, ny)) {
setCurrent({ ...current, x: nx, y: ny });
if (actionCode) actionsRef.current.push({ t: Date.now(), a: actionCode });
} else if (dy === 1) {
// Downward move failed — piece has landed
const merged = mergeCurrentToBoard(board, current);
const normalized = merged.map((r) =>
r.map((v) => (v >= 10 ? v - 9 : v)) // Convert falling values to locked
);
const { board: newBoard, cleared } = clearLines(normalized);
if (cleared > 0) {
// Play sound
if (clearSoundRef.current && musicEnabled) {
clearSoundRef.current.currentTime = 0;
clearSoundRef.current.play().catch(console.log);
}
// Identify which rows flash
const clearingRowIndices: number[] = [];
for (let r = 0; r < HEIGHT; r++) {
if (normalized[r].every((v) => v !== 0)) clearingRowIndices.push(r);
}
setClearingRows(clearingRowIndices);
// Update score and level
setScore((s) => s + cleared * 100);
setLines((prev) => {
const newLines = prev + cleared;
setLevel(Math.floor(newLines / 10) + 1);
return newLines;
});
// Animate, then update board
setTimeout(() => {
setBoard(newBoard);
setCurrent(null);
setClearingRows([]);
setTimeout(() => spawnNext(newBoard), 50);
}, 300);
} else {
setBoard(newBoard);
setCurrent(null);
setTimeout(() => spawnNext(newBoard), 50);
}
}
// Horizontal collision: do nothing (piece stays in place)
}
Only dy === 1 failures trigger locking. A failed left or right move simply stops the piece; it doesn't land. The 300ms animation window gives players visual feedback before cleared rows disappear.
Rotation
function rotateCurrent(actionCode?: string) {
if (!current) return;
const newShape = rotate(current.shape);
if (canPlace(board, newShape, current.x, current.y)) {
setCurrent({ ...current, shape: newShape });
if (actionCode) actionsRef.current.push({ t: Date.now(), a: actionCode });
}
}
If the rotated shape doesn't fit at the current position, nothing happens. A full implementation would add wall kicks (trying x±1, y-1 offsets before giving up), but this simplified version covers the vast majority of cases.
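If you later want wall kicks, one minimal approach is to try the rotation at a short list of offsets before rejecting it. The sketch below is a simplification for illustration; the offset table is made up, not the official SRS kick data, and canPlace/rotate are re-declared on a small 6×6 board so it runs standalone:

```typescript
const WIDTH = 6;
const HEIGHT = 6;

function canPlace(board: number[][], shape: number[][], x: number, y: number): boolean {
  for (let r = 0; r < shape.length; r++) {
    for (let c = 0; c < shape[0].length; c++) {
      if (!shape[r][c]) continue;
      const br = y + r, bc = x + c;
      if (bc < 0 || bc >= WIDTH || br < 0 || br >= HEIGHT) return false;
      if (board[br][bc]) return false;
    }
  }
  return true;
}

function rotate(shape: number[][]): number[][] {
  const h = shape.length, w = shape[0].length;
  const out = Array.from({ length: w }, () => Array.from({ length: h }, () => 0));
  for (let r = 0; r < h; r++)
    for (let c = 0; c < w; c++) out[c][h - 1 - r] = shape[r][c];
  return out;
}

// Simplified kick table: try in place, then one cell left, right, and up.
// Real SRS uses per-piece, per-rotation-state offset data.
const KICKS = [[0, 0], [-1, 0], [1, 0], [0, -1]];

function rotateWithKicks(board: number[][], shape: number[][], x: number, y: number) {
  const rotated = rotate(shape);
  for (const [dx, dy] of KICKS) {
    if (canPlace(board, rotated, x + dx, y + dy)) {
      return { shape: rotated, x: x + dx, y: y + dy };
    }
  }
  return null; // rotation rejected; the caller keeps the current shape
}

const board = Array.from({ length: HEIGHT }, () => Array.from({ length: WIDTH }, () => 0));
const verticalT = [[1, 0], [1, 1], [1, 0]]; // T hugging the right wall at x = 4
const kicked = rotateWithKicks(board, verticalT, 4, 0);
// The in-place rotation would poke past the right edge, so the piece is
// nudged one cell left: kicked.x is 3.
```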
Game Start and Finish
async function start() {
const b = emptyBoard();
setBoard(b);
setScore(0); setLines(0); setLevel(1);
actionsRef.current = [];
actionsRef.current.push({ t: Date.now(), a: "START" });
setRunning(true);
startTimeRef.current = Date.now();
// Create game record via MCP tool (non-blocking)
(async () => {
try {
const toolRes = await callTool?.("start_game", {});
const gameIdToUse = (toolRes as any)?.structuredContent?.gameId;
if (gameIdToUse) setGameId(gameIdToUse);
} catch (err) {
console.error("Failed to create game record:", err);
}
})();
setTimeout(() => spawnNext(b), 10);
}
The MCP tool call is wrapped in a self-invoking async function so it doesn't block the game from starting. The board resets and the first piece spawns immediately; the game ID arrives asynchronously and is stored for use when the game ends.
async function finish() {
setRunning(false);
const durationMs = startTimeRef.current
? Date.now() - startTimeRef.current
: undefined;
const replayActions = actionsRef.current.slice();
if (gameId && callTool) {
try {
const result = await callTool("finish_game", {
gameId, score, level,
linesCleared: lines,
replayActions,
durationMs,
});
const message = (result as any)?.content?.[0]?.text
|| `Game finished! Score: ${score}, Level: ${level}, Lines: ${lines}`;
await sendMessage?.(message);
} catch (err) {
// Graceful fallback — still show results even if save fails
await sendMessage?.(
`Game finished locally — Score: ${score}, Level: ${level} ` +
`(Could not save: ${err instanceof Error ? err.message : String(err)})`
);
}
} else {
await sendMessage?.(
`Game finished locally — Score: ${score}, Level: ${level} (No game ID)`
);
}
// Reset all state
setBoard(emptyBoard());
setCurrent(null);
setScore(0); setLines(0); setLevel(1);
actionsRef.current = [];
setGameId(null);
startTimeRef.current = null;
}
The graceful degradation pattern is important: the game works even if the backend is unreachable. Players always see their score, and saving is a best-effort operation.
Board Rendering
const display = mergeCurrentToBoard(board, current);
const cellPx = Math.max(18, Math.min(32, Math.floor(360 / WIDTH)));
function getCellColor(value: number): string {
if (value === 0) return "#0f172a";
const typeIndex = value >= 10 ? value - 10 : value - 1;
return PIECE_COLORS[PIECE_TYPES[typeIndex]];
}
return (
<div className="grid" style={{ gridTemplateColumns: `repeat(${WIDTH}, ${cellPx}px)` }}>
{display.flatMap((row, r) =>
row.map((cell, c) => {
const isClearing = clearingRows.includes(r);
return (
<div
key={`${r}-${c}`}
style={{
width: cellPx, height: cellPx,
background: getCellColor(cell),
border: "1px solid rgba(100,116,139,0.3)",
opacity: isClearing ? 0.5 : 1,
transform: isClearing ? "scale(1.05)" : "scale(1)",
transition: "all 0.2s ease-in-out",
}}
/>
);
})
)}
</div>
);
Cell size is clamped between 18px (readable on mobile) and 32px (comfortable on desktop), fitting a 360px container. The clearing animation fades rows to 50% opacity and scales them slightly larger, a subtle pulse effect before they disappear.
Control Buttons
<div className="mt-3 flex gap-2">
<button onClick={start}>Start</button>
<button onClick={() => setRunning((s) => !s)}>
{running ? "Pause" : "Resume"}
</button>
<button onClick={() => rotateCurrent()}>Rotate</button>
<button onClick={() => move(0, 1, "D")}>Drop</button>
<button onClick={() => finish()}>End</button>
</div>
These mirror the keyboard controls exactly, making the game fully playable on touch devices inside ChatGPT's iframe.
Replay Recording
Throughout the component, every player action is stamped and stored:
actionsRef.current.push({ t: Date.now(), a: actionCode });
A typical game produces a few hundred actions totaling less than 20KB of JSON. Because a ref is used instead of state, recording has zero rendering overhead. When the game ends, actionsRef.current.slice() takes a snapshot of the array and passes it to finish_game, where Convex stores it alongside the final score.
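The size claim is easy to sanity-check: simulate a few hundred recorded actions (the same { t, a } shape the engine pushes) and measure the serialized JSON. The timestamps and action mix below are made-up but representative values:

```typescript
// Simulated replay log: ~400 actions with millisecond timestamps.
type ReplayAction = { t: number; a: string };

const start = 1700000000000;
const actions: ReplayAction[] = Array.from({ length: 400 }, (_, i) => ({
  t: start + i * 750,                      // roughly one action every 750ms
  a: ["L", "R", "D", "ROT"][i % 4],        // the same codes the engine records
}));

const bytes = JSON.stringify(actions).length;
// Each entry serializes to ~28-30 bytes, so 400 actions land well under 20KB.
```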
Implementing Kinde OAuth Authentication
Authentication in ChatGPT apps works differently from traditional web apps. ChatGPT is the OAuth client: it handles the redirect, code exchange, and token storage. Your app is the resource server. You receive tokens, validate them, and map OAuth identities to your database users.
OAuth Architecture Overview
Layer 1: OAuth Discovery
/.well-known/oauth-protected-resource
-> Tells ChatGPT where to authenticate
Layer 2: Token Extraction and Validation
extractTokenFromArgs() -> Find token in MCP context
validateKindeToken() -> Verify signature with JWKS
getKindeUserProfile() -> Fetch user details from Kinde
Layer 3: User Mapping
requireAuthForTool() -> Protect MCP tools
upsertLinkedAccount() -> Create/update Convex user
OAuth Discovery Endpoint
You created this in Section 3. Here is the full implementation with proper fallbacks:
// app/mcp/.well-known/oauth-protected-resource/route.ts
import { NextResponse } from 'next/server';
const MCP_SERVER_URL =
process.env.MCP_AUDIENCE ||
process.env.MCP_SERVER_URL ||
`https://${process.env.VERCEL_URL || 'localhost'}`;
const DEFAULT_KINDE_ISSUER = 'https://devrelstudio.kinde.com';
const KINDE_ISSUER_URL =
process.env.KINDE_ISSUER_URL ||
process.env.KINDE_ISSUER ||
DEFAULT_KINDE_ISSUER;
export async function GET() {
const authServers = [KINDE_ISSUER_URL];
console.log('oauth-protected-resource using authorization_servers:', authServers);
return NextResponse.json({
resource: MCP_SERVER_URL,
authorization_servers: authServers,
scopes_supported: ['openid', 'profile', 'email'],
bearer_methods_supported: ['header'],
resource_documentation: `${MCP_SERVER_URL}/docs`,
});
}
When ChatGPT encounters a tool requiring authentication, it GETs this endpoint, reads authorization_servers, and redirects the user to Kinde. Without this endpoint, ChatGPT cannot discover your OAuth configuration and the entire auth flow breaks silently.
Token Validation
Update app/lib/kinde.ts with production-ready validation:
import { createRemoteJWKSet, jwtVerify } from 'jose';
const KINDE_ISSUER_URL = process.env.KINDE_ISSUER_URL || process.env.KINDE_ISSUER;
const MCP_AUDIENCE =
process.env.MCP_AUDIENCE ||
process.env.MCP_SERVER_URL ||
process.env.NEXT_PUBLIC_MCP_AUDIENCE;
// getJwks is extracted into its own function so that createRemoteJWKSet is
// called once and reused across requests. createRemoteJWKSet handles HTTP
// caching internally, meaning Kinde's public keys are only fetched when the
// cache expires, not on every token validation. Creating a new instance per
// request would bypass this and add unnecessary latency.
function getJwks() {
if (!KINDE_ISSUER_URL) {
throw new Error('KINDE_ISSUER_URL (or KINDE_ISSUER) environment variable is not set');
}
return createRemoteJWKSet(new URL(`${KINDE_ISSUER_URL}/.well-known/jwks`));
}
export async function validateKindeToken(token: string) {
if (!token) throw new Error('No token provided');
if (!KINDE_ISSUER_URL) throw new Error('KINDE_ISSUER_URL not configured');
if (!MCP_AUDIENCE) throw new Error('MCP_AUDIENCE (or MCP_SERVER_URL) not configured');
const JWKS = getJwks();
// jwtVerify does several things in one call:
// 1. Decodes the JWT header and payload
// 2. Fetches Kinde's public keys from the JWKS endpoint (using cached keys when available)
// 3. Finds the matching key using the token's `kid` header field
// 4. Verifies the cryptographic signature
// 5. Checks that `iss` matches your Kinde domain and `aud` matches your MCP URL
//
// If the token was tampered with, expired, or issued by a different service,
// jwtVerify throws and your handler never runs.
const { payload } = await jwtVerify(token, JWKS, {
issuer: KINDE_ISSUER_URL,
audience: MCP_AUDIENCE,
} as any);
return payload as Record<string, any>;
}
export async function getKindeUserProfile(token: string) {
if (!token) throw new Error('No token provided');
if (!KINDE_ISSUER_URL) throw new Error('KINDE_ISSUER_URL not configured');
const url = `${KINDE_ISSUER_URL}/oauth2/v2/user_profile`;
const res = await fetch(url, {
headers: { Authorization: `Bearer ${token}`, Accept: 'application/json' },
});
if (!res.ok) {
const txt = await res.text();
throw new Error(`Failed to fetch user profile: ${res.status} ${txt}`);
}
return (await res.json()) as Record<string, any>;
}
The JWKS endpoint (/.well-known/jwks) returns Kinde's current public keys:
{
"keys": [
{ "kty": "RSA", "use": "sig", "kid": "abc123", "n": "0vx7...", "e": "AQAB" }
]
}
Kinde signs tokens with its private key and you verify with the matching public key. Even if an attacker intercepts a token, they cannot forge new ones without the private key.
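The asymmetry is easy to demonstrate with Node's built-in crypto module. The sketch below uses a locally generated RSA pair standing in for Kinde's keys; it is illustrative only, since real token validation goes through jwtVerify as shown above:

```typescript
import { generateKeyPairSync, createSign, createVerify } from "node:crypto";

// Local key pair standing in for Kinde's: the issuer holds privateKey,
// your server only ever sees publicKey (via the JWKS endpoint).
const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

const payload = JSON.stringify({ sub: "user_123", aud: "https://example-mcp.app" });

// Issuer side: sign with the private key.
const signature = createSign("RSA-SHA256").update(payload).sign(privateKey, "base64");

// Resource-server side: verify with the public key only.
const valid = createVerify("RSA-SHA256").update(payload).verify(publicKey, signature, "base64");

// Tampering with the payload breaks verification. This is why an intercepted
// token cannot be turned into a forged one without the private key.
const tampered = createVerify("RSA-SHA256")
  .update(payload.replace("user_123", "user_999"))
  .verify(publicKey, signature, "base64");
```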
Token Extraction from MCP Context
Before looking at the code, it is worth understanding why token extraction needs this much logic. The MCP protocol does not mandate a single canonical location for the Bearer token. Depending on the ChatGPT version, MCP transport (HTTP vs SSE), and how the SDK processes the request, the token can arrive in several different places: nested in the MCP context object under requestInfo.headers, on a raw Request object, flattened directly on context.headers, or not forwarded into the context at all because it was pre-registered against a request ID in an earlier middleware step.
The function below tries every known location in priority order, from most reliable to least, so your tool handlers work regardless of exactly how the token arrives.
Create app/lib/mcpAuth.ts:
import { getLastAuthToken } from './mcpRequestState';
export async function extractTokenFromArgs(args: any, context?: any) {
// Strategy 1: context.requestInfo.headers
// The most common location in MCP v1.0+ over HTTP. The header name may be
// lowercase or Title-Case depending on which HTTP layer normalized it.
if (context?.requestInfo?.headers) {
const authHeader =
context.requestInfo.headers.authorization ||
context.requestInfo.headers.Authorization;
if (authHeader?.startsWith('Bearer ')) {
return authHeader.substring(7);
}
}
// Strategy 2: context.request.headers
// Used when the MCP handler passes a raw Fetch API Request object through
// the context. Headers here are accessed with .get(), not dot notation.
if (context?.request?.headers) {
const authHeader =
context.request.headers.get?.('Authorization') ||
context.request.headers.get?.('authorization');
if (authHeader?.startsWith('Bearer ')) return authHeader.substring(7);
}
// Strategy 3: context.headers directly
// Some MCP transports flatten headers onto the context object itself.
// We try both the Fetch API .get() method and plain property access to
// cover both cases.
if (context?.headers) {
const authHeader =
context.headers.get?.('Authorization') ||
context.headers.get?.('authorization') ||
context.headers.Authorization ||
context.headers.authorization;
if (authHeader && typeof authHeader === 'string' && authHeader.startsWith('Bearer ')) {
return authHeader.substring(7);
}
}
// Strategy 4: Request ID mapping
// In some SSE-based transports, the Authorization header cannot be forwarded
// through the MCP context. The POST handler middleware (see the POST export in
// mcp/route.ts) pre-registers the token against every request ID found in the
// body. Here we check whether any of the IDs in this context have a match.
const possibleIds = [
context?.requestId,
context?.requestInfo?.requestId,
context?.requestInfo?.id,
context?.id,
context?.sessionId,
context?.requestInfo?.sessionId,
].filter(Boolean);
for (const id of possibleIds) {
const token = (await import('./mcpRequestMap')).getTokenForRequestId(id);
if (token) return token;
}
// Strategy 5: Last known token (single-user fallback)
// If all strategies above fail, return the most recently seen token for this
// server process. This is safe in local development where only one user is
// active, but should NOT be relied upon in a multi-user production deployment
// where requests can interleave.
const last = getLastAuthToken();
if (last) return last;
return null;
}
The two supporting files that Strategies 4 and 5 depend on are in-memory maps that live for the duration of the server process:
// app/lib/mcpRequestMap.ts
// Maps MCP request IDs to the Bearer token that arrived with them.
// Entries are written by the POST handler middleware before the MCP handler
// runs, ensuring the token is findable even after the original Request object
// is gone.
const tokenMap = new Map<string, string>();
export function setTokenForRequestId(id: string, token: string) {
tokenMap.set(id, token);
}
export function getTokenForRequestId(id: string): string | null {
return tokenMap.get(id) ?? null;
}
export function clearTokenForRequestId(id: string) {
tokenMap.delete(id);
}
export function clearAll() {
tokenMap.clear();
}
// app/lib/mcpRequestState.ts
// Tracks the most recently seen token as a last-resort fallback.
// Only appropriate for single-user or development scenarios.
let lastAuthToken: string | null = null;
export function setLastAuthToken(token: string | null) {
lastAuthToken = token;
}
export function getLastAuthToken() {
return lastAuthToken;
}
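One caveat with the request-ID map: entries are removed only when clearTokenForRequestId is called, so leaked entries can accumulate in a long-running process. A hedged sketch of a TTL variant — the 60-second window and the injectable now parameter are illustrative choices, not part of the tutorial's code:

```typescript
// TTL variant of the request-ID map. The 60-second window is arbitrary; it
// only needs to outlive a single MCP tool call. Passing `now` explicitly
// keeps the functions easy to test; in production the default suffices.
const TTL_MS = 60_000;
const tokenMap = new Map<string, { token: string; expiresAt: number }>();

function setTokenForRequestId(id: string, token: string, now = Date.now()) {
  tokenMap.set(id, { token, expiresAt: now + TTL_MS });
}

function getTokenForRequestId(id: string, now = Date.now()): string | null {
  const entry = tokenMap.get(id);
  if (!entry) return null;
  if (entry.expiresAt <= now) {
    tokenMap.delete(id); // lazily evict expired entries on read
    return null;
  }
  return entry.token;
}
```

Strategy 4's lookup code works unchanged against this version; only the storage policy differs.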
Requiring Auth in MCP Tools
With token extraction handled, you can now write requireAuthForTool: the single function you call at the top of any protected tool handler. It extracts the token, validates it against Kinde's JWKS endpoint, and returns either the authenticated user's profile or a structured MCP error response that ChatGPT knows how to act on.
// app/lib/mcpAuth.ts (continued)
import { validateKindeToken, getKindeUserProfile } from './kinde';
const MCP_SERVER_URL =
process.env.MCP_SERVER_URL ||
process.env.MCP_AUDIENCE ||
`https://${process.env.VERCEL_URL || 'localhost'}`;
// Builds the WWW-Authenticate challenge value pointing to your OAuth discovery
// endpoint. ChatGPT reads this value and uses it to initiate the Kinde login
// flow for the user.
export function makeAuthenticateMeta(message = 'Sign in required') {
return [
`Bearer resource_metadata="${MCP_SERVER_URL}/mcp/.well-known/oauth-protected-resource", ` +
`error="insufficient_scope", error_description="${message}"`,
];
}
export async function requireAuthForTool(args: any, context?: any) {
const token = await extractTokenFromArgs(args, context);
if (!token) {
// No token found anywhere. Return an MCP error with the WWW-Authenticate
// challenge. ChatGPT reads the mcp/www_authenticate metadata field, fetches
// your discovery endpoint, and redirects the user to Kinde to sign in.
return {
isError: true,
content: [{ type: 'text', text: 'Please sign in to continue.' }],
_meta: { 'mcp/www_authenticate': makeAuthenticateMeta() },
};
}
try {
// Validate the token first; fetch the profile only after validation
// succeeds, so an invalid token doesn't trigger a wasted profile request.
const payload = await validateKindeToken(token).catch(() => null);
if (!payload) {
// Token arrived but failed cryptographic validation. Likely expired or tampered.
return {
isError: true,
content: [{ type: 'text', text: 'Invalid token. Please sign in again.' }],
_meta: { 'mcp/www_authenticate': makeAuthenticateMeta('Invalid token') },
};
}
const profile = await getKindeUserProfile(token).catch(() => null);
// Validation passed. Return the verified identity to the caller.
return { ok: true, token, profile, payload };
} catch (err: any) {
return {
isError: true,
content: [{ type: 'text', text: 'Authentication failed. Please sign in again.' }],
_meta: { 'mcp/www_authenticate': makeAuthenticateMeta('Authentication failed') },
};
}
}
The _meta['mcp/www_authenticate'] field follows RFC 6750 (Bearer Token Usage). When ChatGPT receives a tool response containing this field, it treats the response as an authentication challenge: it fetches your discovery endpoint, reads authorization_servers, redirects the user to Kinde, and re-calls the tool with the resulting token.
Your tool handler never needs to know whether a call was an initial attempt or a post-authentication retry. It calls requireAuthForTool at the top and proceeds if ok is true:
async (args, context) => {
const auth = await requireAuthForTool(args, context);
if ((auth as any).isError) {
return auth; // Returns the auth challenge to ChatGPT
}
const { profile, payload } = auth as any;
// User is authenticated — proceed with tool logic
}
Linking OAuth Identities to Convex Users
When a user authenticates, you map their Kinde identity to a Convex user. The upsertLinkedAccount mutation from Section 4 handles this. In your MCP tool handlers, call it like this:
const linkedUserId = await callConvexMutation(
api.users.upsertLinkedAccount,
{
provider: 'kinde',
providerUserId: String(payload.sub), // e.g. "kinde|2151678548"
email: profile?.email,
displayName:
`${profile?.given_name || ''} ${profile?.family_name || ''}`.trim() ||
profile?.email ||
'Anonymous',
avatarUrl: profile?.picture,
}
);
const userId = String(linkedUserId);
The flow is: Kinde JWT arrives with sub: "kinde|2151678548" -> check linkedAccounts for that provider/subject pair -> if found, return the existing userId and update lastSeenAt -> if not found, create a new user and linked account -> return the new userId.
This means a player authenticating for the first time gets a new Convex user created automatically. The same player returning gets their existing account, preserving all their scores and replays.
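The branching in that flow can be modeled in isolation. This is a pure-TypeScript sketch, not the real Convex mutation: two in-memory Maps stand in for the users and linkedAccounts tables, and the field names are simplified.

```typescript
// Model of the identity-linking flow. The real logic lives in the Convex
// upsertLinkedAccount mutation; here in-memory Maps stand in for the users
// and linkedAccounts tables so only the branching is visible.
type User = { id: string; displayName: string; lastSeenAt: number };

const users = new Map<string, User>();
const linkedAccounts = new Map<string, string>(); // "provider:subject" -> userId
let nextId = 1;

function upsertLinkedAccount(
  provider: string,
  providerUserId: string,
  displayName: string,
  now = Date.now()
): string {
  const key = `${provider}:${providerUserId}`;
  const existingUserId = linkedAccounts.get(key);
  if (existingUserId) {
    // Returning player: refresh lastSeenAt and keep the same userId so
    // existing scores and replays stay attached to the account.
    users.get(existingUserId)!.lastSeenAt = now;
    return existingUserId;
  }
  // First sign-in: create the user and the provider link together.
  const id = `user_${nextId++}`;
  users.set(id, { id, displayName, lastSeenAt: now });
  linkedAccounts.set(key, id);
  return id;
}
```

Calling it twice with the same provider/subject pair returns the same userId; a different subject creates a fresh user.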
Token Extraction in the POST Handler
The strategies in extractTokenFromArgs handle finding a token once the MCP handler is already running. But some transports consume the request body before the handler sees it, meaning the token in the Authorization header has no corresponding context to land in. This middleware solves that by reading the token and the request IDs from the raw body before the handler runs, storing each pairing in mcpRequestMap so Strategy 4 can find them later.
Add this to mcp/route.ts before the handler export. You'll see the complete version again, wired to setLastAuthToken, when the route is assembled later:
export async function POST(req: Request) {
let clonedReq = req;
try {
const authHeader =
req.headers.get('Authorization') || req.headers.get('authorization');
if (authHeader && typeof authHeader === 'string' && authHeader.startsWith('Bearer ')) {
const token = authHeader.substring(7);
const bodyText = await req.text();
if (bodyText) {
let parsed: any = null;
try {
parsed = JSON.parse(bodyText);
} catch (e) {
// Body is not JSON — skip ID collection
}
const idsToCheck: string[] = [];
// collectIds walks the parsed body recursively because request IDs can
// appear at different nesting depths depending on the MCP transport and
// the JSON-RPC batch format. A shallow check would miss IDs nested inside
// params or method objects.
function collectIds(obj: any) {
if (!obj || typeof obj !== 'object') return;
for (const k of Object.keys(obj)) {
if (k === 'requestId' || k === 'sessionId' || k === 'request_id' || k === 'id') {
const v = obj[k];
if (typeof v === 'string') idsToCheck.push(v);
} else if (typeof obj[k] === 'object') {
collectIds(obj[k]);
}
}
}
collectIds(parsed);
for (const id of idsToCheck) {
setTokenForRequestId(id, token);
}
// The request body can only be read once. Clone the request with the
// already-read body text so the MCP handler can read it again normally.
clonedReq = new Request(req.url, {
method: req.method,
headers: req.headers,
body: bodyText,
});
}
}
return await handler(clonedReq);
} catch (error) {
throw error;
}
}
Validation Endpoint for Debugging
Create app/api/mcp/validate-token/route.ts to test token validation manually during development:
import { NextResponse } from 'next/server';
import { validateKindeToken, getKindeUserProfile } from '@/app/lib/kinde';
export async function POST(req: Request) {
try {
const authHeader = req.headers.get('authorization') || '';
const tokenFromHeader = authHeader.startsWith('Bearer ')
? authHeader.replace('Bearer ', '')
: undefined;
const body = await req.json().catch(() => ({}));
const token = tokenFromHeader || body.token;
if (!token) {
return NextResponse.json({ error: 'No token provided' }, { status: 401 });
}
const payload = await validateKindeToken(token);
let profile = null;
try {
profile = await getKindeUserProfile(token);
} catch (e) {
// Non-fatal. The payload alone is enough to confirm the token is valid.
}
return NextResponse.json({ ok: true, payload, profile });
} catch (err: any) {
return NextResponse.json({ error: err?.message ?? String(err) }, { status: 401 });
}
}
Test it with a token obtained from Kinde's OAuth Playground or a manual authorization flow:
curl -X POST http://localhost:3000/api/mcp/validate-token \
-H "Authorization: Bearer eyJhbGciOiJSUzI1NiIs..." \
-H "Content-Type: application/json"
# Success:
# { "ok": true, "payload": { "sub": "kinde|...", "email": "..." }, "profile": {...} }
# Expired token:
# { "error": "Token has expired" }
Security Checklist
Before shipping, verify these practices are in place:
Server-side validation only. Never trust user-provided identity claims. Always call validateKindeToken on the server.
Verify both issuer and audience. Without audience checking, a token issued for a different app would pass validation.
JWKS caching. createRemoteJWKSet handles HTTP caching internally. Do not create a new instance per request.
Fail fast on missing config. Throw at startup if KINDE_ISSUER_URL or MCP_AUDIENCE are missing rather than failing silently on the first real request.
Graceful error responses. Return isError: true with a user-friendly message rather than exposing stack traces or token details.
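The fail-fast item can be made concrete with a tiny helper run at module load. requireEnv is a hypothetical name, not part of the tutorial's code:

```typescript
// Hedged sketch of the fail-fast pattern. The env parameter is injectable
// only to make the function easy to exercise outside a real process.
function requireEnv(
  name: string,
  env: Record<string, string | undefined> = process.env
): string {
  const value = env[name];
  if (!value) {
    // Throwing at module load surfaces the misconfiguration immediately,
    // instead of as an opaque JWKS or audience error on the first tool call.
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// At the top of app/lib/kinde.ts you might then write:
// const KINDE_ISSUER_URL = requireEnv("KINDE_ISSUER_URL");
// const MCP_AUDIENCE = requireEnv("MCP_AUDIENCE");
```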
Building the MCP Integration
The MCP route is the core of your ChatGPT integration. It registers tools, handles authentication, calls Convex, and returns widgets. Everything from Sections 3 through 6 comes together here.
Convex HTTP Client
MCP tool handlers run in Next.js server context, not React. You cannot use useQuery or useMutation. Instead, use the Convex HTTP client directly. Create app/lib/convex.ts:
import { ConvexHttpClient } from "convex/browser";
// Singleton: ConvexHttpClient maintains a connection pool internally.
// Creating a new instance per request wastes those connections and
// adds latency to cold starts. One client shared across all requests
// is the correct pattern.
let client: ConvexHttpClient | null = null;
export function getConvexClient() {
if (!client) {
const url = process.env.NEXT_PUBLIC_CONVEX_URL;
if (!url) throw new Error("NEXT_PUBLIC_CONVEX_URL is not set");
client = new ConvexHttpClient(url);
}
return client;
}
export async function callConvexMutation<T>(
fn: any,
args: Record<string, any>
): Promise<T> {
return getConvexClient().mutation(fn, args) as Promise<T>;
}
export async function callConvexQuery<T>(
fn: any,
args: Record<string, any>
): Promise<T> {
return getConvexClient().query(fn, args) as Promise<T>;
}
Widget HTML Generation
Before registering tools, you need a helper to render widget HTML. The imports below pull in everything the MCP route depends on. zodToJsonSchema is included here because the MCP SDK expects tool input schemas in JSON Schema format, not Zod format. zodToJsonSchema converts your Zod definitions at registration time so you get Zod's type safety when writing schemas and valid JSON Schema in the manifest ChatGPT reads.
import {
createMcpHandler,
experimental_withMcpAuth,
getAppsSdkCompatibleHtml,
} from "mcp-handler";
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { api } from "@/convex/_generated/api";
import { Id } from "@/convex/_generated/dataModel";
import { callConvexMutation, callConvexQuery } from "@/app/lib/convex";
import {
extractTokenFromArgs,
requireAuthForTool,
setTokenForRequestId,
} from "@/app/lib/mcpAuth";
import { setLastAuthToken } from "@/app/lib/mcpRequestState";
import { baseURL } from "@/lib/baseURL";
// getAppsSdkCompatibleHtml wraps a URL in the HTML structure ChatGPT expects
// for interactive widgets. It sets the MIME type to text/html+skybridge, which
// tells ChatGPT to render the response as a sandboxed iframe rather than
// displaying it as plain text.
function makeWidgetHtml(path: string, params: Record<string, string> = {}) {
const url = new URL(path, baseURL);
Object.entries(params).forEach(([k, v]) => url.searchParams.set(k, v));
return getAppsSdkCompatibleHtml(url.toString());
}
MCP Server Setup
const server = new Server(
{ name: "tetris-server", version: "1.0.0" },
{ capabilities: { tools: {} } }
);
Tool Registration: start_game
server.registerTool(
"start_game",
{
description:
"Start a new Tetris game. Returns an interactive widget for the user to play. " +
"Authentication is optional — anonymous users can play but won't appear on the leaderboard.",
inputSchema: zodToJsonSchema(
z.object({
public: z.boolean().optional()
.describe("Whether to show this game on the leaderboard. Default false."),
seed: z.number().optional()
.describe("Random seed for reproducible piece sequences."),
})
),
// Two security schemes implement OR logic: ChatGPT calls this tool with
// whatever auth state it currently has. If the user is signed in, the token
// arrives and we link a Convex user. If not, the game starts anonymously.
// A single scheme would make authentication required and block anonymous play.
securitySchemes: [
{ type: "noauth" },
{ type: "oauth2", scopes: ["openid", "profile", "email"] },
],
},
async (args: any, context?: any) => {
let userId: string | undefined;
// Attempt auth but don't require it
const token = await extractTokenFromArgs(args, context);
if (token) {
try {
const authResult = await requireAuthForTool(args, context);
if (!(authResult as any).isError) {
const { profile, payload } = authResult as any;
const linkedId = await callConvexMutation(
api.users.upsertLinkedAccount,
{
provider: "kinde",
providerUserId: String(payload.sub),
email: profile?.email,
displayName:
`${profile?.given_name || ""} ${profile?.family_name || ""}`.trim() ||
profile?.email ||
"Anonymous",
avatarUrl: profile?.picture,
}
);
userId = String(linkedId);
}
} catch (err) {
console.error("[start_game] Auth error (proceeding anonymously):", err);
}
}
const gameId = await callConvexMutation(api.games.createGame, {
userId: userId ? (userId as Id<"users">) : undefined,
public: args.public ?? false,
seed: args.seed,
});
const widget = makeWidgetHtml("/tetris/play", {
gameId: String(gameId),
});
return {
content: [{ type: "text", text: widget }],
// structuredContent is returned alongside the widget HTML so that
// GameBoard.tsx can extract the gameId from the callTool response
// without parsing the HTML. See the start() function in GameBoard.tsx.
structuredContent: { gameId: String(gameId) },
};
}
);
Tool Registration: finish_game
server.registerTool(
"finish_game",
{
description:
"Record the final score for a completed Tetris game and save the replay. " +
"Call this when the player's game ends.",
inputSchema: zodToJsonSchema(
z.object({
gameId: z.string().describe("The game ID returned by start_game."),
score: z.number().describe("Final score."),
level: z.number().describe("Level reached."),
linesCleared: z.number().describe("Total lines cleared."),
replayActions: z
.array(
z.object({
t: z.number().describe("Milliseconds since game start."),
a: z.string().describe("Action code: L, R, D, ROT, HD, START."),
d: z.any().optional(),
})
)
.optional()
.describe("Compact action log for replay playback."),
durationMs: z.number().optional().describe("Total game duration in ms."),
})
),
securitySchemes: [
{ type: "noauth" },
{ type: "oauth2", scopes: ["openid", "profile", "email"] },
],
},
async (args: any, context?: any) => {
try {
const result = await callConvexMutation(api.games.finishGame, {
gameId: args.gameId as Id<"games">,
// Number() coercion is defensive: ChatGPT's LLM occasionally serializes
// numeric values as strings when constructing tool arguments. Coercing
// explicitly here prevents type errors in the Convex mutation.
score: Number(args.score),
level: Number(args.level),
linesCleared: Number(args.linesCleared),
replayActions: args.replayActions ?? [],
durationMs: args.durationMs ?? 0,
});
const score = Number(args.score);
const lines = Number(args.linesCleared);
const level = Number(args.level);
const summary =
`Game over! Final score: ${score.toLocaleString()} | ` +
`Level: ${level} | Lines cleared: ${lines}. ` +
(args.gameId ? `Replay saved (ID: ${String(result?.replayId ?? "").slice(0, 8)}...). ` : "") +
(score > 10000 ? "Excellent game!" : score > 5000 ? "Nice run!" : "Keep practicing!");
return {
content: [{ type: "text", text: summary }],
structuredContent: result,
};
} catch (err: any) {
// Graceful fallback: always return a readable message rather than
// letting the error surface as a raw exception in ChatGPT's UI.
return {
content: [
{
type: "text",
text: `Score recorded locally: ${args.score}. (Save failed: ${err?.message ?? String(err)})`,
},
],
};
}
}
);
Tool Registration: get_leaderboard
server.registerTool(
"get_leaderboard",
{
description: "Get the current Tetris leaderboard widget showing top scores.",
inputSchema: zodToJsonSchema(
z.object({
limit: z.number().optional()
.describe("Number of entries to show. Default 10, max 25."),
})
),
// Read-only, public data. No auth needed and requiring it would add
// unnecessary friction for a tool that reveals nothing sensitive.
securitySchemes: [{ type: "noauth" }],
},
async (args: any, _context?: any) => {
const limit = Math.min(Number(args.limit ?? 10), 25);
try {
const topScores = await callConvexQuery(api.leaderboards.getTop, { limit });
const widget = makeWidgetHtml("/tetris/leaderboard", {
limit: String(limit),
});
return {
content: [{ type: "text", text: widget }],
structuredContent: { topScores },
};
} catch (err: any) {
return {
content: [{ type: "text", text: `Failed to fetch leaderboard: ${err?.message}` }],
};
}
}
);
Tool Registration: view_replay
server.registerTool(
"view_replay",
{
description: "Watch a recorded Tetris game replay.",
inputSchema: zodToJsonSchema(
z.object({
replayId: z.string().describe("The replay ID to watch."),
})
),
securitySchemes: [{ type: "noauth" }],
},
async (args: any, _context?: any) => {
try {
const replay = await callConvexQuery(api.replays.getReplay, {
replayId: args.replayId as Id<"replays">,
});
if (!replay) {
return { content: [{ type: "text", text: "Replay not found." }] };
}
const widget = makeWidgetHtml("/tetris/replay", {
replayId: args.replayId,
});
return {
content: [{ type: "text", text: widget }],
structuredContent: {
replayId: args.replayId,
score: replay.finalScore,
level: replay.finalLevel,
duration: replay.durationMs,
},
};
} catch (err: any) {
return {
content: [{ type: "text", text: `Failed to load replay: ${err?.message}` }],
};
}
}
);
Wiring Up the Handler and Exports
With all four tools registered, the final step is exporting the route handler that Next.js calls for every incoming request. There is one problem to solve first: the Authorization header that ChatGPT sends with authenticated requests needs to reach your tool handlers, but by the time those handlers execute, the original Request object has already been consumed by the MCP SDK's request parsing. The header is gone.
The solution is a thin middleware layer inside the POST export. Before the request reaches the MCP handler, this middleware reads the Authorization header, walks the JSON body to find every ID field, and registers the token against each ID in mcpRequestMap. When extractTokenFromArgs runs inside your tool handler, Strategy 4 finds the token via the matching request ID.
const handler = createMcpHandler(server);
export async function POST(req: Request) {
let clonedReq = req;
try {
const authHeader =
req.headers.get("Authorization") || req.headers.get("authorization");
if (authHeader?.startsWith("Bearer ")) {
const token = authHeader.substring(7);
// Store as the most recently seen token for the last-resort fallback
// (Strategy 5 in extractTokenFromArgs)
setLastAuthToken(token);
// req.body is a ReadableStream that can only be consumed once.
// We read it here, before the MCP handler sees it, so we can extract
// request IDs. The request is then reconstructed below with the same
// body text so the handler can read it normally.
const bodyText = await req.text();
if (bodyText) {
let parsed: any = null;
try { parsed = JSON.parse(bodyText); } catch (e) {}
// Walk the parsed body recursively. Request IDs can appear at different
// nesting depths depending on the MCP transport and JSON-RPC batch
// format. A shallow check would miss IDs nested inside params objects.
// We cast a wide net across all known ID field names because different
// MCP versions use different conventions.
function collectIds(obj: any) {
if (!obj || typeof obj !== "object") return;
for (const k of Object.keys(obj)) {
if (["requestId", "sessionId", "request_id", "id"].includes(k)) {
if (typeof obj[k] === "string") setTokenForRequestId(obj[k], token);
} else if (typeof obj[k] === "object") {
collectIds(obj[k]);
}
}
}
collectIds(parsed);
// Reconstruct a fresh Request with the already-read body text.
// Without this, the MCP handler receives a Request with an exhausted
// body stream and fails to parse the incoming tool call.
clonedReq = new Request(req.url, {
method: req.method,
headers: req.headers,
body: bodyText,
});
}
}
return await handler(clonedReq);
} catch (error) {
throw error;
}
}
// GET handles MCP capability discovery. When you register the connector in
// ChatGPT, it makes a GET request to your MCP endpoint to fetch the tool
// manifest: the list of available tools, their descriptions, and their
// input schemas. The same handler serves both purposes.
export const GET = handler;
MCP Documentation Endpoint
Create app/mcp-docs/page.tsx. This page is referenced in your environment variables and appears when users ask ChatGPT to explain your app's capabilities:
export default function McpDocsPage() {
return (
<main style={{ fontFamily: "monospace", padding: "2rem", maxWidth: "600px" }}>
<h1>Tetris ChatGPT App — MCP Documentation</h1>
<h2>Available Tools</h2>
<ul>
<li>
<strong>start_game</strong> — Starts a new Tetris game and returns a
playable widget. Optional: <code>public</code> (leaderboard), <code>seed</code>{" "}
(reproducible sequence).
</li>
<li>
<strong>finish_game</strong> — Records the final score and saves the
replay. Requires <code>gameId</code>, <code>score</code>,{" "}
<code>level</code>, <code>linesCleared</code>.
</li>
<li>
<strong>get_leaderboard</strong> — Returns the top scores widget.
Optional: <code>limit</code> (default 10).
</li>
<li>
<strong>view_replay</strong> — Renders a recorded game replay.
Requires <code>replayId</code>.
</li>
</ul>
<h2>Authentication</h2>
<p>
All tools support anonymous access. Signing in via Kinde enables leaderboard
entries and replay attribution.
</p>
</main>
);
}
Tool Design Principles
A few patterns from this implementation are worth keeping in mind for any MCP tool you build.
Defensive argument coercion. Use Number(args.score) and String(args.gameId) rather than trusting the types ChatGPT sends. The LLM occasionally serializes numeric values as strings when constructing tool arguments, and a type mismatch in a Convex mutation will throw rather than coerce silently.
Structured content alongside widget HTML. Return structuredContent with key values even when the primary content is a widget. This lets GameBoard.tsx extract gameId directly from the callTool response and lets callers inspect results programmatically without parsing HTML.
Graceful degradation in every handler. Wrap Convex calls in try/catch and return a meaningful error message rather than throwing. ChatGPT surfaces unhandled tool errors poorly; a friendly fallback keeps the user experience smooth even when the backend is unreachable.
Minimal auth in read-only tools. For get_leaderboard and view_replay, skip auth entirely rather than attempting token extraction. These tools are read-only and public; adding auth would introduce friction and failure modes with no security benefit.
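As a concrete sketch of the coercion principle, a hypothetical toFiniteNumber helper (not part of this codebase or the MCP SDK) normalizes whatever ChatGPT sends:

```typescript
// Sketch of defensive argument coercion. Number() accepts both numbers and
// numeric strings; Number.isFinite rejects NaN and Infinity so a garbled
// argument degrades to a fallback instead of throwing downstream.
function toFiniteNumber(value: unknown, fallback = 0): number {
  const n = Number(value);
  return Number.isFinite(n) ? n : fallback;
}

// ChatGPT occasionally sends numbers as strings; both forms normalize:
toFiniteNumber("12500"); // 12500
toFiniteNumber(12500);   // 12500
toFiniteNumber("oops");  // 0 — a garbled argument degrades gracefully
```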
Verifying the MCP Route
Test the tool listing endpoint locally:
curl -X POST http://localhost:3000/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
Expected response:
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"tools": [
{ "name": "start_game", "description": "Start a new Tetris game...", "inputSchema": {...} },
{ "name": "finish_game", "description": "Record the final score...", "inputSchema": {...} },
{ "name": "get_leaderboard", "description": "Get the current Tetris leaderboard...", "inputSchema": {...} },
{ "name": "view_replay", "description": "Watch a recorded Tetris game replay.", "inputSchema": {...} }
]
}
}
If you see all four tools, the MCP server is configured correctly. If you see an empty array, check that all server.registerTool calls come before createMcpHandler(server).
Building the Supporting Features
With the game engine and MCP integration complete, this section builds the pages users actually see: the landing page, game interface, leaderboard, and replay viewer.
Landing Page
Create app/page.tsx. The landing page does two things: it renders navigation cards to the three main features, and it reads any tool output passed by ChatGPT to personalise the greeting. useWidgetProps is a hook from the Vercel ChatGPT Apps SDK that gives you access to the structured output from the MCP tool that opened this widget. If the user is signed in and start_game returned a name field in its structuredContent, the greeting will address them by name rather than showing the default copy.
"use client";
import Link from "next/link";
import { useRouter } from "next/navigation";
import { Button } from "@/components/ui/button";
import { Card, CardHeader, CardTitle, CardDescription } from "@/components/ui/card";
import { Play, Film, Trophy } from "lucide-react";
import { useWidgetProps } from "./hooks";
export default function Home() {
const router = useRouter();
// useWidgetProps reads the structured output that the MCP tool passed when
// it opened this widget. If start_game returned a name in structuredContent,
// we use it here to personalise the greeting. If not, we fall back to the
// default tagline.
const toolOutput = useWidgetProps<{
name?: string;
result?: { structuredContent?: { name?: string } };
}>();
const name = toolOutput?.result?.structuredContent?.name || toolOutput?.name;
return (
<div className="min-h-screen bg-gradient-to-b from-slate-50 to-white dark:from-slate-900 dark:to-slate-800">
<main className="container mx-auto px-4 py-12 max-w-4xl">
<div className="text-center mb-8">
<h1 className="text-5xl font-extrabold text-slate-900 dark:text-white mb-3">Tetris</h1>
<p className="text-lg text-slate-600 dark:text-slate-300">
{name
? `Hi ${name}, ready to play?`
: "Play classic Tetris in your browser — save replays and climb the leaderboard."}
</p>
</div>
<div className="grid md:grid-cols-3 gap-6 mb-12">
<Card className="hover:shadow-lg transition-shadow">
<CardHeader>
<div className="flex items-center justify-center w-12 h-12 rounded-full bg-blue-100 dark:bg-blue-900/50 mb-4">
<Play className="w-6 h-6 text-blue-600 dark:text-blue-400" />
</div>
<CardTitle>Play</CardTitle>
<CardDescription>Start a new Tetris game and save replays when you finish.</CardDescription>
</CardHeader>
</Card>
<Card className="hover:shadow-lg transition-shadow">
<CardHeader>
<div className="flex items-center justify-center w-12 h-12 rounded-full bg-green-100 dark:bg-green-900/50 mb-4">
<Film className="w-6 h-6 text-green-600 dark:text-green-400" />
</div>
<CardTitle>Replays</CardTitle>
<CardDescription>View recent replays and replay your best runs.</CardDescription>
</CardHeader>
</Card>
<Card className="hover:shadow-lg transition-shadow">
<CardHeader>
<div className="flex items-center justify-center w-12 h-12 rounded-full bg-purple-100 dark:bg-purple-900/50 mb-4">
<Trophy className="w-6 h-6 text-purple-600 dark:text-purple-400" />
</div>
<CardTitle>Leaderboard</CardTitle>
<CardDescription>See the top scores and compete for the highest rank.</CardDescription>
</CardHeader>
</Card>
</div>
<div className="text-center space-y-6">
<div className="flex justify-center gap-4">
<Link href="/tetris/play">
<Button size="lg" className="gap-2">
<Play className="w-5 h-5" />
Play Now
</Button>
</Link>
<Link href="/tetris/replays">
<Button size="lg" variant="outline" className="gap-2">
<Film className="w-5 h-5" />
Replays
</Button>
</Link>
<Link href="/tetris/leaderboard">
<Button size="lg" variant="ghost" className="gap-2">
<Trophy className="w-5 h-5" />
Leaderboard
</Button>
</Link>
</div>
</div>
</main>
<footer className="mt-16 py-6 border-t border-slate-200 dark:border-slate-800">
<div className="container mx-auto px-4 text-center text-slate-500 dark:text-slate-400">
<p>Play Tetris — save replays and compete on the leaderboard</p>
</div>
</footer>
</div>
);
}
Game Page
Create app/tetris/play/page.tsx. This is a thin wrapper that mounts the GameBoard component. All game logic lives in the component itself; the page just provides the route and a heading.
"use client";
import React from "react";
import GameBoard from "@/components/tetris/GameBoard";
export default function PlayPage() {
return (
<main className="p-6">
<h1 className="text-2xl font-bold mb-4">Play Tetris in ChatGPT</h1>
<GameBoard />
</main>
);
}
Leaderboard Component
Create components/tetris/Leaderboard.tsx. Because Convex does not perform relational joins natively, this component uses two separate queries: one for the leaderboard entries and one to look up the user records for each entry. The results are joined client-side using a Map. Both queries are live subscriptions, so the table refreshes automatically for everyone viewing it the moment any player finishes a game.
"use client";
import React from 'react';
import { useQuery } from 'convex/react';
import { api } from '@/convex/_generated/api';
export default function Leaderboard() {
const entries = useQuery(api.leaderboards.listTop, { limit: 20 }) || [];
const userIds = entries.map((e: any) => e.userId).filter(Boolean);
// "skip" is a Convex sentinel value that tells useQuery not to run the query
// at all. Without it, passing an empty userIds array would fire a query that
// returns nothing useful and produces a loading state on every initial render.
const users = useQuery(
api.users.getMultipleById,
userIds.length > 0 ? { userIds } : "skip"
);
// Build a lookup map so each entry can find its user in O(1) rather than
// scanning the users array on every render.
const userMap = new Map();
if (users) {
users.forEach((user: any) => {
if (user) userMap.set(user._id, user);
});
}
return (
<div className="max-w-2xl mx-auto p-4">
<h2 className="text-2xl font-bold mb-4">Leaderboard</h2>
<ol className="list-decimal pl-6 space-y-2">
{entries.map((e: any, idx: number) => {
const user = userMap.get(e.userId);
const displayName = user
? (user.displayName || `${user.firstName || ''} ${user.lastName || ''}`.trim() || user.email)
: 'Anonymous';
return (
<li key={e._id} className="flex justify-between">
<div>{displayName}</div>
<div>{e.score}</div>
</li>
);
})}
</ol>
</div>
);
}
Leaderboard Page
Create app/tetris/leaderboard/page.tsx. This is a server component that mounts the Leaderboard component at the /tetris/leaderboard route.
import React from "react";
import Leaderboard from "@/components/tetris/Leaderboard";
export default function LeaderboardPage() {
return (
<main className="p-6">
<h1 className="text-2xl font-bold mb-4">Leaderboard</h1>
<Leaderboard />
</main>
);
}
Replay Viewer Component
Create components/tetris/ReplayViewer.tsx. This file contains two components: ReplayPlayer, which replays a single game from its action log, and ReplayViewer, which handles loading and selection. The key insight in replay playback is that rather than storing 20,000 board snapshots, you store a few hundred action codes and replay them at their original timestamps divided by the speed multiplier. A two-minute game replays in one minute at 2x speed simply by halving each inter-action delay.
ReplayPlayer
ReplayPlayer re-executes the stored action log against a fresh copy of the game engine, advancing state action by action at the original timings. A few design decisions are worth understanding before reading the code.
makeRng implements a seeded pseudo-random number generator (PRNG) using a Mulberry32 algorithm. Math.random() cannot be seeded, so there is no way to reproduce the same piece sequence across two independent runs. By seeding the PRNG with the same value that was used during the original game, the replay generates the exact same piece order the player experienced.
The boardRef, currentRef, and scoreRef pattern mirrors the useRef approach in GameBoard. State updates trigger re-renders, but the applyAction function is called rapidly inside scheduleNext and needs to read the current board and piece synchronously between calls. Refs give it that direct access without waiting for a render cycle.
scheduleNext is a recursive timeout function rather than a setInterval. Each call reads the gap between the current action's timestamp and the next one and schedules itself for exactly that delay divided by the playback speed. This reproduces the original timing precisely, including fast sequences and natural pauses, which a fixed interval cannot do.
"use client";
import React, { useEffect, useRef, useState } from 'react';
import { useQuery } from 'convex/react';
import { api } from '@/convex/_generated/api';
import { Id } from '@/convex/_generated/dataModel';
const WIDTH = 10;
const HEIGHT = 20;
const PIECES: Record<string, number[][]> = {
I: [[1, 1, 1, 1]],
O: [[1, 1], [1, 1]],
T: [[0, 1, 0], [1, 1, 1]],
S: [[0, 1, 1], [1, 1, 0]],
Z: [[1, 1, 0], [0, 1, 1]],
J: [[1, 0, 0], [1, 1, 1]],
L: [[0, 0, 1], [1, 1, 1]],
};
const PIECE_COLORS: Record<string, string> = {
I: "#00f0f0", O: "#f0f000", T: "#a000f0",
S: "#00f000", Z: "#f00000", J: "#0000f0", L: "#f0a000",
};
const PIECE_TYPES = Object.keys(PIECES);
// Mulberry32 seeded PRNG. Math.random() cannot be seeded, so it cannot
// reproduce a specific piece sequence. This function returns a callable
// that produces the same sequence every time it is initialized with the
// same seed — matching the sequence the player saw during the original game.
function makeRng(seed: number) {
let s = seed;
return function () {
s |= 0; s = s + 0x6D2B79F5 | 0;
let t = Math.imul(s ^ s >>> 15, 1 | s);
t = t + Math.imul(t ^ t >>> 7, 61 | t) ^ t;
return ((t ^ t >>> 14) >>> 0) / 4294967296;
};
}
function emptyBoard() {
return Array.from({ length: HEIGHT }, () => Array.from({ length: WIDTH }, () => 0));
}
function rotate(shape: number[][]) {
const h = shape.length, w = shape[0].length;
const out = Array.from({ length: w }, () => Array.from({ length: h }, () => 0));
for (let r = 0; r < h; r++)
for (let c = 0; c < w; c++)
out[c][h - 1 - r] = shape[r][c];
return out;
}
function canPlace(board: number[][], shape: number[][], x: number, y: number) {
for (let r = 0; r < shape.length; r++)
for (let c = 0; c < shape[0].length; c++) {
if (!shape[r][c]) continue;
const br = y + r, bc = x + c;
if (bc < 0 || bc >= WIDTH || br < 0 || br >= HEIGHT || board[br][bc]) return false;
}
return true;
}
function ReplayPlayer({ replay }: { replay: any }) {
const seed = replay.game?.seed ?? 0;
const [board, setBoard] = useState(emptyBoard());
const [current, setCurrent] = useState<any>(null);
const [score, setScore] = useState(0);
const [level, setLevel] = useState(1);
const [isPlaying, setIsPlaying] = useState(false);
const [playbackSpeed, setPlaybackSpeed] = useState(1);
// Refs hold the mutable game state that applyAction reads and writes
// between render cycles. Using state here would cause applyAction to
// close over stale values during rapid action sequences.
const boardRef = useRef(emptyBoard());
const currentRef = useRef<any>(null);
const scoreRef = useRef(0);
const actionIndexRef = useRef(0);
const playbackRef = useRef<ReturnType<typeof setTimeout> | null>(null);
const rngRef = useRef(makeRng(seed));
function spawnPiece(brd: number[][]) {
const type = PIECE_TYPES[Math.floor(rngRef.current() * PIECE_TYPES.length)];
const shape = PIECES[type].map((r) => [...r]);
const x = Math.floor((WIDTH - shape[0].length) / 2);
if (!canPlace(brd, shape, x, 0)) return;
const piece = { type, shape, x, y: 0 };
currentRef.current = piece;
setCurrent(piece);
}
function applyAction(action: { t: number; a: string; p?: any }) {
const brd = boardRef.current;
let cur = currentRef.current;
if (action.a === "START") {
const newBoard = emptyBoard();
boardRef.current = newBoard;
scoreRef.current = 0;
rngRef.current = makeRng(seed);
setBoard(newBoard);
setScore(0);
setLevel(1);
spawnPiece(newBoard);
return;
}
if (!cur) return;
if (action.a === "L" && canPlace(brd, cur.shape, cur.x - 1, cur.y)) {
cur = { ...cur, x: cur.x - 1 };
} else if (action.a === "R" && canPlace(brd, cur.shape, cur.x + 1, cur.y)) {
cur = { ...cur, x: cur.x + 1 };
} else if (action.a === "D" && canPlace(brd, cur.shape, cur.x, cur.y + 1)) {
cur = { ...cur, y: cur.y + 1 };
} else if (action.a === "ROT") {
const rotated = rotate(cur.shape);
if (canPlace(brd, rotated, cur.x, cur.y)) cur = { ...cur, shape: rotated };
} else if (action.a === "HD") {
// Hard drop: find the lowest valid y position by incrementing until
// canPlace fails, then lock the piece there immediately. Unlike a
// soft drop (action "D"), hard drop merges the piece into the board
// in a single step, clears any completed lines, and spawns the next piece.
let dropY = cur.y;
while (canPlace(brd, cur.shape, cur.x, dropY + 1)) dropY++;
cur = { ...cur, y: dropY };
const copy = brd.map((r) => [...r]);
for (let r = 0; r < cur.shape.length; r++)
for (let c = 0; c < cur.shape[0].length; c++)
if (cur.shape[r][c]) copy[cur.y + r][cur.x + c] = PIECE_TYPES.indexOf(cur.type) + 1;
const out: number[][] = [];
let cleared = 0;
for (let r = 0; r < HEIGHT; r++) {
if (copy[r].every((v) => v !== 0)) cleared++;
else out.push(copy[r]);
}
while (out.length < HEIGHT) out.unshift(Array.from({ length: WIDTH }, () => 0));
if (cleared > 0) {
scoreRef.current += cleared * 100;
setScore(scoreRef.current);
setLevel(Math.floor(scoreRef.current / 1000) + 1);
}
boardRef.current = out;
setBoard([...out]);
currentRef.current = null;
setCurrent(null);
spawnPiece(out);
return;
}
currentRef.current = cur;
setCurrent({ ...cur });
}
function startPlayback() {
if (!replay?.actions?.length) return;
actionIndexRef.current = 0;
boardRef.current = emptyBoard();
rngRef.current = makeRng(seed);
setBoard(emptyBoard());
setCurrent(null);
setScore(0);
setLevel(1);
setIsPlaying(true);
// scheduleNext is a recursive timeout rather than a setInterval because
// each action has a different delay: the gap between its timestamp and
// the next action's timestamp, divided by playback speed. A fixed interval
// would not reproduce natural timing variations in the original game.
function scheduleNext() {
const actions = replay.actions;
if (actionIndexRef.current >= actions.length) {
setIsPlaying(false);
return;
}
const curr = actions[actionIndexRef.current];
const next = actions[actionIndexRef.current + 1];
const delay = next ? (next.t - curr.t) / playbackSpeed : 500 / playbackSpeed;
applyAction(curr);
actionIndexRef.current++;
playbackRef.current = setTimeout(scheduleNext, Math.max(16, delay));
}
scheduleNext();
}
function stopPlayback() {
if (playbackRef.current) clearTimeout(playbackRef.current);
setIsPlaying(false);
}
useEffect(() => () => { if (playbackRef.current) clearTimeout(playbackRef.current); }, []);
const display = board.map((row, r) =>
row.map((cell, c) => {
if (current && r >= current.y && r < current.y + current.shape.length) {
const sr = r - current.y;
const sc = c - current.x;
if (sc >= 0 && sc < current.shape[0].length && current.shape[sr]?.[sc])
return PIECE_TYPES.indexOf(current.type) + 10;
}
return cell;
})
);
const cellPx = 20;
const displayName = replay.user?.displayName ?? replay.user?.firstName ?? replay.user?.email ?? "Anonymous";
return (
<div className="flex flex-col items-center gap-4 p-4 bg-slate-900 text-white">
<div className="text-lg font-bold text-cyan-400">{displayName}</div>
<div className="text-slate-400 text-sm flex gap-4">
<span>Score: {replay.game?.score?.toLocaleString() ?? "?"}</span>
<span>Level: {replay.game?.level ?? "?"}</span>
<span>Lines: {replay.game?.linesCleared ?? "?"}</span>
<span>Duration: {Math.round((replay.durationMs ?? 0) / 1000)}s</span>
</div>
<div
className="grid border border-slate-600"
style={{ gridTemplateColumns: `repeat(${WIDTH}, ${cellPx}px)` }}
>
{display.flatMap((row, r) =>
row.map((cell, c) => {
const colorIdx = cell >= 10 ? cell - 10 : cell > 0 ? cell - 1 : -1;
return (
<div
key={`${r}-${c}`}
style={{
width: cellPx,
height: cellPx,
background: colorIdx >= 0 ? PIECE_COLORS[PIECE_TYPES[colorIdx]] : "#0f172a",
border: "1px solid rgba(100,116,139,0.2)",
}}
/>
);
})
)}
</div>
<div className="flex gap-6 text-sm">
<div className="text-center">
<div className="text-slate-400">Score</div>
<div className="font-bold text-cyan-400">{score.toLocaleString()}</div>
</div>
<div className="text-center">
<div className="text-slate-400">Level</div>
<div className="font-bold text-purple-400">{level}</div>
</div>
</div>
<div className="flex gap-2 items-center">
<button
onClick={isPlaying ? stopPlayback : startPlayback}
className="px-4 py-2 bg-cyan-600 hover:bg-cyan-500 rounded-lg font-medium transition-colors"
>
{isPlaying ? "Stop" : "Play Replay"}
</button>
<select
title="Playback speed"
value={playbackSpeed}
onChange={(e) => setPlaybackSpeed(Number(e.target.value))}
className="px-3 py-2 bg-slate-700 rounded-lg"
>
<option value={0.5}>0.5x</option>
<option value={1}>1x</option>
<option value={2}>2x</option>
<option value={4}>4x</option>
</select>
</div>
</div>
);
}
ReplayViewer
ReplayViewer handles the outer shell: loading a specific replay by ID when one is provided, fetching the recent replay list when not, and managing which replay is selected for playback. Once a replay is selected, it hands off to ReplayPlayer.
export default function ReplayViewer({ replayId }: { replayId?: Id<"replays"> | string }) {
const replay = useQuery(
api.replays.getReplay,
replayId ? { replayId: replayId as Id<"replays"> } : "skip"
);
const recent = useQuery(api.replays.getRecentReplaysWithDetails, {});
const [selected, setSelected] = useState<any | null>(null);
useEffect(() => {
if (replay) setSelected(replay);
}, [replay]);
if (replayId && replay === undefined) {
return <div className="p-4">Loading replay...</div>;
}
if (replayId && replay === null) {
return <div className="p-4">Replay not found.</div>;
}
if (!replayId && !recent) {
return <div className="p-4">Loading recent replays...</div>;
}
if (!selected) {
if (recent?.length === 0) {
return <div className="p-4">No recent replays available.</div>;
}
return (
<div className="max-w-lg mx-auto p-4">
<h3 className="font-bold mb-2">Recent Replays</h3>
<ul className="space-y-2">
{recent?.map((r) => {
const name = r.user?.displayName ?? r.user?.firstName ?? r.user?.email ?? "Anonymous";
return (
<li key={r._id}>
<button
className="underline text-blue-600"
onClick={() => setSelected(r)}
>
{name} | score {r.game?.score?.toLocaleString() ?? "?"} · {r.actions?.length ?? 0} actions
</button>
</li>
);
})}
</ul>
</div>
);
}
return (
<div className="min-h-screen bg-slate-900">
<div className="text-center pt-4">
<button
className="text-cyan-400 hover:underline text-sm"
onClick={() => setSelected(null)}
>
Back to replay list
</button>
</div>
<ReplayPlayer replay={selected} />
</div>
);
}
Replays Page
Create app/tetris/replays/page.tsx. This mounts ReplayViewer without a replayId, which causes the component to display the recent replays list rather than jumping straight to a specific game.
import React from "react";
import ReplayViewer from "@/components/tetris/ReplayViewer";
export default function ReplaysPage() {
return (
<main className="p-6">
<h1 className="text-2xl font-bold mb-4">Replays</h1>
<ReplayViewer />
</main>
);
}
Verify All Routes
Start the dev server and confirm each route loads:
pnpm dev
Then check each URL:
http://localhost:3000 Landing page
http://localhost:3000/tetris/play Game (no gameId, anonymous start)
http://localhost:3000/tetris/leaderboard Live leaderboard
http://localhost:3000/tetris/replays Replay list
http://localhost:3000/tetris/replays?replayId=x Viewer (shows "not found" until a real ID exists)
http://localhost:3000/mcp-docs MCP documentation
If all six routes load without errors, the supporting features are wired up correctly.
Deploying to Vercel
Local development is working. This section gets everything running in production with the correct environment variables, Convex deployment, and ChatGPT connector registration.
Pre-Deployment Checklist
Before deploying, verify these items locally:
# 1. Build succeeds without errors
pnpm build
# 2. All environment variables are present
cat .env.local
# 3. Convex dev is running and schema is synced
pnpm convex dev
# 4. MCP route responds correctly
curl -X POST http://localhost:3000/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
# 5. OAuth discovery endpoint is accessible
curl http://localhost:3000/mcp/.well-known/oauth-protected-resource
Fix any build errors before continuing. Type errors and missing imports will cause the Vercel build to fail even if local dev works.
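For step 4 of the checklist, "responds correctly" means a JSON-RPC 2.0 envelope whose `result.tools` array lists your registered tools. The hypothetical helper below is not part of the app; it only documents the shape to look for in the curl output:

```typescript
// Minimal structural check for a JSON-RPC 2.0 tools/list response.
// Per the MCP spec, the tool definitions live under result.tools;
// an error response carries an "error" object and no "result".
function isToolsListResponse(body: unknown): boolean {
  const res = body as {
    jsonrpc?: string;
    id?: unknown;
    result?: { tools?: unknown };
  };
  return (
    res?.jsonrpc === "2.0" &&
    res?.id !== undefined &&
    Array.isArray(res?.result?.tools)
  );
}
```

If the curl output instead contains a top-level `error` field, the route is reachable but the handler rejected the request, which is a different failure mode than a 404 or 500.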
Deploy Convex to Production
Convex has separate dev and production environments. Your dev deployment runs locally, so you need a separate production deployment for Vercel:
pnpm convex deploy
This command pushes your schema and all functions to a production Convex deployment. After it completes, copy the production URL from the output. Production URLs use your project name rather than a random animal name, so they look like https://your-project-name.convex.cloud rather than the https://happy-animal-123.convex.cloud format you see in development.
Initial Vercel Deployment
vercel --prod
Vercel will ask which project to link to. Select the project you linked in Section 3. After deployment completes, copy your production URL, which looks something like https://tetris-chatgpt-app.vercel.app.
Note that this first deployment runs without your production environment variables. You will redeploy after adding them in the next step, which is why two vercel --prod commands appear in this section.
Configure Production Environment Variables
Your production deployment needs different values for several variables. Go to the Vercel dashboard, open your project, navigate to Settings, then Environment Variables, and add all of the following:
# Convex — use production values from pnpm convex deploy output
CONVEX_DEPLOYMENT=prod:your-project-name
NEXT_PUBLIC_CONVEX_URL=https://your-project-name.convex.cloud
NEXT_PUBLIC_CONVEX_HTTP_URL=https://your-project-name.convex.site
# Kinde — same values as local
KINDE_ISSUER=https://yourcompany.kinde.com
KINDE_CLIENT_ID=your-client-id
KINDE_CLIENT_SECRET=your-client-secret
# Vercel — hostnames only, no protocol. The baseURL helper (lib/baseURL.ts)
# prepends https:// itself, matching how Vercel sets these system variables.
VERCEL_PROJECT_PRODUCTION_URL=tetris-chatgpt-app.vercel.app
VERCEL_BRANCH_URL=tetris-chatgpt-app.vercel.app
VERCEL_URL=tetris-chatgpt-app.vercel.app
VERCEL_ENV=production
NODE_ENV=production
# MCP — use your production Vercel URL
MCP_AUDIENCE=https://tetris-chatgpt-app.vercel.app/mcp
MCP_RESOURCE=https://tetris-chatgpt-app.vercel.app
MCP_DOC_URL=https://tetris-chatgpt-app.vercel.app/mcp-docs
Or set them via CLI:
vercel env add CONVEX_DEPLOYMENT production
vercel env add NEXT_PUBLIC_CONVEX_URL production
vercel env add KINDE_ISSUER production
vercel env add KINDE_CLIENT_ID production
vercel env add KINDE_CLIENT_SECRET production
vercel env add VERCEL_PROJECT_PRODUCTION_URL production
vercel env add VERCEL_BRANCH_URL production
vercel env add VERCEL_URL production
vercel env add VERCEL_ENV production
vercel env add NODE_ENV production
vercel env add MCP_AUDIENCE production
vercel env add MCP_RESOURCE production
vercel env add MCP_DOC_URL production
Update Kinde Callback URLs
Go to Kinde, open your application, navigate to Settings, then Allowed callback URLs, and add your production URLs:
https://tetris-chatgpt-app.vercel.app/api/auth/callback
https://chatgpt.com/connector_platform_oauth_redirect
And in Allowed logout redirect URLs:
https://tetris-chatgpt-app.vercel.app
https://chatgpt.com
The chatgpt.com callback URL is what ChatGPT uses after OAuth completes. Without it, Kinde will reject the redirect and authentication will fail silently.
Redeploy with Production Variables
After setting environment variables, trigger a new deployment so the values take effect:
vercel --prod
Or push a commit to your main branch if you have connected GitHub.
Verify Production Endpoints
Once deployed, test every critical endpoint:
PROD_URL="https://tetris-chatgpt-app.vercel.app"
# MCP tools list
curl -X POST $PROD_URL/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
# OAuth discovery
curl $PROD_URL/mcp/.well-known/oauth-protected-resource
# Landing page loads
curl -I $PROD_URL
All three should return 200 status codes with the expected content.
Registering with ChatGPT
This is the step where your app becomes playable inside ChatGPT. You will enable Developer Mode, create the connector, and walk through the OAuth flow for the first time.
Enable Developer Mode
Go to Settings, then Connectors, then Advanced, and enable Developer Mode.
Important: Enabling Developer Mode automatically disables Memory. If you rely on ChatGPT's Memory feature, note this tradeoff before proceeding. Developer Mode is required for custom MCP connectors and is available on Plus, Pro, and Team plans.
Create the Connector
Go to Settings, then Connectors, then Create, and fill in the form:
Name: Tetris
Description: Play Tetris inside ChatGPT with real-time leaderboards
MCP Server URL: https://tetris-chatgpt-app.vercel.app/mcp
Authentication: OAuth
Check "I trust this application" and click Create.
How the OAuth Flow Works
Because your MCP server exposes the /.well-known/oauth-protected-resource endpoint and your tools declare securitySchemes, ChatGPT handles the OAuth flow automatically when a user invokes an authenticated tool for the first time.
The user gets redirected to your Kinde login page, authenticates, approves the requested scopes, and ChatGPT exchanges the authorization code for a token. From that point on, ChatGPT attaches Authorization: Bearer <token> to every MCP request, which is what your extractTokenFromArgs function reads.
Anonymous tools like get_leaderboard and view_replay work immediately without sign-in. Authenticated tools like start_game and finish_game trigger the sign-in flow on first use if the user is not already linked.
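The header parsing inside extractTokenFromArgs (defined in the MCP server section) boils down to reading a standard `Authorization: Bearer <token>` header. The sketch below is a simplified stand-in for that step, not the tutorial's exact implementation:

```typescript
// Extract the raw JWT from an "Authorization: Bearer <token>" header.
// Returns null when the header is absent or carries a different scheme
// (e.g. Basic), which is how anonymous tool calls are distinguished
// from authenticated ones.
function bearerToken(authHeader: string | null | undefined): string | null {
  if (!authHeader) return null;
  const match = /^Bearer\s+(.+)$/i.exec(authHeader.trim());
  return match ? match[1] : null;
}
```

A null result here is not an error for anonymous tools; only tools that declare securitySchemes should treat it as a reason to trigger sign-in.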
Test It
You: "Start a Tetris game"
ChatGPT: [OAuth prompt appears if not signed in, sign in, game widget renders]
You: "Show me the leaderboard"
ChatGPT: [calls get_leaderboard, renders leaderboard widget immediately, no sign-in needed]
If ChatGPT shows an error instead of a widget, check the Vercel function logs: Dashboard, your project, Functions, then click any invocation to see the full request and response.
The two most common issues are the chatgpt.com/connector_platform_oauth_redirect callback URL not being in your Kinde allowlist, and a missing code_challenge_methods_supported: ["S256"] field in your Kinde metadata. The S256 value refers to the PKCE (Proof Key for Code Exchange) challenge method, which ChatGPT requires for its OAuth flow. Kinde includes this by default, but if you are using a custom OAuth provider, verify it is present in your /.well-known/openid-configuration response.
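You can check for S256 support programmatically against the provider's discovery document. This hypothetical helper takes the parsed JSON of a `/.well-known/openid-configuration` response:

```typescript
// ChatGPT's OAuth flow requires the PKCE S256 challenge method, advertised
// in the discovery document's code_challenge_methods_supported array.
// Returns false when the field is missing entirely, which some custom
// providers omit even though they support PKCE.
function supportsS256(discovery: {
  code_challenge_methods_supported?: string[];
}): boolean {
  return discovery.code_challenge_methods_supported?.includes("S256") ?? false;
}
```

Fetch the discovery document with curl and paste it into a quick script if you suspect this is the failure: a provider that omits the field will fail ChatGPT's connector setup even when PKCE itself works.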
Finishing Up
Custom Domain (Optional)
If you have a domain, add it in Vercel:
vercel domains add yourdomain.com
Then update all environment variables and Kinde callback URLs to use the custom domain. The MCP connector registration in ChatGPT will also need updating to the new URL.
Environment Variable Reference
The complete variable list with descriptions, to help diagnose configuration issues:
# Convex
CONVEX_DEPLOYMENT # Deployment name (dev:... or prod:...)
NEXT_PUBLIC_CONVEX_URL # Full Convex URL (must start with https://)
# Kinde — all from your Kinde application settings page
KINDE_ISSUER # Your Kinde domain (no trailing slash)
KINDE_CLIENT_ID # Application client ID
KINDE_CLIENT_SECRET # Application client secret
# MCP — all must use your production URL in production
MCP_AUDIENCE # Full URL to /mcp route
MCP_RESOURCE # Root URL of your deployment
MCP_DOC_URL # URL to /mcp-docs page
The most common misconfiguration is using localhost values in production. MCP_AUDIENCE must match the resource field in your OAuth discovery endpoint. If these do not match, ChatGPT cannot complete the OAuth flow.
Production vs. Preview Deployments
Vercel creates a unique URL for every pull request (for example, tetris-chatgpt-app-git-feature-branch.vercel.app). These preview deployments use the same environment variables, but MCP_AUDIENCE is hardcoded to your production URL, so OAuth will not work in preview by default.
For preview deployments that need working auth, use the VERCEL_BRANCH_URL variable, which the baseURL helper in lib/baseURL.ts already handles:
// lib/baseURL.ts — resolves the correct base URL for each deployment type
export const baseURL =
process.env.NODE_ENV === "development"
? "http://localhost:3000"
: "https://" +
(process.env.VERCEL_ENV === "production"
? process.env.VERCEL_PROJECT_PRODUCTION_URL
: process.env.VERCEL_BRANCH_URL || process.env.VERCEL_URL);
The base URL resolves correctly for each deployment automatically. The remaining issue is that Kinde's allowed callback URLs do not include preview URLs. If you want preview auth to work, add a project-scoped wildcard such as https://tetris-chatgpt-app-*.vercel.app/api/auth/callback in Kinde's settings; avoid the bare https://*.vercel.app form, which would accept callbacks from any Vercel deployment, not just yours.
Monitoring and Logs
Vercel provides function-level logs accessible in the dashboard.
Build logs cover compilation errors, missing modules, and type errors. Check these first if a deployment fails.
Function logs cover runtime errors, timeouts, and unhandled exceptions. Each invocation is listed individually so you can inspect the exact request and response that caused an error.
Edge Network logs cover CORS issues and header problems. Check these if requests are being blocked before they reach your functions.
For Convex issues, the Convex dashboard at dashboard.convex.dev shows real-time function logs. Every mutation and query is logged with its arguments, return values, and execution time.
Troubleshooting
Even with everything configured correctly, you will hit issues. Here are the most common ones and how to fix them.
ChatGPT shows "action not found" or doesn't recognize your tools
Developer Mode is not enabled. Go to Settings → Connectors → Advanced and enable Developer Mode. This is required for custom MCP connectors and is only available on Plus, Pro, and Team plans.
If Developer Mode is already on, go to Settings → Connectors, find your connector in the list, and click the Refresh icon next to it. ChatGPT caches your MCP manifest and does not automatically discover new tools. A manual refresh is required whenever your tool list changes.
Widget renders blank or shows a white iframe
This is almost always a CORS issue. Verify your next.config.ts has the permissive headers configured. The three required headers belong inside the headers() function in your Next.js config:
// next.config.ts
const nextConfig = {
async headers() {
return [
{
source: "/(.*)",
headers: [
{ key: "Access-Control-Allow-Origin", value: "*" },
{ key: "Access-Control-Allow-Methods", value: "GET,POST,PUT,DELETE,OPTIONS" },
{ key: "Access-Control-Allow-Headers", value: "*" },
],
},
];
},
};
Also check that MCP_RESOURCE in your Vercel environment variables matches the exact URL ChatGPT is using to reach your app, including https:// and no trailing slash. A mismatch causes the OAuth discovery endpoint to return a resource URL that does not match the incoming request, which silently breaks widget rendering.
OAuth flow never completes (stuck on redirect)
The most common cause is a missing callback URL in Kinde. Go to your Kinde application settings and confirm both of these are in your Allowed Callback URLs:
https://your-app.vercel.app/api/auth/callback
https://chatgpt.com/connector_platform_oauth_redirect
The second URL is what ChatGPT uses to receive the authorization code after the user signs in. Without it, Kinde rejects the redirect and the user sees an error page instead of returning to ChatGPT.
If the callback URLs are correct and the flow still fails, check your MCP_AUDIENCE environment variable. It must exactly match the resource field in your /.well-known/oauth-protected-resource response. ChatGPT echoes this value as the resource parameter throughout the OAuth flow, and Kinde embeds it in the token's aud (audience) claim. When your tool handler calls validateKindeToken, it checks that aud matches MCP_AUDIENCE. If they differ by even a trailing slash, validation fails silently and every authenticated tool call returns an auth error.
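To see why a trailing slash matters, note that the audience comparison is plain string equality. The sketch below is a simplified stand-in for the check inside validateKindeToken, not the tutorial's exact code:

```typescript
// Per RFC 7519, the aud claim may be a single string or an array of
// audiences. Matching is exact string comparison: a trailing slash makes
// two otherwise-identical URLs different audiences, which is exactly the
// "fails silently" case described above.
function audMatches(tokenAud: string | string[], expected: string): boolean {
  return (Array.isArray(tokenAud) ? tokenAud : [tokenAud]).includes(expected);
}
```

When debugging, decode the token's payload and compare its aud value character-for-character against MCP_AUDIENCE rather than eyeballing the two URLs.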
Score is not saving (finish_game errors)
This usually means finish_game was called before start_game returned a gameId. To confirm, add a log line to the start() function in GameBoard.tsx after the callTool response arrives:
const gameIdToUse = (toolRes as any)?.structuredContent?.gameId;
console.log("Game ID captured:", gameIdToUse); // Add this line
if (gameIdToUse) setGameId(gameIdToUse);
If the log line is missing from the console, start_game either failed or the structuredContent.gameId field was not returned. Check your Vercel function logs for the start_game invocation and confirm your MCP route handler is returning both fields:
return {
content: [{ type: "text", text: widget }],
structuredContent: { gameId: String(gameId) },
};
Both fields are required. If structuredContent is absent, callTool returns no game ID and finish_game has nothing to save.
Replay plays wrong pieces (board diverges immediately)
The seeded RNG in ReplayViewer must start from the same position as the original game. Confirm that the START action is the first entry in actionsRef.current. If it is missing, ReplayPlayer never calls spawnPiece with a freshly seeded RNG and the first piece spawns at a different position in the random sequence than the player originally saw. The fix is to ensure actionsRef.current.push({ t: Date.now(), a: "START" }) runs at the very beginning of the start() function, before any other actions are recorded.
Convex mutations throw "argument validation failed"
Your function arguments do not match the schema defined in convex/schema.ts. Open dashboard.convex.dev, go to Logs, and find the failed mutation. The error message will name the exact field that failed.
The two most common mismatches are passing a plain string where a v.id("games") typed ID is expected, and sending undefined for a required field. Both are fixed by checking the mutation's args definition in the relevant convex/ file and ensuring the values you pass match the declared types exactly.
MCP route returns 500 on every request
A 500 on every request almost always means a missing environment variable. A missing NEXT_PUBLIC_CONVEX_URL causes getConvexClient() to throw on every invocation before any tool logic runs.
Go to your Vercel project → Settings → Environment Variables and verify every variable from the deployment checklist is present and scoped to the Production environment. Variables added only to Preview or Development do not apply to production deployments and will not appear in your function's process.env.
Final Data Flow
Here is how a complete game session flows through the system, from user input to ChatGPT rendering:
Every step that touches Convex is transactional. If finishGame fails partway through, none of the writes commit and the game stays in active status, no replay is created, and the leaderboard is not updated. This prevents orphaned records and inconsistent state.
Conclusion
You've built a production Tetris game that runs inside ChatGPT with real-time leaderboards, replay recording, and Kinde OAuth authentication, all delivered through the MCP protocol without the user ever leaving the chat.
The architecture is the important part. The game is just the demonstration. What you actually built is a pattern: an MCP server that authenticates users, stores data in a real-time database, and renders interactive widgets inside ChatGPT.
Swap the game for a task manager, a data dashboard, a booking system, or anything that benefits from a conversational interface and the Convex backend, the Kinde auth flow, and the MCP tool registration all carry over unchanged.
ChatGPT becomes the interface. Your app becomes the capability behind it.
Next Steps
Some ideas for where to take this further:
Mobile gestures: add touch event handlers to GameBoard.tsx so the game is playable on ChatGPT's iOS and Android apps, where keyboard input doesn't work. Tap to rotate, swipe left or right to move, swipe down to soft drop.
AI opponent: implement the Pierre Dellacherie algorithm as a demo mode, useful on the leaderboard page when no one is actively playing, or as an optional AI assist toggle during a real game.
RBAC: add admin vs. player roles using Kinde's built-in permission system to let admins moderate the leaderboard, delete replays, or ban users.
Submit to the ChatGPT app directory: once you have tested with real users, submit your connector so people can discover it without manually entering your MCP URL. See the submission guidelines.
Multiplayer: Convex's real-time subscriptions make it well-suited for competitive modes. Two players subscribe to the same game document and see each other's board update live with no WebSocket boilerplate required.
Resources
Source code
- Complete source code for this tutorial: GitHub repository. If it helped you, consider giving it a star
Core documentation
MCP Protocol Specification: the full MCP authorization spec ChatGPT implements
Vercel ChatGPT Apps SDK: official OpenAI documentation for building apps inside ChatGPT
ChatGPT connector registration: how to connect, test, and publish your app
Apps SDK authentication guide: OAuth 2.1 flow, security schemes, and token verification
Services used
Convex documentation: real-time database, schema, mutations, queries
Kinde documentation: OAuth, JWT validation, user management
Vercel documentation: deployment, environment variables, function logs
Debugging tools
MCP Inspector: walk through OAuth steps and inspect live MCP requests locally before deploying
Convex dashboard: real-time function logs, data browser, and schema viewer
OpenAI egress IPs: allowlist these if you want to restrict MCP access to ChatGPT only
Further reading
Pierre Dellacherie algorithm: the Tetris AI heuristic referenced in Next Steps
MCP authorization spec: PKCE, dynamic client registration, resource metadata
If this tutorial was useful, feel free to share it with others who might benefit. I’d really appreciate your thoughts: you can mention me on X at @wani_shola or connect with me on LinkedIn.