Documentation Index

Fetch the complete documentation index at: https://supermemory-convex-docs.mintlify.app/llms.txt

Use this file to discover all available pages before exploring further.

Convex apps don’t have built-in memory for AI. Supermemory fixes that. You get a memory layer that stores conversations, builds user profiles, and gives your AI context about who it’s talking to.

What you can do

  • Store user interactions and retrieve them in future sessions
  • Build automatic user profiles from conversations
  • Search memories to give your AI relevant context
  • Keep everything in your Convex database for full visibility
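To make the pattern concrete before touching the real API, here is a minimal in-memory sketch of what a per-user memory layer does: store interactions keyed by user, search them later, and derive a simple profile. This is illustrative only (the class and its naive substring search are assumptions for the sketch, not the Supermemory API, which handles semantic retrieval and profiling for you):

```typescript
// Hypothetical in-memory stand-in for a per-user memory layer.
class MemoryStore {
  private memories = new Map<string, string[]>();

  // Store an interaction under a user's container
  add(userId: string, content: string): void {
    const list = this.memories.get(userId) ?? [];
    list.push(content);
    this.memories.set(userId, list);
  }

  // Naive substring match standing in for semantic search
  search(userId: string, query: string): string[] {
    return (this.memories.get(userId) ?? []).filter((m) =>
      m.toLowerCase().includes(query.toLowerCase())
    );
  }

  // A trivial "profile": just the user's memory count
  profile(userId: string): { userId: string; memoryCount: number } {
    return { userId, memoryCount: (this.memories.get(userId) ?? []).length };
  }
}

const store = new MemoryStore();
store.add("user-1", "Prefers dark mode");
store.add("user-1", "Works mostly in TypeScript");
const hits = store.search("user-1", "typescript");
```

Supermemory replaces each of these pieces with a hosted, semantic version: `memory.add` for storage, `memory.search.memories` for retrieval, and `memory.profile` for profiling.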

Setup

Install the packages:

```bash
npm install supermemory convex
```

Set your environment variable in Convex:

```bash
npx convex env set SUPERMEMORY_API_KEY your-supermemory-api-key
```
Get your Supermemory API key from console.supermemory.ai.

Basic integration

Create simple helper functions for each Supermemory operation:

```typescript
// convex/memory.ts
import { action } from "./_generated/server";
import { v } from "convex/values";
import Supermemory from "supermemory";

const memory = new Supermemory({ apiKey: process.env.SUPERMEMORY_API_KEY });

// Get user profile and relevant memories
export const getProfile = action({
  args: { userId: v.string(), query: v.optional(v.string()) },
  handler: async (ctx, { userId, query }) => {
    return await memory.profile({
      containerTag: userId,
      q: query,
    });
  },
});

// Add a memory
export const addMemory = action({
  args: { userId: v.string(), content: v.string() },
  handler: async (ctx, { userId, content }) => {
    return await memory.add({
      content,
      containerTag: userId,
    });
  },
});

// Search memories
export const searchMemories = action({
  args: { userId: v.string(), query: v.string(), limit: v.optional(v.number()) },
  handler: async (ctx, { userId, query, limit }) => {
    return await memory.search.memories({
      q: query,
      containerTag: userId,
      searchMode: "hybrid",
      limit: limit ?? 10,
    });
  },
});
```
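The `searchMemories` action applies two defaults: `"hybrid"` search mode and a fallback limit of 10 when the caller passes none. A small pure helper makes those defaults explicit (this helper is a hypothetical illustration, not part of the SDK):

```typescript
// Hypothetical helper mirroring the defaults used in searchMemories above.
interface SearchRequest {
  q: string;
  containerTag: string;
  searchMode: "hybrid";
  limit: number;
}

function buildSearchRequest(
  userId: string,
  query: string,
  limit?: number
): SearchRequest {
  return {
    q: query,
    containerTag: userId, // one container per user keeps memories isolated
    searchMode: "hybrid",
    limit: limit ?? 10, // explicit limit wins; otherwise default to 10
  };
}
```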

Example: AI chat with memory

A chat endpoint using the Supermemory AI SDK middleware, which automatically injects context and saves memories:

```typescript
// convex/chat.ts
import { action } from "./_generated/server";
import { v } from "convex/values";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { withSupermemory } from "@supermemory/tools/ai-sdk";

export const chat = action({
  args: { userId: v.string(), message: v.string() },
  handler: async (ctx, { userId, message }) => {
    // Wrap the model - automatically injects context and saves memories
    const model = withSupermemory(openai("gpt-4o-mini"), userId, {
      mode: "full",
      addMemory: "always",
    });

    const { text } = await generateText({
      model,
      system: "You are a helpful assistant.",
      prompt: message,
    });

    return text;
  },
});
```
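The action above handles one turn at a time, since the middleware supplies long-term context. If your client also tracks the short-term conversation, it typically accumulates turns into a message list rather than a single prompt. A minimal sketch of that accumulation (the types and function here are assumptions for illustration, not part of the docs):

```typescript
// Hypothetical client-side helper for accumulating chat turns.
type Role = "user" | "assistant";

interface ChatMessage {
  role: Role;
  content: string;
}

// Append one completed exchange (user message + assistant reply) to history.
function appendTurn(
  history: ChatMessage[],
  userMessage: string,
  assistantReply: string
): ChatMessage[] {
  return [
    ...history,
    { role: "user", content: userMessage },
    { role: "assistant", content: assistantReply },
  ];
}

const history = appendTurn([], "Hi, I'm Ada.", "Nice to meet you, Ada!");
```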

Storing memories in Convex tables

Keep a local copy of memories in your Convex database for full visibility:

```typescript
// convex/schema.ts
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  memories: defineTable({
    userId: v.string(),
    content: v.string(),
    createdAt: v.number(),
  }).index("by_user", ["userId"]),
});
```
```typescript
// convex/memory.ts
import { action, mutation, query } from "./_generated/server";
import { api } from "./_generated/api";
import { v } from "convex/values";
import Supermemory from "supermemory";

const memory = new Supermemory({ apiKey: process.env.SUPERMEMORY_API_KEY });

// Store in Convex
export const storeMemory = mutation({
  args: { userId: v.string(), content: v.string() },
  handler: async (ctx, { userId, content }) => {
    return await ctx.db.insert("memories", {
      userId,
      content,
      createdAt: Date.now(),
    });
  },
});

// Add memory to both Supermemory and Convex
export const addMemory = action({
  args: { userId: v.string(), content: v.string() },
  handler: async (ctx, { userId, content }) => {
    // Add to Supermemory
    await memory.add({ content, containerTag: userId });

    // Store in Convex
    await ctx.runMutation(api.memory.storeMemory, { userId, content });
  },
});

// List memories from Convex
export const listMemories = query({
  args: { userId: v.string() },
  handler: async (ctx, { userId }) => {
    return await ctx.db
      .query("memories")
      .withIndex("by_user", q => q.eq("userId", userId))
      .order("desc")
      .take(50);
  },
});
```
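Note that the dual write in `addMemory` is not atomic: if the Convex mutation fails after the Supermemory call succeeds, the two stores can drift apart. One simple way to detect drift is a content diff between the remote and local copies. A hypothetical helper (the function and its set-based comparison are assumptions for the sketch):

```typescript
// Hypothetical reconciliation check: which remote memory contents are
// missing from the local Convex copy?
function findMissingLocally(
  remoteContents: string[],
  localContents: string[]
): string[] {
  const local = new Set(localContents);
  return remoteContents.filter((c) => !local.has(c));
}

const missing = findMissingLocally(
  ["likes hiking", "prefers email"], // contents fetched from Supermemory
  ["likes hiking"] // contents read from the Convex memories table
);
```

A periodic job could run a check like this and re-insert missing rows via `storeMemory`.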

Learn more

  • User profiles — how automatic profiling works
  • Search — filtering and search modes
  • Vercel AI SDK — memory middleware for Next.js
  • LangChain — memory for LangChain apps