Fix CLI session data size for Vercel limits

jasonnovack (@jasonnovack) · Jan 7, 2026 · claude_code · claude-opus-4-5-20251101 · fix

Diff

diff --git a/packages/cli/src/commands/submit.ts b/packages/cli/src/commands/submit.ts
index fadea18..a5d750e 100644
--- a/packages/cli/src/commands/submit.ts
+++ b/packages/cli/src/commands/submit.ts
@@ -68,26 +68,19 @@ export async function submit(options: SubmitOptions) {
const afterCommit = log.all[0]
// Find the "before" commit - the most recent commit BEFORE the session started
+ // Skip HEAD (log.all[0]) since that's our "after" commit
let beforeCommit = log.all[1] // Default to HEAD~1
if (session?.timestamp) {
const sessionTime = session.timestamp.getTime()
- // Find the first commit that predates the session
- for (const commit of log.all) {
+ // Find the first commit (excluding HEAD) that predates the session
+ for (let i = 1; i < log.all.length; i++) {
+ const commit = log.all[i]
const commitTime = new Date(commit.date).getTime()
if (commitTime < sessionTime) {
beforeCommit = commit
break
}
}
- if (beforeCommit === log.all[1] && log.all.length > 2) {
- // Check if we found a better match
- const foundBetterMatch = log.all.some((commit, i) =>
- i > 1 && new Date(commit.date).getTime() < sessionTime
- )
- if (foundBetterMatch) {
- console.log(` Session started: ${session.timestamp.toISOString()}`)
- }
- }
}
console.log(` BEFORE: ${beforeCommit.hash.slice(0, 7)} - ${beforeCommit.message.split('\n')[0]}`)
diff --git a/packages/cli/src/extractors/claude-code.ts b/packages/cli/src/extractors/claude-code.ts
index 1ca324a..4c9a15f 100644
--- a/packages/cli/src/extractors/claude-code.ts
+++ b/packages/cli/src/extractors/claude-code.ts
@@ -362,15 +362,19 @@ export async function extractSession(sessionPath: string, projectPath: string):
// User messages have content in msg.message.content
const text = extractText(msg.message?.content)
- // Skip short confirmation messages and common AI-interaction phrases
+ // Skip short confirmation messages, system messages, and common AI-interaction phrases
const skipPhrases = [
'yes', 'no', 'ok', 'okay', 'continue', 'planning mode',
'proceed', 'go ahead', 'sure', 'thanks', 'thank you'
]
+ const skipPrefixes = [
+ 'this session is being continued', // Context continuation
+ 'limit is reset', // Context limit message
+ ]
const normalizedText = text.toLowerCase().trim()
const isSkippable = skipPhrases.some(phrase =>
normalizedText === phrase || normalizedText === phrase + '.'
- )
+ ) || skipPrefixes.some(prefix => normalizedText.startsWith(prefix))
// Keep the longest non-skippable user prompt
if (text && text.length > userPrompt.length && !isSkippable) {
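
The first hunk simplifies the "before" commit selection: walk the git log newest-first, skip HEAD (the "after" commit), and take the first commit whose date predates the session start, defaulting to HEAD~1. A minimal standalone sketch of that logic (the `CommitEntry` shape and `findBeforeCommit` name are illustrative, not from the source; simple-git's log entries do carry `hash`, `date`, and `message` fields):

```typescript
// Illustrative sketch of the commit-selection logic after the fix.
// Assumes the log has at least two entries (HEAD plus one parent).
interface CommitEntry {
  hash: string
  date: string    // ISO-ish timestamp string, as in simple-git log output
  message: string
}

function findBeforeCommit(commits: CommitEntry[], sessionTime?: number): CommitEntry {
  // Default to HEAD~1; commits[0] is HEAD, the "after" commit, so skip it.
  let before = commits[1]
  if (sessionTime !== undefined) {
    // Find the first commit (excluding HEAD) that predates the session start.
    for (let i = 1; i < commits.length; i++) {
      if (new Date(commits[i].date).getTime() < sessionTime) {
        before = commits[i]
        break
      }
    }
  }
  return before
}
```

Note the fix's key property: if every non-HEAD commit postdates the session (e.g. the user committed mid-session), the function still falls back to HEAD~1 rather than silently selecting HEAD itself.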

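The second hunk extends the prompt filter: exact matches against short confirmation phrases are skipped as before, and prefix matches now also catch auto-generated Claude Code system messages ("This session is being continued…", "Limit is reset…") so they are never kept as the session's representative prompt. A hedged sketch of that predicate (the `isSkippable` function boundary is illustrative; the phrase lists are from the diff):

```typescript
// Illustrative sketch of the prompt-filtering rule after the fix:
// exact match against short confirmations, OR prefix match against
// auto-generated system messages.
const skipPhrases = [
  'yes', 'no', 'ok', 'okay', 'continue', 'planning mode',
  'proceed', 'go ahead', 'sure', 'thanks', 'thank you',
]
const skipPrefixes = [
  'this session is being continued', // Context continuation
  'limit is reset',                  // Context limit message
]

function isSkippable(text: string): boolean {
  const normalized = text.toLowerCase().trim()
  return skipPhrases.some(p => normalized === p || normalized === p + '.') ||
    skipPrefixes.some(prefix => normalized.startsWith(prefix))
}
```

Prefix matching (rather than exact matching) is what handles these system messages, since their tails vary, e.g. "This session is being continued from a previous conversation…".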
Recipe

Model
claude-opus-4-5-20251101
Harness
Claude Code
Token Usage
Input: 10.3K · Output: 183.6K · Total: 193.9K · Cache Read: 114.3M
Plugins
Frontend Design (claude-plugins-official) · Github (claude-plugins-official) · Feature Dev (claude-plugins-official) · Swift Lsp (claude-plugins-official)
Prompt
Let's write a product spec for a web app called Oneshot. There is a huge amount of interest right now in AI model selection, harness selection, prompt engineering, context engineering, MCP, etc., etc. The goal is to create an app where users can showcase what they can do and how they do it. The idea is to let users showcase before/after and how they get amazing results with AI. I am thinking there are 3 key components: 1. we take a repo BEFORE state and host it somewhere, or accept a hosted version like a Vercel build that we can verify corresponds with the repo's BEFORE state. 2. users invoke AI, and we capture a VERIFIED set of attributes that allows full reproducibility of this AI action (model, harness, prompt, context, and anything else that's relevant). 3. we take a repo AFTER state and host it somewhere or accept a user-hosted version that we can verify corresponds with the repo's AFTER state. Once we have those 3 components, we build a simple database of all submitted examples, gallery app to discover and inspect them.
Raw Session Data
Tip: Copy the prompt and adapt it for your own project. The key is understanding why this prompt worked, not reproducing it exactly.

Comments (0)