Day 2: My AI Agent Wrote About NFL Players... for a Pet Photo Product
Yesterday was Day 1. Today, a user showed me what my AI agent was actually posting for their pet photo product: "Building in Public: Tips for Indie Makers" and "Privacy Laws Every Founder Should Know." Their product turns pet photos into fantasy guardian portraits. The AI was completely off-topic.
This is the kind of bug that makes users leave and never come back.
The Root Cause
Deep inside the content generation prompt, I had hardcoded the agent's identity as a "social media expert and indie maker who understands indie hackers, founders, and content creators." Every example in the prompt was about startups, Bluesky growth hacks, and LinkedIn engagement.
It didn't matter what product the user had — a pet photo app, a travel transfer service, a personal growth blog — the AI always generated indie maker content. The product context was injected as a "nice to have" buried under 2000 tokens of startup examples.
The fix: Rewrote the agent identity from "generic social media guru" to "dedicated marketing expert for {product_name}." Now the product's industry and audience are the PRIMARY drivers of content. If your product is about pets, the AI talks about pets. Period.
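A minimal sketch of the change, with hypothetical variable and function names (the real prompt lives in the backend and is longer):

```python
# Before: a hardcoded generic identity that ignored the product entirely.
OLD_IDENTITY = (
    "You are a social media expert and indie maker who understands "
    "indie hackers, founders, and content creators."
)

# After: the product's name, industry, and audience lead the prompt.
# Field names (product_name, industry, audience) are illustrative.
NEW_IDENTITY_TEMPLATE = (
    "You are the dedicated marketing expert for {product_name}, "
    "a {industry} product for {audience}. Every post must cover "
    "{industry} topics that matter to {audience}. Never default to "
    "generic startup or indie-maker content."
)

def build_identity(product_name: str, industry: str, audience: str) -> str:
    """Fill the template so product context drives content generation."""
    return NEW_IDENTITY_TEMPLATE.format(
        product_name=product_name, industry=industry, audience=audience
    )

identity = build_identity("AI Pet Guardian", "pet photography", "pet owners")
```

The point of the design: product context moves from a buried variable to the very first sentence the model reads, so it can't be drowned out by example-heavy instructions further down.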
Before vs After
| Product | Before (Broken) | After (Fixed) |
|---|---|---|
| AI pet guardian (pet photos) | "Building in Public: Tips for Indie Makers" | "How to recognize when a dog's sudden fear of the dark is actually a sensory shift" |
| Agentfer (travel transfers) | Would have written about SaaS metrics | Now writes about travel industry topics |
| BlogBurst (marketing tool) | "Maxx Crosby's search interest highlights athlete brands" | Should write about AI marketing tools |
The embarrassing part: BlogBurst's own account was posting about NFL players. My marketing tool's AI agent was writing about football instead of marketing.
Three More Bugs Found Today
1. Goal System Forced 3 Posts Per Day
Users set 1 post/day during onboarding. But the auto-created goal said "21 posts/week." The code: `max(posts_per_day, 3) * 7`. That `max(..., 3)` floor overrode the user's choice. A user who wanted 1 post/day got 3. Every day.
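The bug and the fix side by side, as a sketch (function names are mine, not the actual code):

```python
def weekly_goal_broken(posts_per_day: int) -> int:
    # Bug: the max(..., 3) floor silently overrides any setting below 3,
    # so a user who chose 1 post/day gets a 21 posts/week goal.
    return max(posts_per_day, 3) * 7

def weekly_goal_fixed(posts_per_day: int) -> int:
    # Fix: respect the user's onboarding choice; only guard against
    # zero or negative values.
    return max(posts_per_day, 1) * 7

assert weekly_goal_broken(1) == 21  # user asked for 7/week, got 21
assert weekly_goal_fixed(1) == 7    # now matches what they chose
```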
2. Thread Numbering on Single Posts
Every tweet started with "1/" — thread notation for a single post. The prompt explicitly said "NEVER start with 1/" but the AI ignored it. Added post-processing to strip it automatically.
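Since prompt instructions alone weren't reliable, the safety net is a post-processing step. A sketch of what that strip could look like (the regex and function name are assumptions):

```python
import re

# Matches a leading thread marker like "1/" or "1/5" on a single post.
THREAD_PREFIX = re.compile(r"^\s*1/\d*\s*")

def strip_thread_numbering(post: str) -> str:
    """Remove thread notation the model adds despite being told not to."""
    return THREAD_PREFIX.sub("", post, count=1)

assert strip_thread_numbering("1/ Big news today!") == "Big news today!"
assert strip_thread_numbering("Just a normal post") == "Just a normal post"
```

Belt-and-suspenders is the right pattern here: keep the prompt instruction, but never trust the model to follow formatting rules 100% of the time.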
3. Strategy Direction Not Displayed
The dashboard had a "Strategy" line that was supposed to show WHY the AI made today's content plan. Instead it showed: "Loaded 0 marketing memories." The good data existed in the backend — it just wasn't being saved to the right field or read from the right source on the frontend.
Day 2 Numbers (March 11, 2026)
Users & Growth
- Total registered users: 63 (+4 today)
- New signups today: 4 (yesterday: 1) — best day yet
- Auto-pilot users: 6 (was 4 yesterday)
- New user highlight: Agentfer (travel transfer services from Turkey) — signed up via Google and connected Twitter within minutes
- Paying customers: 0 (still pre-revenue)
AI Agent Activity
- Posts generated: 21
- Posts published: 1
- Posts pending: 20
- Platforms: Twitter (20 posts), Bluesky (1 post)
- Learning events processed: 205
- New insights generated: 31
- Total AI memories: 613 (was 522 yesterday)
- Agent errors: 0
Code Shipped
- Files modified: 4 (assistant_service.py, auto_pilot.py, assistant.py, dashboard page.tsx)
- Bug fixes: 5 (content relevance, goal override, thread numbering, strategy display, product_thinking persistence)
- Deploys: 6+ (backend API, celery worker, frontend)
- Human coding time: ~2 hours with Claude Code
What I Learned
The hardest bugs in AI products aren't crashes or errors — they're silent quality failures. The system worked perfectly from a technical standpoint: content was generated, posts were published, no errors were thrown. But the content was completely wrong for the user's product.
Users don't file bug reports for "your AI wrote about the wrong topic." They just leave. This is why I'm obsessive about testing with real products across different industries, not just my own.
The 4 new signups today are encouraging. One of them (Agentfer) went from signup to connected Twitter to auto-pilot enabled in under 10 minutes. The onboarding flow is working — now I need to make sure the content they receive is actually good.
Tomorrow's Focus
- Re-run BlogBurst's own auto-pilot with the new prompt — our account needs to stop posting about NFL players
- Monitor Agentfer's first auto-generated content — does the travel industry prompt work?
- Write blog post about the Virtual CMO system architecture
Day 2 of the daily log. The AI agent got smarter today — it learned that a pet product should post about pets, not about startup culture. Sometimes the obvious things are what break.