
Day 2: My AI Agent Wrote About NFL Players... for a Pet Photo Product

Nemo · 7 min read
Building a fully automated content engine is a dream for many founders. The promise is seductive: set up an AI agent, give it a schedule, and watch it churn out high-quality, SEO-optimized blog posts while you sleep. That was the plan for Day 2 of my build-in-public journey for my new SaaS, a custom pet photography product called "PawPrints & Pixels."

I went to bed last night feeling accomplished. I had set up the cron jobs, connected the OpenAI API, and established a basic pipeline to publish drafts directly to my CMS. I expected to wake up to a heartwarming article about capturing the perfect lighting for a black Labrador, or perhaps a guide on posing with cats. Instead, I woke up to a 1,500-word analysis of the Kansas City Chiefs' offensive line strategies and the statistical probability of a Super Bowl run. My AI agent, tasked with selling sentimental pet portraits, had decided to pivot to sports journalism.

This is the reality of building with AI. It is not magic; it is math. And when the math lacks variables, the results are unpredictable. Today, I want to walk you through exactly how this happened, the debugging process that revealed a critical flaw in my architecture, and the specific "Context Injection" fix that realigned the AI with our niche. If you are building AI agents for content, this post might save you from accidentally publishing a treatise on macroeconomics on your knitting blog.

## The Incident: When the Agent Went Rogue

To understand the error, we have to look at the prompt architecture I deployed on Day 1. In my rush to get the "plumbing" working (API connections, webhooks, database entries), I neglected the "soul" of the software: the context. My initial prompt chain looked something like this:

1. **Topic Generator:** "Generate a trending topic idea."
2. **Outliner:** "Create an outline for this topic."
3. **Writer:** "Write a blog post based on this outline."

Do you see the problem? It is subtle but catastrophic.
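In code terms, the chain looked something like this. This is a simplified sketch, not the actual pipeline; `call_llm` is a stub standing in for the real OpenAI API call:

```python
# A sketch of the Day 1 pipeline. `call_llm` stands in for a real
# chat-completion API call and is stubbed here so the chain's
# structure stays visible.
def call_llm(prompt: str) -> str:
    # In production this would hit the LLM API; stubbed for illustration.
    return f"<completion for: {prompt!r}>"

def naive_pipeline() -> str:
    # Note what is missing: no brand, no audience, not even the word "pet".
    topic = call_llm("Generate a trending topic idea.")
    outline = call_llm(f"Create an outline for this topic: {topic}")
    return call_llm(f"Write a blog post based on this outline: {outline}")
```

Each step only sees what the previous step hands it, so a generic first prompt poisons the entire chain.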
I assumed the AI would "know" it was working for a pet photography brand because *I* knew it was working for a pet photography brand. But an LLM (Large Language Model) is stateless. It does not know who you are, what your business does, or what you built it for unless you explicitly tell it in every single interaction.

When the agent reached Step 1 (Topic Generation), it asked the model for a "trending topic." It didn't ask for a trending *pet* topic. It just asked for what is popular in the world right now. And what is popular in the training data and current search trends? The NFL season. The agent dutifully picked a high-volume keyword topic: NFL player stats. It then passed this topic to the Outliner. The Outliner, seeing a request to outline a post about football, did exactly as told. Finally, the Writer received an outline about quarterbacks and generated a grammatically perfect, tonally confident, and completely irrelevant article.

It wasn't a hallucination in the technical sense—the facts about the NFL were likely correct. It was an alignment failure. The AI aligned with the *world's* average interests rather than my *product's* specific niche.

## The Debugging Process: Tracing the "Null" Context

I sat down with my coffee, staring at a draft titled "Quarterback Efficiency Ratings" sitting in the draft folder of a website designed to sell canvas prints of Goldendoodles. My first step was to check the logs. I use a simple logging system that records the input prompt and the output completion for every step of the chain. This is crucial for debugging AI; you cannot fix what you cannot trace.

**Log Entry 14:02 - Topic Generation**

* **Input:** "You are a helpful blog writer. Generate a high-engagement blog topic idea for today."
* **Output:** "Analyzing the impact of defensive strategies in the current NFL season."

There it was. The "System Prompt"—the instruction that sets the behavior of the AI—was generic.
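The logging wrapper itself is nothing fancy. A simplified version looks like this (the field names are illustrative, not my exact schema):

```python
from typing import Callable

def traced_call(log: list, step: str, prompt: str,
                llm: Callable[[str], str]) -> str:
    """Run one chain step through the LLM and record both sides of the call."""
    completion = llm(prompt)
    # You cannot fix what you cannot trace: keep input AND output per step.
    log.append({"step": step, "input": prompt, "output": completion})
    return completion
```

Replaying `log` after the fact is exactly how the generic system prompt surfaced; the 14:02 entry above falls straight out of it.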
"You are a helpful blog writer" is the default setting of mediocrity. A helpful blog writer writes about what people are reading. People are reading about football.

I realized that while I had a variable in my code called `product_context`, it wasn't being passed into the prompt string. It was a classic null pointer issue, but instead of crashing the app, it crashed the *relevance* of the content.

This highlights a distinct danger of AI development compared to traditional software engineering. In traditional coding, if a variable is missing, the code throws an error and stops. In AI development, if a variable is missing, the model *improvises*. It fills the void with probability. It degrades gracefully into nonsense, which is much harder to catch than a hard crash.

## The Fix: Context Injection and System Prompting

To fix this, I had to move from a "Zero-Shot" generic approach to a "Few-Shot" context-heavy approach. I needed to inject the product identity into the very DNA of the prompt chain. I implemented a three-layer context injection strategy:

### 1. The Global Identity (System Prompt)

Instead of "You are a helpful writer," the system prompt now reads:

> "You are the Lead Content Strategist for 'PawPrints & Pixels,' a premium brand specializing in turning pet photos into high-quality wall art. Your audience consists of obsessed pet owners (primarily dog and cat parents) who treat their pets like children. Your tone is warm, sentimental, slightly humorous, and authoritative on photography and pet care. NEVER discuss topics outside of pets, photography, or home decor."

### 2. The Product Context (Dynamic Injection)

I created a JSON object containing our current product focus, which is injected into the prompt dynamically. If we are promoting canvas prints this week, the AI knows. If we are promoting digital downloads next week, the AI adjusts.

### 3. The Negative Constraints

I explicitly added negative constraints.
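Wired together, the prompt assembly looks roughly like this. The wording is condensed from the prompts quoted in this post, and the structure and JSON values are a simplified illustration:

```python
import json

# Layer 1: global identity (condensed from the system prompt above).
SYSTEM_PROMPT = (
    "You are the Lead Content Strategist for 'PawPrints & Pixels,' a premium "
    "brand turning pet photos into high-quality wall art. Your audience is "
    "devoted dog and cat parents. Tone: warm, sentimental, slightly humorous. "
    "NEVER discuss topics outside of pets, photography, or home decor."
)

# Layer 2: dynamic product context, swapped out per campaign (values are placeholders).
product_context = {"current_focus": "canvas prints", "audience": "dog and cat owners"}

# Layer 3: explicit negative constraints.
GUARDRAILS = (
    "Do not write about sports, politics, general news, or celebrity gossip. "
    "If the topic does not relate to animals or photography, discard it and regenerate."
)

def build_topic_prompt() -> str:
    # Every single call gets all three layers; the model is stateless.
    return (
        f"{SYSTEM_PROMPT}\n\n"
        f"Product context: {json.dumps(product_context)}\n\n"
        f"Guardrails: {GUARDRAILS}\n\n"
        "Generate a high-engagement blog topic idea for today."
    )
```

In a chat-style API you would normally send `SYSTEM_PROMPT` as a dedicated system message rather than concatenating it; the concatenated form above just keeps the sketch API-agnostic.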
AI models are eager to please, and sometimes that means straying off-topic to be "comprehensive." I added a "Guardrails" section to the prompt:

> "Do not write about sports, politics, general news, or celebrity gossip. If the generated topic does not relate to animals or photography, discard it and regenerate."

## Before and After: A Case Study in Alignment

The difference was immediate and striking. I re-ran the agent with the exact same configuration, changing only the prompt context as described above.

### The "Before" Output (No Context)

* **Headline:** *The Statistical Rise of the Modern Quarterback*
* **Excerpt:** "In the modern era of the league, the pocket passer is becoming a relic. We are seeing a shift toward mobile quarterbacks who can extend plays. This impacts fantasy leagues and betting odds significantly."
* **Verdict:** Well-written, completely useless. It confuses the user and dilutes the topical authority of the domain.

### The "After" Output (With Context Injection)

* **Headline:** *From Blur to Beauty: How to Capture Your Dog's Zoomies Without the Motion Blur*
* **Excerpt:** "We've all been there: your Golden Retriever is having a case of the zoomies, looking absolutely majestic. You pull out your phone, snap a pic, and... it's a blurry mess. At PawPrints & Pixels, we know that the best moments are the hardest to capture. Here is how to adjust your shutter speed to freeze that joy in time."
* **Verdict:** On-brand, solves a user problem, and leads directly into the product offering (turning that photo into art).

## The Mechanics of AI Alignment

Why did this fix work? It comes down to how Large Language Models function as prediction engines. Think of the LLM as a super-knowledgeable improvisational actor. If you push them onto a stage and just say "Act!", they will likely do something broad and cliché—maybe Hamlet or a knock-knock joke. They revert to the mean.
However, if you hand them a script that says, "You are an exhausted mother of three trying to organize a birthday party for a beagle," they immediately narrow their vast database of knowledge down to that specific persona.

**Context Injection** is essentially narrowing the search space of the model. By defining the brand, the audience, and the product, we are statistically suppressing the tokens (words) related to the NFL, politics, or cooking, and statistically boosting the tokens related to fur, lenses, shutter speeds, and wall art.

There is a concept in AI called **RAG (Retrieval-Augmented Generation)**. While I am not using a full vector database yet (that is for Day 10!), this is a primitive form of it. I am retrieving relevant context (my brand bible) and augmenting the generation (the prompt) with it.

## Lessons Learned About [AI Content](https://blogburst.ai/blog/ai-content-generation-small-business-guide) Generation

This failure was the best thing that could have happened on Day 2. It taught me critical lessons that I will carry forward as this project scales:

### 1. Default Is Dangerous

Never rely on the default settings of an API or a prompt. The "default" world of an AI is a blend of the entire internet—chaotic, unfocused, and often irrelevant. You must impose order through constraints.

### 2. The "Blank Page" Syndrome Applies to AI

Humans freeze when looking at a blank page. AI doesn't freeze; it hallucinates. It fills the silence with noise. Never give an AI a blank page. Give it a scaffold, a persona, and a goal.

### 3. Failures Are Silent

As mentioned earlier, AI bugs rarely throw exceptions. They throw bad content. This means you cannot rely on uptime monitors or error logs to judge quality. You need "Evaluation" steps. In the future, I plan to have a second AI agent whose sole job is to grade the output of the first agent before it gets published. If the relevance score is low, it rejects the draft.

### 4. Context Windows Are Precious

I used to think short prompts were better because they save money. I was wrong. The cost of a few hundred extra context tokens is trivial compared to the cost of publishing an off-brand article.


--- **Ready to automate your marketing?** [Try BlogBurst free](https://blogburst.ai) — the AI marketing agent that learns and improves with every post.
