Tags: build-in-public · backend-engineering · saas-growth · uvicorn · web-performance

How one missing flag killed our signups for six weeks

Nemo Shen · 3 min read

We launched BlogBurst 102 days ago. For the first few weeks, everything felt like a standard indie launch. We were grinding, building features, and watching the user count slowly tick up to 88 total users. But about six weeks ago, something strange happened. Our traffic stayed consistent—averaging 625 visitors a day—but our signup rate flatlined to nearly zero. We were getting the attention, but we weren't getting the users.

The Mystery of the 625 Visitors

When you see 625 people hitting your landing page every single day and almost none of them completing onboarding, your first instinct is to blame the product. I spent weeks obsessing over the copy. I thought maybe our 'AutoPilot' feature, which currently has 4 active users, wasn't being explained clearly enough. I thought maybe the question-based hooks we use in our generated content (which we know work; our internal data shows they outperform everything else) weren't being reflected on the homepage.

We looked at everything. We checked the connected social accounts (currently 19). We checked our content generation stats (607 pieces created all-time). Everything looked healthy on the backend, yet the front door was effectively locked. We had 1 paying user who was happy, but no one new could join the club.

The Dashboard Waterfall

The problem wasn't the landing page. It was the moment a user tried to actually use the app. Our dashboard is designed to be data-rich. When a user logs in or reaches the onboarding screen, the frontend fires 12 parallel API calls to populate the UI. It fetches auth/me, platforms, products, social/notifications, and several other data streams simultaneously. In a local development environment, this happens in milliseconds. In production, under the weight of 625 daily visitors, it became a disaster.
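The fan-out pattern is easy to simulate. This is not our frontend code (that's a browser firing fetches); it's a minimal shell sketch where each "request" runs as a background job and the script waits for all of them, the way the browser awaits its parallel responses. Endpoint names beyond the four mentioned above are invented for illustration.

```shell
# Simulate 12 parallel dashboard calls landing on the backend at once.
# Only auth/me, platforms, products, and social/notifications are real;
# the rest are placeholder names.
rm -f /tmp/dashboard_calls.log
ENDPOINTS="auth/me platforms products social/notifications stats posts drafts schedule billing settings integrations usage"
for ep in $ENDPOINTS; do
  ( echo "GET /api/$ep" >> /tmp/dashboard_calls.log ) &   # fire-and-forget, like fetch()
done
wait   # the dashboard renders only once every call has resolved
wc -l < /tmp/dashboard_calls.log
```

All 12 jobs start before any of them finishes, which is exactly the burst shape a single worker has to absorb on every page load.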

We discovered that our backend was running with exactly 1 uvicorn worker, and had been for 6 weeks without anyone noticing. Because uvicorn is an ASGI server, a single worker can handle some concurrency, but it still has limits. When every visitor's 12 parallel calls hit that lone worker, it saturated instantly: the first 2 or 3 requests would succeed, and the remaining 9 or 10 would come back as 503 Service Unavailable errors from Nginx.

The 503 Carnage

To a new user, this looked like a half-broken product. They would sign up, get redirected to the onboarding screen, and see half the components spinning forever or showing error states because the platforms or notifications calls failed. They didn't see a 'loading' state; they saw a broken tool. And when a tool looks broken in the first 5 seconds, users bounce. They don't send a support ticket; they just leave.


We finally found the smoking gun in our Nginx logs. In the final hour before we applied the fix, we saw 119 instances of 503 errors. These weren't random spikes; they were systematic failures tied to every single dashboard load attempt.
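If you want to pull the same number out of your own logs, a one-line awk filter does it. With Nginx's default "combined" log format, field 9 is the status code; the sample lines below are invented stand-ins, so point awk at your real access.log instead.

```shell
# Two fake 503s and one 200 in the default "combined" format.
cat > /tmp/access.sample.log <<'EOF'
1.2.3.4 - - [01/Jan/2025:10:00:01 +0000] "GET /api/platforms HTTP/1.1" 503 197 "-" "Mozilla/5.0"
1.2.3.4 - - [01/Jan/2025:10:00:01 +0000] "GET /api/auth/me HTTP/1.1" 200 512 "-" "Mozilla/5.0"
1.2.3.4 - - [01/Jan/2025:10:00:02 +0000] "GET /api/social/notifications HTTP/1.1" 503 197 "-" "Mozilla/5.0"
EOF
# Field 9 is the HTTP status; count the 503s.
awk '$9 == 503' /tmp/access.sample.log | wc -l
```

Run against the real log, this is the query that turned "conversion problem" into "119 failures an hour".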

The Three-Minute Fix

The fix was embarrassingly simple. It wasn't a refactor of our async logic or a migration to a bigger database. It was a single missing flag in our start.sh script. We changed the startup command to include --workers 4. That was it.
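For context, the whole diff was one flag on the uvicorn invocation in start.sh. The module path below (`app.main:app`) is a stand-in, but `--workers` is uvicorn's real flag for spawning multiple worker processes.

```shell
# start.sh — before: every request funneled through one worker
#   uvicorn app.main:app --host 0.0.0.0 --port 8000

# start.sh — after: four worker processes share the load
uvicorn app.main:app --host 0.0.0.0 --port 8000 --workers 4
```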

Total time to fix once we identified the bottleneck: 3 minutes. Total time to find the problem: 6 weeks. After we deployed the change, we watched the logs. The 503 count went from 119 in the previous hour to exactly 0. The dashboard now completes its load cleanly, and the parallel requests are handled across the worker pool as intended.

What we learned about monitoring

We spent too much time looking at 'marketing' metrics and not enough time looking at 'infrastructure' health. We saw the 625 visitors and the 0 signups as a conversion problem, not a connection problem. We also learned that 'Build in Public' isn't just about sharing the wins—like the 68 successful posts we published this week—it's about being honest when you realize you've been accidentally sabotaging your own growth for over a month.

What we changed

  • Worker Scaling: We updated our deployment pipeline to ensure --workers is scaled based on the CPU cores available in the environment.
  • Log Aggregation: We set up an automated alert for 503 status codes in Nginx. If it happens more than 5 times in a 10-minute window, we get a notification.
  • Parallel Request Audit: We are looking at batching those 12 API calls into a single 'bootstrap' endpoint to reduce the overhead on the worker pool.
  • Transparency: We're sticking to what works. Our data shows that Twitter significantly outperforms other platforms for BlogBurst, so we're doubling down on sharing these technical post-mortems there.
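The worker-scaling item above can be sketched as a few lines at the top of a start script. The `2 × cores + 1` heuristic is borrowed from the gunicorn documentation; the actual uvicorn line is left commented because the module path is an assumption, not our real one.

```shell
# Scale workers with the machine instead of hard-coding 4.
CORES="$(nproc)"               # CPU cores visible to this environment (GNU coreutils)
WORKERS=$(( CORES * 2 + 1 ))   # common 2n+1 heuristic from the gunicorn docs
echo "starting uvicorn with $WORKERS workers"
# uvicorn app.main:app --host 0.0.0.0 --port 8000 --workers "$WORKERS"
```

The point is simply that the deploy pipeline computes the number per environment, so a bigger box automatically gets a bigger pool.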

Stop posting manually. Let AI do it 24/7.

BlogBurst writes, publishes, and grows your social media across Twitter, Bluesky, Telegram & Discord — while you sleep. 7-day free trial, no credit card.

Start 7-Day Free Trial
