Good Boy Guide
Scrapes 13+ venues and drafts a weekly dog-friendly events newsletter for Tampa Bay

The Problem
Dog-friendly event info in Tampa Bay is scattered across 13+ venue websites, none of which talk to each other and all of which render event data differently. My fiancée publishes a newsletter curating these events, and the manual process of visiting every site, deduplicating, and rewriting descriptions was taking hours every week.
The Approach
I built a CLI pipeline that does the whole job end to end. Playwright visits 13 configured sources, each with its own adapter for selectors and date formats. Luxon normalizes dates, a hashing function deduplicates events across sources, and history tracking tags events as new, recurring, or returning. OpenAI enriches each event with a one- to two-sentence description, then the pipeline renders HTML and pushes a draft to Beehiiv via their API. Two minutes, scrape to draft.
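As a rough sketch of the normalize-and-dedupe step: the ScrapedEvent shape, the dedupeKey helper, and the field names below are illustrative assumptions, not the repo's actual identifiers, but they show how Luxon parsing plus a content hash lets the same event scraped from two sites collapse to one entry.

```ts
import { createHash } from "node:crypto";
import { DateTime } from "luxon";

// Hypothetical shapes for events coming out of the per-source scrapers.
interface ScrapedEvent {
  source: string;   // which venue adapter produced this
  title: string;
  rawDate: string;  // whatever format that venue's site uses
  venue: string;
}

interface NormalizedEvent extends ScrapedEvent {
  startsAt: string; // ISO 8601, parsed in the venue's local zone
  key: string;      // hash used to dedupe across sources
}

// Normalize a venue-specific date string into ISO 8601 with Luxon.
function normalizeDate(raw: string, format: string, zone = "America/New_York"): string {
  const dt = DateTime.fromFormat(raw, format, { zone });
  if (!dt.isValid) throw new Error(`Unparseable date "${raw}" (${dt.invalidReason})`);
  return dt.toISO()!;
}

// Build a stable key from the fields that identify "the same event",
// so listings duplicated across venue sites hash to the same value.
function dedupeKey(title: string, startsAt: string, venue: string): string {
  const canonical = [title, venue]
    .map((s) => s.toLowerCase().replace(/\s+/g, " ").trim())
    .concat(startsAt.slice(0, 10)) // date portion only
    .join("|");
  return createHash("sha256").update(canonical).digest("hex").slice(0, 16);
}

// Keep the first occurrence of each key.
function dedupe(events: NormalizedEvent[]): NormalizedEvent[] {
  const seen = new Map<string, NormalizedEvent>();
  for (const e of events) {
    if (!seen.has(e.key)) seen.set(e.key, e);
  }
  return [...seen.values()];
}
```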
Key Insight
“Each website is its own little kingdom with its own quirks. A generic scraper that tries to guess where event data lives works maybe 60% of the time. The real unlock was accepting that each source needs its own configuration. It's not elegant, but it's reliable, and reliable matters more when you're generating a newsletter someone actually sends to subscribers.”
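To make the per-source configuration idea concrete, here is a hypothetical shape for one venue's adapter config; the field names, selectors, and venue id are illustrative, not taken from the repo.

```ts
// Each venue gets its own selectors and date format instead of a generic guesser.
interface SourceConfig {
  id: string;
  url: string;
  eventSelector: string; // CSS selector for one event card
  titleSelector: string;
  dateSelector: string;
  dateFormat: string;    // Luxon format string for this site's dates
}

const sources: SourceConfig[] = [
  {
    id: "example-venue",                // hypothetical venue id
    url: "https://example.com/events",  // placeholder URL
    eventSelector: ".event-card",
    titleSelector: ".event-card__title",
    dateSelector: ".event-card__date",
    dateFormat: "LLL d, yyyy h:mm a",   // e.g. "Mar 4, 2025 6:00 PM"
  },
  // ...one entry per venue, each with its own quirks
];
```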
Repository
good-boy-guide-agent
Scraper, enrichment, and newsletter pipeline