10/26/2025
A Project Made By
Abhay Dronavalli (Engineer)
Taher Akolawala (Engineer)
Siddhant Pallod (Engineer)
Siddharth Radhakrishnan (Engineer)
Submitted for Gator Hack IV
What is the problem you are trying to solve? Who does it affect?
Freelance web designers and small design agencies face a significant challenge in lead generation. The traditional process of finding potential clients, analyzing their needs, and creating personalized pitches is incredibly time-consuming. A designer might spend 2-3 hours researching businesses, manually reviewing their websites, creating mockups, and writing custom outreach emails, only to receive a handful of responses. This inefficiency prevents designers from scaling their client acquisition and forces them to choose between spending time on actual design work or on business development.
Small businesses also suffer from this problem indirectly. Many local businesses either have outdated, poorly designed websites or no web presence at all, yet they're rarely reached by qualified designers who could help them. The disconnect between designers who need clients and businesses that need design services creates a lose-lose situation in the market.
What is your idea? How does it fix the problem?
We built an AI-powered lead generation and outreach system that automates the entire pipeline from discovery to personalized pitch. Our solution consists of three intelligent agents working together:
Agent 1 (Lead Finder) automatically discovers local businesses using the Google Places API based on the designer's target location and industry preferences. It identifies businesses both with and without existing websites, prioritizing those without websites as they represent higher-value opportunities.
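Roughly, Agent 1's lookup works like the sketch below, which uses the Places Text Search and Place Details web services; the env var name and lead fields are illustrative of the approach, not our exact code:

```python
import os
import requests

PLACES_SEARCH = "https://maps.googleapis.com/maps/api/place/textsearch/json"
PLACES_DETAILS = "https://maps.googleapis.com/maps/api/place/details/json"
API_KEY = os.environ["GOOGLE_PLACES_API_KEY"]  # assumed env var name

def find_leads(industry: str, location: str) -> list[dict]:
    """Search for local businesses and note whether each one has a website."""
    search = requests.get(
        PLACES_SEARCH,
        params={"query": f"{industry} in {location}", "key": API_KEY},
        timeout=10,
    ).json()

    leads = []
    for place in search.get("results", []):
        # Text Search does not return websites, so ask Place Details for it.
        details = requests.get(
            PLACES_DETAILS,
            params={
                "place_id": place["place_id"],
                "fields": "name,formatted_phone_number,website",
                "key": API_KEY,
            },
            timeout=10,
        ).json().get("result", {})

        leads.append({
            "placeId": place["place_id"],
            "name": place.get("name"),
            "address": place.get("formatted_address"),
            "phone": details.get("formatted_phone_number"),
            "website": details.get("website"),           # None => no site
            "hasWebsite": bool(details.get("website")),   # no-site leads are higher priority
        })
    return leads
```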
Agent 2 (Website Analyzer) implements a dual-scoring system. First, it takes screenshots of existing business websites and applies a custom-built heuristic algorithm that evaluates technical metrics such as mobile responsiveness, load time, broken links, and structural completeness to produce an initial score (0-100). It then passes the website URL and screenshots to an AI Vision model, which conducts a professional design audit covering visual design quality, UX issues, content effectiveness, and trust signals. That qualitative assessment can adjust the initial heuristic score by ±30 points, producing a final composite score. Only businesses scoring below 50 points pass through to the outreach phase, ensuring we target businesses that genuinely need improvement and filter out those with already-excellent websites.
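The score combination itself is simple. A minimal sketch of the bounded-adjustment logic, using the threshold and cap described above:

```python
OUTREACH_THRESHOLD = 50   # only leads below this continue to Agent 3
MAX_AI_ADJUSTMENT = 30    # the vision pass can shift the score by at most ±30

def composite_score(heuristic_score: float, ai_adjustment: float) -> float:
    """Combine the heuristic score with a bounded AI vision adjustment."""
    bounded = max(-MAX_AI_ADJUSTMENT, min(MAX_AI_ADJUSTMENT, ai_adjustment))
    return max(0.0, min(100.0, heuristic_score + bounded))

def needs_outreach(heuristic_score: float, ai_adjustment: float) -> bool:
    """True when the composite score falls below the outreach threshold."""
    return composite_score(heuristic_score, ai_adjustment) < OUTREACH_THRESHOLD
```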
Agent 3 (Mock Generator) creates personalized website mockups and outreach emails for the filtered leads. For businesses with existing websites that scored below the 50-point threshold, it uses the screenshot and detailed analysis to generate an improved version that directly addresses the identified issues. For high-priority businesses without websites, it creates a professional first website tailored to their industry. It also drafts personalized cold emails that reference specific observations about the business and include the designer's portfolio and the mock website prototype, making each outreach feel custom rather than templated.
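The email drafting step is essentially one structured LLM call. A hedged sketch using the OpenAI Python SDK; the model name, prompt wording, and parameters are placeholders, not our exact implementation:

```python
from openai import OpenAI  # assumes the openai SDK and OPENAI_API_KEY are configured

client = OpenAI()

def draft_cold_email(lead: dict, top_issues: list[str],
                     portfolio_url: str, mock_url: str) -> str:
    """Draft a personalized cold email that cites concrete findings for this lead."""
    prompt = (
        f"Write a short, friendly cold email to {lead['name']}.\n"
        f"Reference these specific issues with their current web presence: {', '.join(top_issues)}.\n"
        f"Link the designer's portfolio ({portfolio_url}) and the redesigned mock ({mock_url}).\n"
        "Keep it under 150 words and end with a low-pressure call to action."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```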
This fixes the problem by compressing hours of manual work into minutes of automated processing, allowing designers to focus on reviewing and sending the best opportunities rather than spending time on research and initial mockup creation. A designer can now generate 50 qualified leads with personalized pitches in the time it used to take to manually create 2-3.
How do all the pieces fit together? Does your frontend make requests to your backend? Where does your database fit in?
Our stack has three main parts that work together:
Frontend (Next.js)
The UI lets a designer set location/industry, review leads, see scores, read the drafted email, and open the auto-generated mock site. It shows live lead status (“new → contacted → responded → converted”) and basic search/filter.
Backend (Firebase Cloud Functions + Firestore)
Agent 1 – Lead Finder runs as a callable Cloud Function. It uses the Google Places API to find local businesses, enriches each lead (name, address, phone, website if available), de-duplicates by placeId, and writes leads into Firestore.
A Firestore trigger prepares leads for the next steps (marking whether they have a site or not).
Firestore acts as the database for users, preferences, and lead records.
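A minimal sketch of how Agent 1's results land in Firestore, keyed by placeId so re-runs upsert instead of duplicating (the collection name is illustrative):

```python
import firebase_admin
from firebase_admin import firestore

firebase_admin.initialize_app()  # uses default credentials in the Functions runtime
db = firestore.client()

def save_leads(leads: list[dict]) -> None:
    """Write leads keyed by placeId so re-running the finder never duplicates them."""
    batch = db.batch()  # note: a Firestore batch holds at most 500 writes
    for lead in leads:
        doc_ref = db.collection("leads").document(lead["placeId"])  # doc ID = placeId
        batch.set(doc_ref, lead, merge=True)  # idempotent upsert
    batch.commit()
```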
Analysis & Generation tools (Python + APIs)
Agent 2 – Website Analyzer is a Python toolchain. It fetches a site, checks technical items (status codes, robots/sitemap, internal links, broken assets), optionally renders with Playwright to capture CSS/DOM signals, and assigns a heuristic score (0–100) across technical, UX, SEO, credibility, and content. Then an AI Vision pass (OpenAI) looks at a screenshot and adjusts the score by up to ±30 based on visual quality and UX issues. Only leads with a final score < 50 continue.
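A trimmed-down sketch of the heuristic pass; the specific checks and point deductions shown here are illustrative, not the full rubric:

```python
import requests
from urllib.parse import urljoin, urlparse
from bs4 import BeautifulSoup

def _reachable(url: str) -> bool:
    """HEAD a URL and report whether it answers with a non-error status."""
    try:
        return requests.head(url, timeout=5, allow_redirects=True).status_code < 400
    except requests.RequestException:
        return False

def heuristic_score(url: str) -> tuple[float, list[str]]:
    """Rough technical score (0-100) plus the list of issues that cost points."""
    issues: list[str] = []
    score = 100.0
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        return 0.0, ["site unreachable"]
    if resp.status_code >= 400:
        return 10.0, [f"homepage returned HTTP {resp.status_code}"]

    base = f"{urlparse(url).scheme}://{urlparse(url).netloc}"
    if not _reachable(urljoin(base, "/robots.txt")):
        score -= 5
        issues.append("missing robots.txt")
    if not _reachable(urljoin(base, "/sitemap.xml")):
        score -= 5
        issues.append("missing sitemap.xml")

    soup = BeautifulSoup(resp.text, "html.parser")
    if not soup.find("meta", attrs={"name": "viewport"}):
        score -= 15
        issues.append("no viewport meta tag (likely not mobile friendly)")
    if not soup.title or not soup.title.string:
        score -= 10
        issues.append("missing <title>")

    # Spot-check the first few internal links for broken targets.
    internal = [urljoin(base, a["href"]) for a in soup.find_all("a", href=True)]
    broken = sum(1 for link in internal[:10] if link.startswith(base) and not _reachable(link))
    if broken:
        score -= min(20, 5 * broken)
        issues.append(f"{broken} broken internal link(s)")

    return max(0.0, score), issues
```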
Agent 3 – Mock Generator uses an LLM to create a simple, tailored static mock (plus a personalized cold email). The mock is deployed automatically (e.g., via Vercel’s deployment API), and the deployment URL and drafted email are saved back to Firestore.
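Deployment is one authenticated POST. A rough sketch of the Vercel call (the token env var is assumed, and the exact field names should be checked against the current deployments API docs):

```python
import os
import requests

VERCEL_TOKEN = os.environ["VERCEL_TOKEN"]  # assumed env var name

def deploy_mock(lead_id: str, html: str) -> str:
    """Push a single-page mock to Vercel and return the public deployment URL."""
    resp = requests.post(
        "https://api.vercel.com/v13/deployments",
        headers={"Authorization": f"Bearer {VERCEL_TOKEN}"},
        json={
            "name": f"mock-{lead_id}",                    # illustrative project naming
            "files": [{"file": "index.html", "data": html}],
            "projectSettings": {"framework": None},        # plain static deployment
        },
        timeout=30,
    )
    resp.raise_for_status()
    return "https://" + resp.json()["url"]
```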
Data flow: Frontend → calls Cloud Function (Agent 1) → stores leads in Firestore → Analyzer/Generator update each lead with scores, issues, mock URL, and drafted email → Frontend reads Firestore to show results and lets the designer send or edit.
What did you struggle with? How did you overcome it?
APIs and rate limits. Google Places and model calls can throttle. We added small batch sizes, retries, and basic caching to avoid hitting limits.
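The pattern is simple exponential backoff plus small batches, roughly:

```python
import random
import time

def with_retries(call, max_attempts: int = 4, base_delay: float = 1.0):
    """Retry a flaky API call with exponential backoff and a little jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))

def in_batches(items, size: int = 5):
    """Yield small batches so we stay well under per-minute rate limits."""
    for i in range(0, len(items), size):
        yield items[i : i + size]
```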
Headless rendering in the cloud. Playwright can be finicky on serverless. We made that step optional with graceful fallback to plain HTML fetch + heuristics so the pipeline still runs.
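The fallback is just a try/except around the Playwright render, along these lines:

```python
import requests

def capture_page(url: str) -> dict:
    """Try a full Playwright render; fall back to a plain HTML fetch if it fails."""
    try:
        from playwright.sync_api import sync_playwright
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto(url, timeout=15000)
            result = {
                "html": page.content(),
                "screenshot": page.screenshot(full_page=True),
                "rendered": True,
            }
            browser.close()
            return result
    except Exception:
        # Serverless environments without browser binaries land here.
        resp = requests.get(url, timeout=10)
        return {"html": resp.text, "screenshot": None, "rendered": False}
```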
Scoring that feels fair. Pure heuristics missed visual problems; pure AI opinion felt noisy. We combined them: a weighted technical score plus a bounded AI adjustment (±30) to stay stable but responsive.
Messy lead data. Some Places entries lack websites or list Facebook pages. We added checks, simple normalization, and a “no-site” path that prioritizes first-website outreach.
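The normalization boils down to a small helper like this (the social-host list is illustrative):

```python
from urllib.parse import urlparse

SOCIAL_HOSTS = ("facebook.com", "instagram.com", "linktr.ee")  # treated as "no real site"

def normalize_website(raw: str | None) -> str | None:
    """Return a cleaned website URL, or None when the lead has no usable site."""
    if not raw:
        return None
    url = raw.strip()
    if not url.startswith(("http://", "https://")):
        url = "https://" + url
    host = urlparse(url).netloc.lower().removeprefix("www.")
    if any(host == s or host.endswith("." + s) for s in SOCIAL_HOSTS):
        return None  # social-only presence goes down the "first website" path
    return url
```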
Secrets & config. Multiple APIs (Places, OpenAI, Vercel) meant careful env handling. We moved keys into environment variables and local config files and avoided hard-coding in the UI.
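A fail-fast config loader along these lines keeps a missing key from surfacing mid-pipeline (the variable names are the ones assumed in the sketches above):

```python
import os

REQUIRED_KEYS = ["GOOGLE_PLACES_API_KEY", "OPENAI_API_KEY", "VERCEL_TOKEN"]  # assumed names

def load_config() -> dict:
    """Fail fast with a clear message if any secret is missing, instead of at call time."""
    missing = [k for k in REQUIRED_KEYS if not os.environ.get(k)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {k: os.environ[k] for k in REQUIRED_KEYS}
```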
What did you learn? What did you accomplish?
End-to-end pipeline from discovery → analysis → personalized mock + email, all in one flow.
Dual-stage scoring that is both explainable (heuristics) and design-aware (AI Vision).
Hands-off mock generation with automatic deploy links stored on each lead.
Usable triage UI where a designer can sort/filter, tweak emails, and pick the best leads fast.
Learning wins: practical Firebase Functions + Firestore patterns, headless scraping basics, prompt design for structured outputs, and deployment automation with Vercel.
What's next?
Improve the analyzer algorithm for more accurate website scoring.
Automatically book meetings with clients who respond positively.
Monetize the product by offering paid plans and packaging it as a SaaS for design agencies.
Add multi-user support so teams within an agency can share and manage leads collaboratively.
Build an analytics dashboard to track lead generation, email success, and conversion metrics.
Create a mobile interface so designers can access and manage leads on the go.