I built this because I needed it. A tool for tracking every grant, fellowship, job, and opportunity you are chasing, and actually finding new ones worth chasing. Designed, built, and shipped alone.
The Applications board. Kanban view with status columns, priority indicators, and deadline tracking.
I was running Sane (an AI mental health startup) while simultaneously tracking fellowship applications, accelerator programmes, and grant opportunities. Every piece of that lived in a different place. Spreadsheet for grants, notes app for deadlines, email for follow-ups, and nothing connecting them.
The discovery side was worse. Finding relevant opportunities meant manually checking ten different websites, subscribing to newsletters, and hoping nothing slipped by. There was no way to bring everything together in one place.
I built Dash to close that gap. And because I needed it myself, I had an unusually clear sense of what it needed to do.
Problem Definition
The target user is anyone juggling five or more applications at once: founders applying to accelerators, researchers chasing fellowships, professionals in an active job search. In Nigeria alone, the volume of grant and fellowship programmes targeting young entrepreneurs runs into the hundreds annually. Globally, early-stage founders apply to an average of 8-12 accelerators per funding round.
The workaround before Dash was a spreadsheet, and the spreadsheet was failing. A spreadsheet tracks what you have applied to. It does not surface what you should apply to. It does not remind you when things go stale. It does not learn from your outcomes.
Problem Framing
I did not run formal user interviews. I was the user. Running Sane while tracking 15 concurrent applications gave me direct access to the problem. Three things kept failing me: I lost track of what I had applied to, I missed opportunities I should have caught, and I never learned from what worked or did not. Those three problems became the three core parts of the product. Before writing any code, I spent a day mapping out exactly what the product needed to do: the features, the user stories, how the backend should be structured. Everything that followed came from that document.
Complete user journey: from sign-up through onboarding, across all sections, to outcome and learning.
The Product
Dash was built around three problems I kept running into. The first was keeping track: a kanban board with six statuses, drag-and-drop, and a score that flags when something has gone quiet for too long. The second was finding new opportunities: an engine that runs every morning and surfaces relevant grants, fellowships, programmes, and jobs. The third was learning from outcomes: reflections you write when something gets accepted or rejected, which build up into a picture of what is actually working.
Two more things shipped after the core was working. Materials lets you save your CV, cover letter snippets, and useful links, then attach them directly to specific applications. Schedule pulls in your Google Calendar so your deadlines sit alongside everything else in your week. Deadlines surface in context, not in a separate tab you have to remember to check.
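To make the calendar pull concrete: it reduces to one authenticated request against Google's standard Calendar v3 events endpoint. A minimal sketch, assuming the user's OAuth access token is already in hand; the function name and limits are illustrative, and error handling and paging are omitted:

```ts
// Pull the user's upcoming events so deadlines sit alongside them.
// Illustrative sketch; assumes a valid OAuth access token.
async function upcomingEvents(accessToken: string) {
  const params = new URLSearchParams({
    timeMin: new Date().toISOString(), // only future events
    singleEvents: "true",              // expand recurring events
    orderBy: "startTime",
    maxResults: "20",
  });
  const res = await fetch(
    `https://www.googleapis.com/calendar/v3/calendars/primary/events?${params}`,
    { headers: { Authorization: `Bearer ${accessToken}` } },
  );
  const { items } = await res.json();
  return items; // merged into the Schedule view next to application deadlines
}
```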
The Home page. Active pipeline, upcoming deadlines, stale items, and pipeline health score at a glance.
Feature Highlight
Paste any opportunity description, job posting, or grant brief into Quick Add. Claude Haiku reads it and fills every field: name, organisation, bucket, deadline, amount, location, tags. What would take three minutes of manual entry takes three seconds.
The biggest friction in any tracking tool is the moment of adding something new. If that moment is slow or annoying, people stop doing it. Removing that friction was the priority.
The Add application modal with AI Quick Add at the bottom. Paste any description and Claude fills every field.
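Stripped of product code, the extraction is a single model call that returns structured JSON. A minimal sketch of what that call might look like, assuming an ANTHROPIC_API_KEY environment variable; the interface mirrors the fields named above but is illustrative, not the shipped schema:

```ts
// Illustrative shape for the extracted fields.
interface ParsedOpportunity {
  name: string;
  organisation: string;
  bucket: string;          // e.g. grant, fellowship, job
  deadline: string | null; // ISO date if one is found
  amount: string | null;
  location: string | null;
  tags: string[];
}

async function parseOpportunity(pasted: string): Promise<ParsedOpportunity> {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": Deno.env.get("ANTHROPIC_API_KEY")!,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-haiku-20240307",
      max_tokens: 1024,
      messages: [{
        role: "user",
        content:
          "Extract the application details from the text below. Reply with " +
          "JSON only, using the keys name, organisation, bucket, deadline, " +
          "amount, location, tags.\n\n" + pasted,
      }],
    }),
  });
  const data = await res.json();
  // The model replies with a single JSON object in its first text block.
  return JSON.parse(data.content[0].text);
}
```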
Technical Design
Getting discovery right took the most work. Early on I was passing short search snippets to Claude and the results were poor: too vague, often irrelevant. The difference came when I switched to fetching the actual page content. Claude now reads the full text of each source and pulls out structured opportunity data. The quality jumped.
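The change itself is small: fetch the page, strip it to text, hand the full text to the model. A sketch of that step, with deliberately crude, illustrative trimming (the real cleaning is more involved than a regex pass):

```ts
// Fetch a source page and reduce it to plain text for the model.
async function fetchSourceText(url: string): Promise<string> {
  const res = await fetch(url);
  const html = await res.text();
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, " ") // drop scripts
    .replace(/<style[\s\S]*?<\/style>/gi, " ")   // drop styles
    .replace(/<[^>]+>/g, " ")                    // drop remaining tags
    .replace(/\s+/g, " ")
    .trim();
  return text.slice(0, 20_000); // full text, capped to keep the prompt affordable
}
```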
The dismissal feedback loop: every opportunity a user dismisses gets remembered. The next time Claude scores similar opportunities, it ranks them lower. The feed gets more accurate over time without the user having to do anything extra.
The Discover page. Opportunities scored by relevance, filterable by bucket, deadline, and size.
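A sketch of how that dismissal memory can feed the next scoring run, assuming a dismissals table keyed by user; the table and column names are illustrative:

```ts
import type { SupabaseClient } from "npm:@supabase/supabase-js@2";

// Turn recent dismissals into negative examples for the scoring prompt,
// so Claude ranks similar opportunities lower next time.
async function dismissedContext(db: SupabaseClient, userId: string) {
  const { data } = await db
    .from("dismissals")
    .select("title, bucket")
    .eq("user_id", userId)
    .order("created_at", { ascending: false })
    .limit(50);

  if (!data?.length) return "";
  return "The user previously dismissed these opportunities; rank " +
    "similar ones lower:\n" +
    data.map((d) => `- ${d.title} (${d.bucket})`).join("\n");
}
```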
Information Architecture
Eight sections, nothing buried: Home, Applications, Projects, Discover, Materials, Schedule, Archive, and Settings, all reachable in one tap. The connections matter too: when you find something in Discover you can add it to Applications in one click. When an application gets accepted you can turn it into a Project. Materials attach to whichever applications they belong to. Everything resolved ends up in Archive, where the patterns live.
Information architecture: eight sections with deliberate cross-connections.
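The Discover-to-Applications hand-off is, at bottom, a field mapping. An illustrative sketch (the types and the starting status are mine; the six real statuses are not named in this piece):

```ts
// Illustrative shape of a discovered opportunity.
interface DiscoveredOpportunity {
  title: string;
  organisation: string;
  bucket: string;
  deadline: string | null;
  url: string;
}

// One click in Discover produces a new card on the Applications board.
function toApplication(opp: DiscoveredOpportunity) {
  return {
    name: opp.title,
    organisation: opp.organisation,
    bucket: opp.bucket,
    deadline: opp.deadline,
    links: [opp.url],
    status: "saved", // illustrative first column of the board
    createdAt: new Date().toISOString(),
  };
}
```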
Product Thinking
Dash started as a personal tool. Then Sane needed tracking too. Rather than creating a second account, I added workspace labels. Personal and Sane sit in the same pipeline, filterable at the top. One user, multiple life contexts, zero friction switching between them.
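The mechanics are deliberately thin: one label per application, one filter on the board. A sketch with an illustrative shape:

```ts
// One workspace label per application, filterable at the top of the board.
type Workspace = "personal" | "sane";

interface Application {
  id: string;
  name: string;
  workspace: Workspace;
}

function visible(items: Application[], filter: Workspace | "all") {
  return filter === "all" ? items : items.filter((a) => a.workspace === filter);
}
```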
Retention Design
Keeping people coming back was part of the design from the start, not something added later. The momentum score, a simple measure of how recently your pipeline items were updated, creates a quiet pull to stay on top of things. The celebration system fires confetti when an application is accepted, then immediately asks: what made this work? Those reflections build up over time into something useful, a record of what actually works for you.
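The exact formula is not the point, but one plausible shape for a momentum score, an average of per-item recency with linear decay, looks like this (the window and the 0-100 scale are illustrative, not the shipped formula):

```ts
// Average recency of updates across the pipeline, decayed over two weeks.
function momentumScore(lastUpdated: Date[], now = new Date()): number {
  if (lastUpdated.length === 0) return 0;
  const perItem = lastUpdated.map((d) => {
    const days = (now.getTime() - d.getTime()) / 86_400_000;
    return Math.max(0, 1 - days / 14); // 1.0 today, 0 after 14 quiet days
  });
  const avg = perItem.reduce((a, b) => a + b, 0) / perItem.length;
  return Math.round(avg * 100); // 0-100
}
```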
The weekly briefing email only sends when there is something meaningful to report. The subject line tells you exactly what is inside: "Monday briefing: 2 deadlines, 3 need attention, 4 new."
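A sketch of the send guard and the subject format quoted above, with illustrative names:

```ts
// Counts drawn from the user's pipeline for the week.
interface BriefingCounts {
  deadlines: number;      // due this week
  needAttention: number;  // stale items
  fresh: number;          // newly discovered
}

// Returns null when there is nothing meaningful to report: no email sent.
function briefingSubject(c: BriefingCounts): string | null {
  if (c.deadlines + c.needAttention + c.fresh === 0) return null;
  return `Monday briefing: ${c.deadlines} deadlines, ` +
    `${c.needAttention} need attention, ${c.fresh} new`;
}
```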
Architecture
The backend is seven independent Supabase Edge Functions on Deno runtime. Each one has a single job: one for discovery, one for the weekly digest email, one for AI Quick Add, one for the discovery cron, one for job fetching, one for account deletion, and one for grant discovery. Each can be deployed, updated, and debugged without touching the others.
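Each function shares the same skeleton: a Deno.serve handler with one job. An illustrative sketch, reusing the Quick Add extraction from the sketch earlier; the other functions follow the same shape:

```ts
// One edge function, one job: here, AI Quick Add.
Deno.serve(async (req: Request) => {
  if (req.method !== "POST") {
    return new Response("Method not allowed", { status: 405 });
  }
  const { text } = await req.json();
  const parsed = await parseOpportunity(text); // see the Quick Add sketch
  return new Response(JSON.stringify(parsed), {
    headers: { "content-type": "application/json" },
  });
});
```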
Discovery is the most expensive and complex operation. Running it in isolation means a Tavily timeout or a slow source does not block the rest of the application. Different services run on different schedules: discovery runs daily at 6am UTC, the weekly briefing runs Monday at 8am, AI Quick Add runs on demand. The product plan came first. The technical structure followed from it.
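Expressed as cron pairs (UTC), the split looks like this; how they are registered is an implementation detail:

```ts
// Schedule split per service, matching the times described above.
const schedules = {
  discovery: "0 6 * * *",      // daily at 06:00 UTC
  weeklyBriefing: "0 8 * * 1", // Mondays at 08:00 UTC
  // AI Quick Add has no entry: it runs on demand, per request.
};
```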
UX Decisions
These are the decisions that shaped how the product feels to use, and why each one was made.
Success Metrics
These were defined before building started. Each one points to a real behaviour, not just a number going up.
Scope Decisions
The PRD has an explicit non-goals section for every major feature. Every feature that did not ship was a deliberate choice. The question was never just whether to build something, but whether building it now was worth the focus it would take away from what mattered most.
UX Depth
The application detail panel is where most of the actual work happens. The design principle was progressive disclosure: show what you need most of the time, hide what you need occasionally. The panel opens with name, status, checklist, and links. Everything else (entity, bucket, priority, deadline, amount, tags) sits behind a "More fields" toggle. This keeps the panel readable when you just want to tick something off, without hiding anything permanently.
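A sketch of that split, assuming a React panel; the component and field names are illustrative:

```tsx
import { useState } from "react";

// Progressive disclosure: frequent fields always visible, the rest behind
// one toggle. Illustrative sketch, not the shipped component.
function DetailPanel({ app }: { app: { name: string; status: string } }) {
  const [showMore, setShowMore] = useState(false);
  return (
    <section>
      {/* Always visible: name, status, checklist, links */}
      <h2>{app.name}</h2>
      <p>{app.status}</p>

      {/* entity, bucket, priority, deadline, amount, tags live below */}
      <button onClick={() => setShowMore((v) => !v)}>More fields</button>
      {showMore && <div className="more-fields">{/* occasional fields */}</div>}
    </section>
  );
}
```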
The pencil icon for name editing is a deliberate pattern repeated across Applications, Projects, and checklist items. A field that looks editable all the time feels unstable when reading. The pencil signals intent without inviting accidents. One interaction model, used consistently, means users learn it once and it works everywhere.
Edge Cases
What Comes Next
Materials and Schedule both shipped after the first version. Two things are still to come: a proper mobile experience, and trend data showing how your acceptance rate changes over time, which categories convert best, and whether certain months are better than others. Both need time and real usage before they make sense to build.
What I Learned