What Funders Actually Look For: 7 Criteria Reviewers Score You On
Reviewers don't read your proposal the way you write it. You write to argue. They read to score — against a rubric, in 30 minutes, in a stack of fifty other proposals, often a week before the panel meets. Understanding what they're looking for, in the order they're looking for it, is how you go from “technically thorough but didn't make the cut” to “funded.”
This is the reviewer's seven-criterion rubric, distilled from federal NOFO scoring sheets and foundation review guidelines. Some funders publish these criteria explicitly; many don't, but they apply the same logic anyway.
The reviewer's mindset
Federal peer-review panels read proposals against a published scoring rubric. Each section earns points; the total is your score. Foundation program officers usually don't have a formal rubric, but they absolutely have a mental one — and it converges on the same seven questions.
A few things to know about how reviewers actually work. They read fast. They read tired. They read with a list of priorities given to them by the funder, and they're looking for whether your proposal hits each priority. They are not looking for clever writing or originality — they're looking for evidence that you understood the assignment and can deliver. If you've ever felt your beautifully crafted proposal didn't land, the writing was probably fine. The match against the rubric wasn't.
Criterion 1: Mission alignment with the funder
Reviewer asks: Is this proposal in the lane this funder actually funds?
Often the highest-weighted criterion, especially for foundation grants. Reviewers don't want to fund great work that's outside the funder's focus — they have to justify every award internally, and a strong-but-off-mission grant is harder to justify than a weaker but on-mission one.
How to win on this criterion: use the funder's own language. Quote two or three phrases from their published priorities, verbatim, in your proposal — not as filler, but as honest descriptions of what your work does. If you can't honestly find that overlap, the proposal probably shouldn't be sent.
For a deeper version of this discipline, see our post on vetting a foundation funder before you write the LOI. Mission alignment is a research problem, not a writing problem.
Criterion 2: Evidence of need
Reviewer asks: Is this a real, specific problem — not just a topic area I care about?
Two things make a need section persuasive: a specific, sourced statistic that establishes scale, and a particular example or case that establishes texture. Generic problem statements (“literacy is a critical issue in our community”) score low. Specific, evidence-grounded ones (“43% of K–5 students in Philadelphia public schools read below grade level according to NAEP 2024 — at Hancock Elementary, where we run our after-school program, that figure is 67%”) score high.
Watch out for one trap: don't describe the need exclusively at the macro level (“nationally, X million Americans face Y”). Reviewers want to see that your organization understands the problem in your specific community. National statistics establish the topic; local data establishes that you actually work there.
Criterion 3: Soundness of approach
Reviewer asks: Will this approach actually work, and how do you know?
Funders want to see that your methodology is grounded in evidence — either a published evidence base (“our reading intervention is built on the Orton-Gillingham approach, validated in [citation]”) or your own program data (“participants in our pilot gained an average of 1.2 grade levels per year over four years of measurement”).
If your approach is novel, that's fine, but the proposal has to acknowledge it and explain why innovation is justified given what's already been tried. Federal funders especially are suspicious of proposed work that ignores the existing literature.
A logic model diagram is a fast way to communicate soundness: it forces you to articulate how inputs lead to activities, activities to outputs, outputs to outcomes. Reviewers can read a logic model in 60 seconds and instantly see whether the cause-and-effect chain holds.
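If it helps to see the chain written out before you draw the diagram, here's a minimal sketch in Python. The program details are hypothetical; the four-stage structure is the point:

```python
# A hypothetical after-school reading program, expressed as a logic model.
logic_model = {
    "inputs":     ["2 FTE reading specialists", "$150K grant", "classroom space"],
    "activities": ["90-minute tutoring sessions, 3x/week, 30 weeks"],
    "outputs":    ["120 students served", "10,800 tutoring hours delivered"],
    "outcomes":   ["+1.2 grade levels in reading per year (pre/post assessment)"],
    "impact":     ["more students reading at grade level by grade 5"],
}

# Print the chain the way a reviewer reads it: top to bottom, in 60 seconds.
for stage, items in logic_model.items():
    print(f"{stage:>10}: " + "; ".join(items))
```

If any link in that chain is hard to fill in honestly, the reviewer will notice the same gap.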
Criterion 4: Organizational capacity
Reviewer asks: Can this organization actually pull this off?
Capacity is established with concrete evidence: years operating, scale of current program (“serves 4,800 students annually, $4.2M operating budget”), prior outcomes (“71% of last year's participants reached grade-level reading by year-end”), key personnel and their experience, and infrastructure (financial systems, audited financials, board composition).
For federal grants, capacity also explicitly means readiness for OMB Uniform Guidance compliance: a SAM.gov registration in good standing, a clean Single Audit (or never having had one), a working drawdown procedure, and a budget person who understands MTDC and indirect cost rules.
Reviewers often weight capacity higher than applicants expect. A great idea from a thinly staffed org with no relevant track record loses to a competent idea from an org that can clearly execute.
Criterion 5: Measurable, attributable outcomes
Reviewer asks: If you receive this funding, how will we know it worked?
Strong proposals distinguish outputs, outcomes, and impact, and they specify how each will be measured. Vague outcomes (“improve participant well-being”) score badly. Precise, attributable ones (“a 25-percentage-point increase in program-completion rates, measured against a matched comparison cohort using state administrative data”) score well.
Watch the attribution problem: many outcomes can't be cleanly attributed to your program because other things in the participant's life are also happening. Reviewers are sympathetic to this if you acknowledge it — for instance, by proposing a comparison group, a pre/post measurement, or a third-party evaluation. They're impatient with proposals that imply causation without acknowledging the design challenge.
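To see why a comparison group answers the attribution question, here's a minimal sketch with hypothetical completion rates. The difference-in-differences arithmetic separates your program's effect from whatever was happening to everyone:

```python
# Hypothetical pre/post completion rates for participants vs. a matched
# comparison cohort drawn from state administrative data.
program_pre, program_post = 0.48, 0.79        # participants
comparison_pre, comparison_post = 0.45, 0.51  # matched non-participants

program_change = program_post - program_pre           # +31 points
comparison_change = comparison_post - comparison_pre  # +6 points (background trend)

# Difference-in-differences: the change you can plausibly attribute
# to the program, net of the background trend.
attributable = program_change - comparison_change

print(f"Attributable effect: {attributable * 100:.0f} percentage points")  # 25
```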
Criterion 6: Budget realism and allowability
Reviewer asks: Are these numbers honest, allowable, and tied to the work?
Budgets fail in three ways in particular. First, line items without justification: a $25,000 line for “materials and supplies” with no calculation makes the reviewer suspicious. Show the math: how many participants, how much per participant. Second, a mismatch between the proposed activities and what the budget actually pays for: if the methodology requires three FTE but the budget funds 1.5, the budget gets scored down regardless of how good the prose is.
Third, federal proposals especially: get the indirect cost math right. The de minimis rate (10% historically, 15% under the 2024 Uniform Guidance revision) applies to the MTDC base, not to the full direct-cost total. Apply your NICRA instead if you have one. And watch for unallowable costs (lobbying, alcohol, fundraising, most entertainment) sneaking into the budget; these can disqualify the entire application.
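Here's what that math looks like in practice: a minimal sketch with hypothetical budget figures and a simplified exclusion list (2 CFR 200.1 has the full set):

```python
# Illustrative MTDC / indirect cost calculation with hypothetical figures.
direct_costs = {
    "personnel": 180_000,
    "fringe": 54_000,
    "travel": 8_000,
    "supplies": 12_000,
    "equipment": 30_000,            # excluded from MTDC
    "participant_support": 20_000,  # excluded from MTDC
    "subaward": 60_000,             # only the first $25,000 counts
}

MTDC_EXCLUDED = {"equipment", "participant_support", "subaward"}
SUBAWARD_CAP = 25_000  # raised to $50,000 by the 2024 UG revision
DE_MINIMIS = 0.10      # 0.15 under the 2024 revision; use your NICRA if you have one

mtdc = sum(v for k, v in direct_costs.items() if k not in MTDC_EXCLUDED)
mtdc += min(direct_costs["subaward"], SUBAWARD_CAP)

indirect = mtdc * DE_MINIMIS
total_direct = sum(direct_costs.values())

print(f"Total direct costs: ${total_direct:,}")                # $364,000
print(f"MTDC base:          ${mtdc:,}")                        # $279,000
print(f"Indirect:           ${indirect:,.0f}")                 # $27,900
print(f"Total request:      ${total_direct + indirect:,.0f}")  # $391,900
```

The common mistake is applying the rate to the full $364,000 (which gives $36,400) instead of the $279,000 MTDC base; the difference is enough to get a budget flagged.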
Our free Indirect Cost Calculator does the MTDC math automatically.
Criterion 7: Sustainability and replicability
Reviewer asks: What happens after this grant ends, and could this approach work somewhere else?
Funders ask about sustainability for two reasons: they don't want to fund work that collapses on day 366, and they want to see that the recipient organization is thinking past the current grant. Strong sustainability plans name specific revenue sources and a timeline, including non-foundation income (individual donors, government contracts, fee-for-service, earned revenue) so the organization isn't single-funder-dependent.
Replicability is a federal-grant criterion more often than a foundation one. Federal funders sometimes want to see that successful approaches can be exported to other communities. If replicability matters to your funder, address it: what would another organization need to copy your model, and have you documented the operating manuals, training, or technical assistance to make that possible?
For most foundation grants, sustainability matters more than replicability. Read the rubric carefully and prioritize accordingly.
How the rubric is actually applied
Federal NOFOs publish point allocations: criterion 1 worth 20 points, criterion 2 worth 15, etc. Read the published allocation before you write. If sustainability is worth 5 points and organizational capacity is worth 25, your effort should reflect that.
Foundations rarely publish point values, but they've decided which criteria matter most based on their portfolio. A foundation that funds early-stage work weighs innovation and theory of change heavily; one that funds proven models weighs evidence base and capacity heavily. Look at three of their recently funded grants and you can usually infer the weighting.
The biggest failure mode in proposal writing is treating all criteria as equal-weight. Reviewers don't. Match your prose, your time, and your strongest evidence to whatever the funder actually cares about most.
How to score your own draft
Before you submit, run your draft against this rubric yourself, or give it to a colleague who hasn't worked on it. For each of the seven criteria, ask:
- Where in the proposal is this criterion addressed? Can I point to a paragraph?
- Is the evidence specific (numbers, citations, named examples)?
- Does it match the funder's rubric — their priorities and their language?
- What would a skeptical reviewer push back on?
- What's the weakest of the seven, and is it weak enough to drop the application below the funding line?
Most applicants self-score generously. The exercise is more useful when a colleague does it — they don't have your rationalizations.
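To make the exercise mechanical, here's a minimal sketch using the hypothetical point allocation from the federal example above. All weights and scores are illustrative:

```python
# Rubric weights from the funder's published allocation (or your best
# inference for a foundation), and 0-5 self-scores from a colleague's cold read.
weights = {   # should sum to 100
    "mission_alignment": 20, "need": 15, "approach": 20, "capacity": 25,
    "outcomes": 10, "budget": 5, "sustainability": 5,
}
scores = {    # 0 = absent, 5 = specific, evidenced, in the funder's language
    "mission_alignment": 4, "need": 5, "approach": 3, "capacity": 2,
    "outcomes": 4, "budget": 3, "sustainability": 4,
}

earned = {c: weights[c] * scores[c] / 5 for c in weights}
biggest_loss = max(weights, key=lambda c: weights[c] - earned[c])

print(f"Weighted self-score: {sum(earned.values()):.0f}/100")  # 68/100
print(f"Most points lost on: {biggest_loss}")                  # capacity
```

Note what the weighting does: in this example the mediocre capacity section costs 15 points while the so-so budget section costs only 2, which is exactly why treating all seven criteria as equal-weight fails.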
Score every draft 0–100 before a funder sees it
GrantMind's AI Reviewer scores your full proposal 0–100 against the funder's published rubric and tells you exactly what to fix — which sections need stronger evidence, which objectives need tightening, which budget lines need justification — before you submit. It's the colleague review step, on demand.
Try the AI Reviewer free

The bottom line
Reviewers don't read for clever writing. They read for fit with the rubric. The seven criteria above are the rubric, named and weighted slightly differently across funders but consistent in shape: alignment, need, approach, capacity, outcomes, budget, sustainability.
The proposals that get funded aren't the most polished — they're the ones that hit each rubric item with concrete evidence in the funder's own language. Write to the rubric.