96% of SR&ED claims get approved. The ones that don’t usually have the same problem: the technical work was real, but the claim described it like a product launch instead of a scientific investigation. Here’s how to avoid that.
Your claim hits a non-technical desk reviewer first. They’ve read thousands of these — they’re pattern-matching for red flags, not evaluating your architecture. If the story hangs together and the numbers check out, it passes. If something looks off, it goes to a Research Technology Advisor who digs into the details. About 20% of claims get that level of scrutiny. The goal: write clearly enough that yours isn’t one of them. (The CRA published the actual review manual RTAs use to evaluate claims — worth reading before you file.)
The #1 mistake: writing about your product instead of your problem
This is the most common reason claims get flagged, and it’s easy to see why. You spend all day pitching your product to customers and investors. When you sit down to write your T661, that same language comes out.
Your T661 shouldn’t read like something you’d put in a pitch deck. The CRA is looking for a description of the technical problem you were trying to solve — not the feature that came out the other end.
What you naturally write: “We developed an AI-powered recommendation engine that personalizes content for each user, improving engagement by 40%.”
What the CRA wants to read: “It was uncertain whether collaborative filtering could generate relevant recommendations with fewer than 50 user interactions per session, given that standard matrix factorization methods require significantly larger datasets to overcome sparsity.”
The underlying work is identical. But the second version frames it as an open technical question that required experimentation — which is what gets approved without follow-up.
The mental shift: take whatever you built and reframe it as the technical problem you had to solve to build it.
The three-part formula for strong uncertainty
Line 242 on the T661 — the technological uncertainty — is the foundation of your entire claim. You get 350 words. Make them count.
Every strong uncertainty statement has three parts: a specific metric you were trying to achieve, a constraint that made it genuinely hard, and an explanation of why existing approaches couldn’t solve it.
- A specific metric. Not “high performance” — “10,000 events per second.” Not “low latency” — “sub-50 millisecond response time.” Numbers give the reviewer something concrete.
- A constraint that creates real tension. This is what separates R&D from engineering. You weren’t just hitting a target — you were hitting it under conditions that made it genuinely hard. “Without introducing duplicate or lost updates during failure recovery.” “While maintaining data consistency across distributed nodes.” The constraint is what makes the problem non-trivial.
- Why existing approaches don’t work. This is the part most claims skip — and it’s what reviewers are looking for when they flag something. “Since the coordination overhead required for consistent processing conflicted with the latency constraints.” This makes the reader think: you can’t Google your way out of this. You have to build and test.
Here’s the progression:
Weak: “We faced many challenges building the new software platform.” — You’d be surprised how many claims are basically this. It says nothing. The reviewer has ten follow-up questions.
Better but not enough: “It was uncertain whether the system could handle real-time processing requirements.” — Gestures at something, but too vague. What processing? What’s “real-time” here? What made it hard?
Strong: “It was uncertain whether the system could maintain throughput above 10,000 events per second with sub-50ms latency, without introducing duplicate or lost updates during failure recovery, since the coordination required for consistent processing added overhead that conflicted with the latency constraints.” — One sentence. Metric, constraint, why it’s hard. The reviewer reads this and thinks: yeah, you’d need to experiment to figure that out.
Line 244: show the experiments, not just the outcome
You get 700 words for systematic investigation. This is where you prove the scientific method happened.
- Start with what you hypothesized. “We believed event-sourced architecture with CQRS separation could decouple read and write paths, resolving the throughput-latency conflict.”
- Describe what you actually tested. Be specific. “We implemented a prototype on Apache Kafka, varying partition counts (4, 8, 16, 32) and replication factors to measure latency under sustained load.”
- Say what failed. This is often the most SR&ED-eligible part — and the part everyone forgets to include. “Exactly-once delivery added 15ms overhead per transaction, pushing latency past the 50ms threshold at peak load.”
- Show the iteration. “We redesigned the coordination layer to use custom optimistic concurrency instead of Kafka’s built-in exactly-once semantics, reducing overhead by ~60%.”
It should read like a technical journal entry, not a changelog. Every paragraph should make the reviewer more confident this was real experimentation.
“Routine” vs. SR&ED: where the line actually is
Not all technical work qualifies, even if it’s complex. The dividing line is uncertainty — could you have figured this out through standard practice or publicly available information?
Not SR&ED: Using React to build a standard web app. Known framework, known patterns, no uncertainty.
Potentially SR&ED: Making that same framework handle 100x expected load when nobody’s published a solution for your specific constraints. The knowledge to solve your problem doesn’t exist yet. You have to build and test.
Could you have found the answer on Stack Overflow, in the docs, or by hiring someone who’d done it before? Probably routine. Did you try multiple approaches that failed before finding one that worked, with no published solution for your specific combination of constraints? That’s SR&ED territory.
Six things that trigger audits
- Vague uncertainty. “We faced challenges” is not an uncertainty statement. Use the three-part formula. If your uncertainty section is five lines long, it’s almost certainly too vague.
- Overclaimed hours. Claiming 80% of an engineer’s time on SR&ED when they also handled support, client calls, and bug fixes? The numbers won’t add up under financial review.
- No contemporaneous docs. Writing narratives from memory at year-end is getting riskier every year. The CRA wants evidence that existed before the claim was written.
- Sudden claim jumps. $50K to $300K year-over-year without an obvious reason (new hires, new projects) will get a look.
- R&D mixed with routine work. If you can’t clearly separate the experimental work from standard development in your project tracking, reviewers will question the whole thing.
- Weak contractor docs. Claiming contract expenditures without a written agreement that defines the SR&ED scope, IP ownership, and confirms the contractor is Canadian? The CRA will ask for it. Have it ready. (See contract expenditure requirements on the CRA site.)
Build defensibility into the process, not the filing
The strongest claims aren’t written at tax time. They’re assembled from evidence your dev team is already generating — commit histories, PR reviews, sprint tickets, architecture docs. If your engineering process leaves traces, your SR&ED claim isn’t just easier to write. It’s defensible.
- Document in real time. Brief notes in tickets, meaningful commit messages, meeting notes from technical discussions. This is the single highest-impact change you can make. Everything else gets easier once this habit exists.
- Use technical language from the start. If your Jira tickets already describe work as “investigating collaborative filtering for sparse interaction matrices” instead of “build recommendation engine,” your SR&ED narrative practically writes itself.
- Track time at the project level. Specific buckets, weekly: R&D Project A, Project B, bug fixing, admin. Not “30% of salary on R&D.” Actual project-level tracking. This is the most scrutinized financial element of any claim.
- Get your technical team involved early. The people who did the work need to be able to explain it. If a CRA reviewer calls and your developer is hearing the T661 framing for the first time, it shows.
- Talk to the CRA before you file. Pre-claim consultations were discontinued as of January 1, 2026, but are expected to return in some form. In the meantime, you can request a call with an SR&ED specialist, ask for a presentation tailored to your business, or attend a CRA SR&ED webinar and ask questions directly. Free, and underused.
The CRA offers a Self-Assessment and Learning Tool (SALT) for pressure-testing eligibility before you file. We cloned it and made it more user-friendly — try it on our site.
Keep the financials clean. Salaries, contractor costs, and materials should tie back to your GL without gymnastics. If the CRA asks for pay stubs or invoices, you should be able to produce them in five minutes.
Three questions to ask before you file
- Do your projects have contemporaneous docs? Not docs you could create — docs that already exist because the work generated them. Git commits, tickets, design docs, meeting notes. If not, start now.
- Could a non-technical person read your project descriptions and see that experimentation happened? If your tracking reads “Build recommendation engine” — a desk reviewer sees routine development. “Investigate cold-start recommendation quality under sparse interaction conditions” — they see R&D.
- Do you have project-level time tracking? Weekly. Specific. Not percentage estimates. If you’re planning to file an SR&ED claim and you don’t have this, set it up today.
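To make “weekly, specific, not percentage estimates” concrete, here’s a minimal sketch of time tracking as data. Every name and number below is hypothetical — the point is the shape: real weekly entries per bucket, from which an SR&ED share is derived, rather than a year-end “30% of salary” guess.

```python
from collections import defaultdict

# Hypothetical weekly time log: (week, person, bucket, hours).
# Bucket names are illustrative -- use labels that match your projects.
entries = [
    ("2025-W14", "dev_a", "SRED: Project A", 22.0),
    ("2025-W14", "dev_a", "Bug fixing", 10.0),
    ("2025-W14", "dev_a", "Admin", 8.0),
    ("2025-W15", "dev_a", "SRED: Project A", 18.0),
    ("2025-W15", "dev_a", "SRED: Project B", 12.0),
    ("2025-W15", "dev_a", "Bug fixing", 10.0),
]

def hours_by_bucket(entries):
    """Sum logged hours per bucket across all weeks."""
    totals = defaultdict(float)
    for _week, _person, bucket, hours in entries:
        totals[bucket] += hours
    return dict(totals)

def sred_share(entries):
    """Fraction of total hours in SR&ED buckets -- computed from
    actual weekly entries, not estimated after the fact."""
    totals = hours_by_bucket(entries)
    total = sum(totals.values())
    sred = sum(h for b, h in totals.items() if b.startswith("SRED"))
    return sred / total if total else 0.0
```

A spreadsheet with the same columns works just as well; what matters is that the entries exist week by week, so the claimed percentage can be reproduced on demand during a financial review.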
Most claims that fail do so for preventable reasons — vague narratives, missing docs, sloppy financials. Frame your work as a technical investigation. Document as you go. The claim takes care of itself.