When AI Writes Your Session Notes: What ABA Leaders Need to Know
Apr 02, 2026
AI-generated session note summaries are becoming increasingly common in ABA software, helping agencies save time and streamline documentation. But with that convenience comes real compliance risk. In this article, we’ll walk through one way AI is being used in ABA session notes today, what we've uncovered in actual audits, and what agency leaders need to do to keep their documentation accurate, ethical, and defensible.
AI is already embedded in many of the systems ABA agencies rely on every day.
Most modern EHR platforms now offer AI-generated narrative summaries based on session data. The promise is simple: faster documentation, cleaner notes, and less burden on staff. And yes, it can deliver on that.
But when AI is generating part of the clinical record, it’s no longer just a convenience feature. It becomes part of your compliance risk profile.
The guidelines from the Artificial Intelligence Consortium for Applied Behavior Analysis (AIC-ABA) are clear: AI must be human-led. That means outputs should be reviewed, edited, and approved by a qualified professional, who remains responsible for the final content.
AI doesn’t own the note.
You do.
Audits That Raised Red Flags
We've audited multiple ABA agencies that had fully adopted AI-generated session summaries.
But they didn’t implement any meaningful safeguards.
They had:
- No “human in the middle” review process
- No indication in the notes that AI generated the summaries
- No client or caregiver consent for AI use in medical records
- Full reliance on the exact prompt recommended by their EHR vendor
From the agencies' perspective, they were doing what they thought they should be doing. They trusted the system and assumed the vendors they were working with had done the hard work. But that assumption created real exposure.
Because from a compliance standpoint, “We followed the vendor’s instructions” isn’t a defense. Responsibility for documentation doesn’t transfer to the software company. It stays with the provider organization. Let's look at what those audits revealed about the summaries the AI was producing.
What the AI Was Actually Producing
When we reviewed the notes, we saw consistent patterns of risk. These weren’t isolated issues. They showed up across multiple notes and providers.
Here are four categories of problems, each illustrated with real examples.
1. AI Making Clinical Judgments Instead of Reporting Facts
“Social skills were also enhanced…”
“The client reached a high accuracy level of between 75% to 78.6% across different targets.”
“Activities such as sorting items into categories and requesting help facilitated communication development…”
“These observations highlight the areas requiring ongoing attention and intervention to further reduce the frequency of these behaviors…”
Each of these statements sounds appropriate on the surface. But they all share the same issue: they interpret outcomes rather than document observable events.
The AI is summarizing what it thinks the data means instead of sticking to what actually occurred.
In several cases, the underlying data didn’t clearly support these conclusions. The language made progress sound more definitive than the data justified.
That’s a problem because session notes are supposed to be objective and defensible. When AI introduces interpretation without verification, it increases the risk of overstating progress or misrepresenting clinical outcomes.
Without a human reviewing the content, these judgment calls go straight into the record.
2. Assertions Without a Clear Basis in the Record
“He successfully said ‘thank you’ to the aide and his sister with prompts.”
“CLIENT demonstrated progress in several programs, including Manners, Transitions, and Identifying Letter Sounds.”
“The focus remained on addressing these behaviors consistently without any additional unwanted behaviors being observed.”
“The BCBA particularly focused on the programs where attention and task engagement were parts of the key skills being targeted.”
These statements introduce another type of risk: unsupported assertions.
In these examples, the AI included details that either weren't documented or couldn't be traced back to specific data points from those encounters.
The sibling reference is especially telling. There was no indication from the provider who delivered the intervention that a sibling was present. The AI inserted that detail to make the narrative feel complete.
The others make broad claims about progress or focus without tying them to measurable data reported by an actual provider.
This kind of language can easily slip through because it sounds natural. But if it can’t be substantiated, it weakens the credibility of the note and creates exposure during audits or payer reviews.
3. Inclusion of Non-Billable or Irrelevant Activities
“The BCBA placed orders for all necessary items to ensure sessions can be conducted with fidelity and consistency.”
While this activity may well have occurred, that’s not the problem.
The problem is that it doesn’t belong in a session note tied to billable services, at least not in the way it’s presented.
AI doesn’t understand payer rules or documentation standards. It pulls in activities that sound relevant without distinguishing whether they support the billed service.
The result is documentation that blends clinical care with supervision, planning, or administrative work.
That creates risk because it muddies the connection between what was documented and what was billed.
4. Fabricated or Speculative Content
“During their discussion, the BCBA and BT likely focused on refining intervention strategies…”
“Although specific program modifications by the BCBA aren't provided, typical adjustments might involve tailoring interventions…”
“While specifics of the client's response to treatment weren't provided, the supervision session likely included reviewing progress…”
“The BCBA and BT likely discussed challenges faced and emphasized data collection accuracy…”
This is the most serious category.
Here, the AI is no longer summarizing. It’s generating hypothetical content. It’s filling in missing information with what usually happens in similar situations.
Words like “likely,” “might,” and “typically” are clear indicators that the content is not based on actual documented events. This isn’t just inaccurate. It’s fabricated.
And if that language becomes part of a signed session note, it can significantly undermine the defensibility of the record.
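The hedging words called out above lend themselves to a simple automated screen. As a hypothetical sketch (not a feature of any EHR, and no substitute for human review), a reviewer's tooling could flag drafts containing speculative wording before a note is signed; the word list and the `flag_speculative_language` helper below are illustrative assumptions:

```python
import re

# Hedging terms that often signal speculation rather than documented
# observation (illustrative list; tune it to your own note corpus).
SPECULATIVE_TERMS = ["likely", "might", "typically", "probably", "presumably"]

def flag_speculative_language(note_text: str) -> list[str]:
    """Return the speculative terms found in a session note draft."""
    found = []
    for term in SPECULATIVE_TERMS:
        # Whole-word, case-insensitive match, so "mighty" is not flagged.
        if re.search(rf"\b{term}\b", note_text, flags=re.IGNORECASE):
            found.append(term)
    return found

draft = ("During their discussion, the BCBA and BT likely focused on "
         "refining intervention strategies.")
print(flag_speculative_language(draft))  # prints ['likely']
```

A hit doesn't prove the content is fabricated, only that a qualified human needs to verify it against the session data before signing.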
The Prompt Problem Behind the Scenes
One of the most important findings from this audit wasn’t just what the AI produced.
It was why.
The agencies were using the exact prompts recommended by their respective EHR vendors. They assumed the prompts were sufficient for compliant documentation.
They weren't.
The prompts didn’t restrict inference, didn’t require alignment with source data, and didn’t prevent speculative language.
So the AI tools did what they were designed to do: they generated polished, complete-sounding narratives.
They just weren’t always accurate.
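Tighter prompts won’t replace human review, but they can reduce how much speculation reaches the draft in the first place. The fragment below is a hypothetical example of the kinds of constraints an agency might add to a vendor’s default prompt; it is not any vendor’s actual wording:

```text
Summarize only events explicitly documented in the session data below.
- Report observable events; do not infer, interpret, or evaluate progress.
- Do not add people, activities, or details absent from the source data.
- If information is missing, write "not documented" instead of guessing.
- Avoid speculative wording such as "likely," "might," or "typically."
```

Even with constraints like these, every generated note still needs a qualified professional to verify it against the source data before signing.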
And again, the responsibility for that doesn’t sit with the vendor.
It sits with each ABA agency.
What the Guidelines Make Clear
The AIC-ABA guidelines emphasize several key expectations:
- AI must be human-led, with clear accountability for outputs.
- Use must be transparent, with appropriate disclosure and consent.
- Outputs must be monitored for accuracy, safety, and bias.
- Organizations must be prepared to pause or discontinue use if risks are identified.
In these cases, none of those safeguards were in place. That’s what turned a helpful feature into a compliance liability.
A Better Way to Stay in Control
AI isn’t the problem. Unstructured documentation is.
If your team doesn’t have a clear standard for what belongs in a session note, AI will fill in the gaps for you. And as we’ve seen, it won’t always do that in a compliant way.
That’s why we created our ABA Session Note Frameworks.
They define what should and shouldn’t be included in a session note, making it easier to review AI-generated content and catch issues before they become risks.
If you’re using AI, this gives you a way to stay in control of your documentation.
👉 Learn more here: https://www.abacompliance.com/aba-session-notes
Final Thought
AI is going to be part of ABA documentation moving forward.
The question is whether your agency is managing it or trusting it.
Because no matter what your vendor says, and no matter how the note is generated…
If it’s in the note, your organization is responsible for it.
