In rapidly evolving fields like artificial intelligence, the precise meaning of claim terms can make or break a patent. As practitioners know, clarity in terminology ensures both robust protection and smooth prosecution. However, issues of clarity often crystallize only long after drafting. Below, we explore how rapidly evolving technologies such as AI can lead to greater reliance on external sources for claim interpretation, even when those sources are not established as predating the application.
Under the MPEP, claim terms enjoy a “plain meaning” presumption: they are given their ordinary and customary meaning in the art. That presumption can be rebutted (1) by clearly defining a term in the specification, or (2) by unmistakably disavowing claim scope elsewhere in the specification. During prosecution, claims are interpreted under the “broadest reasonable interpretation” (BRI) standard, which is designed to ensure that the examiner can identify any over-breadth and require narrowing amendments if necessary.
Ordinarily, the starting point for claim interpretation is the effective filing date of the application. Under MPEP § 2173.01’s Editor Note, the “relevant date is the ‘effective filing date’ of the claimed invention” for first-to-file applications, ensuring that no later developments in the art can alter the original meaning. In practice, this means that an applicant may not introduce new definitions or disavowals after filing; any lexicographic or disclaimer-based limitations must reside in the specification as filed.
Yet in some cases, the Board has not hesitated to consult authoritative technical sources, even those potentially published or updated after filing, to ensure that examiners apply meanings consistent with the state of the art. In PTAB Appeal No. 2024-000253 (Sleep Number Corp., Application No. 16/233,260), involving claims to automatically detecting snoring via an AI model that fuses acoustic and pressure sensors in a smart bed, the examiner had misconstrued “classifier” and “vote” in ways at odds with machine-learning norms.
Claim 1 is reproduced below:
A bed system comprising:
    a first bed comprising:
        a first mattress;
        a first pressure sensor in communication with the first mattress to sense pressure applied to the first mattress;
        a first acoustic sensor placed to sense acoustics from a user on the first mattress;
        a first controller in data communication with the first pressure sensor and in data communication with the first acoustic sensor, the first controller configured to:
            receive, from the first pressure sensor, first pressure readings indicative of the sensed pressure of the first mattress;
            receive, from the first acoustic sensor, first acoustic readings indicative of the sensed acoustics from the user; and
            transmit the first pressure readings and the first acoustic readings to a remote server such that the remote server is able to generate one or more snore classifiers that, when run by a controller on incoming pressure readings and on incoming acoustic readings, provide a snore vote;
    a second bed comprising:
        a second mattress;
        a second pressure sensor in communication with the second mattress to sense pressure applied to the second mattress;
        a second acoustic sensor placed to sense acoustics from a user on the second mattress; and
        a second controller in data communication with the second pressure sensor and in data communication with the second acoustic sensor, the second controller configured to:
            receive the one or more snore classifiers;
            run the received snore classifiers on second pressure readings and on second acoustic readings in order to collect one or more snore votes from the running snore classifiers;
            determine, from the one or more snore votes, a snore state of a user on the second bed;
            responsive to the determined snore state, operate the bed system according to the determined snore state.
On appeal, the PTAB first turned to the specification to understand the terms at issue, but it also looked to external references such as the C3 AI Glossary’s treatment of classification models, recognizing that “[c]lassification models predict a class label … as understood by those of ordinary skill in the field of data science.” The Board likewise cited Google’s Machine Learning Glossary (https://developers.google.com/machine-learning/glossary), relying on a definition last updated in 2025, which was not established as predating the application. From these external online sources, the Board concluded that a “snore vote” is properly viewed as a machine-learning output (i.e., the classifier’s label), not a raw sensor measurement, and thus not disclosed by the cited prior art.
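To make the Board’s distinction concrete, the following is a minimal, hypothetical Python sketch (with invented thresholds, function names, and data; it is not Sleep Number’s actual system) of how classifiers produce “votes” — class labels — that a controller then aggregates into a snore state, as opposed to raw sensor readings:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Readings:
    pressure: float   # hypothetical normalized pressure reading (raw sensor data)
    acoustic: float   # hypothetical normalized acoustic energy (raw sensor data)

def pressure_classifier(r: Readings) -> str:
    """Votes 'snore' when the pressure signal exceeds an invented threshold."""
    return "snore" if r.pressure > 0.6 else "no_snore"

def acoustic_classifier(r: Readings) -> str:
    """Votes 'snore' when the acoustic signal exceeds an invented threshold."""
    return "snore" if r.acoustic > 0.5 else "no_snore"

def snore_state(readings: Readings) -> str:
    # Each classifier contributes one vote: a class label, not a sensor value.
    # The controller determines the snore state from the collected votes.
    votes = [pressure_classifier(readings), acoustic_classifier(readings)]
    return Counter(votes).most_common(1)[0][0]

state = snore_state(Readings(pressure=0.7, acoustic=0.8))
print(state)  # both classifiers vote "snore", so the determined state is "snore"
```

The point of the sketch is the type distinction the Board drew: the “vote” is the classifier’s output label (`"snore"` or `"no_snore"`), whereas the prior art’s sensor measurements correspond only to the `Readings` inputs.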
This decision is no snoozer: it shows that, particularly in cutting-edge technologies, the PTO will sometimes step beyond the four corners of the filing to align claim construction with a proper technical understanding.