
PIP Claimants Warned Over AI Use as Mistakes Raise DWP Suspicion and Risk £800 Loss

The Department for Work and Pensions (DWP) is becoming increasingly suspicious of Personal Independence Payment (PIP) claims as errors linked to artificial intelligence (AI) tools appear more often in form submissions. Recent warnings highlight a “dangerous” dependence on AI platforms such as ChatGPT, which may be undermining claimants' chances of receiving up to £800 in support.

PIP entitlement is determined through an assessment covering two main components: daily living and mobility. Each component contains activities scored based on the claimant’s ability and the level of assistance needed. Points are added together in each section, with thresholds set to decide between standard and enhanced rates.

Specifically, scoring between 8 and 11 points in the daily living category grants the standard rate, while 12 or more points lead to the enhanced rate. The same applies to the mobility component. Accurate and personalized descriptions of how conditions affect daily life and movement are critical during this process.
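The thresholds above amount to a simple rule. As an illustration only (the function name is hypothetical and this is not a DWP tool, just a sketch of the published 8–11 and 12+ point bands, which apply equally to either component):

```python
def pip_component_rate(points: int) -> str:
    """Map a PIP component score to its award rate.

    Illustrative sketch of the published thresholds:
    8-11 points -> standard rate, 12 or more -> enhanced rate,
    below 8 -> no award for that component.
    """
    if points >= 12:
        return "enhanced"
    if points >= 8:
        return "standard"
    return "none"
```

For example, a claimant scoring 10 points on daily living and 12 on mobility would receive the standard daily living rate and the enhanced mobility rate.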



Michelle Cardno, founder of Fightback4Justice, a charity offering benefits advice, explained to The i newspaper that many claimants fail to review or personalize AI-generated text. “The issue is that people do not seem to be checking over the info AI has written and ensuring it is correct, and that the information is not in their own words. That is what DWP want to hear – their own words,” she said.

Dylan Thomas, a Somerset-based pastor who supports claimants during appeal tribunals, pointed out that AI outputs can be outdated or factually inaccurate, even on simple matters. Meanwhile, benefits expert Noah Bear Nyle, active on TikTok and YouTube, warned that AI tends to produce irrelevant or vague responses. He said, “There’s a danger you miss out key, crucial information about your condition, about the impacts of the condition [on you], because you have this vague AI waffle – this kind of word potpourri."

Such vague or generic AI-generated responses raise concerns among assessors, who may doubt the authenticity of the evidence presented, increasing the risk of a claim being flagged for suspected fraud. This could ultimately result in claimants losing financial support they are entitled to.

Claimants are therefore urged to exercise caution when using AI tools, ensuring that their PIP applications contain accurate, personalized, and honest descriptions of their conditions in their own words to avoid jeopardizing their claims.
