(TNS) – Shameless fraudsters and Minnesota officials struggling to stem social services abuses have little in common: They’re both relying on artificial intelligence to reach their goals.

ChatGPT has helped criminals create fake client notes to prop up sham companies that earned millions in Medicaid reimbursements for non-existent services. State leaders are betting on machine learning to analyze thousands of provider claims in hopes of identifying billing that deviates from normal patterns.

The situation – using AI to detect AI-enabled fraud – reflects the very modern challenges facing Minnesota authorities as they race to stop a snowballing fraud scandal. They are using a range of tools to stop the schemes that trained the national spotlight on the state and helped derail Governor Tim Walz’s bid for a third term.


Prosecutors have estimated that the total amount of fraud in 14 high-risk Medicaid programs over seven years could exceed $9 billion, though Walz called that number speculative. Fifteen people have been charged so far with fraud in housing and autism programs, and more charges may emerge as state and federal investigations into social services continue.

As the crisis unfolds, some experts are applauding Minnesota’s embrace of AI to counter its more sinister applications.

“It’s an old adage, just fight fire with fire,” said Jordan Burris, head of public sector at Socure, which provides AI-powered fraud prevention software to companies and government agencies.

But others cautioned that AI-powered algorithms could falsely flag legitimate claims as dubious, ensnaring providers who have done nothing wrong.

“They have really impressive results,” said Mona Birjandi, an economist and data analytics director at the New York law firm Outten & Golden. “But they are not without unintended consequences.”

When two men from Philadelphia traveled to Minneapolis to take advantage of the Midwestern state’s generous selection of social services, they turned to artificial intelligence to get the job done.

Court records show that Anthony Jefferson and Lester Brown allegedly used ChatGPT to create fake emails and notes discussing clients enrolled in their Housing Stabilization Services company. The fabricated documents helped the men steal nearly $3.5 million from the assistance program for services they claimed to have provided to 230 Medicaid beneficiaries.

The so-called “fraud tourists” pleaded guilty to wire fraud in February, according to the Justice Department, in what were the first charges involving AI used to further a fraud scheme in Minnesota.

It’s possible more are on the way.

Criminal Apprehension Bureau Superintendent Drew Evans said at a February 26 news conference that his agency has seen an increase in people using artificial intelligence to commit financial crimes, including using AI-generated voices to impersonate others to steal money.

Prosecutors pursuing Medicaid fraud cases in Minnesota have also recently received progress notes that appear to be AI-generated and were submitted by mental health providers, according to a spokesperson for the state attorney general’s office.

Socure’s Burris said AI has intensified attacks by bad actors, who no longer require advanced training in data science. Almost anyone can use technologies capable of creating official-looking emails, or rapidly collecting reams of personal information from the internet for use in fraudulent applications for government benefits.

As for defeating modern-day fraudsters?

“The only way to get ahead of the AI scams emerging these days is to use AI at scale to combat them,” Burris said.

Part of a broader anti-fraud package introduced by Walz on Feb. 26 would pump more resources toward using machine learning to identify suspicious billing earlier. If successful, that proposal would build on the Department of Human Services’ work with Optum, a subsidiary of UnitedHealth Group, which the state selected to conduct AI-powered reviews of claims.

Minnesota IT Services Deputy Commissioner John Eichten said Optum used a “collection of analytics” to parse provider claims for patterns that deviate from policy. These range from providers claiming to see dozens of clients per day to repeatedly billing for the same hours.
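The state has not published its screening logic, but the two patterns Eichten describes – implausible daily client counts and duplicate billing for the same hours – can be illustrated with a minimal rule-based sketch. All field names and the threshold below are hypothetical, and real claim screening would involve far richer models; this only shows the kind of check being described.

```python
from collections import defaultdict

# Hypothetical claim records: (provider_id, date, client_id, start_hour, end_hour)
claims = [
    ("P1", "2025-03-01", "C1", 9, 10),
    ("P1", "2025-03-01", "C2", 9, 10),   # same hours billed twice by P1
    ("P2", "2025-03-01", "C3", 13, 14),
]

MAX_CLIENTS_PER_DAY = 24  # illustrative threshold, not the state's actual cutoff


def flag_claims(claims, max_clients=MAX_CLIENTS_PER_DAY):
    """Flag provider-days with an implausible client count or with
    the same hours billed more than once. Flags are leads for human
    review, not findings of fraud."""
    per_day = defaultdict(list)
    for provider, date, client, start, end in claims:
        per_day[(provider, date)].append((client, start, end))

    flags = []
    for (provider, date), entries in per_day.items():
        clients = {client for client, _, _ in entries}
        if len(clients) > max_clients:
            flags.append((provider, date, "implausible daily client count"))
        hours = [(start, end) for _, start, end in entries]
        if len(hours) != len(set(hours)):
            flags.append((provider, date, "duplicate billing for same hours"))
    return flags
```

Consistent with Eichten's caveat, a flag here only surfaces a provider-day for investigators; it does not itself establish wrongdoing.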

Optum found widespread billing irregularities in the Autism Intervention Program, one of 14 Medicaid-funded services that officials have said are fraud-prone. But Eichten said the flagged claims are not necessarily fraudulent.

AI is a useful screening tool, he said, but it’s up to the social service agency to dig deeper into those initial results to identify wrongdoing. (A spokesperson for the state Department of Human Services said investigators do not use AI in the post-payment review process.)

And there are potential pitfalls in using algorithms. Patients have accused insurance giant UnitedHealthcare of using a flawed AI program to deny coverage to Medicare patients after acute care. The insurer has described the allegations as “baseless”.

Birjandi, the economist, said improperly trained AI-powered fraud detection algorithms risk accidentally singling out legitimate providers, who should have a clear process for appealing against any initial determination made by the algorithm. Eichten said the state has worked with Optum to continually improve analytics to prevent the broad-brush flagging described by Birjandi.

“We want analyses that lead us on the right path to investigating what actually represents fraud, waste or abuse,” he said.

Eichten said fighting fraud requires careful, deliberate adoption, for good ends, of some of the tools that bad actors exploit. He said officials cannot reject AI tools simply because they are not perfect.

“If we do this, we are giving a significant competitive advantage to hackers and fraudsters.”

©2026 Minnesota Star Tribune, distributed by Tribune Content Agency, LLC
