azoai.com

How Can Banks Beat Deepfakes? A New AI Privacy Framework Offers the Answer

As AI-generated deepfakes escalate financial fraud risks, a new global study unveils a privacy-focused model to help banks detect threats, safeguard data, and maintain trust.

Research: Managing deepfakes with artificial intelligence: Introducing the business privacy calculus. Image Credit: FAMILY STOCK / Shutterstock

A new study published in the Journal of Business Research explores how businesses can combat the rising threat of AI-generated deepfakes, which manipulate audio, video, or images to impersonate individuals or fabricate scenarios.

Interviews with bank managers revealed that they often view deepfake incidents as direct challenges to their professional judgment, triggering resistance to centralized AI governance unless it is paired with clear operational benefits.

The researchers developed a novel business privacy calculus model based on interviews with 27 bank managers from three global banks operating in nine countries: the US, the UK, Sri Lanka, Hong Kong, Australia, the UAE, Canada, Malaysia, and India.

The framework is grounded in psychological reactance theory, which examines how threats to managerial decision-making autonomy (such as deepfake-driven fraud) influence organizational risk assessments, and in privacy calculus theory. It highlights how data integrity measures can mitigate deepfake risks while preserving operational efficiency.

The study focuses on the banking sector due to its economic significance and vulnerability to deepfake-enabled fraud, such as forged loan applications or identity theft. Researchers argue that businesses must adopt a proactive "privacy calculus" approach—weighing the costs of privacy investments (e.g., AI detection tools) against the risks of inaction.

Unlike consumer-focused models, this framework emphasizes organizational privacy trade-offs, such as reputational damage, regulatory penalties, and operational disruptions.
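The "privacy calculus" described above is, at its core, a cost-benefit comparison. A minimal sketch of that logic, using entirely hypothetical figures (the study does not publish a formula or numbers; function names and values here are illustrative only):

```python
# Illustrative sketch, NOT from the study: the business privacy calculus
# framed as an expected-cost comparison. All figures are hypothetical.

def expected_loss(probability: float, impact: float) -> float:
    """Expected loss from deepfake fraud over a given period."""
    return probability * impact

def privacy_calculus(investment_cost: float,
                     residual_risk: float,
                     inaction_risk: float) -> bool:
    """True if investing in detection beats accepting the risk of inaction."""
    return investment_cost + residual_risk < inaction_risk

# Hypothetical numbers for one bank:
inaction = expected_loss(0.30, 5_000_000)   # 30% chance of a $5M incident
residual = expected_loss(0.05, 5_000_000)   # detection cuts the chance to 5%
invest = privacy_calculus(500_000, residual, inaction)
print("Invest in AI detection:", invest)    # here, ~$750k total beats ~$1.5M
```

In practice the "risk of inaction" side would also fold in the harder-to-quantify costs the study names: reputational damage, regulatory penalties, and operational disruption.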

Data integrity measures go beyond detecting fakes: the paper stresses rebuilding trust ecosystems through transparent AI audits and customer-facing verification dashboards.

To operationalize this model, the authors recommend AI-enabled measures such as real-time verification, audit trails, and employee training protocols. They stress that collaboration between governments, tech firms, and industries is critical to standardize deepfake detection and response strategies.

“Deepfakes erode trust—the foundation of banking and many other sectors,” said the study’s lead author. “Our framework helps businesses not only react to threats but build systemic resilience by aligning AI governance with organizational privacy priorities.”

The findings come as regulators worldwide grapple with AI ethics and transparency mandates. For businesses, the message is clear: addressing deepfake risks requires integrating technical safeguards, workforce education, and cross-sector partnerships to safeguard stakeholders and maintain trust.
