2026 Financial AI Outlook: Emphasis on Enhanced Controllability, Not Just Intelligence

01/26/2026

Source | 01CAI

Over the past year, the adoption of generative AI, typified by large models, in the financial sector has been notably slower than the broader public anticipated.

Measured by technology investment and the number of pilot projects, financial institutions are among the most committed adopters of generative AI. Yet the integration of generative AI into production systems, especially pivotal decision-making processes, remains limited. A 2025 survey by the Hong Kong Institute for Monetary and Financial Research found that while over 70% of surveyed financial institutions had launched generative AI deployments or pilot projects, fewer than 20% had successfully incorporated it into their core business operations.

Many financial institutions thus find themselves in a paradox: investment keeps escalating while implementation proceeds cautiously. This reflects the true state of financial AI's transition into practical business operations.

The challenge does not stem from whether the technology is sufficiently advanced. Rather, it is precisely after the rapid enhancement of model capabilities that financial institutions have become acutely aware that when AI is integrated into the financial system, the risk shifts from "inability" to potential "overreach." A plausible-sounding response or an unstable judgment may be merely a user-experience issue on a content platform, but in financial operations it can directly affect asset safety and risk exposure, potentially triggering a chain reaction across the business chain.

Against this backdrop, the industry has started to reassess the value boundaries of financial AI. The launch of Tianjing 3.0 by FinVolution Group exemplifies this shift: its technical descriptions no longer focus solely on model scale or generative capability but instead address long-standing issues of controllability, trustworthiness, and responsibility boundaries in financial scenarios. From an industry perspective, this signifies a shift in the competitive focus of financial AI from capability demonstration to boundary management.

01

Practical Frontiers of Financial AI: Transitioning from Capability Expansion to Structural Reconstruction

In global financial AI practice in 2025, the industry's focus is shifting. Rather than asking how far model capabilities have improved, more institutions now ask whether those capabilities can be applied consistently and stably in real-world business operations over the long term.

Initially, financial institutions primarily perceived large models as efficiency tools, mainly utilized in relatively low-risk scenarios such as customer service responses, text generation, and internal information retrieval. The common characteristic of these scenarios is that even if the model's output is biased, it can be rectified through manual review, rule validation, or subsequent processes without directly impacting core issues such as capital security and responsibility attribution.

However, as applications deepen, the limitations of a single general-purpose large model in financial scenarios have become evident. Financial operations rely heavily on certainty, auditability, and stable output, whereas large models, whose core mechanism is probabilistic generation, often struggle to handle critical decision-making independently without external constraints. Many institutions have found that while large models have made great strides in understanding and expression, they still fall short of the financial system's stringent requirements for accuracy, stability, and traceability.

Consequently, modular architectures have emerged as the more practical technical choice for many financial institutions. Modularity does not mean introducing more complex technology; it means reducing uncertainty through a clear division of labor. In plain terms, institutions no longer expect a single model to solve every problem and instead adopt a "team-based" approach: large models handle complex information understanding, problem sorting, and cognitive support, while judgments requiring stability, explainability, and accountability are entrusted to more focused, controllable small models. This collaborative pattern of large and small models is becoming an increasingly accepted path for financial AI implementation.

Some leading financial institutions have already made significant progress in this direction. For instance, FinVolution Group adopted a collaborative architecture of "large model perception + small model decision-making" in advancing its financial AI applications. In this setup, the "Tianjing" large model primarily handles task understanding and process breakdown, identifying complex user intentions, organizing business information, and scheduling processing steps. Meanwhile, critical judgment tasks such as risk assessment and rule validation are executed by thousands of more stable and vertically specialized small models. This design merges the general capabilities of large models with the vertical expertise of small models, enhancing efficiency while maintaining a stable and robust baseline.
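
To make this division of labor concrete, here is a minimal sketch of the "large model for perception, small models for decision" pattern. All names and logic here (perceive_tasks, the scorecard thresholds, the task registry) are illustrative assumptions, not FinVolution's actual interfaces:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Task:
    kind: str          # e.g. "risk_score", "rule_check"
    payload: dict      # structured inputs extracted by the large model

def perceive_tasks(user_request: str) -> List[Task]:
    """Stand-in for the large model: understand the request and break it
    into structured sub-tasks. A real system would call an LLM here and
    validate its output against a schema before proceeding."""
    # Hard-coded for illustration; a real system parses the request.
    return [
        Task("risk_score", {"income": 52_000, "debt_ratio": 0.34}),
        Task("rule_check", {"age": 34, "region": "permitted"}),
    ]

def risk_score_model(payload: dict) -> dict:
    """Small, auditable specialist: a fixed scorecard, not a generator."""
    score = 600 + int(200 * (1 - payload["debt_ratio"]))
    return {"score": score, "approve": score >= 680}

def rule_check_model(payload: dict) -> dict:
    """Deterministic rule validation with a traceable outcome."""
    ok = payload["age"] >= 18 and payload["region"] == "permitted"
    return {"passed": ok}

SPECIALISTS: Dict[str, Callable[[dict], dict]] = {
    "risk_score": risk_score_model,
    "rule_check": rule_check_model,
}

def handle(user_request: str) -> List[dict]:
    results = []
    for task in perceive_tasks(user_request):
        specialist = SPECIALISTS[task.kind]   # unknown task kinds fail loudly
        results.append({"task": task.kind, "result": specialist(task.payload)})
    return results

print(handle("Please assess this loan application."))
```

The key design choice is that the generative component only produces structured tasks, while every judgment that must be stable and auditable runs through a deterministic, separately testable specialist.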

Meanwhile, the training and optimization logic of financial AI are also evolving. Traditional models often optimize around a single objective function, but in financial scenarios, a singular perspective can easily amplify systemic biases. In recent years, collective reinforcement learning and multi-source feedback mechanisms have been introduced into the training and optimization systems of financial AI. The core purpose is not to pursue more aggressive strategies but to mitigate the extremity of model decisions by incorporating multiple agents and diverse sources of experience. In practice, this includes human feedback from different business roles and strategic balance formed through collaborative gaming among multiple models.
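
A minimal sketch of how such multi-source feedback might be blended, assuming hypothetical evaluator roles, weights, and a disagreement penalty (nothing here is from the article's actual training pipeline):

```python
import statistics

# Blend feedback from several evaluators (risk officer, compliance reviewer,
# peer model) and penalise disagreement, so no single perspective can push
# the policy toward extreme strategies. Weights are illustrative assumptions.
FEEDBACK_WEIGHTS = {"risk_officer": 0.4, "compliance": 0.4, "peer_model": 0.2}

def blended_reward(scores: dict) -> float:
    """scores: evaluator name -> feedback in [-1, 1]."""
    weighted = sum(FEEDBACK_WEIGHTS[k] * v for k, v in scores.items())
    disagreement = statistics.pstdev(scores.values())   # 0 when all agree
    return weighted - 0.5 * disagreement                # damp contested actions

# An action one source loves but another distrusts is damped relative to
# an action all sources moderately endorse:
print(blended_reward({"risk_officer": 0.9, "compliance": -0.2, "peer_model": 0.5}))
print(blended_reward({"risk_officer": 0.4, "compliance": 0.4, "peer_model": 0.4}))
```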

FinVolution Group followed this approach in upgrading Tianjing 3.0. The objective was not to enhance the model's autonomous decision-making capabilities but to systematically organize, through technical means, the implicit "empirical intuition" scattered among business experts and frontline practitioners, transforming it into "collective wisdom" the entire AI system can continuously learn from and reference. The value lies not in making the model more "intelligent" but in aligning its decision outcomes more closely with the organization's long-term risk preferences and business consensus.

In terms of application forms, financial AI is evolving from early Chatbots to Agent systems with certain task execution capabilities. However, contrary to the outside world's imagination of "highly automated decision-making," financial institutions are generally cautious in advancing in this direction. Currently, a more common approach is to position Agents as constrained execution units: invoking tools and completing procedural tasks within clearly defined authorization scopes, while retaining space for human intervention at critical decision points. The potential of multi-agent collaboration is being discussed, but its premise is clear responsibility division and controllable exit mechanisms, rather than complete autonomy.
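
A minimal sketch of an Agent as a constrained execution unit: an allowlisted tool registry, per-tool authorization limits, and a mandatory human checkpoint for critical actions. Tool names and limits are illustrative assumptions:

```python
APPROVED_TOOLS = {
    "query_balance":  {"needs_human": False},
    "draft_report":   {"needs_human": False},
    "transfer_funds": {"needs_human": True, "max_amount": 10_000},
}

def human_review(action: str, args: dict) -> bool:
    """Placeholder for a real review queue; here it simply escalates."""
    print(f"Escalated to human reviewer: {action} {args}")
    return False

def execute(action: str, args: dict):
    policy = APPROVED_TOOLS.get(action)
    if policy is None:
        # Anything outside the authorised scope is refused, not improvised.
        raise PermissionError(f"Tool '{action}' is outside the authorised scope")
    limit = policy.get("max_amount")
    if limit is not None and args.get("amount", 0) > limit:
        raise PermissionError(f"Amount exceeds authorised limit of {limit}")
    if policy["needs_human"] and not human_review(action, args):
        return {"status": "pending_human_approval"}
    return {"status": "executed", "action": action}

print(execute("query_balance", {"account": "A-1"}))
print(execute("transfer_funds", {"amount": 2_500}))
```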

Overall, a clear trend is emerging in the technological innovation of financial AI: shifting from capability-driven to structure-driven, and from model performance competition to system trustworthiness construction. Against the backdrop of gradually clarified regulatory requirements and heightened risk awareness, financial AI is no longer aiming to be "omnipotent" but is gradually returning to its practical role as a decision-support system.

02

Behind the Technological Surge: How to Secure Financial AI with a "Safety Net"?

Despite continuous emerging innovations, financial AI still faces a series of unavoidable issues in the process of true business implementation. These issues are closely related to the current operating mechanisms of large models and are unlikely to be completely eliminated in the short term. It is precisely because of these issues that financial AI cannot be fully automated at this stage.

Firstly, the hallucination problem has a magnified effect in financial scenarios. The output of large models is essentially a probabilistic generation of the "most likely answer" rather than fact-based verification. In most applications, the risks posed by this mechanism are acceptable, but in financial operations, any seemingly reasonable yet inaccurate judgment could be directly embedded into decision-making processes, affecting capital allocation, risk assessment, or customer rights.

More importantly, financial operations often involve high complexity and strong timeliness, with inevitable lags and gaps between model training data and real-world situations. This means that hallucinations are not isolated incidents but potential structural risks that may accompany financial AI over the long term.

Secondly, data bias introduces potential unfairness into the decision outcomes of financial AI. AI typically relies on historical data for training, and financial data itself carries long-standing institutional arrangements, market choices, and behavioral biases. Without effective correction mechanisms, models may not only replicate these biases but even amplify them in large-scale applications.

In scenarios such as credit approval and risk pricing, this bias has real consequences: certain groups may be systematically under- or overestimated in terms of risk. Notably, even if explicit sensitive variables are removed, implicitly correlated proxy features may still produce unfair outcomes.
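
A small synthetic illustration of the proxy problem: even with the sensitive column dropped from training, a correlated feature still encodes it. The data and feature are fabricated purely to show the mechanism:

```python
import random
random.seed(0)

# Synthetic applicants: sensitive group membership drives a proxy feature
# (think of a postcode-derived income band correlated with the group).
group = [random.randint(0, 1) for _ in range(1_000)]
proxy = [g * 3 + random.gauss(0, 1) for g in group]   # correlated proxy

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Even with `group` excluded from training, `proxy` still encodes it,
# so a model trained on `proxy` reproduces the same bias.
print(f"corr(proxy, sensitive group) = {pearson(proxy, group):.2f}")
```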

Thirdly, the lack of explainability limits the depth of AI technology's use in critical decision-making processes. The core of financial decision-making is not just about correct results but also about whether the decision-making process is understandable, reviewable, and accountable. However, most current mainstream large models struggle to provide clear, stable, and reproducible reasoning paths.

This poses a substantive obstacle in risk management, compliance reviews, and post-event accountability. When business personnel cannot clearly answer "why the model made such a judgment," the output of AI is unlikely to gain institutional trust or be incorporated into formal financial decision-making accountability systems.

Jiang Ning, Executive Vice President of FinVolution Group, used autonomous driving as an analogy: "If safety factors are ignored, technology might achieve faster realization. However, precisely because it involves real people and real risks, autonomous driving has taken over a decade to mature and is still approaching full readiness. This 'slowness' is not conservatism but respect for real-world complexity."

Precisely because of these inherent flaws, the financial industry has gradually reached a sober consensus: blindly pursuing and relying on fully automated decision-making by AI is neither realistic nor safe in financial scenarios. This is not a denial of technological potential but a rational response to the risk propagation mechanisms in finance. Rather than letting AI assume responsibilities beyond its capability boundaries in the financial decision-making chain, it is wiser to clearly define its applicable scope and concentrate its advantages in more suitable positions.

In practice, the application boundaries of financial AI are gradually becoming clearer and can be roughly divided into three levels.

In low-risk, rule-based scenarios, generative AI can assume a relatively high degree of autonomous decision-making functions. Examples include standardized customer service, information queries, transaction record organization, and basic compliance checks. The business logic in these scenarios is relatively stable, with controllable error consequences, and any deviations can be quickly corrected. In these contexts, AI's efficiency advantages are most pronounced, with relatively limited risk spillover.

In medium-risk scenarios requiring judgment but still subject to review, a more feasible model is "AI provides recommendations, humans handle review." For instance, in credit pre-screening, risk warnings, and investment advisory analysis support, AI can leverage its strengths in multi-dimensional data integration and pattern recognition, but final decisions still require human confirmation based on specific contexts. This division of labor avoids excessive labor costs while providing a necessary safety buffer for model outputs.

In high-risk, highly responsibility-concentrated core decision-making scenarios, the industry generally insists on human beings assuming ultimate decision-making responsibility. When it comes to major capital allocations, complex financial product transactions, or systemic risk judgments, generative AI is more suitable as an auxiliary analysis tool rather than the decision-making subject. This principle is not technological conservatism but stems from the financial system's fundamental requirements for clear responsibility and traceability.
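
Read together, the three levels amount to a routing policy. A minimal sketch, with hypothetical task names and tier assignments (none taken from any institution's actual system):

```python
RISK_TIERS = {
    "faq_answer":         "low",     # full automation acceptable
    "credit_prescreen":   "medium",  # AI recommends, human reviews
    "capital_allocation": "high",    # AI analyses, human decides
}

def route(task: str, ai_output: dict) -> dict:
    tier = RISK_TIERS.get(task, "high")   # unknown tasks default to high risk
    if tier == "low":
        return {"decision": ai_output, "decided_by": "ai"}
    if tier == "medium":
        return {"recommendation": ai_output, "decided_by": "human_after_review"}
    return {"analysis_only": ai_output, "decided_by": "human"}

print(route("faq_answer", {"answer": "Branch hours are 9-17."}))
print(route("credit_prescreen", {"score": 712}))
print(route("capital_allocation", {"scenario": "stress_test_A"}))
```

Defaulting unknown tasks to the highest tier reflects the article's logic: ambiguity about a task's risk should never translate into more autonomy.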

03

Prioritizing Trustworthiness: Leading Institutions Set New Industry Paradigms

After the capability boundaries of financial AI gradually become clear, the core issue facing the industry is no longer whether to introduce AI but how to ensure its stability, controllability, and accountability over the long term. This means the focus of discussion needs to shift from individual models or specific applications to a governance system capable of supporting the sustained operation of financial AI.

Unlike numerous general-purpose scenarios, the financial sector exhibits an extremely low tolerance for errors in AI. The crux of the matter is not whether models will err, but rather that once they do, the repercussions are often swiftly magnified. When AI is integrated into critical business processes, its outputs can directly influence capital allocation, risk exposure, and even the vital interests of customers.

Under such circumstances, relying solely on enhancing model capabilities is no longer adequate to sustain the long-term operation of financial AI. A growing number of institutions are recognizing that without a stable operational and constraint mechanism, even the most powerful models are unlikely to remain truly "usable" over the long haul.

Drawing from practical experience, financial institutions are concentrating on addressing three key deficiencies to advance the implementation of trustworthy AI.

The first priority is to implement stringent data management. Large models rely on data to a far greater extent than traditional models, and financial data is highly sensitive with intricate associations. In reality, the key to data governance lies not merely in preventing data leaks but also in clarifying data usage boundaries and responsibility attribution. This is why some institutions, while introducing AI, simultaneously promote data classification, access restrictions, and data auditing. Technologies such as privacy computing and federated learning are also being adopted by some institutions to achieve "data availability without visibility," thereby establishing a trustworthy foundation for subsequent model training and inference processes.
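
As a toy illustration of "data availability without visibility," here is a minimal federated-averaging sketch in which each institution trains locally and shares only model weights, never raw records. A real deployment would add secure aggregation and differential privacy; everything below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_step(weights: np.ndarray, X: np.ndarray, y: np.ndarray, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Three institutions, each holding private data that never leaves the premises.
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(0, 0.1, 50)
    clients.append((X, y))

weights = np.zeros(3)
for _ in range(20):
    # The server only ever sees updated weights, never X or y.
    updates = [local_step(weights.copy(), X, y) for X, y in clients]
    weights = np.mean(updates, axis=0)   # federated averaging

print("recovered weights:", np.round(weights, 2))  # approaches true_w
```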

The second crucial aspect is to ensure the explainability of the model's decision-making process. In financial scenarios, a model's credibility largely hinges on whether its decision-making process can be understood, reviewed, and held accountable. If not, it is unlikely to be embraced by the financial system. Consequently, an increasing number of institutions are mandating models to retain comprehensive decision records during operation, documenting key inputs, outputs, and reasoning paths, and complementing them with regular internal audits and external evaluation mechanisms. The significance of algorithmic auditing here is not to curtail model capabilities but to provide an institutional trust foundation for their integration into core business processes.
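
A minimal sketch of such a decision record, hash-chained so auditors can detect after-the-fact edits; field names are illustrative assumptions:

```python
import hashlib, json, time

audit_log = []

def record_decision(inputs: dict, output: dict, reasoning: str) -> dict:
    """Log key inputs, outputs and the stated reasoning path for each model
    call; each entry embeds the previous entry's hash, so tampering with any
    record breaks the chain."""
    prev_hash = audit_log[-1]["hash"] if audit_log else "GENESIS"
    entry = {
        "ts": time.time(),
        "inputs": inputs,
        "output": output,
        "reasoning": reasoning,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    audit_log.append(entry)
    return entry

record_decision({"applicant": "A-17", "score_inputs": {"dti": 0.31}},
                {"risk_grade": "B"},
                "scorecard v4.2: dti below 0.35 threshold")
print(audit_log[-1]["hash"][:16], "chained to", audit_log[-1]["prev_hash"])
```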

The third essential point is to always be prepared to "cut the power." Any financial AI system must acknowledge the potential for its own failure. In actual operation, models may malfunction due to changes in data distribution, extreme scenarios, or external attacks. In such cases, if the system cannot promptly halt operations or swiftly transition to manual processes, there is a risk of losing control. Therefore, whether a financial AI system has clear emergency mechanisms and exit paths has become a practical criterion for determining its true "usability." Only when an AI system is designed to be interruptible and capable of rollback can its risks be effectively managed.
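
A minimal sketch of the "cut the power" principle: a circuit breaker that halts the model and routes requests to manual processing once the error rate crosses a threshold. The thresholds and signals are illustrative assumptions:

```python
class CircuitBreaker:
    def __init__(self, error_threshold=0.05, window=200):
        self.errors, self.calls = 0, 0
        self.error_threshold, self.window = error_threshold, window
        self.tripped = False

    def report(self, ok: bool):
        """Track outcomes; trip once the observed error rate is too high."""
        self.calls += 1
        self.errors += (not ok)
        if (self.calls >= self.window
                and self.errors / self.calls > self.error_threshold):
            self.tripped = True           # cut the power

    def guard(self, model_fn, fallback_fn, request):
        if self.tripped:
            return fallback_fn(request)   # route to human / rule system
        result = model_fn(request)
        self.report(result.get("ok", False))
        return result

breaker = CircuitBreaker()
out = breaker.guard(lambda r: {"ok": True, "answer": 42},
                    lambda r: {"routed_to": "manual_queue"},
                    {"id": 1})
print(out, "tripped:", breaker.tripped)
```

The point is that interruption and rollback are designed in from the start, not improvised after an incident.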

Under this governance framework, the exemplary significance of leading institutions is also evolving. They no longer need to keep proving "how intelligent AI can be"; they must instead focus on ensuring AI operates stably and safely over the long term.

From disclosed practices, some institutions have not adopted an aggressive "one-step" strategy when introducing generative AI but have positioned it in more upstream auxiliary roles. For instance, it handles complex information comprehension, process organization and streamlining, and risk warnings, while key decisions involving final discretion are still made through rule systems and human judgment. Taking FinVolution Group's disclosed Tianjing 3.0 as an example, its general-purpose large model does not directly participate in core decision-making but primarily serves information understanding and analytical support roles. From an industry perspective, this approach sends a very clear signal: generative AI needs to be embedded within systems rather than granted full autonomy.

Similar trends are gradually emerging among banks and insurance institutions. Some banks have prioritized deploying generative AI in internal knowledge retrieval, compliance assistance, and risk warning roles rather than directly participating in credit decision-making. Insurance institutions are more inclined to let AI handle material organization and risk profile supplementation but explicitly retain final discretion in manual reviews. While the paths vary, the underlying logic is highly consistent: the introduction of financial AI should enhance governance capabilities rather than weaken existing responsibility structures.

The next phase of competition in financial AI is likely to shift away from technical metrics and focus instead on governance capabilities. Those who can develop systematic capabilities in key areas such as data security, algorithmic auditing, and risk emergency response will be better positioned to achieve long-term, stable AI applications in an environment of stricter regulation and heightened risk awareness.

04

Advancing Within Boundaries: The Path to Longevity for Financial AI

Reviewing the evolution of financial AI, a counterintuitive yet increasingly evident truth is emerging: finance is not a field where "greater automation equals greater advancement." On the contrary, as technical capabilities expand, what truly determines successful implementation and long-term operation is the degree of respect for boundaries.

As Jiang Ning observed, precisely during periods of heightened technical focus and high expectations, a clear-headed judgment of "pace" and "boundaries" becomes essential.

Such clarity does not signify a decline in the financial industry's enthusiasm for technological innovation. Rather, it marks the industry's entry into a more mature phase: shifting from "can we use AI?" to "how can we use AI responsibly?" In this new phase, the value of technology lies not in replacing human judgment but in amplifying human decision-making capabilities without undermining the stability, explainability, and accountability structures of the financial system.

Looking ahead to 2026, the evolution of financial AI is unlikely to manifest as radical disruption but rather as a more gradual yet profound transformation. AI systems will continue to permeate areas such as data processing, risk alerting, and decision support, but their applications will be governed by clearer boundaries. The demand for automation will persist, but its scope of application will become more defined; human-machine collaboration will transition from practical experience to a default option embedded in process design and institutional arrangements.

Simultaneously, trustworthiness will cease to be a mere competitive advantage for financial AI and will instead become a fundamental prerequisite for entering core business scenarios. Model explainability, system interruptibility, and accountability traceability will directly determine whether AI can be integrated more deeply into institutional business frameworks, rather than remaining confined to pilot programs or auxiliary tools.

In this process, the role of leading institutions will be particularly critical. Their value lies not in being the first to demonstrate the most aggressive technical capabilities but in pioneering sustainable, replicable pathways in complex environments, thereby establishing a shared understanding of governance, responsibility, and boundaries across the industry. Such slow yet steady exploration, while less attention-grabbing than technological breakthroughs, is more likely to shape the long-term trajectory of financial AI.

As machines approach their limits in data processing and pattern recognition, the human role is being re-emphasized—not as objects to be replaced but as the ultimate bearers of risk and responsibility. In this context, the future of financial AI belongs less to early adopters pursuing full automation and more to practitioners who maintain a clear sense of boundaries between technological progress and institutional stability.

Advancing within boundaries may well be the necessary path for financial AI to achieve maturity.

References:

1. Hong Kong Institute for Monetary and Financial Research, Financial Services in the Era of Generative AI: Facilitating Responsible Adoption, April 2025

2. IMF, Global Financial Stability Report, October 2024

3. Tencent Research Institute, 2025 Financial Industry Large Model Application Report: Systemic Implementation, Value Symbiosis

4. Economic Observer Network, Large Models Venture into the Core of Finance, August 2025

5. Accenture, New Landscape, New Growth: 2025 Accenture China Enterprise Digital Transformation Index

6. Tsinghua Financial Review, From "Pilot" to "Mass Production": Breaking Through and Sailing Far in Financial Large Model Applications, September 2025

7. Xinhua Net, AI Ethics Observation: Ethical Risks and Governance Wisdom Behind the Intelligent Financial Revolution, May 2025

8. CFN Finance, AI: Hunting Season for the Banking Industry in 2026, January 2026

9. 21st Century Business Review, Instant Finance: Human-Machine Collaboration Brings Decisive Opportunities, September 2025

10. Caixin, Witnessing the Present and Envisioning the Future: A New Chapter in Consumer Finance

-End-

Solemn declaration: The copyright of this article belongs to the original author. It is reprinted solely to disseminate information more widely. If the author's information is marked incorrectly, please contact us immediately so we can amend or delete it. Thank you.