Created on 29th October 2024
Financial advice in India is limited by awareness, access, affordability, and accuracy.
Fingenie aims to solve these problems by combining two revolutionary technologies: Account Aggregator (AA) and Artificial Intelligence (AI). Here's how we plan to do it:
Awareness:
By aggregating 100% of a user's financial data on a single platform through the AA framework, we aim to foster a closer relationship between users and their finances. Many users today lack this connection because their financial data is scattered, hard to access, or difficult to analyze. The AA framework consolidates this data, making it readily accessible. Enter AI: using AI, users can interactively obtain insights about their spending habits, saving opportunities, monthly cash flow, and more. This engagement is a stepping stone to introducing users to more complex topics of financial advice and products.
Access:
By training AI to function as an advisor, we make financial information and advice accessible to everyone, 24/7.
Affordability:
The combination of AA and AI helps scale financial advice in a cost-effective manner. The efficiencies gained reduce operational costs, which in turn should increase the affordability of financial products and drive greater market penetration.
Accuracy:
AA ensures fresh financial data at all times, which the AI can leverage to provide accurate analysis and advice in sync with the user's financial position. This is in contrast to the infrequent check-ins of current advisory models, which run the risk of the plan falling out of sync with the user's reality.
Fingenie is a POC. We have trained the AI to understand and leverage AA data, and Fingenie uses this training to respond to user queries through a familiar chat-based UI.
In the future, we envision leveraging voice to further increase the usability and adoption of Fingenie.
Limitation on the Number of Tokens:
LLMs have context limits ranging from roughly 8k to 120k tokens. Passing a user's full AA data often exceeds this limit. To address this, we pass only the data relevant to the user's question. A dynamic system was needed to analyze queries and determine which data points to retrieve, so we decided to use LLMs to generate database queries.
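The budgeting step can be sketched as follows. This is a minimal illustration, not Fingenie's implementation: the token count is approximated as characters divided by four, where a real system would use the model's own tokenizer, and the limits and field names are assumptions.

```python
# Hypothetical sketch: keep the AA payload within a model's token budget.
import json

TOKEN_LIMIT = 8_000      # the smallest context limit mentioned above
PROMPT_RESERVE = 1_500   # room for instructions plus the user's question

def approx_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English/JSON."""
    return len(text) // 4

def fits_in_context(aa_payload: dict) -> bool:
    """True if the serialized AA data fits the remaining token budget."""
    body = json.dumps(aa_payload)
    return approx_tokens(body) <= TOKEN_LIMIT - PROMPT_RESERVE

# A full AA dump may not fit, which is why only query-relevant slices
# are passed to the model.
small_slice = {"account": "XXXX1234", "balance": 52_310.75}
print(fits_in_context(small_slice))  # a small slice fits comfortably
```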
Generating Database Queries Using LLM:
While LLMs are good at writing database queries, they may fail without proper context. To utilize LLMs effectively, we provided them with a full description of our database model, including available tables, relationships, columns, data types, and JSON schemas.
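The schema-in-prompt approach can be sketched like this. The table, columns, and sample data are illustrative, not Fingenie's actual schema, and the LLM call is stubbed with a plausible response so the flow is runnable end to end.

```python
# Sketch: give the model the full DDL as context, ask it for SQL,
# then execute the SQL it returns.
import sqlite3

SCHEMA_DDL = """
CREATE TABLE transactions (
    id INTEGER PRIMARY KEY,
    account_id TEXT,
    amount REAL,          -- negative = debit, positive = credit
    category TEXT,
    txn_date TEXT         -- ISO-8601 date
);
"""

def build_prompt(question: str) -> str:
    """Embed the schema so the model knows tables, columns, and types."""
    return f"Given this schema:\n{SCHEMA_DDL}\nWrite SQL for: {question}"

def fake_llm(prompt: str) -> str:
    """Stand-in for the real LLM call; returns a query it might produce."""
    return ("SELECT category, SUM(amount) AS total "
            "FROM transactions WHERE amount < 0 GROUP BY category")

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA_DDL)
conn.executemany(
    "INSERT INTO transactions (account_id, amount, category, txn_date) "
    "VALUES (?, ?, ?, ?)",
    [("acc1", -1200.0, "rent", "2024-10-01"),
     ("acc1", -350.5, "food", "2024-10-03"),
     ("acc1", 90000.0, "salary", "2024-10-05")],
)
sql = fake_llm(build_prompt("How much did I spend per category?"))
rows = conn.execute(sql).fetchall()
print(rows)  # spending grouped by category
```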
Generating Charts Using LLM:
Since we handle financial data, chart visualizations are crucial. Because LLM APIs return text rather than rendered images, we defined response structures for various chart types. OpenAI's beta structured-output feature lets us specify a response format, ensuring the model returns chart data in the correct shape. These responses are then plotted using frontend libraries for an enhanced user experience.
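The idea can be sketched as below. The field names (`chart_type`, `title`, `labels`, `values`) are illustrative, not OpenAI's or Fingenie's exact schema: the model is constrained to emit JSON matching an agreed shape, which the frontend then renders.

```python
# Sketch: validate that a structured LLM response has the fields the
# frontend charting code expects.
import json

REQUIRED_CHART_FIELDS = ["chart_type", "title", "labels", "values"]

def parse_chart_response(raw: str) -> dict:
    """Parse the model's JSON and check the chart contract holds."""
    payload = json.loads(raw)
    missing = [k for k in REQUIRED_CHART_FIELDS if k not in payload]
    if missing:
        raise ValueError(f"chart response missing fields: {missing}")
    return payload

# What a structured-output call might return for "show my spending split":
llm_json = json.dumps({
    "chart_type": "pie",
    "title": "Monthly spending by category",
    "labels": ["rent", "food", "travel"],
    "values": [1200, 350, 500],
})
chart = parse_chart_response(llm_json)
print(chart["chart_type"])  # the frontend picks a renderer from this field
```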
Interpreting and Executing Actions via Chat Interface:
We aimed to enable actions (e.g., saving charts, adding goals) within the chat interface. The model must understand user intentions, even when expressed in different ways. For instance, if a user mentions a goal of going to Europe in 2025, the model should guide the user with follow-up questions and add the goal appropriately. To achieve this, we provided the LLM with a list of executable functions, their descriptions, and parameters. The LLM can now gather the necessary information and perform actions based on user input.
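A minimal function-calling sketch, with hypothetical tool and parameter names (not Fingenie's actual functions): each tool carries a description and parameter spec, which is what the model sees when matching a user's intent to an action.

```python
# Sketch: a tool registry plus a dispatcher for model-selected calls.
import json

def add_goal(name: str, target_year: int, amount: float) -> str:
    """Persist a savings goal; here it just echoes a confirmation."""
    return f"Goal '{name}' for {target_year} added (target {amount:,.0f})"

TOOLS = {
    "add_goal": {
        "fn": add_goal,
        "description": "Create a financial goal for the user",
        "parameters": {"name": "str", "target_year": "int", "amount": "float"},
    },
}

def dispatch(llm_tool_call: str) -> str:
    """Execute the tool call the model selected, given as JSON."""
    call = json.loads(llm_tool_call)
    tool = TOOLS[call["name"]]
    return tool["fn"](**call["arguments"])

# What the model might emit after "I want to go to Europe in 2025":
call_json = json.dumps({
    "name": "add_goal",
    "arguments": {"name": "Europe trip", "target_year": 2025,
                  "amount": 300000},
})
confirmation = dispatch(call_json)
print(confirmation)
```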
Mathematical Limitations of LLMs:
LLMs struggle with precise calculations, which is critical for a financial app. However, LLMs excel at writing code. We instructed the LLM to write code for any calculations, which we execute, allowing it to respond with accurate results.
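This delegation can be sketched as follows. The generated snippet is hardcoded here to stand in for the LLM, and sandboxing concerns are out of scope; a real system should isolate execution far more carefully than this restricted `exec`.

```python
# Sketch: have the model write Python for the arithmetic, then execute
# it and read back the result instead of trusting the model's mental math.
def fake_llm_codegen(question: str) -> str:
    """Stand-in for an LLM asked to 'write Python that computes X'."""
    # e.g. compound growth: 100,000 at 12% p.a. for 10 years
    return "result = 100000 * (1 + 0.12) ** 10"

def run_generated_code(code: str) -> float:
    """Execute the snippet with restricted builtins; read 'result' back."""
    scope: dict = {}
    exec(code, {"__builtins__": {}}, scope)
    return scope["result"]

value = run_generated_code(
    fake_llm_codegen("What will 100,000 grow to at 12% over 10 years?")
)
print(round(value, 2))  # exact arithmetic instead of an LLM estimate
```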