Exploring the Benefits of Machine Learning Algorithm Training to Enhance Business Analysis Processes
- Peter Johnson

- Dec 20, 2023
- 4 min read

One of the many public-private partnerships aiming to develop new solutions to a current problem unfortunately disbanded when the drive from the public sector faded. This usually happens once the analysis is complete and the goal of spurring action has been met.
The aim of the experiment is to investigate whether the process of documenting business requirements for internal risk management reports can be improved by a machine learning algorithm trained in natural language processing (NLP) and ontology. The algorithm would sit behind a chatbot-like interface that elicits requirements from risk managers, instead of relying on a professional business analyst to do so through in-person conversations.
NLP is a field of study aimed at teaching algorithms to grasp the context of a processed text and the connections between its concepts and other ideas, much as humans read a passage to extract meaning and recall related memories. An ontology is a formal structure that organizes associations between concepts using predefined categories; the way physicists divide matter into solid, liquid, and gas states is one illustration. Applying an ontology thus provides a pre-set framework for organizing and linking concepts.
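To make the idea concrete, here is a minimal sketch of an ontology as predefined categories in Python. The risk types and linked concepts below are illustrative assumptions, not the consortium's actual ontology.

```python
# Toy ontology: each risk type is linked to predefined categories.
# (Illustrative assumptions, not the consortium's real taxonomy.)
RISK_ONTOLOGY = {
    "market risk": ["exchange", "asset class", "portfolio"],
    "liquidity risk": ["funding source", "maturity bucket"],
    "operational risk": ["process", "service level"],
    "counterparty risk": ["counterparty", "exposure limit"],
}

def related_concepts(risk_type: str) -> list[str]:
    """Return the predefined categories linked to a risk type."""
    return RISK_ONTOLOGY.get(risk_type.lower(), [])

print(related_concepts("Market risk"))  # ['exchange', 'asset class', 'portfolio']
```

Even this trivial lookup shows the point of an ontology: once a concept is placed in a category, the framework already tells you which related concepts to ask about next.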
The financial institutions (FIs) in the public-private consortium all encounter the same difficulty when producing internal risk reports: a blend of legacy technology and differing views on how risk should be assessed, both qualitatively and quantitatively.
Theoretically, the best approach for ease of reporting would be a single large information system for all FI activities, with a standard workflow and data structure. Unfortunately, the financial investment and the risks of migrating existing information systems into such a system make it an unappealing option.
So a process of evaluating the use case and business case begins, to crystallize the purpose of each information system. FIs typically have multiple information systems spread across their technology landscape to handle the necessary tasks. To create a risk report, the information has to be extracted from these various systems, unified, and combined; multiple sessions with risk managers are then needed to make sure the report meets their criteria.
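As a toy illustration of that extract-unify-combine step, the sketch below merges position data from two hypothetical source systems with slightly different schemas. The system, table, and column names are invented for the example.

```python
import pandas as pd

# Positions extracted from two hypothetical source systems
# whose schemas disagree on column names.
trading = pd.DataFrame(
    {"instrument": ["AAPL", "BUND"], "mkt_val": [1_200_000, 800_000]}
)
treasury = pd.DataFrame(
    {"security": ["T-BILL"], "market_value": [500_000]}
)

# Unify the column names, then combine into one reporting dataset.
unified = pd.concat(
    [
        trading.rename(columns={"instrument": "security", "mkt_val": "market_value"}),
        treasury,
    ],
    ignore_index=True,
)
print(unified)
```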
Because risks come in different types, the scope of risk reporting varies, and so does the data each report requires: market risk affects investment portfolio performance, liquidity risk affects the FI's operations, operational risk arises when services are not delivered on time, and counterparty risk occurs when the FI is too dependent on a particular partner.
This twin challenge forces FIs to keep small teams of business analysts spread throughout their technology organization. Their primary job is to continuously adjust to risk managers' changing requirements for various risk reports, to investigate why some reports display figures that do not match expected or reconciled values, and to verify that newly coded reports retrieve and represent data correctly.
Much of this work consists of risk-manager-facing roles validating the correctness of risk reports. However, FIs have realized that the initial elicitation process is often the more onerous and laborious part. The reason is that competent business analysts need both domain knowledge and the knack for drawing requirements out of risk managers, iterating back and forth with them to confirm a shared understanding that is distinct enough to be documented.
The FIs acknowledge the worth of face-to-face communication, but they also see an opportunity to use cutting-edge machine learning technology to supplement requirements gathering. To them, that process is a back-and-forth dialogue between a risk management expert and an analyst, aimed at holistically detailing the qualitative and quantitative stipulations of the risk report.
Their view was that deploying a machine learning algorithm to ask guided questions could improve the process, producing a deliverable that developers could use immediately to build the desired risk report.
Changing their usual approach, they hypothesized that rather than asking open-ended questions about what an end user wants, the dialogue should start from the available data, proposing combinations of it to risk managers. This limits the scope of possibilities to what is practical instead of idealistic.
They intend to train the machine learning algorithm on the existing data schema and use an ontology to classify the various datasets. With this, they hope the algorithm can respond to the query "System, can you tell me about the market risk report?" with an answer such as "Sure. However, market risk covers a broad range; can you be more precise about the exchange you refer to?", followed by "Great, what asset class are you looking for?", and so on, until the final question: "Thanks, would a report with this information be satisfactory to you?"
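One plausible way to implement that guided exchange is as slot-filling over the ontology sketched earlier, asking one question per open category until the report scope is pinned down. The questions and slot names below are assumptions; a production system would use the trained NLP model to interpret free-text answers rather than a plain `input()` call.

```python
# Hedged sketch: guided elicitation as slot-filling. The slots mirror the
# "market risk" categories from the toy ontology above.
QUESTIONS = {
    "exchange": "Market risk covers a broad range; which exchange do you refer to?",
    "asset class": "Great, what asset class are you looking for?",
}

def elicit_report_scope() -> dict[str, str]:
    """Ask one question per open slot until the report scope is defined."""
    slots: dict[str, str] = {}
    for slot, question in QUESTIONS.items():
        # In the real system an NLP model would parse a free-text reply here.
        slots[slot] = input(question + " ")
    print(f"Thanks, would a report with {slots} be satisfactory to you?")
    return slots
```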
If the end user agrees, the algorithm can then generate pseudo-code outlining how the report should be built, which the software development team can use.
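As an example of what such a deliverable might look like, this sketch renders the agreed parameters as SQL-flavored pseudo-code. The table and column names are hypothetical, chosen only to illustrate the hand-off to developers.

```python
def to_pseudocode(slots: dict) -> str:
    """Render the agreed slots as SQL-flavored pseudo-code for developers."""
    return "\n".join([
        f"-- Market risk report for {slots['asset class']} on {slots['exchange']}",
        "SELECT position_id, asset_class, market_value, var_1d",
        "FROM positions  -- hypothetical table name",
        f"WHERE exchange = '{slots['exchange']}'",
        f"  AND asset_class = '{slots['asset class']}'",
    ])

print(to_pseudocode({"exchange": "NYSE", "asset class": "equities"}))
```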
If the experiment succeeds, risk managers could use a self-service portal to specify their risk-report requirements without needing an experienced business analyst to guide them. That would be advantageous to both risk managers and business analysts, freeing them to focus on activities that provide more value.


