At Bosonit and Nfq we have tested the latest advances in Natural Language Processing (NLP) in a PoC with the BBVA CIB & CS Innovation department, applying AI and Machine Learning to democratise the knowledge of the technical teams and making advances in the resolution and management of technical queries and incidents.
The PoC has enabled the group to gain an understanding of the current state of the NLP field, the limits of its state-of-the-art models, and their possible applications. The result of this PoC is a Q&A (Question/Answer) tool for resolving queries and incidents based on BBVA's historical information.
Working with BBVA on this proof of concept in NLP is of great value and gives Bosonit and Nfq visibility in fields and areas where we were less well known. The result of the PoC shows great potential for taking the use case to production and even incorporating it into our service offering.
Who has been part of this project?
A Q&A tool with NLP
Following the release by OpenAI of a new NLP model called GPT-3, with successful results across a variety of tasks in the field, we decided to undertake a proof of concept in this area. GPT-3 had beaten the results set by previous models in tasks such as information synthesis, Q&A, entity identification and text generation, which made it a perfect model for building a question-answering chatbot. With it, coherent conversations can be held on different topics while maintaining the context of the conversation. The problem with this model is that, due to its large size, it was impossible to specialise it in a domain as technical as the information from BBVA's different departments. It should also be noted that it was not an open-source model, so the company would incur an added cost every time information was requested from it, making the project unfeasible.
To overcome this problem we used smaller models from both OpenAI and Google that have been made freely available for public use. We chose GPT-2 and BERT, as they were perfect candidates for our task and widely recognised for their results. Once we trained and tested these models, we quickly realised that they were incapable of adapting to technical texts: they were designed to generate text for informal conversations or news, so we had to change the direction of our product development. It was not possible to train models to learn all the methods and strategies the bank uses for solving technical problems, because small details or nuances in the question could change the answer we gave the employee and make it incorrect.
An example of this could be a problem connecting a tool on different operating systems. If the model had only seen this problem on Windows during training, it would return the Windows solution even when the employee needed to solve it on Linux, producing inconsistent results for the employee.
That is why we decided to change our primary focus: instead of generating answers, we would use all the question-and-answer history already available on the bank's different platforms, giving more weight to people's answers and letting the model find and return the information that best matches the employee's question.
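The retrieval idea can be sketched in a few lines. This is a minimal illustration only: a pure-Python TF-IDF similarity stands in for the transformer-based matching used in the PoC, and the historical questions and answers below are invented examples, not BBVA data.

```python
import math
import re
from collections import Counter

# Invented stand-in for the bank's historical Q&A database.
historical = [
    ("How do I reset my VPN password?",
     "Use the self-service portal and choose 'reset credentials'."),
    ("The build pipeline fails on Linux, what can I do?",
     "Check that the Linux agent has the JDK installed."),
    ("How can I request access to the data lake?",
     "Open a ticket with the data platform team."),
]

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def tf_idf_vectors(docs):
    # Document frequency per term, then a TF-IDF weight per document.
    df = Counter(t for doc in docs for t in set(tokenize(doc)))
    n = len(docs)
    vectors = []
    for doc in docs:
        tf = Counter(tokenize(doc))
        vectors.append({t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf})
    return vectors

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def answer(query):
    # Embed the historical questions and the query, return the stored
    # answer whose question is most similar to the query.
    docs = [q for q, _ in historical] + [query]
    vecs = tf_idf_vectors(docs)
    qvec = vecs[-1]
    scores = [cosine(qvec, v) for v in vecs[:-1]]
    best = scores.index(max(scores))
    return historical[best][1]

print(answer("pipeline fails when building on linux"))
# → Check that the Linux agent has the JDK installed.
```

In the PoC the matching was done with transformer embeddings rather than TF-IDF, but the shape of the solution is the same: people write the answers, and the model only decides which existing answer fits the question best.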
To build our tool we used different techniques and models that allowed us to group all the information we had in a database, and through transformers (state-of-the-art tools) we were able to synthesise the information from the answers so that the employee could find the solution closest to their problem. The information included direct question-and-answer pairs, but also threads where several agents took part in solving a problem. For single-answer solutions we used a standard summarisation model, but for multi-agent threads we opted for a model that captures not only the information from each agent but also the interactions between them, which allowed us to add more value to our result.
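To show why threads need different handling than single answers, here is a hedged sketch of the input-preparation step: the thread is flattened into a speaker-annotated transcript so a dialogue-aware summariser can use who said what, not just the concatenated text. The field names and messages are invented for illustration; the summarisation model itself is not shown.

```python
# Invented example of a multi-agent incident thread.
thread = [
    {"agent": "employee",   "text": "The batch job fails with a timeout."},
    {"agent": "support_l1", "text": "Which environment are you running it in?"},
    {"agent": "employee",   "text": "Production, since last night."},
    {"agent": "support_l2", "text": "Timeout raised to 30 min; job completed."},
]

def to_transcript(messages):
    # Keep who-said-what so the summariser can use the interactions
    # between agents, not just a bag of sentences.
    return "\n".join(f"{m['agent']}: {m['text']}" for m in messages)

transcript = to_transcript(thread)
print(transcript)
```

A plain summariser fed only the concatenated text would lose the question-and-answer flow between agents; preserving speaker turns is what lets a dialogue model recover it.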