
Case study: Latest developments in the field of natural language processing (NLP), a PoC for BBVA

At Bosonit and Nfq we have participated with BBVA's CIB & CS Innovation department in a PoC. We tested the latest developments in the field of Natural Language Processing (NLP), applying AI and Machine Learning to democratise the knowledge of the technical teams by improving the resolution and management of technical queries and incidents.

The PoC has enabled the group to gain an understanding of the current state of the NLP field, the limits of the state-of-the-art models in the sector, and their possible applications. The result of this PoC is a Q&A (Question/Answer) tool for resolving queries and incidents on the basis of BBVA's historical information.

Working with BBVA on this proof of concept is of great value and gives Bosonit and Nfq visibility in fields and areas where we were less well known. The result of the test shows great potential for taking the use case into production and even incorporating it into our range of services.

Who has been part of this project?

For this challenge, the Group N team was formed by Manuel Ranea (Nfq) and Javier Gonzalez Peñalosa (Bosonit), who tell us about the process.

A Q&A tool with NLP

Following the launch of OpenAI's new NLP model, GPT-3, with successful results across a variety of tasks in the area, we decided to undertake a proof of concept in this field. GPT-3 had beaten the benchmarks set by previous models in tasks such as text summarisation, Q&A, entity identification and text generation, which made it a perfect model for building a question-answering chatbot. With it, coherent conversations can be held on different topics while the context of the conversation is maintained. The problem with this model is that, due to its large size, it was impossible to specialise it in a sector as technical as the information from BBVA's different departments. It should also be pointed out that GPT-3 is not an open-source model, which would mean an added cost for the company every time the model was queried, making the project unfeasible with it.
To overcome this problem, we turned to smaller models from both OpenAI and Google that have been made available for free public use. We chose GPT-2 and BERT, as they were well suited to our task and widely recognised for their performance. Once we had trained and tested them, we quickly realised that these models were incapable of adapting to technical texts. They were designed to generate text from informal conversations or news, so we had to change the direction of our product development. It was not possible to train models capable of learning all the methods and strategies the bank uses when solving technical problems, because small details or nuances in the question could change the answer given to the employee, making it incorrect.

An example of this could be a problem connecting tools on different operating systems. If, during training, the model had obtained information about that problem on Windows, it would return that information even when we wanted it solved on Linux. This resulted in inconsistencies in the result for the employee.

That is why we decided to change our primary focus and use all the question-and-answer material we already had from the bank's different platforms: give more weight to the answers people had already written, and let the model search for and return the information that best matches the employee's question.
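The retrieval approach described above can be sketched as follows. This is a minimal, hypothetical illustration: the toy bag-of-words embedding stands in for the transformer sentence encoder that would be used in practice, and the example tickets are invented for illustration, not taken from BBVA's data.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words vector; in practice a transformer sentence
    # encoder would produce a dense embedding here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_answer(question, history):
    # history: list of (past question, human-written answer) pairs.
    # Instead of generating an answer, we return the existing answer
    # whose question is most similar to the new one.
    q_vec = embed(question)
    scored = [(cosine(q_vec, embed(past_q)), ans) for past_q, ans in history]
    return max(scored, key=lambda s: s[0])[1]

# Invented example tickets, purely for illustration.
history = [
    ("How do I reset my VPN password?", "Use the self-service portal."),
    ("Database connection refused on Linux", "Check the firewall rules."),
]
print(best_answer("connection refused to database on Linux", history))
# → Check the firewall rules.
```

The key design choice is that the model only ranks and retrieves human answers rather than generating new text, which avoids the inconsistencies seen when generative models were asked to answer technical questions directly.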

To build our tool we used different techniques and models that allowed us to group all this information in a database and, through transformers (state-of-the-art tools), synthesise the information from the answers so that the employee can find the closest solution to their problem. In the information we had there were direct questions, but there were also threads where different actors were involved in a problem. For single-answer solutions we used a standard summarisation model, but for multi-actor threads we opted for a model that is able to collect not only the information from the agents but also the interactions between them, which allowed us to add more value to our result.


