Research Article
Luis Eduardo Muñoz Guerrero, Yony Fernando Ceballos, Luis David Trejos Rojas
CONT ED TECHNOLOGY, Volume 17, Issue 3, Article No: ep582
ABSTRACT
Recent progress in conversational AI has highlighted the need for language models with robust logical reasoning and extrapolation capabilities. This study investigates how effectively the Capybara dataset can improve the reasoning abilities of language-based systems. Several state-of-the-art language models were fine-tuned on the Capybara corpus and then evaluated on standard benchmarks that demand sophisticated reasoning. Comparative evaluation across multiple methods shows that fine-tuning on this corpus improves the models' logical reasoning and strengthens their inferential abilities. The paper further examines the implications of these findings for developers seeking more human-like conversational intelligence in machines, and suggests that the Capybara dataset could become a valuable resource for training reasoning-oriented language models.
Keywords: logical reasoning, language models, Capybara dataset, fine-tuning, extrapolation, conversational AI