Abstract

Solving Bias Issues in Large Language Models through SDRT

Aarush¹*, Chandhu²

Since the advent of the transformer architecture and recent advances in Large Language Models (LLMs), the field has been transformed. However, LLMs such as GPT-3, GPT-4, and the open-source models come with their own set of challenges. The development of transformer-based Natural Language Processing (NLP) commenced in 2017, initiated by Google and Facebook. Since then, large language models have emerged as formidable tools in both natural language and artificial intelligence research. These models possess the capability to learn and predict, enabling them to generate coherent, contextually relevant text for a diverse array of applications. Large language models have also made a significant impact on various industries, including healthcare, finance, customer service, and content generation; they have the potential to automate tasks, improve language understanding, and enhance user experiences when deployed effectively. Alongside these benefits, however, come major risks and challenges, including those arising during pre-training and fine-tuning. To address these challenges, we propose applying Segmented Discourse Representation Theory (SDRT), making the models more conversational in order to overcome some of the toughest obstacles.

