Pakistan Science Abstracts
Article details & metrics
Using Large Language Model for understanding Journalistic Polarization on Social Media
Author(s):
1. Laiba Rehman: Institute of Computer Science and IT, Women University Multan, Pakistan
2. F. Bukhari: Institute of Computer Science and IT, Women University Multan, Pakistan; NFC Institute of Engineering and Technology, Multan, Pakistan
3. Humera B. Gill: Institute of Computer Science and IT, Women University Multan, Pakistan
4. W. Yaseen: Institute of Computer Science and IT, Women University Multan, Pakistan
5. Shaista Ilyas: Institute of Computer Science and IT, Women University Multan, Pakistan; NFC Institute of Engineering and Technology, Multan, Pakistan
Abstract:
The introduction of social media platforms has significantly transformed the landscape of journalism, ushering in an era in which user-generated content and algorithmic recommendations largely shape news consumption. However, the ongoing problem of journalistic polarization on social media has made it difficult to distinguish reliable information from biased narratives. This research investigates the use of large language models in detecting and mitigating journalistic polarization on social media. Large language models, such as GPT-3.5, have shown impressive capabilities in natural language processing, context understanding, and text generation. These models can be used to analyze and classify news articles, social media posts, and comments, allowing polarizing content to be identified. By examining the linguistic patterns, sentiment, and underlying biases in journalistic content, large language models can help reveal the degree of polarization within a specific issue or across multiple media outlets. Large language models can also help assess the impact of polarizing narratives on public opinion. By analyzing user interactions, sentiment, and engagement levels, these models can measure how polarized news stories and social media posts shape public discourse. Insights gained from such analysis can help journalists, social media platforms, and governments formulate initiatives to combat polarization and improve media literacy. However, using large language models to analyze journalistic polarization on social media raises ethical and technical concerns. Privacy issues, algorithmic biases, and the potential for data manipulation must all be handled carefully. Furthermore, fine-tuning language models on diverse datasets and ensuring transparency in decision-making processes are critical steps in establishing confidence and trustworthiness.
In conclusion, the use of large language models opens new avenues for analyzing journalistic polarization on social media. By leveraging their language processing capabilities, these models can help identify polarizing content, analyze its impact, and suggest actions to mitigate the harmful effects of media polarization. It is critical that researchers, journalists, and technology developers work together to address ethical concerns and harness the potential of large language models to build an informed and inclusive digital public sphere.
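The classification workflow the abstract describes could be sketched as follows. This is a minimal illustration only: `call_llm` is a hypothetical stub standing in for a real GPT-3.5 API call, and the keyword heuristic inside it merely imitates a model response so the flow can run offline.

```python
# Minimal sketch of LLM-based polarization classification.
# ASSUMPTION: `call_llm` is a placeholder; in practice it would wrap
# a real GPT-3.5 (or similar) chat-completion API call.

PROMPT_TEMPLATE = (
    "Classify the following social media post as POLARIZING or NEUTRAL, "
    "and name the dominant sentiment.\n\n"
    "Post: {post}\n"
    "Answer as: label=<LABEL>; sentiment=<SENTIMENT>"
)

def call_llm(prompt: str) -> str:
    """Stubbed model response; a crude keyword check stands in for the model."""
    text = prompt.lower()
    charged = ["traitor", "disaster", "corrupt", "enemy"]
    if any(word in text for word in charged):
        return "label=POLARIZING; sentiment=negative"
    return "label=NEUTRAL; sentiment=neutral"

def classify_post(post: str) -> dict:
    """Format the prompt, query the (stubbed) model, and parse its answer."""
    raw = call_llm(PROMPT_TEMPLATE.format(post=post))
    return dict(part.split("=") for part in raw.split("; "))

print(classify_post("The minister's corrupt deal is a disaster for us all."))
print(classify_post("The weather forecast predicts light rain tomorrow."))
```

Aggregating such per-post labels over a topic or outlet would then give the kind of polarization measure the abstract refers to.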
Page(s): 396-396
DOI: Not available
Published: Journal: Abstract Book on International Conference on Food and Applied Sciences (ICFAS-23), 3-5 August 2023, Volume: 0, Issue: 0, Year: 2023
Keywords:
social media, natural language processing, journalistic polarization, sentiment analysis, large language models
References:
References are not available for this document.
Citations: 0
Downloads: 0
Views: 57