
Special Issue on Understanding Human Behaviors Through Large Language Models

Submission deadline: 30 September 2024

This special issue embraces studies on human behavior and opinion simulation using LLMs across multidisciplinary fields to enhance the understanding of humans. We aim not only to spotlight the innovative uses of LLMs in understanding human behavior but also to critically assess their role as a tool in the broader research environment.


Guest editors:


Dr. Jang Hyun Kim

Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Korea


Dr. Xiao-Liang Shen

School of Information Management, Wuhan University, Wuhan, People's Republic of China


Dr. Hyejin Youn

Kellogg School of Management, Northwestern University, Evanston, Illinois, United States


Special issue information:


Research exploring human behavior and opinions has traditionally relied on methodologies such as experiments, surveys, and opinion polls. Recently, however, studies involving human participants have encountered limitations: difficulties in recruiting, high costs, and challenges in sample representativeness. In response to these issues, a novel research paradigm is emerging that pivots toward utilizing Large Language Models (LLMs) to simulate human behavior and decision-making processes. LLMs have demonstrated an ability to reflect social norms, background knowledge, and even the biases and stereotypes that permeate human societies. Yet research remains scarce on how suitable LLMs are as subjects for mimicking human behaviors and opinions, in terms of both their generalizability and their applicability for deployment in actual research. Given the rapidly evolving capabilities and inherently opaque mechanisms of LLMs (often called the "black box" problem), academic discourse on this subject is needed.


Therefore, this special issue embraces studies on human behavior and opinion simulation using LLMs across multidisciplinary fields to enhance the understanding of humans. Possible subjects of submissions could include, but are not limited to:


- Human sub-population simulation using LLMs

- Measuring human value sets and behaviors using LLMs

- Exploring LLMs' personality traits

- Evaluating the capabilities of LLMs for understanding human society

- Investigating and mitigating inherent social biases in LLMs

- Agent-based modeling using LLMs

- Replicating traditional experiments using LLMs

- Explainable AI for understanding human behaviors and value sets




Manuscript submission information:


Submissions for this special issue should be made through the Journal's submission system by choosing the article type "VSI: LLMs and Human Behaviors". Detailed guidelines on submission format and process can be found on the Journal's website.


Important dates


Submissions open: 23 May 2024

Submissions close: 30 September 2024


Keywords:


Large language model (LLM), human behavior, simulation, natural language processing (NLP)
