Computer science
Interdependence
Artificial intelligence
Data science
Machine learning
Interpersonal communication
Multimodal learning
Knowledge management
Human–computer interaction
Psychology
Social psychology
Political science
Law
Authors
Xueming Luo, Nan Jia, Erya Ouyang, Zheng Fang
Abstract
Research Summary: Multimodal data, comprising interdependent unstructured text, image, and audio data that collectively characterize the same source, with video being a prominent example, offer a wealth of information for strategy researchers. We emphasize the theoretical importance of capturing the interdependencies between different modalities when evaluating multimodal data. To automate the analysis of video data, we introduce advanced deep machine learning and data fusion methods that comprehensively account for all intra- and inter-modality interdependencies. Through an empirical demonstration focused on measuring the trustworthiness of grassroots sellers in live streaming commerce on TikTok, we highlight the crucial role of interpersonal interactions in the business success of microenterprises. We provide access to our data and algorithms to facilitate data fusion in strategy research that relies on multimodal data.

Managerial Summary: Our study highlights the vital role of both verbal and nonverbal communication in attaining strategic objectives. Through the analysis of multimodal data—incorporating text, images, and audio—we demonstrate the essential nature of interpersonal interactions in bolstering trustworthiness, thus facilitating the success of microenterprises. Leveraging advanced machine learning techniques, such as data fusion for multimodal data and explainable artificial intelligence, we notably enhance predictive accuracy and theoretical interpretability in assessing trustworthiness. By bridging strategic research with cutting-edge computational techniques, we provide practitioners with actionable strategies for enhancing communication effectiveness and fostering trust-based relationships. Access our data and code for further exploration.
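The abstract describes fusing text, image, and audio features from the same video so that cross-modality interdependencies are captured. The sketch below is purely illustrative and is not the authors' method: it shows the general idea of late fusion, where per-modality embeddings (here random toy vectors; the encoder names in comments are assumptions) are concatenated and jointly projected, so the learned weights can mix features across modalities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-modality embeddings for one video clip (dimensions are illustrative).
text_emb = rng.normal(size=16)    # e.g., from a text encoder over the transcript
image_emb = rng.normal(size=16)   # e.g., from a vision encoder over sampled frames
audio_emb = rng.normal(size=16)   # e.g., from an audio encoder over the waveform

def late_fusion(*embs):
    """Concatenate modality embeddings, then project to a joint representation.

    Inter-modality interdependence enters through the projection weights,
    which mix features across all modalities; in a trained model these
    weights would be learned rather than random.
    """
    joint = np.concatenate(embs)                            # shape (48,)
    w = rng.normal(size=(8, joint.shape[0])) / np.sqrt(joint.shape[0])
    return np.tanh(w @ joint)                               # shape (8,)

fused = late_fusion(text_emb, image_emb, audio_emb)
print(fused.shape)  # (8,)
```

A downstream classifier (e.g., a trustworthiness score) would then be trained on the fused vector rather than on any single modality alone.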