Aceves, Pedro and James A. Evans

Word embedding models are a powerful approach for representing the multidimensional conceptual spaces within which communicated concepts relate, combine, and compete with one another. This class of models represents a recent advance in machine learning that allows scholars to efficiently encode complex systems of meaning, with minimal semantic distortion, from local and global word co-occurrences in large-scale text data. Although their use has the potential to broaden theoretical possibilities within organization science, embeddings remain largely unknown to organizational scholars; where known, they have been mobilized for only a narrow set of uses and remain unlinked to a theoretical scaffolding that can enable cumulative theory building within the organizations community. Our goal is to demonstrate the promise embedding models hold for organization science by providing a practical roadmap for users to mobilize the methodology in their research and a theoretical guide for consumers of that research to evaluate and conceptually link embedded representations to their theoretical significance and potential. We begin by explicitly defining the notions of concept and conceptual space before showing how these can be represented and measured with word embedding models, noting the strengths and weaknesses of the approach. We then provide a set of embedding measurements along with their theoretical interpretations and flexible extensions. Our aim is to extract the operational and conceptual significance from technical treatments of word embeddings and place it within a practical, theoretical framework that accelerates research committed to understanding how individuals, teams, and broader collectives represent, communicate, and deploy meaning in organizational life.
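
As a minimal illustration of the kind of embedding measurement described above (a sketch under assumed choices, not the authors' specific pipeline), the snippet below trains a small word2vec model with the open-source gensim library and measures conceptual proximity as cosine similarity between word vectors. The toy corpus, parameter values, and example words are illustrative assumptions only.

```python
# A minimal sketch (assumed setup, not the paper's pipeline): learn word
# vectors from local co-occurrences and measure conceptual proximity as
# cosine similarity. Requires the open-source gensim library.
from gensim.models import Word2Vec

# Illustrative toy corpus: each document is a pre-tokenized list of words.
# In practice, the corpus would be large-scale organizational text.
corpus = [
    ["manager", "leads", "team", "toward", "shared", "goals"],
    ["leader", "guides", "team", "toward", "shared", "goals"],
    ["engineer", "builds", "product", "for", "customers"],
    ["designer", "builds", "product", "for", "customers"],
    ["manager", "coordinates", "meetings", "with", "team"],
    ["leader", "coordinates", "meetings", "with", "team"],
]

# Learn dense vectors from co-occurrence windows; parameter values are
# illustrative, not recommendations.
model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of the embedded conceptual space
    window=3,         # local co-occurrence context size
    min_count=1,      # keep every word in this tiny example
    seed=42,
)

# Conceptual proximity: cosine similarity between two word vectors.
print("manager ~ leader:", model.wv.similarity("manager", "leader"))
print("manager ~ product:", model.wv.similarity("manager", "product"))

# Nearest neighbors of a concept in the embedded space.
print(model.wv.most_similar("manager", topn=3))
```

On a realistic corpus, similarity scores and nearest-neighbor lists like these are the raw material for the theoretically interpreted measurements discussed in the article; with this toy corpus the numbers themselves are not meaningful.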