index.Rmd: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ Author(s): Xinlan Emily Hu, Mark E. Whiting, Linnea Gandhi, Duncan J. Watts, and
# Abstract
-Research on teams spans many diverse contexts, but integrating knowledge from heterogeneous sources is challenging because studies typically examine different tasks that cannot be directly compared. Most investigations involve teams working on just one or a handful of tasks, and researchers lack principled ways to quantify how similar or different these tasks are from one another. We address this challenge by introducing the “Task Space,” a multidimensional framework that represents tasks along 24 theoretically-motivated dimensions. To demonstrate its utility, we apply the Task Space to a fundamental question in team research: *when do interacting groups outperform individuals*? Using the Task Space to systematically sample 20 diverse tasks, we conduct an integrative experiment with 1,231 participants working at three complexity levels, either individually or in groups of three or six (180 experimental conditions). We find striking heterogeneity in group advantage, with groups performing anywhere from three times worse to 60% better than the best individual working alone, depending on the task context. Task Space dimensions significantly outperform traditional typologies in predicting group advantage on unseen tasks. Additionally, our models reveal theoretically meaningful interactions between task features; for example, group advantage on creative tasks depends on whether the answers are objectively verifiable. The Task Space ultimately enables researchers to move beyond isolated findings to identify boundary conditions and build cumulative knowledge about team performance.
+Research on teams spans many diverse contexts, but integrating knowledge from heterogeneous sources is challenging because studies typically examine different tasks that cannot be directly compared. Most investigations involve teams working on just one or a handful of tasks, and researchers lack principled ways to quantify how similar or different these tasks are from one another. We address this challenge by introducing the “Task Space,” a multidimensional framework that represents tasks along 24 theoretically-motivated dimensions. We also build a crowd-annotated repository of 102 tasks from published literature, which serves as the basis of an integrative experiment that demonstrates the Task Space’s utility. Our experiment answers a fundamental question in team research: *when do interacting groups outperform individuals*? We use the annotated repository to systematically sample 20 diverse tasks, and we recruit 1,231 participants to work at three complexity levels, either individually or in groups of three or six (180 experimental conditions). Our experiment reveals striking heterogeneity in group advantage, with groups performing anywhere from three times worse to 60% better than the best individual working alone, depending on the task context. Critically, the Task Space makes this heterogeneity predictable, and it significantly outperforms traditional typologies in predicting group advantage on unseen tasks. Our models also reveal theoretically meaningful interactions between task features; for example, group advantage on creative tasks depends on whether the answers are objectively verifiable. The Task Space ultimately enables researchers to move beyond isolated findings to identify boundary conditions and build cumulative knowledge about team performance.
# Understanding teams requires understanding their tasks.
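A minimal R sketch, for readers tallying the condition count stated in the abstract: assuming a full factorial design of 20 tasks, three complexity levels, and three group sizes (individual, three-person, or six-person), the grid below yields the stated 180 experimental conditions. The task and complexity labels here are placeholders, not the study's actual factor names.

```r
# Sketch of the design grid implied by the abstract (assumed full factorial):
# 20 tasks x 3 complexity levels x 3 group sizes = 180 conditions.
design <- expand.grid(
  task       = paste0("task_", 1:20),       # placeholder task labels
  complexity = c("low", "medium", "high"),  # illustrative level names
  group_size = c(1, 3, 6)                   # individual, 3-person, 6-person
)
nrow(design)  # 180
```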