
Commit e497324

update abstract on website
1 parent 62e2ed9 commit e497324

File tree

2 files changed: +21, -18 lines changed


index.Rmd

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ Author(s): Xinlan Emily Hu, Mark E. Whiting, Linnea Gandhi, Duncan J. Watts, and
 
 # Abstract
 
-Research on teams spans many diverse contexts, but integrating knowledge from heterogeneous sources is challenging because studies typically examine different tasks that cannot be directly compared. Most investigations involve teams working on just one or a handful of tasks, and researchers lack principled ways to quantify how similar or different these tasks are from one another. We address this challenge by introducing the “Task Space,” a multidimensional framework that represents tasks along 24 theoretically-motivated dimensions. To demonstrate its utility, we apply the Task Space to a fundamental question in team research: *when do interacting groups outperform individuals*? Using the Task Space to systematically sample 20 diverse tasks, we conduct an integrative experiment with 1,231 participants working at three complexity levels, either individually or in groups of three or six (180 experimental conditions). We find striking heterogeneity in group advantage, with groups performing anywhere from three times worse to 60% better than the best individual working alone, depending on the task context. Task Space dimensions significantly outperform traditional typologies in predicting group advantage on unseen tasks. Additionally, our models reveal theoretically meaningful interactions between task features; for example, group advantage on creative tasks depends on whether the answers are objectively verifiable. The Task Space ultimately enables researchers to move beyond isolated findings to identify boundary conditions and build cumulative knowledge about team performance.
+Research on teams spans many diverse contexts, but integrating knowledge from heterogeneous sources is challenging because studies typically examine different tasks that cannot be directly compared. Most investigations involve teams working on just one or a handful of tasks, and researchers lack principled ways to quantify how similar or different these tasks are from one another. We address this challenge by introducing the “Task Space,” a multidimensional framework that represents tasks along 24 theoretically-motivated dimensions. We also build a crowd-annotated repository of 102 tasks from published literature, which serves as the basis of an integrative experiment that demonstrates the Task Space’s utility. Our experiment answers a fundamental question in team research: *when do interacting groups outperform individuals*? We use the annotated repository to systematically sample 20 diverse tasks, and we recruit 1,231 participants to work at three complexity levels, either individually or in groups of three or six (180 experimental conditions). Our experiment reveals striking heterogeneity in group advantage, with groups performing anywhere from three times worse to 60% better than the best individual working alone, depending on the task context. Critically, the Task Space makes this heterogeneity predictable, and it significantly outperforms traditional typologies in predicting group advantage on unseen tasks. Our models also reveal theoretically meaningful interactions between task features; for example, group advantage on creative tasks depends on whether the answers are objectively verifiable. The Task Space ultimately enables researchers to move beyond isolated findings to identify boundary conditions and build cumulative knowledge about team performance.
 
 # Understanding teams requires understanding their tasks.
 

index.html

Lines changed: 20 additions & 17 deletions
@@ -366,23 +366,26 @@ <h1>Abstract</h1>
 and researchers lack principled ways to quantify how similar or
 different these tasks are from one another. We address this challenge by
 introducing the “Task Space,” a multidimensional framework that
-represents tasks along 24 theoretically-motivated dimensions. To
-demonstrate its utility, we apply the Task Space to a fundamental
-question in team research: <em>when do interacting groups outperform
-individuals</em>? Using the Task Space to systematically sample 20
-diverse tasks, we conduct an integrative experiment with 1,231
-participants working at three complexity levels, either individually or
-in groups of three or six (180 experimental conditions). We find
-striking heterogeneity in group advantage, with groups performing
-anywhere from three times worse to 60% better than the best individual
-working alone, depending on the task context. Task Space dimensions
-significantly outperform traditional typologies in predicting group
-advantage on unseen tasks. Additionally, our models reveal theoretically
-meaningful interactions between task features; for example, group
-advantage on creative tasks depends on whether the answers are
-objectively verifiable. The Task Space ultimately enables researchers to
-move beyond isolated findings to identify boundary conditions and build
-cumulative knowledge about team performance.</p>
+represents tasks along 24 theoretically-motivated dimensions. We also
+build a crowd-annotated repository of 102 tasks from published
+literature, which serves as the basis of an integrative experiment that
+demonstrates the Task Space’s utility. Our experiment answers a
+fundamental question in team research: <em>when do interacting groups
+outperform individuals</em>? We use the annotated repository to
+systematically sample 20 diverse tasks, and we recruit 1,231
+participants to work at three complexity levels, either individually or
+in groups of three or six (180 experimental conditions). Our experiment
+reveals striking heterogeneity in group advantage, with groups
+performing anywhere from three times worse to 60% better than the best
+individual working alone, depending on the task context. Critically, the
+Task Space makes this heterogeneity predictable, and it significantly
+outperforms traditional typologies in predicting group advantage on
+unseen tasks. Our models also reveal theoretically meaningful
+interactions between task features; for example, group advantage on
+creative tasks depends on whether the answers are objectively
+verifiable. The Task Space ultimately enables researchers to move beyond
+isolated findings to identify boundary conditions and build cumulative
+knowledge about team performance.</p>
 </div>
 <div id="understanding-teams-requires-understanding-their-tasks." class="section level1">
 <h1>Understanding teams requires understanding their tasks.</h1>

0 commit comments