
Commit 6e0efe8 (merge, 2 parents: 1de8c91 + 685a6a1)

File tree: 8 files changed (+291, −115 lines)


LICENSE.md

Lines changed: 21 additions & 0 deletions
```diff
@@ -0,0 +1,21 @@
+# MIT License
+
+Copyright (c) 2024 the Authors
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
```

README.md

Lines changed: 39 additions & 0 deletions
```diff
@@ -0,0 +1,39 @@
+[![DOI](https://zenodo.org/badge/797952110.svg)](https://doi.org/10.5281/zenodo.14603935) [![arXiv](https://img.shields.io/badge/arXiv-2412.21159-b31b1b.svg)](https://arxiv.org/abs/2412.21159)
+
+# A Standardized Framework for Sensor Placement in Human Motion Capture
+
+A unified framework for standardizing sensor placement across different sensing modalities and applications in human motion capture and wearable technology.
+
+## Overview
+
+This framework addresses the critical need for standardized sensor placement protocols in human movement analysis and physiological monitoring. While existing standards like SENIAM address specific applications, there has been no comprehensive framework spanning different sensing modalities and applications. Our standard ensures reproducibility and transferability of human movement data across different recording systems and research domains.
+
+## Key Features
+
+- **Precise Anatomical Landmarks**: Comprehensive set of anatomical landmarks chosen for reliability and accessibility
+- **Standardized Coordinate Systems**: Clear definitions for 16 major body segments
+- **Hierarchical Reference Frames**: Structured approach to relating different coordinate systems
+- **Quantified Precision Levels**: Three-tier system for placement accuracy
+- **BIDS/HED Compatible**: Designed to work with existing data sharing standards
+
+## Documentation
+
+- 📖 [Full Paper on arXiv](https://doi.org/10.48550/arXiv.2412.21159)
+- 🌐 [Framework Website](https://human-sensor-placement.github.io)
+- 📊 [Complete Anatomical Table](https://human-sensor-placement.github.io/anatomical_table.html)
+
+## Contributing
+
+We welcome contributions from the research community in several key areas:
+
+- Opinions on the preprint and the proposed framework (the preprint is available on arXiv as well as here in the repository as `paper.md`)
+- Standard vocabulary and communication formats through BIDS, HED, and other specifications
+- Validation studies for inter-operator reliability assessment
+- Mappings between this framework and existing standards (such as SENIAM)
+- Software tools for coordinate calculation and placement visualization
+
+Please submit your suggestions and feedback via the Issues section.
+
+## License
+
+This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details.
```
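The framework's central idea, placing a sensor at a stated ratio of the distance between two anatomical landmarks, can be sketched in a few lines of Python. This is a hypothetical starting point for the coordinate-calculation tooling the Contributing section invites; `place_sensor` and its arguments are illustrative names, not code that ships with this repository.

```python
import numpy as np

def place_sensor(landmark_a, landmark_b, ratio):
    """Interpolate a sensor position along the axis running from
    landmark_a (ratio 0.0) to landmark_b (ratio 1.0), both given as
    3D coordinates in the same reference frame."""
    a = np.asarray(landmark_a, dtype=float)
    b = np.asarray(landmark_b, dtype=float)
    if not 0.0 <= ratio <= 1.0:
        raise ValueError("ratio must lie within the axis limits [0, 1]")
    return a + ratio * (b - a)

# Example: a point 30% of the way along a 0.4 m segment axis
pos = place_sensor([0.0, 0.0, 0.0], [0.0, 0.0, 0.4], 0.3)
```

Because the location is stored as a ratio rather than an absolute distance, the same annotation transfers between participants with different segment lengths.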

_quarto.yml

Lines changed: 0 additions & 3 deletions
```diff
@@ -19,6 +19,3 @@ format:
     grid:
       body-width: 1500px
       sidebar-width: 50px
-
-
-
```
anatomical_table.qmd

Lines changed: 17 additions & 17 deletions
Large diffs are not rendered by default.

authors.qmd

Lines changed: 5 additions & 2 deletions
```diff
@@ -20,7 +20,10 @@ Julius has completed his masters in 2019 in Cognitive Neuropsychology in Oldenbu
 
 ![](pics/julius.jpg){ width="50%" class=center }
 
-## Yahya Shirazi
-Yahya is an Assistant Project Scientist at the Swartz center for Computational Neuroscience working hard to make the MOBI world a better place.
+## Seyed (Yahya) Shirazi
+Yahya is an Assistant Project Scientist at the [Swartz Center for Computational Neuroscience](https://sccn.ucsd.edu), working hard to make the MOBI world a better place.
 
 ![](pics/yahya.jpg){ width="50%" class=center }
+
+## Lara Godbersen
+Lara is a master's student at Kiel University. She is interested in cognitive neuroscience and has just finished her master's thesis on advanced EEG processing.
```

index.qmd

Lines changed: 67 additions & 93 deletions
````diff
@@ -1,126 +1,100 @@
----
-title: "On the Transferability and Accessibility of Human Movement Data"
-format: html
----
-![](pics/logo_placement.png){ width="20%" class=center }
+# A Standardized Framework for Sensor Placement in Human Motion Capture
 
-# Introduction
+::: {.callout-note}
+This website presents a unified framework for sensor placement in human motion capture and wearable applications. For the complete research paper, please visit our preprint on [ArXiv](https://doi.org/10.48550/arXiv.2412.21159).
+:::
 
-Human motion capture encompasses various technologies and techniques to record different modalities of human motion, such as position, speed, or acceleration. These techniques are extensively used in movement science, rehabilitation, sports, and entertainment. However, the heterogeneity of recorded modalities and the varying spatial and temporal resolutions of these recordings pose challenges for the utility and interpretability of motion capture data. This necessitates a clear, unified approach for sensor placement to ensure data transferability and accessibility across different systems.
+## Introduction
 
-Motion capture generally involves acquiring motion-related physical quantities (motion modalities) such as position, speed, and acceleration. The fundamental differences between these quantities highlight the need for different technologies and methods, including passive and active marker-based systems, IMUs, ToF, dot projection, video cameras, and deep learning techniques.
+The proliferation of wearable sensors and monitoring technologies has created an urgent need for standardized sensor placement protocols. While existing standards like SENIAM address specific applications, there is no comprehensive framework that spans different sensing modalities and applications. We present a unified sensor placement standard that ensures reproducibility and transferability of human movement data across different recording systems and research domains.
 
-Each motion capture method has limitations, impacting the interpretability and transferability of data from one modality to another. To address these limitations, we propose a precise annotation of features affecting the quality of motion capture interpretation.
+## Fundamentals
 
-In this manuscript, we will briefly describe the main features of each modality and introduce a set of definitions that we will use throughout. We finally propose a scheme for unified sensor placement annotation with quantifiable levels of precision. We try to align this scheme with the currently available standards for data sharing and annotation, namely the **[Brain Imaging Data Structure (BIDS)](https://bids.neuroimaging.io/)** and the **[Hierarchical Event Descriptors (HED)](https://www.hedtags.org/)**. See [Motion-BIDS](https://bids-specification.readthedocs.io/en/stable/modality-specific-files/motion.html) and definition of body parts in [HED schema browser](https://www.hedtags.org/display_hed.html) for relevant details.
+### Reference Frames and Coordinate Systems
 
-<style>
-/* Style the button to appear as a box */
-.centered-button {
-  display: flex;
-  justify-content: center;
-  align-items: center;
-  height: 20vh; /* Vertically center */
-  margin: 0; /* Remove default margin */
-}
+A **reference frame** consists of an origin point and a set of axes that define directions in space. In human movement analysis, we encounter multiple reference frames:
 
-.button-box {
-  background-color: #4CAF50; /* Replace with your desired color */
-  color: white;
-  padding: 15px 30px;
-  text-align: center;
-  text-decoration: none;
-  font-size: 16px;
-  font-weight: bold;
-  border-radius: 8px;
-  border: none;
-  cursor: pointer;
-  box-shadow: 0px 4px 6px rgba(0, 0, 0, 0.1);
-  transition: background-color 0.3s, transform 0.3s;
-}
+1. **Global laboratory frame**: The fixed reference frame of the measurement space
+2. **Anatomical frames**: Tied to specific body segments
+3. **Sensor-specific frames**: Related to individual sensor positioning
 
-.button-box:hover {
-  background-color: #45a049; /* Darker green on hover */
-  transform: scale(1.05); /* Slightly enlarges on hover */
-}
-</style>
+A **coordinate system** is fully described by:
+1. The origin relative to which coordinates are expressed
+2. The interpretation of the three axes
+3. The units in which measurements are expressed
 
-<div class="centered-button">
-  <a href="subpage.html" class="button-box">Landmark table</a>
-</div>
+### Hierarchical Structure
 
+Reference frames can have a hierarchical structure, where one frame is nested within another. For example:
+- Torso position within the room frame
+- Arm position relative to the torso
+- Hand position relative to the arm
 
+The **global reference frame** sits at the top of this hierarchy, associated with the space through which the entire body moves.
 
+## Unified Placement Framework
 
-# Types of Motion Capture
-## Optical Motion Capture (OMC)
+### Anatomical Coordinate System
 
-**Optical Motion Capture (OMC)** systems utilize multiple cameras to track the movement of reflective markers placed on a subject's body or objects. These markers reflect light emitted by the cameras, allowing the system to triangulate their positions in three-dimensional space. The cameras record the positions of these markers at high frame rates, capturing data on **marker positions (POS)** and, optionally, **orientation (ORNT)**. Additionally, **velocity (VEL)**, **acceleration (ACCEL)**, and **angular acceleration (ANGACCEL)** data can be derived from the marker positions over time. **Marker placement** should define the type, size, and shape of markers, along with their specific placement on anatomical landmarks, such as joints and limb segments.
+We define precise anatomical coordinate systems for each body segment using palpable landmarks. These definitions ensure consistent interpretation and implementation across different applications.
 
-## Inertial Measurement Units (IMUs)
+::: {.callout-note}
+The complete **anatomical landmark table** with detailed coordinate systems for all body segments is available [here](./anatomical_table.qmd).
+:::
 
-**Inertial Measurement Units (IMUs)** consist of small sensors, including accelerometers, gyroscopes, and sometimes magnetometers, attached to a subject's body. IMUs measure changes in **acceleration (ACCEL)**, **angular acceleration (ANGACCEL)**, **velocity (VEL)**, and **orientation (ORNT)**. They are compact and versatile, making them suitable for wearable applications like sports performance analysis and motion tracking in remote or outdoor environments. **Positioning of the IMU** on the body should include details about the location and orientation, typically described using text and photographs or diagrams showing the sensor's orientation relative to the body part it is attached to.
+### Placement Principles
 
-## Markerless Motion Capture
+Our unified placement scheme follows these core principles:
 
-**Markerless Motion Capture** systems use algorithms to track a subject's movements without physical markers. Cameras capture video footage, which software processes to extract data on **position (POS)**, **orientation (ORNT)**, and sometimes **velocity (VEL)** and **acceleration (ACCEL)**. Markerless motion capture is non-invasive and captures natural movement, making it popular in entertainment, biomechanics, and human-computer interaction. The tracked points' definition is often software-specific, depending on which points the software allows to be tracked.
+1. Sensor placement must be reproducible by a human with defined precision
+2. Placement coordinates relate to anatomical landmarks of the relevant body part
+3. Landmarks define the origin, direction, and limits of axes
+4. Sensor locations are reported as ratios of the axis limits
+5. Placement precision depends on landmarks, axes, and measurement method
 
-# Definitions
+### Precision Levels
 
-## Space
-In BIDS terms, **space** is defined as an artificial frame of reference used to describe different anatomies in a unifying manner (see Appendix VIII). Data collected in studies of physical or virtual motion typically have a reference frame anchored to the physical lab space or the virtual environment.
+We define three levels of placement precision:
 
-## Reference Frame
-A **reference frame** is an abstract coordinate system with a specified origin, orientation, and scale, defined by a set of reference points (Kovalevsky & Mueller, 1989). It broadly describes the type of space or context associated with the data, whether the space is fixed or moving (global or local reference frame), or the identity of the object it moves with. For instance, an anatomical reference frame is fixed to the body and moves as the body moves through space.
+1. **Level 1**: ~10% precision, such as Visual Inspection
+   - Placement defined by visual inspection of body parts and landmarks
+   - Limited by human eye resolution and alignment ability
 
-## Coordinate System
-A **coordinate system** is fully described by (1) the origin, (2) the interpretation of the axes, and (3) the units. In BIDS terms, a coordinate system comprises information about (1) the origin relative to which the coordinate is expressed, (2) the interpretation of the three axes, and (3) the units in which the numbers are expressed.
+2. **Level 2**: ~5% precision, such as Tape Measure
+   - Placement defined by measuring distances between landmarks
+   - Limited by tape measure resolution and alignment ability
 
-## Hierarchical Structure of Reference Frames
-Reference frames can have a hierarchical structure, where one reference frame is nested within another. For example, the position of the torso can be expressed within a room reference frame, the arm position relative to the torso, and the hand position relative to the arm. The reference frame at the top of this hierarchy is the **global reference frame**, associated with the space through which the entire body moves. This representation is useful in scenarios where the location of the person in space is relevant rather than their posture or limb motion.
+3. **Level 3**: ~1% precision, such as 3D Scanning
+   - Placement defined by 3D scanning body parts
+   - Limited by scanner resolution and alignment ability
 
-# Unified Placement Scheme
+### Standardized Annotation
 
-## Anatomical Coordinate System for Rigid Body Parts
-Axis definitions per body part are provided in the anatomical **[landmark table](./anatomical_table.qmd)**. The table includes the name of the body part, the axis, and the direction of the axis, defined using anatomical landmarks, with axis limits ranging from 0 to 100% of the distance between the landmarks.
+Sensor placement should be documented using a standardized format that includes:
 
-## Principles of Sensor Placement Annotation
-We propose a **unified placement scheme** for sensors based on anatomical landmarks and the axes defined in the anatomical landmark table. The scheme follows these principles:
+1. Body part name
+2. Axis name and direction
+3. Axis limits
+4. Sensor location (as ratio of axis limits)
+5. Placement precision level
 
-1. Sensor placement should be reproducible by a human with defined precision.
-2. Placement in each dimension should be related to the anatomical landmarks of the relevant body part.
-3. Landmarks define the origin, direction, and limits of the axes.
-4. Sensor locations should be reported as a ratio of the limits of each axis for each body part.
-5. Placement precision depends on the precision of landmarks, axes, and the measurement method.
+This framework does not prescribe specific annotation formats; different standards and specifications can use the principles to develop their own. However, this framework is designed to be compatible with existing data sharing standards such as [Brain Imaging Data Structure (BIDS)](https://bids.neuroimaging.io/) and [Hierarchical Event Descriptors (HED)](https://www.hedtags.org/). Specifically, using this framework would provide precise details for the sensor placement as described in [Motion-BIDS](https://bids-specification.readthedocs.io/en/stable/modality-specific-files/motion.html).
 
-## Placement Precision
-The precision of sensor placement is related to the precision of landmark definitions, axis orthogonality, and measurement methods. We propose the following precision levels:
+An exemplar annotation following the general HED instructions can be represented as:
 
-1. **Visual Inspection**: Placement defined by visual inspection of body parts and landmarks, limited by human eye resolution and alignment ability. Estimated precision: ~10% of the distance between landmarks.
-2. **Tape Measure**: Placement defined by measuring distances between landmarks and placing the sensor at a specific ratio. Limited by tape measure resolution and alignment ability. Estimated precision: ~5% of the distance between landmarks.
-3. **3D Scanning**: Placement defined by 3D scanning body parts and placing the sensor at a specific ratio. Limited by 3D scanner resolution and alignment ability. Estimated precision: ~1% of the distance between landmarks.
+```
+(Body-part, (X-position/#, Y-position/#, Z-position/#), Precision)
+```
+(note that the exact HED tags are under development under [HED-SLAM](https://www.hedtags.org/display_hed_prerelease.html?schema=slam_prerelease))
 
-**Sensor placement precision** should always be reported in the dataset metadata.
+This standardization framework represents a significant step toward improving data quality, reproducibility, and interoperability in human movement analysis, from clinical biomechanics to continuous health monitoring.
 
-## Sensor Placement Annotation
-Sensor placement should be annotated using a standardized format, including:
+## How to Contribute
+**Contributions**: We welcome contributions to this framework from the research community. Please submit your suggestions and feedback via the [GitHub repository](https://github.com/human-sensor-placement/human-sensor-placement.github.io) Issues section.
 
-1. Body part name
-2. Axis name
-3. Axis direction
-4. Axis limits
-5. Sensor location as a ratio of the axis limits
-6. Sensor placement precision
+There are specific areas where we seek contributions:
 
-### Hierchical Event Descriptors (HED) for Sensor Placement
-We *propose* using Hierarchical Event Descriptors (HED) to annotate sensor placement in a standardized format. A possible HED tag for sensor placement could be:
-
-(Body-part, (X-position/#, Y-position/#, Z-position/#), Precision)
-
-Where:
-
-- Body-part: The name of the body part where the sensor is placed.
-- X-position, Y-position, Z-position: The sensor's position along the X, Y, and Z axes, respectively, expressed as a ratio of the axis limits.
-- Precision: The precision level of sensor placement (e.g., Visual Inspection, Tape Measure, 3D Scanning).
-
-NOTE: This proopsal is still in discussion and we are open to feedback and suggestions. Importantly, we aim to establish a **HED partnered schemaw** to define (1) the anatomical landmarks, (2) axis limits, and (3) axis direction for each body part.
+- Standard vocabulary and communication formats through BIDS, HED, and other specifications
+- Validation studies for inter-operator reliability assessment
+- Mappings between this framework and existing standards (such as SENIAM)
+- Software tools for coordinate calculation and placement visualization
````