
Results of a Master's thesis


Dynamic Object Trajectory Prediction of Time-Dependent EIT Data Using Recurrent Neural Networks

This project presents a novel approach for dynamic image reconstruction in Electrical Impedance Tomography (EIT). The approach uses a data-driven reconstruction model consisting of a Variational Autoencoder (VAE) and a mapper with an integrated long short-term memory (LSTM) unit. The network was specifically designed for dynamic object trajectory prediction: it accurately tracks an object's movement within the EIT tank and also predicts future object positions by exploiting the temporal information in sequential EIT data. The approach was developed for both 2D and 3D reconstructions of object motion. Simulation data were generated by FEM simulation (pyEIT forward solver); experimental data were collected with an EIT tank equipped with two electrode rings (32 electrodes each) and a Sciospec EIT device. The reconstruction network was trained and tested on simulation data, on experimental EIT data collected during 2D motion, and on experimental EIT data collected during 3D motion.

Reconstruction network architecture

The reconstruction model consists of two core components: a mapper with an integrated LSTM layer at its output and a VAE decoder. The architecture is illustrated in Figure 1.


Figure 1: Architecture of reconstruction model.

The LSTM mapper, denoted as $\Xi$, processes temporal sequences of voltage measurements and maps them to the latent space $\mathbf{h}$. Subsequently, the VAE decoder, denoted as $\Psi$, reconstructs the latent representation into a conductivity distribution. The complete reconstruction network $\Gamma$ is defined as the composition of these two mappings:

$$ \Gamma := \Psi \circ \Xi : V_{t} \mapsto h_{t+1} \mapsto \hat{\gamma}_{t+1} $$

Here, $V_{t}$ represents the voltage measurements at time $t$, $h_{t+1}$ the predicted latent representation at time $t+1$, and $\hat{\gamma}_{t+1}$ the reconstructed conductivity distribution at time $t+1$. Figure 2 illustrates the working principle of the reconstruction network, demonstrating how a sequence of voltage measurements is used as input to predict the future conductivity distribution.
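The composed network can be sketched in code. The following is a minimal, illustrative Keras sketch of $\Gamma = \Psi \circ \Xi$, assuming an LSTM mapper feeding a pretrained VAE decoder; the sequence length, layer widths, latent dimension, and mesh size are placeholder assumptions and not the values used in the thesis.

```python
# Hedged sketch of the composed reconstruction network Gamma = Psi o Xi.
# All sizes below are illustrative assumptions, not thesis values.
from tensorflow.keras import layers, Model

SEQ_LEN = 4         # number of past voltage frames in one input sequence
N_VOLT = 32 * 32    # voltage data points per frame (2D, one electrode ring)
LATENT_DIM = 16     # dimension of the latent space h (assumption)
N_ELEMENTS = 1342   # number of mesh elements / output values (assumption)

# Xi: LSTM mapper from a voltage sequence to the predicted latent code h_{t+1}
volt_seq = layers.Input(shape=(SEQ_LEN, N_VOLT), name="voltage_sequence")
x = layers.TimeDistributed(layers.Dense(256, activation="relu"))(volt_seq)
x = layers.LSTM(128)(x)                              # temporal aggregation
h_next = layers.Dense(LATENT_DIM, name="latent_prediction")(x)
mapper = Model(volt_seq, h_next, name="Xi_lstm_mapper")

# Psi: (pretrained) VAE decoder from latent code to conductivity distribution
h_in = layers.Input(shape=(LATENT_DIM,))
y = layers.Dense(512, activation="relu")(h_in)
gamma_hat = layers.Dense(N_ELEMENTS, activation="sigmoid", name="conductivity")(y)
decoder = Model(h_in, gamma_hat, name="Psi_vae_decoder")

# Gamma = Psi o Xi: V_t -> h_{t+1} -> gamma_hat_{t+1}
reconstruction_net = Model(volt_seq, decoder(mapper(volt_seq)), name="Gamma")
reconstruction_net.summary()
```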

Figure 2: Overview of the reconstruction process of the proposed reconstruction model. A sequence of four voltage measurements is used to predict the conductivity distribution of the next time step.

Training of reconstruction network

The training process was conducted in two stages. In the first stage, the VAE was trained in an unsupervised manner on synthetically generated conductivity distributions for both 2D and 3D space. For the 2D reconstructions, a triangular mesh representing the electrode plane of a cylindrical tank was used; for the 3D reconstructions, a voxel-based approach was used. In the second stage, the LSTM mapper was trained in a supervised manner. The VAE encoder generated latent representations of known conductivity distributions, which served as labels for the supervised learning of the LSTM mapper: sequences of voltage measurements were paired with the latent representations of the corresponding future conductivity distributions.
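A hedged sketch of this two-stage procedure is given below. The helper that pairs a window of voltage frames with the latent code of the following frame is illustrative, and the commented calls assume Keras models named `vae`, `encoder`, and `mapper` that are not defined in this README.

```python
# Hedged sketch of the two-stage training procedure described above.
# Model names, window length, and training settings are illustrative assumptions.
import numpy as np

# Stage 1: train the VAE unsupervised on synthetic conductivity distributions
# (e.g. random ball positions on the mesh / voxel grid).
# vae.fit(gamma_synthetic, gamma_synthetic, epochs=..., batch_size=...)

# Stage 2: freeze the VAE, let the encoder produce latent labels, and train the
# LSTM mapper supervised on (voltage sequence, future latent code) pairs.
def make_sequences(voltages, latents, seq_len=4):
    """Pair the window V_{t-seq_len..t-1} with the latent code h_t of the
    next frame, so the mapper learns to predict one step ahead."""
    X, y = [], []
    for t in range(seq_len, len(voltages)):
        X.append(voltages[t - seq_len:t])   # input sequence of voltage frames
        y.append(latents[t])                # label: latent code of the next frame
    return np.asarray(X), np.asarray(y)

# Latent labels from the frozen encoder (taking the posterior mean is an assumption):
# h_labels = encoder.predict(gamma_frames)[0]
# X_seq, y_lat = make_sequences(v_frames, h_labels, seq_len=4)
# mapper.compile(optimizer="adam", loss="mse")
# mapper.fit(X_seq, y_lat, epochs=..., validation_split=0.1)
```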

EIT data collection

EIT data were acquired in both simulated and experimental settings. Simulations were performed using FEM-based modeling with the pyEIT package, while experimental data were collected using an EIT water tank. For 2D data, both the FEM simulations and the experimental measurements used a single electrode plane, yielding $32^2$ voltage data points per frame. For 3D data, experimental measurements with two electrode planes were performed, resulting in $64^2$ voltage data points per frame. The EIT data were collected by moving an acrylic ball along predefined trajectories at discrete positions. In 2D space, circular, spiral, figure-eight, polynomial, and square trajectories were used. In 3D space, the trajectories used were a helix, a spiral helix, and a circular sine wave.
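As an illustration of the simulation side of the data collection, the following sketch moves a circular anomaly (the ball) along a trajectory and solves the forward problem at each position. It assumes the older pyEIT (~1.1) API (`mesh.create`, `mesh.set_perm`, `eit_scan_lines`, `Forward`); newer pyEIT releases expose a `protocol`/`EITForward` interface instead. The drive pattern, mesh density, trajectory, and anomaly parameters are placeholder choices.

```python
# Hedged sketch of 2D forward simulation along a trajectory with pyEIT.
# API calls follow the older (~1.1) pyEIT interface; adapt to your version.
import numpy as np
import pyeit.mesh as mesh
from pyeit.eit.fem import Forward
from pyeit.eit.utils import eit_scan_lines

N_EL = 32
mesh_obj, el_pos = mesh.create(N_EL, h0=0.08)   # unit-disc mesh, 32 electrodes
ex_mat = eit_scan_lines(N_EL, 1)                # adjacent drive pattern (assumption)
fwd = Forward(mesh_obj, el_pos)

# Discrete positions of the ball along an example circular trajectory.
angles = np.linspace(0, 2 * np.pi, 100, endpoint=False)
trajectory = 0.5 * np.column_stack([np.cos(angles), np.sin(angles)])

frames = []
for cx, cy in trajectory:
    # place a circular anomaly (the acrylic ball) at the current position
    anomaly = [{"x": cx, "y": cy, "d": 0.1, "perm": 10.0}]
    mesh_new = mesh.set_perm(mesh_obj, anomaly=anomaly, background=1.0)
    f = fwd.solve_eit(ex_mat, step=1, perm=mesh_new["perm"])
    frames.append(f.v)                          # one voltage measurement frame

voltages = np.asarray(frames)                   # shape: (n_positions, n_measurements)
```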

Results

2D simulation model

The 2D simulation model was trained on a spiral trajectory and tested on circular and figure-eight trajectories. The results demonstrate high prediction accuracy for the proposed reconstruction network.

Circle Trajectory Eight Trajectory

2D experimental model

The 2D experimental model was trained on a spiral trajectory. The trained model was then evaluated on different test trajectories to assess its generalisation capabilities. To test the robustness to velocity variations, an additional experiment was performed in which the movement speed was increased by increasing the distance between consecutive discrete points. A comparative analysis between model architectures with and without an LSTM layer was also performed to highlight the capability of the LSTM layer to model the time-dependent behavior of moving objects. The following figures show the results of these tests.

Prediction of different trajectories

Circle Trajectory Eight Trajectory
Polynomial Trajectory Square Trajectory

Prediction with different velocities

Normal Velocity Increased Velocity
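Since the trajectories are sampled at discrete positions, the increased-velocity condition corresponds to a larger spacing between consecutive points. On recorded or simulated frame sequences this can be emulated by subsampling, as in the following sketch; the function names, stride value, and the referenced `v_frames`/`reconstruction_net` objects are illustrative assumptions.

```python
# Hedged sketch: emulate a higher movement speed by skipping frames, so the
# object travels a larger distance between consecutive measurements.
import numpy as np

def subsample_trajectory(frames: np.ndarray, stride: int = 2) -> np.ndarray:
    """Keep every `stride`-th frame; stride=2 doubles the per-step displacement."""
    return frames[::stride]

def sliding_windows(frames: np.ndarray, seq_len: int = 4) -> np.ndarray:
    """Stack overlapping windows of seq_len consecutive voltage frames."""
    return np.stack([frames[t - seq_len:t] for t in range(seq_len, len(frames))])

# Illustrative usage (v_frames and reconstruction_net assumed to exist):
# fast_frames = subsample_trajectory(v_frames, stride=2)
# gamma_hat = reconstruction_net.predict(sliding_windows(fast_frames, seq_len=4))
```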

Comparison of model with and without LSTM layer

With LSTM Layer Without LSTM Layer

3D experimental model

The 3D experimental model was trained on a spiral helix trajectory whose radius decreases with increasing height. Like the 2D experimental model, the 3D model was tested on various test trajectories (a normal helix trajectory and a circular sine wave). Different velocity variations were also tested and, finally, a comparison between the model with and without an LSTM layer was performed. The following figures show the results of these tests.

Prediction of different trajectories

Helix Trajectory Circular Sine Wave Trajectory

Prediction with different velocities

Normal Velocity Increased Velocity

Comparison of model with and without LSTM layer

With LSTM Layer Without LSTM Layer
