The Dataset in Numbers

SIMUL was recorded in many different environments and with many different participants, to be as generally useful as possible.

32 Participants

The SIMUL dataset contains IMU motion recordings from 32 different participants.

550 Minutes

In total, the SIMUL dataset contains 550 minutes of annotated IMU recordings across all participants.

6 Activities

Contained in the recordings are 6 differentiated activities: Standing, Walking, Ascending/Descending Stairs, and Riding Elevator Up/Down.

~46.8k Steps

Across all 6 performed activities, the dataset contains 46858 steps, of which 90% belong to the Walking activity and 10% to Ascending/Descending Stairs.

6 IMU Devices

Every recording contains data of 6 separate XSens MTw Awinda IMUs, consistently placed on strategically chosen body positions.

16 Sensor Channels

Each of the IMUs produces 16 data channels: Accelerometer (3), Free Acceleration (3), Gyroscope (3), Magnetic Field (3), and Orientation Quaternion (4).
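As a minimal sketch, the 16 channels of one IMU can be split into named groups. The channel ordering below is an assumption based on the list above, not a documented file format of SIMUL:

```python
import numpy as np

# Hypothetical channel layout for one IMU; the ordering is an assumption
# derived from the channel list above (3 + 3 + 3 + 3 + 4 = 16).
CHANNELS = {
    "acc":      slice(0, 3),    # Accelerometer (x, y, z)
    "free_acc": slice(3, 6),    # Free acceleration (gravity removed)
    "gyro":     slice(6, 9),    # Gyroscope (angular rates)
    "mag":      slice(9, 12),   # Magnetic field
    "quat":     slice(12, 16),  # Orientation quaternion (4 components)
}

def split_channels(frames: np.ndarray) -> dict:
    """Split an (n_samples, 16) array into named channel groups."""
    assert frames.shape[1] == 16
    return {name: frames[:, idx] for name, idx in CHANNELS.items()}

frames = np.zeros((80, 16))  # one second of data from one IMU at 80 Hz
parts = split_channels(frames)
print(parts["quat"].shape)  # (80, 4)
```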

Time-Synchronized

The sensor data streams are time-synchronized across all IMUs with a maximum delay of 10µs, allowing labels generated on one IMU's data stream to be applied to that of any other.

IMUs Sampled at 80Hz

Every sensor in the system is consistently sampled at 80Hz. With 6 IMUs and 16 channels per IMU, that amounts to 7680 data points per second.
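The data-rate arithmetic, scaled to the full dataset:

```python
# Data-rate arithmetic for the recording setup described above.
N_IMUS = 6
CHANNELS_PER_IMU = 16
SAMPLE_RATE_HZ = 80

points_per_second = N_IMUS * CHANNELS_PER_IMU * SAMPLE_RATE_HZ
print(points_per_second)  # 7680

# Scaled to the full dataset (550 minutes of recordings):
total_points = points_per_second * 550 * 60
print(total_points)  # 253440000, roughly 253 million data points
```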

Accurate Labeling

Every recording contains labels for the performed activity, as well as for every step with start and end timestamps, allowing SIMUL to be used for Activity Recognition and Step Identification purposes.

Step Labeling

A simple heuristic was employed to automatically create a first labeling proposal for every recording, which was then corrected by hand. The heuristic is based on zero-movement detection on the IMU data recorded at the left and right foot positions. Due to the time-synchronization between all IMUs, these labels are automatically accurate for all other contained sensor streams as well, such as the IMU data recorded in hand.
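A zero-movement heuristic of this kind can be sketched as follows. This is a minimal illustration, not the dataset's actual labeling code; the angular-rate threshold and minimum step length are illustrative values:

```python
import numpy as np

def propose_steps(gyro: np.ndarray, fs: float = 80.0,
                  thresh: float = 0.5, min_len: int = 8):
    """Propose step intervals from foot-mounted gyroscope data.

    Samples whose angular-rate magnitude stays below `thresh` (rad/s) are
    treated as zero-movement (stance) phases; each movement phase in
    between is proposed as one step. `thresh` and `min_len` are assumed
    example parameters, not those used for SIMUL.
    """
    moving = np.linalg.norm(gyro, axis=1) > thresh
    steps, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i                          # movement phase begins
        elif not m and start is not None:
            if i - start >= min_len:           # ignore very short blips
                steps.append((start / fs, i / fs))  # (start_s, end_s)
            start = None
    return steps
```

Proposals produced this way would then be reviewed and corrected by hand, as described above.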

Step Labeling Accuracy

To evaluate the labeling accuracy, a calibration walk was recorded using the 6 XSens MTw Awinda IMUs in the usual recording setup, accompanied by a 60fps video camera. After the steps were labeled in the IMU and video data separately, the resulting labels were compared. In the calibration recording, the difference between IMU and video step start timestamps was (\mu_{\textrm{s}}=-103\textrm{ms}, \sigma_{\textrm{s}}=24\textrm{ms}), while the difference for the end timestamps was (\mu_{\textrm{e}}=99\textrm{ms}, \sigma_{\textrm{e}}=30\textrm{ms}). This systematic offset can be attributed to the higher accuracy of the IMU data compared to the optical evaluation of movements in the video. However, when comparing the center timestamp of every step, calculated as \frac{s^{\textrm{start}} + s^{\textrm{end}}}{2}, the difference (\mu_{\textrm{c}}=-1.9\textrm{ms}, \sigma_{\textrm{c}}=17.1\textrm{ms}) was close to 0 and showed a standard deviation close to the sampling interval of the 60fps video.
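The comparison described above can be sketched as a short computation over matched step labels. The function below is a hypothetical illustration of that evaluation, taking each step as a (start, end) pair in seconds:

```python
import numpy as np

def compare_labels(imu_steps, video_steps):
    """Compare matched step labels from IMU and video.

    Both arguments are sequences of (start_s, end_s) pairs in the same
    order. Returns mean and standard deviation of the start, end, and
    center timestamp differences (IMU minus video), where the center of a
    step is (start + end) / 2, mirroring the evaluation described above.
    """
    imu = np.asarray(imu_steps, dtype=float)
    vid = np.asarray(video_steps, dtype=float)
    diffs = {
        "start":  imu[:, 0] - vid[:, 0],
        "end":    imu[:, 1] - vid[:, 1],
        "center": imu.mean(axis=1) - vid.mean(axis=1),
    }
    return {k: (v.mean(), v.std()) for k, v in diffs.items()}
```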

IMU Positioning

For recording, the 6 synchronized XSens MTw Awinda IMUs were positioned at 6 strategically chosen locations on each participant's body. Two IMUs were fixed to each participant's shoes; these are the data source for the automatic step labeling based on the heuristic described above. Furthermore, two IMUs were placed in the left and right front trouser pockets, and one in a back trouser pocket. The last of the six sensors was carried in hand, as if it were a smartphone running a navigation app with directions.