Question about the time synchronization

Hi, Ego-Exo4D team.

I want to get the corresponding information (like trajectory and gaze) for the frame_aligned_videos.

As I understand it, one capture folder corresponds to multiple takes, so I need to look up each take's information in trajectory.csv (or gaze.csv) inside the capture folder. I see that each take in takes.json has a timesync_start_idx and timesync_end_idx. So I should use these two indices to find the timestamps of the corresponding period in timesync.csv, and then use those timestamps to look up the trajectory in trajectory.csv.

  1. Is this the right way to do it?

  2. If this is the right line of thought, do “timesync_start_idx” and “timesync_end_idx” refer to row numbers in “timesync.csv”, or row numbers minus 1? (I’m not sure what the indices mean here, since the CSV file has a header row.)

Hey @zgchen333, thanks for the question!

That’s right, a single capture encapsulates 1 or more takes. The timesync start/end idx are 0-indexed pointers to rows in the capture’s timesync.csv.
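To make the header question concrete, here is a minimal, self-contained illustration (with made-up data and a made-up column name) of how pandas consumes the header row, so that index 0 refers to the first data row:

```python
import io
import pandas as pd

# A toy stand-in for timesync.csv: one header row, then data rows.
csv_text = "aria_capture_timestamp_ns,other\n1000,a\n2000,b\n3000,c\n"

timesync = pd.read_csv(io.StringIO(csv_text))
# pandas consumes the header line, so .iloc[0] is the first DATA row,
# not the header: timesync_start_idx = 0 points at timestamp 1000.
print(timesync.iloc[0]["aria_capture_timestamp_ns"])  # 1000
```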

For example, given the take metadata, you can load the trajectory for that section like this:

import json
import pandas as pd
from projectaria_tools.core import mps

def get_aria_start_end_sec(timesync_path, start_idx, end_idx):
    """Convert timesync row indices to Aria device-time seconds."""
    timesync = pd.read_csv(timesync_path)
    # Find the Aria RGB (214-1) capture-timestamp column.
    aria_rgb_col = next(
        col
        for col in timesync.columns
        if "aria" in col.lower() and "214-1" in col and "capture_timestamp_ns" in col
    )
    return (
        timesync.iloc[start_idx][aria_rgb_col] * 1e-9,
        timesync.iloc[end_idx][aria_rgb_col] * 1e-9,
    )

with open("path/to/metadata/takes.json") as json_file:
    take_metadata = json.load(json_file)
start_idx = take_metadata[0]["timesync_start_idx"]
end_idx = take_metadata[0]["timesync_end_idx"]
start_sec, end_sec = get_aria_start_end_sec("path/to/capture/timesync.csv", start_idx, end_idx)

closed_loop_trajectory_filepath = "path/to/capture/closed_loop_trajectory.csv"
mps_trajectory = mps.read_closed_loop_trajectory(closed_loop_trajectory_filepath)
for pose in mps_trajectory:
    # Skip poses before the take starts; stop once past the end.
    if start_sec and pose.tracking_timestamp.total_seconds() < start_sec:
        continue
    if end_sec and pose.tracking_timestamp.total_seconds() >= end_sec:
        break
    # Get take-aligned poses here
    tracking_timestamp_ns = int(pose.tracking_timestamp.total_seconds() * 1e9)
    T_world_device = pose.transform_world_device

We’re working on getting pre-trimmed versions of MPS out as well - stay tuned!

Okay. I got it. Thank you for your help.

Thanks a lot. Could you please tell me where the intrinsics of the ego views are? I can only get the pose from trajectory.csv.

They are in the VRS files.

Thanks! However, are the VRS files released? I downloaded the dataset but can't find them anywhere.

@dkukreja @miguelmartin Hey, Thank you for your great work.
When I try to download ‘take_eye_gaze’ with

egoexo -o ../data --parts take_eye_gaze

And the output is

ERROR: could not get manifests for all parts (nothing to download)

It is worth noting that I can download takes, e.g. "egoexo -o ../data --uid 853961be-a9e1-4b0c-81f6-d3e17c190b08".
Therefore, I want to ask whether this error results from your ongoing update work.


thanks! However, are the VRS files released? I downloaded the dataset and nowhere to find them.

Please use --parts take_vrs

Thank you for the flag, I have resolved this issue, you can now download with --parts take_eye_gaze

You can alternatively use --parts capture_eye_gaze, and then with timesync.csv and the start/end idx in the takes.json file you can map to the correct eye-gaze region.

Hello, Ego-Exo4D team.

I’ve observed transformation parameters and quaternions between the camera and world coordinates within the visualizer. Additionally, the ‘gopro_calibs.csv’ file contains parameters denoted as (tx_world_cam, ty_world_cam, tz_world_cam, qx_world_cam, qy_world_cam, qz_world_cam, qw_world_cam).

The uid for this example is 853961be-a9e1-4b0c-81f6-d3e17c190b08, which was utilized for verification purposes.

In my analysis, I anticipated that the parameters in the ‘gopro_calibs.csv’ file would correspond to the 3D transform observed in the visualizer, assuming they both refer to the same world coordinate system. However, in this example, they differ. Does this indicate that the world coordinate systems are distinct?

Additionally, could you provide sample code demonstrating how to obtain the transformation matrices between GoPro cameras and the Aria device?
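As a sketch of the math behind the question: if the GoPro pose (from gopro_calibs.csv) and the Aria device pose (from the trajectory) are both expressed in the same world frame, the relative transform comes from composing one with the inverse of the other. This is an illustrative sketch, not official sample code, and the quaternion ordering and frame conventions are assumptions to verify against the released calibration files.

```python
import numpy as np

def quat_trans_to_se3(qw, qx, qy, qz, tx, ty, tz):
    """Build a 4x4 homogeneous transform from a unit quaternion + translation."""
    R = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qw*qz),     2*(qx*qz + qw*qy)],
        [2*(qx*qy + qw*qz),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qw*qx)],
        [2*(qx*qz - qw*qy),     2*(qy*qz + qw*qx),     1 - 2*(qx*qx + qy*qy)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = [tx, ty, tz]
    return T

# T_world_cam: from the (t*_world_cam, q*_world_cam) columns of gopro_calibs.csv.
# T_world_device: from pose.transform_world_device in the MPS trajectory.
# If both share the same world frame, the transform taking Aria device
# coordinates into GoPro camera coordinates is:
#   T_cam_device = inv(T_world_cam) @ T_world_device
```

If the two transforms disagree even after this composition, that would indeed suggest the visualizer and the calibration file use different world frames.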