# Stereo4D (converted to VLBM format, quality top-5%)

This dataset contains a quality-filtered subset of Stereo4D sequences converted to the VLBM/Flock4D-compatible format using the conversion tool `stereo4d2vlbm.py`.
## Scale of Source and Filtered Dataset

The original Stereo4D dataset contains 98,112 sequences sourced from in-the-wild stereo videos. To focus on high-quality dynamic content, we applied an automated video quality detection pipeline (`quality_detect.py`) and retained only the top 5% of sequences by composite quality score, yielding the 4,908 sequences included in this dataset.
## Quality Filtering Pipeline

Quality detection was performed with `quality_detect.py`, which uniformly samples 16 frames per video and computes the following checks:
| Issue | Criterion |
|---|---|
| STATIC / NEAR_STATIC | Mean inter-frame pixel difference below threshold |
| HARD_CUT | Single frame-diff spike above threshold |
| FADE / DISSOLVE | Monotonic brightness drift or sustained moderate diff |
| DARK / OVEREXPOSED | Mean brightness out of the valid range |
| BLURRY | Median Laplacian variance too low |
| FLICKER | Std of per-frame brightness too high |
| DUPLICATE_FRAMES | High ratio of near-identical consecutive frames |
| LOOP | First and last frames nearly identical |
| TOO_SHORT | Fewer than 16 total frames |
| BLACK_FRAMES | High ratio of near-black frames |
| LOW_RESOLUTION | Width or height below 64 px |
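A few of these checks can be sketched in plain NumPy. This is an illustrative approximation, not the actual `quality_detect.py` implementation; the function name `detect_issues` and all threshold values are assumptions chosen for the example.

```python
import numpy as np

def detect_issues(frames, static_thresh=2.0, dark=40, bright=220,
                  dup_thresh=0.5, dup_ratio=0.5):
    """Flag simple quality issues on uniformly sampled grayscale frames.

    `frames` is a (T, H, W) float array in [0, 255]. All thresholds are
    illustrative placeholders, not the values used by quality_detect.py.
    """
    issues = []
    if len(frames) < 16:
        issues.append("TOO_SHORT")
    # Mean absolute pixel difference between consecutive frames, per pair.
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    if diffs.mean() < static_thresh:
        issues.append("STATIC")
    brightness = frames.mean(axis=(1, 2))  # per-frame mean brightness
    if brightness.mean() < dark:
        issues.append("DARK")
    elif brightness.mean() > bright:
        issues.append("OVEREXPOSED")
    # High ratio of near-identical consecutive frames.
    if (diffs < dup_thresh).mean() > dup_ratio:
        issues.append("DUPLICATE_FRAMES")
    # First and last frames nearly identical.
    if np.abs(frames[0] - frames[-1]).mean() < static_thresh:
        issues.append("LOOP")
    return issues
```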
Each video also receives a composite quality score (0–100) combining motion magnitude (40%), sharpness (35%), brightness balance (15%), and temporal stability (10%). Videos with any detected issue are penalized. The top 5% by score are selected for conversion.
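The weighting and selection described above could look roughly like the following. The weights follow the card; the per-issue penalty of 15 points and both function names are assumptions for illustration only.

```python
import numpy as np

def composite_score(motion, sharpness, brightness_balance, stability,
                    issue_count, penalty=15.0):
    """Weighted 0-100 quality score. Each component is assumed to be
    normalized to [0, 1]; the per-issue penalty is an illustrative guess."""
    score = (40 * motion + 35 * sharpness
             + 15 * brightness_balance + 10 * stability)
    return max(0.0, score - penalty * issue_count)

def select_top5(scores):
    """Return indices of the top-5% videos by composite score."""
    k = max(1, int(round(0.05 * len(scores))))
    return np.argsort(scores)[::-1][:k]
```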
## Pseudo-Depth Generation

Stereo4D does not provide ground-truth depth maps. Instead, sparse pseudo-depth is computed from the provided 3D point tracks and camera poses via `stereo4d2vlbm.py`:
1. **Load annotations**: read `camera2world` `(T, 3, 4)`, `track_lengths`, `track_indices`, and `track_coordinates` from the per-sequence `.npz` file.
2. **Compute intrinsics**: derive a pinhole intrinsic matrix from the horizontal FOV (`fov_bounds`) and the image resolution.
3. **Compute extrinsics**: invert the `camera2world` matrices to obtain world-to-camera transforms (W2C).
4. **Dense track array**: convert the sparse track representation to a dense `(T, N, 3)` world-coordinates array plus a boolean visibility mask.
5. **Project to 2D**: apply W2C and the projection to obtain `(T, N, 2)` image-space coordinates.
6. **Sparse depth map**: transform visible 3D points to camera space; the Z component gives metric depth. Points are rounded to the nearest pixel and written to a sparse `(H, W)` depth map (zero = unknown).
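The projection and rasterization steps above can be sketched for a single frame as follows. This is a minimal sketch of the geometry, not the code in `stereo4d2vlbm.py`; the function name and argument layout are assumptions.

```python
import numpy as np

def sparse_depth_map(points_w, visible, w2c, K, H, W):
    """Rasterize visible world-space points into a sparse (H, W) depth map.

    points_w: (N, 3) world coordinates; visible: (N,) bool mask;
    w2c: (4, 4) world-to-camera; K: (3, 3) intrinsics. Zero = unknown.
    """
    depth = np.zeros((H, W), dtype=np.float32)
    pts = points_w[visible]
    # World -> camera: homogeneous transform, keep (x, y, z) in camera frame.
    cam = (w2c @ np.concatenate([pts, np.ones((len(pts), 1))], axis=1).T).T[:, :3]
    in_front = cam[:, 2] > 0          # discard points behind the camera
    cam = cam[in_front]
    # Pinhole projection, then round to the nearest pixel.
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    depth[v[ok], u[ok]] = cam[ok, 2]  # Z in camera space = metric depth
    return depth
```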
The resulting depth maps are sparse: only pixels covered by tracked 3D points carry valid depth values.
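If a downstream task needs a dense map, a simple nearest-neighbor fill works as a rough visualization aid. This is purely a consumer-side convenience, not part of the dataset or the conversion pipeline; it assumes SciPy is available.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def densify_nearest(depth):
    """Fill unknown (zero) pixels with the depth of the nearest valid pixel.

    A crude nearest-neighbor fill for visualization only; not part of the
    dataset or stereo4d2vlbm.py.
    """
    unknown = depth == 0
    if unknown.all():
        return depth.copy()
    # For every unknown pixel, look up the index of the nearest known pixel.
    _, idx = distance_transform_edt(unknown, return_indices=True)
    return depth[tuple(idx)]
```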
## Dataset Structure

Each sequence directory follows this layout:
```
{sequence_id}/
├── rgbs/
│   ├── rgb_00000.jpg
│   ├── rgb_00001.jpg
│   └── ...
├── depths/
│   ├── depth_00000.npz
│   ├── depth_00001.npz
│   └── ...
├── annotations.npz
└── scene_info.json
```
### File Descriptions
- `rgbs/`: RGB frames extracted from the left-rectified video and saved as JPEG (`rgb_XXXXX.jpg`). Resolution is 512×512 pixels.
- `depths/`: sparse pseudo-depth maps saved as compressed NumPy archives (`depth_XXXXX.npz`). Each archive stores a float32 array under the key `depth` of shape `(H, W)`; zero values indicate unknown depth.
- `annotations.npz`: NumPy compressed file containing the following float16 arrays:
  - `trajs_2d` `(T, N, 2)`: 2D trajectories in pixel coordinates (x, y).
  - `trajs_3d` `(T, N, 3)`: 3D trajectories in world-space coordinates (x, y, z); zero-filled where invisible.
  - `visibilities` `(T, N)`: visibility flags (1.0 visible, 0.0 not visible).
  - `intrinsics` `(T, 3, 3)`: per-frame camera intrinsic matrices.
  - `extrinsics` `(T, 4, 4)`: per-frame world-to-camera (W2C) extrinsic matrices.
- `scene_info.json`: JSON file with per-sequence metadata. Fields: `source`, `num_frames`, `image_size`, `num_trajectories`, `depth_range`, `depth_type`, `original_sequence`.
## Data Specifications
- Image format: JPEG (RGB), 512×512 px
- Depth format: NPZ (float32), sparse (zero = unknown)
- Annotation format: `annotations.npz` (float16 arrays for compact storage)
- Frames per sequence: ~199 frames (varies slightly by sequence)
- Points per sequence: tens of thousands of tracked 3D points
## Usage Example (Python)
```python
import json
from pathlib import Path

import numpy as np
from PIL import Image

seq_dir = Path("stereo4d_vlbm/<sequence_id>")

# Load annotations
annotations = np.load(seq_dir / "annotations.npz")
trajs_2d = annotations["trajs_2d"]        # (T, N, 2)
trajs_3d = annotations["trajs_3d"]        # (T, N, 3)
vis = annotations["visibilities"]         # (T, N)
intrinsics = annotations["intrinsics"]    # (T, 3, 3)
extrinsics = annotations["extrinsics"]    # (T, 4, 4)

# Load an image and its sparse depth map
frame_idx = 0
rgb = Image.open(seq_dir / "rgbs" / f"rgb_{frame_idx:05d}.jpg")
depth_npz = np.load(seq_dir / "depths" / f"depth_{frame_idx:05d}.npz")
depth = depth_npz["depth"]                # float32 (H, W), 0 = unknown

# Load scene info
with open(seq_dir / "scene_info.json") as f:
    scene_info = json.load(f)
print(scene_info)
```
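As a sanity check on the annotation arrays, `trajs_3d` can be reprojected through the per-frame extrinsics and intrinsics and compared against `trajs_2d` for visible points. The helper below is a sketch of that check; the function name is an assumption, and agreement should only be expected up to float16 rounding.

```python
import numpy as np

def reproject(trajs_3d, extrinsics, intrinsics):
    """Project world-space tracks back to pixel coordinates.

    Shapes: trajs_3d (T, N, 3), extrinsics (T, 4, 4) world-to-camera,
    intrinsics (T, 3, 3). Returns (T, N, 2) pixel coordinates.
    """
    T, N, _ = trajs_3d.shape
    homo = np.concatenate([trajs_3d, np.ones((T, N, 1))], axis=-1)  # (T, N, 4)
    cam = np.einsum("tij,tnj->tni", extrinsics, homo)[..., :3]       # world -> camera
    pix = np.einsum("tij,tnj->tni", intrinsics, cam)                 # pinhole projection
    return pix[..., :2] / pix[..., 2:3]
```

With the arrays loaded above, something like `np.allclose(reproject(trajs_3d, extrinsics, intrinsics)[vis > 0], trajs_2d[vis > 0], atol=1.0)` would verify the conversion's internal consistency.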
## Conversion Script

The full conversion pipeline is provided in `stereo4d2vlbm.py`. It supports single-sequence, batch, and resume-from-checkpoint modes:
```bash
# Convert a single sequence
python stereo4d2vlbm.py --seq "_0be62W7ndY_15081748"

# Batch convert the top-5% sequences (uses the quality filter file)
python stereo4d2vlbm.py --batch --num_workers 8 \
    --top5_file tmp/data/stereo4d/quality_top5_full/top5pct_videos.txt

# Batch convert all available sequences
python stereo4d2vlbm.py --batch --num_workers 8
```
## Citation
Please cite the original Stereo4D dataset when using the converted data. If you use the VLBM/Flock4D conversion, please also cite this repository.
## Contact
If you encounter issues with the conversion or the converted files, please open an issue in the repository.