---
license: cc-by-4.0
task_categories:
  - image-classification
language:
  - en
tags:
  - x-ray
  - medical
  - chest
size_categories:
  - 100K<n<1M
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: val
        path: data/val-*
  - config_name: original
    data_files:
      - split: train
        path: original/train-*
      - split: test
        path: original/test-*
      - split: val
        path: original/val-*
dataset_info:
  - config_name: default
    features:
      - name: UID
        dtype: string
      - name: Fold
        dtype: int64
      - name: Split
        dtype: string
      - name: PatientID
        dtype: string
      - name: PhysicianID
        dtype: string
      - name: StudyDate
        dtype: string
      - name: Age
        dtype: int64
      - name: Sex
        dtype: string
      - name: HeartSize
        dtype: int64
      - name: PulmonaryCongestion
        dtype: int64
      - name: PleuralEffusion_Right
        dtype: int64
      - name: PleuralEffusion_Left
        dtype: int64
      - name: PulmonaryOpacities_Right
        dtype: int64
      - name: PulmonaryOpacities_Left
        dtype: int64
      - name: Atelectasis_Right
        dtype: int64
      - name: Atelectasis_Left
        dtype: int64
      - name: Image
        dtype: image
    splits:
      - name: train
        num_bytes: 36724515176.076
        num_examples: 137593
      - name: test
        num_bytes: 11088307165.008
        num_examples: 42928
      - name: val
        num_bytes: 9210192401
        num_examples: 34860
    download_size: 58343808539
    dataset_size: 57023014742.084
  - config_name: original
    features:
      - name: UID
        dtype: string
      - name: Fold
        dtype: int64
      - name: Split
        dtype: string
      - name: PatientID
        dtype: string
      - name: PhysicianID
        dtype: string
      - name: StudyDate
        dtype: string
      - name: Age
        dtype: int64
      - name: Sex
        dtype: string
      - name: HeartSize
        dtype: int64
      - name: PulmonaryCongestion
        dtype: int64
      - name: PleuralEffusion_Right
        dtype: int64
      - name: PleuralEffusion_Left
        dtype: int64
      - name: PulmonaryOpacities_Right
        dtype: int64
      - name: PulmonaryOpacities_Left
        dtype: int64
      - name: Atelectasis_Right
        dtype: int64
      - name: Atelectasis_Left
        dtype: int64
      - name: Image
        dtype: image
    splits:
      - name: train
        num_bytes: 793575463284.632
        num_examples: 137593
      - name: test
        num_bytes: 235100370576.352
        num_examples: 42928
      - name: val
        num_bytes: 197760028732.64
        num_examples: 34860
    download_size: 1266898242525
    dataset_size: 1226435862593.624
---

# TAIX-Ray Dataset

TAIX-Ray is a comprehensive dataset of approximately 200k bedside chest radiographs from around 50k intensive care patients at University Hospital Aachen, Germany, collected between 2010 and 2024.

Trained radiologists provided structured reports at the time of acquisition, assessing key findings such as cardiomegaly, pulmonary congestion, pleural effusion, pulmonary opacities, and atelectasis on an ordinal scale.
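Since each finding is stored as an integer grade, the label distribution can be summarized directly from the metadata columns. A minimal sketch with made-up rows (the exact grade semantics are documented in the linked GitHub repository, not reproduced here):

```python
import pandas as pd

# Hypothetical excerpt of the metadata: each finding column holds an
# ordinal integer grade.
df = pd.DataFrame({
    "UID": ["a", "b", "c", "d"],
    "HeartSize": [0, 1, 2, 1],
    "PulmonaryCongestion": [0, 0, 3, 1],
})

# Per-finding distribution of ordinal grades
finding_cols = ["HeartSize", "PulmonaryCongestion"]
for col in finding_cols:
    print(df[col].value_counts().sort_index())
```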


## Code & Details

The code for data loading, preprocessing, and baseline experiments is available at: https://github.com/mueller-franzes/TAIX-Ray


## How to Use

### Prerequisites

Ensure you have the following dependencies installed:

```shell
pip install datasets matplotlib huggingface_hub pandas tqdm
```

### Configurations

This dataset is available in two configurations:

| Name     | Size   | Image Size |
|----------|--------|------------|
| default  | 62 GB  | 512 px     |
| original | 1.2 TB | variable   |
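Before pulling the `original` configuration, it is worth checking that enough disk space is available; a short sketch using the download size from the dataset metadata above:

```python
import shutil

# Download size of the "original" configuration (from dataset_info above)
required_bytes = 1_266_898_242_525

# Free space on the filesystem holding the current directory
total, used, free = shutil.disk_usage(".")
print(f"free: {free / 1e12:.2f} TB, required: {required_bytes / 1e12:.2f} TB")
if free < required_bytes:
    print("Not enough space for the original configuration.")
```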

### Option A: Use within the Hugging Face Framework

To use the dataset directly within the Hugging Face `datasets` library, load and visualize it as follows:

```python
from datasets import load_dataset
from matplotlib import pyplot as plt

# Load the TAIX-Ray dataset (default configuration: 512 px images)
dataset = load_dataset("TLAIM/TAIX-Ray", name="default")

# Access the training split
ds_train = dataset["train"]

# Retrieve a single sample from the training set
item = ds_train[0]

# Extract and display the image
image = item["Image"]
plt.imshow(image, cmap="gray")
plt.savefig("image.png")  # Save the image to a file
plt.show()  # Display the image

# Print metadata (excluding the image itself)
for key, value in item.items():
    if key != "Image":
        print(f"{key}: {value}")
```
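Because most intensive care patients have several radiographs, a custom split should keep all images of a patient on the same side to avoid leakage; the `PatientID` column supports this. A minimal sketch with hypothetical metadata rows:

```python
import pandas as pd

# Hypothetical metadata: several images can share one PatientID.
meta = pd.DataFrame({
    "UID": ["u1", "u2", "u3", "u4", "u5", "u6"],
    "PatientID": ["p1", "p1", "p2", "p3", "p3", "p3"],
})

# Split on unique patients, then map back to images, so no patient
# appears on both sides.
patients = sorted(meta["PatientID"].unique())
train_patients = set(patients[: int(0.8 * len(patients))])
train_meta = meta[meta["PatientID"].isin(train_patients)]
test_meta = meta[~meta["PatientID"].isin(train_patients)]
print(len(train_meta), len(test_meta))
```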

### Option B: Downloading the Dataset

If you prefer to download the dataset to a specific folder, use the following script. It creates this folder structure:

```
.
├── data/
│   ├── 549a816ae020fb7da68a31d7d62d73c418a069c77294fc084dd9f7bd717becb9.png
│   ├── d8546c6108aad271211da996eb7e9eeabaf44d39cf0226a4301c3cbe12d84151.png
│   └── ...
└── metadata/
    ├── annotation.csv
    └── split.csv
```

```python
from datasets import load_dataset
from pathlib import Path
import pandas as pd
from tqdm import tqdm

# Define output paths
output_root = Path("./TAIX-Ray")

# Create folders
data_dir = output_root / "data"
metadata_dir = output_root / "metadata"
data_dir.mkdir(parents=True, exist_ok=True)
metadata_dir.mkdir(parents=True, exist_ok=True)

# Load dataset in streaming mode (avoids caching the full dataset locally)
dataset = load_dataset("TLAIM/TAIX-Ray", name="default", streaming=True)

# Process dataset
metadata = []
for split, split_dataset in dataset.items():
    print("-------- Start Download:", split, "--------")
    for item in tqdm(split_dataset, desc="Downloading"):  # Stream data one-by-one
        uid = item["UID"]
        img = item.pop("Image")  # PIL Image object

        # Save image
        img.save(data_dir / f"{uid}.png", format="PNG")

        # Store metadata
        metadata.append(item)

# Convert metadata to DataFrame
metadata_df = pd.DataFrame(metadata)

# Save annotations (without split information) to a CSV file
metadata_df.drop(columns=["Split", "Fold"]).to_csv(
    metadata_dir / "annotation.csv", index=False
)

# Save the split/fold assignment to a separate CSV file
metadata_df[["UID", "Split", "Fold"]].to_csv(
    metadata_dir / "split.csv", index=False
)

print("Dataset streamed and saved successfully!")
```
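After the download finishes, a quick consistency check can confirm that every UID listed in the metadata has a matching PNG in `data/`. A sketch of that check, demonstrated here on a throwaway directory with made-up UIDs:

```python
from pathlib import Path
import tempfile

# Stand-in for the real output folder; in practice point this at
# ./TAIX-Ray and read the UIDs from metadata/annotation.csv.
root = Path(tempfile.mkdtemp())
data_dir = root / "data"
data_dir.mkdir()
uids = ["abc123", "def456"]
for uid in uids:
    (data_dir / f"{uid}.png").touch()

# Report any UID whose image file is missing
missing = [uid for uid in uids if not (data_dir / f"{uid}.png").exists()]
print(f"{len(missing)} missing images")  # 0 missing images
```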