Original Dataset + Tokenized Data + Buggy/Fixed Embedding Pairs + Difference Embeddings

Overview

This repository contains four related datasets for training a transformation from buggy-code embeddings to fixed-code embeddings:
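
A purely illustrative sketch of what such a transformation could look like (the repository does not prescribe a model; the linear map, sizes, and random stand-in data below are assumptions): fit a map from buggy embeddings to difference embeddings, then recover a fixed embedding by adding the predicted difference.

import numpy as np

# Illustrative only: random stand-ins for real buggy/difference embeddings
rng = np.random.default_rng(0)
B = rng.standard_normal((2000, 1024)).astype(np.float32)  # buggy embeddings
D = rng.standard_normal((2000, 1024)).astype(np.float32)  # difference embeddings

# Least-squares linear map W such that B @ W approximates D
W, *_ = np.linalg.lstsq(B, D, rcond=None)

# Predicted fixed embedding = buggy embedding + predicted difference
fixed_pred = B + B @ W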

Datasets Included

1. Original Dataset (train-00000-of-00001.parquet)

  • Description: The legacy RunBugRun dataset
  • Format: Parquet file with buggy-fixed code pairs, bug labels, and language
  • Size: 456,749 samples
  • Load with:
from datasets import load_dataset

# Load the full training split (456,749 buggy-fixed pairs)
dataset = load_dataset(
    "ASSERT-KTH/RunBugRun-Final",
    split="train"
)
buggy = dataset['buggy_code']  # column of buggy source code
fixed = dataset['fixed_code']  # column of the corresponding fixed code
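
The split also carries the bug labels and language mentioned above; since those column names are not spelled out here, the quickest way to see what is available is to inspect the loaded dataset directly:

# List all columns, then look at one record
print(dataset.column_names)
sample = dataset[0]
print(sample['buggy_code'][:200])  # first 200 characters of one buggy snippet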

2. Difference Embeddings (diff_embeddings_chunk_XXXX.pkl)

  • Description: ModernBERT-large difference embeddings for each buggy-fixed pair, computed as fixed embedding - buggy embedding; each row is a 1024-dimensional vector.
  • Format: Pickle files
  • Dimensions: 456,749 × 1024 in total, split across 23 chunk files of 20,000 rows each (the last chunk holds the remaining 16,749).
  • Load with:
from huggingface_hub import hf_hub_download
import pickle

repo_id = "ASSERT-KTH/RunBugRun-Final"
diff_embeddings = []

# Download and concatenate all 23 chunks (0000 through 0022)
for chunk_num in range(23):
    file_path = hf_hub_download(
        repo_id=repo_id,
        filename=f"Embeddings_RBR/diff_embeddings/diff_embeddings_chunk_{chunk_num:04d}.pkl",
        repo_type="dataset"
    )
    with open(file_path, 'rb') as f:
        data = pickle.load(f)  # array of shape (chunk_size, 1024)
        diff_embeddings.extend(data.tolist())
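
Assuming each chunk unpickles to a NumPy array of 1024-dimensional rows (which the .tolist() call above relies on), the collected list can be turned back into a matrix and sanity-checked against the stated dimensions:

import numpy as np

diff_matrix = np.asarray(diff_embeddings, dtype=np.float32)
assert diff_matrix.shape == (456749, 1024)  # one row per buggy-fixed pair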

3. Tokens (chunk_XXXX.pkl)

  • Description: The original dataset tokenized; each item pairs the tokenized buggy and fixed code.
  • Format: Pickle files
  • Load with:
from huggingface_hub import hf_hub_download
import pickle

repo_id = "ASSERT-KTH/RunBugRun-Final"
tokenized_data = []

# Download and concatenate all 23 tokenized chunks
for chunk_num in range(23):
    file_path = hf_hub_download(
        repo_id=repo_id,
        filename=f"Embeddings_RBR/tokenized_data/chunk_{chunk_num:04d}.pkl",
        repo_type="dataset"
    )
    with open(file_path, 'rb') as f:
        data = pickle.load(f)  # list of tokenized buggy/fixed pairs
        tokenized_data.extend(data)
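
The per-item layout of the tokenized chunks is not documented here, so it is worth checking the structure before building on it:

# Structural check: how many items, and what does one look like?
print(len(tokenized_data))        # expected 456,749 if chunks mirror the dataset
print(type(tokenized_data[0]))
print(repr(tokenized_data[0])[:200])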

4. Buggy + Fixed Embeddings (buggy_fixed_embeddings_chunk_XXXX.pkl)

  • Description: Separate embeddings for the buggy and the fixed code of each pair (the per-side vectors from which the difference embeddings above are computed)
  • Format: Pickle files
  • Load with:
from huggingface_hub import hf_hub_download
import pickle

repo_id = "ASSERT-KTH/RunBugRun-Final"
buggy_list = []
fixed_list = []

# Download and concatenate all 23 embedding chunks
for chunk_num in range(23):
    file_path = hf_hub_download(
        repo_id=repo_id,
        filename=f"Embeddings_RBR/buggy_fixed_embeddings/buggy_fixed_embeddings_chunk_{chunk_num:04d}.pkl",
        repo_type="dataset"
    )
    with open(file_path, 'rb') as f:
        data = pickle.load(f)  # dict with 'buggy_embeddings' and 'fixed_embeddings' arrays
        buggy_list.extend(data['buggy_embeddings'].tolist())
        fixed_list.extend(data['fixed_embeddings'].tolist())
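
If the three embedding sets are row-aligned and the difference embeddings really are fixed - buggy as described in section 2 (an assumption worth verifying rather than a guarantee), the relationship can be cross-checked directly:

import numpy as np

buggy_arr = np.asarray(buggy_list, dtype=np.float32)
fixed_arr = np.asarray(fixed_list, dtype=np.float32)
diff_arr = np.asarray(diff_embeddings, dtype=np.float32)

# Should hold up to floating-point error if the chunks line up row-for-row
print(np.allclose(fixed_arr - buggy_arr, diff_arr, atol=1e-4))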