---
dataset_info:
  features:
    - name: path
      dtype: string
    - name: concatenated_notebook
      dtype: string
  splits:
    - name: train
      num_bytes: 13378216977
      num_examples: 781578
  download_size: 5447349438
  dataset_size: 13378216977
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Merged Jupyter Notebooks Dataset

## Introduction

This dataset is a transformed version of the Jupyter Code-Text Pairs dataset. The original dataset contains markdown, code, and output pairs extracted from Jupyter notebooks. This transformation merges these components into a single, cohesive format that resembles a Jupyter notebook, making it easier to analyze and understand the flow of information.
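The merged format can be illustrated with a small sketch. The cell contents below are made up for illustration; each `(markdown, code, output)` triple is joined with the `###Markdown` / `###Code` / `###Output` separators used in this dataset, and the cells are then joined with newlines into one notebook-like string:

```python
# Hypothetical (markdown, code, output) triples from one notebook
# (illustrative values, not real rows from the dataset).
cells = [
    ("# Load the data", "import pandas as pd\ndf = pd.read_csv('data.csv')", ""),
    ("# Inspect", "df.head()", "   col_a  col_b\n0      1      2"),
]

# Merge each triple with the dataset's section markers, then join
# all cells of the notebook into a single string.
merged = "\n".join(
    f"###Markdown\n{md}\n###Code\n{code}\n###Output\n{out}"
    for md, code, out in cells
)

print(merged)
```

Each row of the dataset holds one such `concatenated_notebook` string alongside its `path`.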

## Dataset Details

### Source

The original dataset is sourced from the Hugging Face Hub, specifically the bigcode/jupyter-code-text-pairs dataset. It contains pairs of markdown, code, and output from Jupyter notebooks.

### Transformation Process

Using the flexibility and efficiency of DuckDB, I processed the entire dataset without the need for heavy hardware. DuckDB's ability to handle large datasets efficiently allowed me to concatenate the markdown, code, and output for each notebook path into a single string, simulating the structure of a Jupyter notebook.

The transformation was performed with the following Python script, which runs a single DuckDB query:

```python
import duckdb

# Connect to a new DuckDB database file
new_db = duckdb.connect('merged_notebooks.db')

# Concatenate the markdown, code, and output of each cell, then
# aggregate all cells belonging to the same notebook path
query = """
SELECT path,
       STRING_AGG(CONCAT('###Markdown\n', markdown, '\n###Code\n', code, '\n###Output\n', output), '\n') AS concatenated_notebook
FROM read_parquet('jupyter-code-text-pairs/data/*.parquet')
GROUP BY path
"""

# Execute the query and materialize the result as a new table
new_db.execute(f"CREATE TABLE concatenated_notebooks AS {query}")
```