kedro_datasets.ibis.FileDataset

class kedro_datasets.ibis.FileDataset(filepath, file_format='parquet', *, table_name=None, connection=None, load_args=None, save_args=None, version=None, metadata=None)[source]

FileDataset loads and saves data from and to files in a specified file format.

Example usage for the YAML API:

cars:
  type: ibis.FileDataset
  filepath: data/01_raw/company/cars.csv
  file_format: csv
  table_name: cars
  connection:
    backend: duckdb
    database: company.db
  load_args:
    sep: ","
    nullstr: "#NA"
  save_args:
    sep: ","
    nullstr: "#NA"

motorbikes:
  type: ibis.FileDataset
  filepath: s3://your_bucket/data/02_intermediate/company/motorbikes/
  file_format: delta
  table_name: motorbikes
  connection:
    backend: polars

Example usage for the Python API:

import ibis
from kedro_datasets.ibis import FileDataset

data = ibis.memtable({"col1": [1, 2], "col2": [4, 5], "col3": [5, 6]})

dataset = FileDataset(
    filepath=tmp_path / "test.csv",
    file_format="csv",
    table_name="test",
    connection={"backend": "duckdb", "database": tmp_path / "file.db"},
)
dataset.save(data)
reloaded = dataset.load()
assert data.execute().equals(reloaded.execute())

Attributes

DEFAULT_CONNECTION_CONFIG

DEFAULT_LOAD_ARGS

DEFAULT_SAVE_ARGS

connection

The Backend instance for the connection configuration.

Methods

exists()

Checks whether a dataset's output already exists by calling the provided _exists() method.

from_config(name, config[, load_version, ...])

Create a dataset instance using the configuration provided.

load()

Loads data by delegation to the provided load method.

release()

Release any cached data.

resolve_load_version()

Compute the version the dataset should be loaded with.

resolve_save_version()

Compute the version the dataset should be saved with.

save(data)

Saves data by delegation to the provided save method.

DEFAULT_CONNECTION_CONFIG: ClassVar[dict[str, Any]] = {'backend': 'duckdb', 'database': ':memory:'}
DEFAULT_LOAD_ARGS: ClassVar[dict[str, Any]] = {}
DEFAULT_SAVE_ARGS: ClassVar[dict[str, Any]] = {}
__init__(filepath, file_format='parquet', *, table_name=None, connection=None, load_args=None, save_args=None, version=None, metadata=None)[source]

Creates a new FileDataset pointing to the given filepath.

FileDataset connects to the Ibis backend object constructed from the connection configuration. The backend key provided in the config can be any of the supported backends. The remaining dictionary entries will be passed as arguments to the underlying connect() method (e.g. ibis.duckdb.connect()).
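The split between the backend key and the remaining connect() arguments can be sketched in plain Python (an illustration of the assumed behaviour, not the library's actual code):

```python
# Illustrative sketch (assumed behaviour): the "backend" key picks the
# Ibis backend; all remaining entries are passed as keyword arguments to
# that backend's connect() method.
config = {"backend": "duckdb", "database": "company.db"}
config = dict(config)                # work on a copy
backend_name = config.pop("backend")
connect_kwargs = config              # everything except "backend"

# The equivalent Ibis call (requires ibis-framework with the duckdb extra):
#     con = getattr(ibis, backend_name).connect(**connect_kwargs)
```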

The read method corresponding to the given file_format (e.g. read_csv()) is used to load the file with the backend. Note that only the data is loaded; no link to the underlying file exists past FileDataset.load().
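As a hypothetical illustration of this naming convention, the loader and saver method names are derived directly from file_format:

```python
# Sketch of the assumed naming convention: "csv" maps to the backend's
# read_csv() on load and to a to_csv() export on save.
file_format = "csv"
reader_name = f"read_{file_format}"  # e.g. backend.read_csv(filepath, **load_args)
writer_name = f"to_{file_format}"    # e.g. export via to_csv(..., **save_args)
```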

Parameters:
  • filepath (str) – Path to a file to register as a table. Most useful for loading data into your data warehouse (for testing). On save, the backend exports data to the specified path.

  • file_format (str) – String specifying the file format for the file. Defaults to writing execution results to a Parquet file.

  • table_name (Optional[str]) – The name to use for the created table (on load).

  • connection (Optional[dict[str, Any]]) – Configuration for connecting to an Ibis backend. If not provided, connects to an in-memory DuckDB database.

  • load_args (Optional[dict[str, Any]]) – Additional arguments passed to the Ibis backend’s read_{file_format} method.

  • save_args (Optional[dict[str, Any]]) – Additional arguments passed to the Ibis backend’s to_{file_format} method.

  • version (Optional[Version]) – If specified, should be an instance of kedro.io.core.Version. If its load attribute is None, the latest version will be loaded. If its save attribute is None, save version will be autogenerated.

  • metadata (Optional[dict[str, Any]]) – Any arbitrary metadata. This is ignored by Kedro, but may be consumed by users or external plugins.
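A minimal sketch of how user-supplied load_args would layer on top of DEFAULT_LOAD_ARGS (an assumption mirroring the common Kedro dataset pattern, not code taken from this class):

```python
# Assumed merge behaviour: user-provided load_args override the
# class-level defaults (which are empty for FileDataset).
DEFAULT_LOAD_ARGS = {}                            # class-level default
user_load_args = {"sep": ",", "nullstr": "#NA"}   # as in the YAML example
load_args = {**DEFAULT_LOAD_ARGS, **(user_load_args or {})}
```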

property connection: BaseBackend

The Backend instance for the connection configuration.

Return type:

BaseBackend

exists()

Checks whether a dataset’s output already exists by calling the provided _exists() method.

Return type:

bool

Returns:

Flag indicating whether the output already exists.

Raises:

DatasetError – when underlying exists method raises error.

classmethod from_config(name, config, load_version=None, save_version=None)

Create a dataset instance using the configuration provided.

Parameters:
  • name (str) – Dataset name.

  • config (dict[str, Any]) – Dataset config dictionary.

  • load_version (Optional[str]) – Version string to be used for load operation if the dataset is versioned. Has no effect on the dataset if versioning was not enabled.

  • save_version (Optional[str]) – Version string to be used for save operation if the dataset is versioned. Has no effect on the dataset if versioning was not enabled.

Return type:

AbstractDataset

Returns:

An instance of an AbstractDataset subclass.

Raises:

DatasetError – When the function fails to create the dataset from its config.

load()[source]

Loads data by delegation to the provided load method.

Return type:

Table

Returns:

Data returned by the provided load method.

Raises:

DatasetError – When underlying load method raises error.

release()

Release any cached data.

Raises:

DatasetError – when underlying release method raises error.

Return type:

None

resolve_load_version()

Compute the version the dataset should be loaded with.

Return type:

Optional[str]

resolve_save_version()

Compute the version the dataset should be saved with.

Return type:

Optional[str]

save(data)[source]

Saves data by delegation to the provided save method.

Parameters:

data (Table) – The value to be saved by the provided save method.

Raises:
  • DatasetError – when underlying save method raises error.

  • FileNotFoundError – when the save method receives a file path instead of a directory, on Windows.

  • NotADirectoryError – when the save method receives a file path instead of a directory, on Unix.

Return type:

None