kedro.io.IncrementalDataSet

class kedro.io.IncrementalDataSet(path, dataset, checkpoint=None, filepath_arg='filepath', filename_suffix='', credentials=None, load_args=None, fs_args=None)[source]

IncrementalDataSet inherits from PartitionedDataSet, which loads and saves partitioned file-like data using the underlying dataset definition. For filesystem-level operations it uses fsspec: https://github.com/intake/filesystem_spec. IncrementalDataSet also stores information about the last processed partition in a so-called checkpoint, which by default is persisted to the location of the data partitions, so that subsequent pipeline runs load only the new partitions past the checkpoint.

Example:

from kedro.io import IncrementalDataSet

# these credentials will be passed to:
# a) 'fsspec.filesystem()' call,
# b) the dataset initializer,
# c) the checkpoint initializer
credentials = {"key1": "secret1", "key2": "secret2"}

data_set = IncrementalDataSet(
    path="s3://bucket-name/path/to/folder",
    dataset="pandas.CSVDataSet",
    credentials=credentials
)
loaded = data_set.load()  # loads all available partitions
# assert isinstance(loaded, dict)

data_set.confirm()  # update checkpoint value to the last processed partition ID
reloaded = data_set.load()  # still loads all available partitions

data_set.release()  # clears load cache
# returns an empty dictionary as no new partitions were added
data_set.load()
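
By default, the checkpoint is a text file named CHECKPOINT stored alongside the data partitions (see DEFAULT_CHECKPOINT_FILENAME and DEFAULT_CHECKPOINT_TYPE below). A minimal sketch of overriding this via the checkpoint argument; the checkpoint filepath below is a placeholder:

data_set = IncrementalDataSet(
    path="s3://bucket-name/path/to/folder",
    dataset="pandas.CSVDataSet",
    # store the checkpoint away from the data partitions instead of
    # the default "<path>/CHECKPOINT" text file
    checkpoint={
        "type": "kedro.extras.datasets.text.TextDataSet",
        "filepath": "s3://bucket-name/checkpoints/CHECKPOINT",
    },
    credentials=credentials,
)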

Attributes

DEFAULT_CHECKPOINT_FILENAME

DEFAULT_CHECKPOINT_TYPE

Methods

confirm()

Confirm the dataset by updating the checkpoint value to the latest processed partition ID.

exists()

Checks whether a data set’s output already exists by calling the provided _exists() method.

from_config(name, config[, load_version, …])

Create a data set instance using the configuration provided.

load()

Loads data by delegation to the provided load method.

release()

Release any cached data.

save(data)

Saves data by delegation to the provided save method.

DEFAULT_CHECKPOINT_FILENAME = 'CHECKPOINT'
DEFAULT_CHECKPOINT_TYPE = 'kedro.extras.datasets.text.TextDataSet'
__init__(path, dataset, checkpoint=None, filepath_arg='filepath', filename_suffix='', credentials=None, load_args=None, fs_args=None)[source]

Creates a new instance of IncrementalDataSet.

Parameters
  • path (str) – Path to the folder containing partitioned data. If the path starts with a protocol (e.g., s3://), the corresponding fsspec concrete filesystem implementation will be used. If no protocol is specified, fsspec.implementations.local.LocalFileSystem will be used. Note: some concrete implementations are bundled with fsspec, while others (like s3 or gcs) must be installed separately prior to using the PartitionedDataSet.

  • dataset (Union[str, Type[AbstractDataSet], Dict[str, Any]]) – Underlying dataset definition. This is used to instantiate the dataset for each file located inside the path. Accepted formats are: a) an object of a class that inherits from AbstractDataSet, b) a string representing a fully qualified class name to such a class, or c) a dictionary with a type key pointing to a string from b); all other keys are passed to the dataset initializer (see the sketch after this parameter list). Credentials for the dataset can be explicitly specified in this configuration.

  • checkpoint (Union[str, Dict[str, Any], None]) – Optional checkpoint configuration. Accepts a dictionary with the corresponding dataset definition, including filepath (unlike the dataset argument). Checkpoint configuration is described here: https://kedro.readthedocs.io/en/stable/05_data/02_kedro_io.html#checkpoint-configuration Credentials for the checkpoint can be explicitly specified in this configuration.

  • filepath_arg (str) – Underlying dataset initializer argument that will contain a path to each corresponding partition file. If unspecified, defaults to “filepath”.

  • filename_suffix (str) – If specified, only partitions that end with this string will be processed.

  • credentials (Optional[Dict[str, Any]]) – Protocol-specific options that will be passed to fsspec.filesystem https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.filesystem, the dataset initializer and the checkpoint. If the dataset or checkpoint configuration contains an explicit credentials spec, that spec takes precedence. All possible credentials management scenarios are documented here: https://kedro.readthedocs.io/en/stable/05_data/02_kedro_io.html#partitioned-dataset-credentials

  • load_args (Optional[Dict[str, Any]]) – Keyword arguments to be passed into the find() method of the filesystem implementation.

  • fs_args (Optional[Dict[str, Any]]) – Extra arguments to pass into the underlying filesystem class constructor (e.g. {"project": "my-project"} for GCSFileSystem).

Raises

DataSetError – If versioning is enabled for the underlying dataset.
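
For illustration, a minimal sketch of the dictionary form of dataset (format c above) combined with filename_suffix; the load_args shown are an assumption about what the underlying pandas.CSVDataSet accepts:

from kedro.io import IncrementalDataSet

data_set = IncrementalDataSet(
    path="s3://bucket-name/path/to/folder",
    # 'type' selects the dataset class; the remaining keys are passed
    # to the dataset initializer
    dataset={
        "type": "pandas.CSVDataSet",
        "load_args": {"sep": ";"},  # assumed pandas.CSVDataSet option
    },
    filename_suffix=".csv",  # only partitions ending in ".csv" are processed
)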

confirm()[source]

Confirm the dataset by updating the checkpoint value to the latest processed partition ID.

Return type

None
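
Outside a pipeline run, confirm() is typically called only after the loaded partitions have been processed successfully, so that a failed run does not advance the checkpoint. A minimal sketch, assuming the loaded dictionary maps partition IDs to partition data and process is a hypothetical user-defined function:

new_partitions = data_set.load()  # only partitions past the checkpoint
for partition_id, partition_data in sorted(new_partitions.items()):
    process(partition_data)  # hypothetical processing step
data_set.confirm()  # persist the latest processed partition ID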

exists()

Checks whether a data set’s output already exists by calling the provided _exists() method.

Return type

bool

Returns

Flag indicating whether the output already exists.

Raises

DataSetError – when underlying exists method raises error.

classmethod from_config(name, config, load_version=None, save_version=None)

Create a data set instance using the configuration provided.

Parameters
  • name (str) – Data set name.

  • config (Dict[str, Any]) – Data set config dictionary.

  • load_version (Optional[str]) – Version string to be used for load operation if the data set is versioned. Has no effect on the data set if versioning was not enabled.

  • save_version (Optional[str]) – Version string to be used for save operation if the data set is versioned. Has no effect on the data set if versioning was not enabled.

Return type

AbstractDataSet

Returns

An instance of an AbstractDataSet subclass.

Raises

DataSetError – When the function fails to create the data set from its config.
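
A minimal sketch of creating an IncrementalDataSet via from_config; the dataset name and paths are placeholders:

from kedro.io import AbstractDataSet

config = {
    "type": "kedro.io.IncrementalDataSet",
    "path": "data/01_raw/partitions",
    "dataset": "pandas.CSVDataSet",
}
data_set = AbstractDataSet.from_config("raw_partitions", config)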

load()

Loads data by delegation to the provided load method.

Return type

Any

Returns

Data returned by the provided load method.

Raises

DataSetError – When underlying load method raises error.

release()

Release any cached data.

Raises

DataSetError – when underlying release method raises error.

Return type

None

save(data)

Saves data by delegation to the provided save method.

Parameters

data (Any) – the value to be saved by provided save method.

Raises
  • DataSetError – when underlying save method raises error.

  • FileNotFoundError – when the save method receives a file path instead of a directory path, on Windows.

  • NotADirectoryError – when the save method receives a file path instead of a directory path, on Unix.

Return type

None
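
Since IncrementalDataSet inherits from PartitionedDataSet, save() is expected to take a dictionary mapping partition IDs to the data to be written, each ID resolved relative to path. A minimal sketch, reusing the data_set instance from the example above; the partition ID is a placeholder:

import pandas as pd

df = pd.DataFrame({"col1": [1, 2], "col2": [4, 5]})

# writes df to "<path>/2021-01-01/new-data<filename_suffix>" via the
# underlying dataset's save method
data_set.save({"2021-01-01/new-data": df})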