kedro_datasets.partitions.IncrementalDataset¶
- class kedro_datasets.partitions.IncrementalDataset(*, path, dataset, checkpoint=None, filepath_arg='filepath', filename_suffix='', credentials=None, load_args=None, fs_args=None, metadata=None)[source]¶
IncrementalDataset inherits from PartitionedDataset, which loads and saves partitioned file-like data using the underlying dataset definition. For filesystem-level operations it uses fsspec: https://github.com/intake/filesystem_spec. IncrementalDataset also stores information about the last processed partition in a so-called checkpoint, which by default is persisted to the location of the data partitions, so that subsequent pipeline runs load only new partitions past the checkpoint.
Example:
    from kedro_datasets.partitions import IncrementalDataset

    # tmp_path is a temporary directory (a pytest fixture in the original doctest)
    dataset = IncrementalDataset(
        path=str(tmp_path / "test_data"), dataset="pandas.CSVDataset"
    )
    loaded = dataset.load()  # loads all available partitions
    # assert isinstance(loaded, dict)
    dataset.confirm()  # update checkpoint value to the last processed partition ID
    reloaded = dataset.load()  # still loads all available partitions
    dataset.release()  # clears load cache
    # returns an empty dictionary as no new partitions were added
    assert dataset.load() == {}
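To make the checkpoint behaviour concrete, here is a minimal sketch (assuming a writable local folder and pandas installed; the folder and file names are illustrative): once confirm() has persisted the checkpoint, a fresh load() returns only partitions whose IDs sort after it.

    import pandas as pd
    from pathlib import Path
    from kedro_datasets.partitions import IncrementalDataset

    path = Path("test_data")  # illustrative local folder
    path.mkdir(exist_ok=True)
    pd.DataFrame({"x": [1]}).to_csv(path / "2024-01-01.csv", index=False)

    dataset = IncrementalDataset(path=str(path), dataset="pandas.CSVDataset")
    assert list(dataset.load()) == ["2024-01-01.csv"]  # keys are partition IDs
    dataset.confirm()  # persists the last partition ID to test_data/CHECKPOINT

    # a new partition arrives later; after release(), only it is loaded
    pd.DataFrame({"x": [2]}).to_csv(path / "2024-01-02.csv", index=False)
    dataset.release()  # clear the cached partition listing
    assert list(dataset.load()) == ["2024-01-02.csv"]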
Attributes

- DEFAULT_CHECKPOINT_FILENAME
- DEFAULT_CHECKPOINT_TYPE

Methods

- confirm() – Confirm the dataset by updating the checkpoint value to the latest processed partition ID.
- exists() – Checks whether a dataset's output already exists by calling the provided _exists() method.
- from_config(name, config[, load_version, ...]) – Create a dataset instance using the configuration provided.
- load() – Loads data by delegation to the provided load method.
- release() – Release any cached data.
- save(data) – Saves data by delegation to the provided save method.
- to_config() – Converts the dataset instance into a dictionary-based configuration for serialization.
- DEFAULT_CHECKPOINT_FILENAME = 'CHECKPOINT'¶
- DEFAULT_CHECKPOINT_TYPE = 'kedro_datasets.text.TextDataset'¶
- __init__(*, path, dataset, checkpoint=None, filepath_arg='filepath', filename_suffix='', credentials=None, load_args=None, fs_args=None, metadata=None)[source]¶
Creates a new instance of IncrementalDataset.
- Parameters:
path (str) – Path to the folder containing partitioned data. If the path starts with a protocol (e.g. s3://), the corresponding fsspec concrete filesystem implementation will be used. If no protocol is specified, fsspec.implementations.local.LocalFileSystem will be used. Note: some concrete implementations are bundled with fsspec, while others (like s3 or gcs) must be installed separately prior to usage of the PartitionedDataset.
dataset (str | type[AbstractDataset] | dict[str, Any]) – Underlying dataset definition, used to instantiate the dataset for each file located inside path. Accepted formats are: a) an object of a class that inherits from AbstractDataset, b) a string representing a fully qualified class name to such a class, or c) a dictionary with a type key pointing to a string from b), whose other keys are passed to the dataset initializer. Credentials for the dataset can be explicitly specified in this configuration. See the construction sketch after this section.
checkpoint (Union[str, dict[str, Any], None]) – Optional checkpoint configuration. Accepts a dictionary with the corresponding dataset definition including filepath (unlike the dataset argument). Checkpoint configuration is described here: https://docs.kedro.org/en/stable/data/partitioned_and_incremental_datasets.html#checkpoint-configuration. Credentials for the checkpoint can be explicitly specified in this configuration.
filepath_arg (str) – Underlying dataset initializer argument that will contain the path to each corresponding partition file. If unspecified, defaults to "filepath".
filename_suffix (str) – If specified, only partitions that end with this string will be processed.
credentials (Optional[dict[str, Any]]) – Protocol-specific options that will be passed to fsspec.filesystem (https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.filesystem), the dataset initializer and the checkpoint. If the dataset or the checkpoint configuration contains an explicit credentials spec, that spec takes precedence. All possible credentials management scenarios are documented here: https://docs.kedro.org/en/stable/data/partitioned_and_incremental_datasets.html#partitioned-dataset-credentials
load_args (Optional[dict[str, Any]]) – Keyword arguments to be passed into the find() method of the filesystem implementation.
fs_args (Optional[dict[str, Any]]) – Extra arguments to pass into the underlying filesystem class constructor (e.g. {"project": "my-project"} for GCSFileSystem).
metadata (Optional[dict[str, Any]]) – Any arbitrary metadata. This is ignored by Kedro, but may be consumed by users or external plugins.
- Raises:
DatasetError – If versioning is enabled for the checkpoint dataset.
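As a construction sketch of the accepted formats (assuming s3fs is installed; bucket names, paths and credential values below are placeholders, not defaults):

    from kedro_datasets.partitions import IncrementalDataset

    dataset = IncrementalDataset(
        path="s3://my-bucket/events",  # placeholder bucket/prefix
        dataset={
            # "type" selects the class; the remaining keys are passed to its
            # initializer. Do not set "filepath" here: it is injected for each
            # partition via filepath_arg.
            "type": "pandas.CSVDataset",
            "save_args": {"index": False},
        },
        checkpoint={
            # unlike `dataset`, the checkpoint definition includes a filepath
            "type": "kedro_datasets.text.TextDataset",
            "filepath": "s3://my-bucket/checkpoints/CHECKPOINT",  # placeholder
        },
        filename_suffix=".csv",  # only files ending in ".csv" are partitions
        credentials={"key": "...", "secret": "..."},  # placeholder fsspec options
    )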
- confirm()[source]¶
Confirm the dataset by updating the checkpoint value to the latest processed partition ID.
- Return type: None
- exists()[source]¶
Checks whether a dataset’s output already exists by calling the provided _exists() method.
- Return type: bool
- Returns:
Flag indicating whether the output already exists.
- Raises:
DatasetError – when the underlying exists method raises an error.
- classmethod from_config(name, config, load_version=None, save_version=None)[source]¶
Create a dataset instance using the configuration provided.
- Parameters:
name (str) – Dataset name.
config (dict[str, Any]) – Dataset configuration dictionary.
load_version (Optional[str]) – Version string to be used for the load operation if the dataset is versioned. Has no effect on the dataset if versioning was not enabled.
save_version (Optional[str]) – Version string to be used for the save operation if the dataset is versioned. Has no effect on the dataset if versioning was not enabled.
- Return type: AbstractDataset
- Returns:
An instance of an AbstractDataset subclass.
- Raises:
DatasetError – When the function fails to create the dataset from its config.
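A brief sketch of from_config, assuming the configuration carries the same keys the initializer accepts plus a type entry (the entry name and path are placeholders):

    from kedro_datasets.partitions import IncrementalDataset

    config = {
        "type": "kedro_datasets.partitions.IncrementalDataset",
        "path": "data/01_raw/events",  # placeholder path
        "dataset": "pandas.CSVDataset",
    }
    dataset = IncrementalDataset.from_config("incremental_events", config)
    assert isinstance(dataset, IncrementalDataset)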
- release()[source]¶
Release any cached data.
- Raises:
DatasetError – when the underlying release method raises an error.
- Return type: None
- save(data)[source]¶
Saves data by delegation to the provided save method.
- Parameters:
data (dict[str, Any]) – The value to be saved by the provided save method: a mapping from partition ID to the data for that partition (a minimal sketch follows below).
- Raises:
DatasetError – when the underlying save method raises an error.
FileNotFoundError – when the save method receives a file instead of a directory, on Windows.
NotADirectoryError – when the save method receives a file instead of a directory, on Unix.
- Return type: None
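A minimal local sketch of saving one partition (folder and key names are illustrative):

    import pandas as pd
    from pathlib import Path
    from kedro_datasets.partitions import IncrementalDataset

    Path("test_data").mkdir(exist_ok=True)  # illustrative local folder
    dataset = IncrementalDataset(
        path="test_data", dataset="pandas.CSVDataset", filename_suffix=".csv"
    )
    # each key plus the filename_suffix becomes a partition file:
    # test_data/2024-01-03.csv
    dataset.save({"2024-01-03": pd.DataFrame({"x": [3]})})

As with PartitionedDataset, a value may also be a callable that returns the data, deferring materialisation until the partition is written.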
- to_config()[source]¶
Converts the dataset instance into a dictionary-based configuration for serialization. Ensures that any subclass-specific details are handled, with additional logic for versioning and caching implemented for CachedDataset.
Adds a key for the dataset’s type using its module and class name and includes the initialization arguments.
For CachedDataset it extracts the underlying dataset’s configuration, handles the versioned flag and removes unnecessary metadata. It also ensures the embedded dataset’s configuration is appropriately flattened or transformed.
If the dataset has a version key, it sets the versioned flag in the configuration.
Removes the metadata key from the configuration if present.
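A short sketch of the expected shape of the output (exact keys may vary across versions; the path here is a placeholder):

    from kedro_datasets.partitions import IncrementalDataset

    dataset = IncrementalDataset(path="test_data", dataset="pandas.CSVDataset")
    config = dataset.to_config()
    # expected to contain the dataset's type plus its initialization arguments,
    # e.g. {"type": "kedro_datasets.partitions.IncrementalDataset",
    #       "path": "test_data", "dataset": "pandas.CSVDataset", ...}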