kedro.extras.datasets.svmlight.SVMLightDataSet¶
- class kedro.extras.datasets.svmlight.SVMLightDataSet(filepath, load_args=None, save_args=None, version=None, credentials=None, fs_args=None)[source]¶
SVMLightDataSet loads/saves data from/to a svmlight/libsvm file using an underlying filesystem (e.g. local, S3, GCS). It uses the scikit-learn functions dump_svmlight_file to save and load_svmlight_file to load a file. Data is loaded as a tuple of features and labels: the labels are a NumPy array and the features are a Compressed Sparse Row (CSR) matrix.
The svmlight/libsvm format is text-based, with one sample per line. It does not store zero-valued features, which makes it suitable for sparse datasets.
It is used as the default format for both the svmlight and libsvm command line programs.
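To make the on-disk representation concrete, here is a minimal sketch using the scikit-learn helpers this dataset delegates to (the file name and values are illustrative assumptions):

import numpy as np
from sklearn.datasets import dump_svmlight_file, load_svmlight_file

features = np.array([[0, 1], [2, 3.14159]])
labels = np.array([7, 3])

# Writes one sample per line in the form "<label> <index>:<value> ...";
# the zero-valued first feature of the first sample is simply omitted.
dump_svmlight_file(features, labels, "example.svm", zero_based=False)

# Reloading returns the features as a CSR matrix and the labels as a NumPy array.
reloaded_features, reloaded_labels = load_svmlight_file("example.svm", zero_based=False)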
Example usage for the YAML API:
svm_dataset:
  type: svmlight.SVMLightDataSet
  filepath: data/01_raw/location.svm
  load_args:
    zero_based: False
  save_args:
    zero_based: False

cars:
  type: svmlight.SVMLightDataSet
  filepath: gcs://your_bucket/cars.svm
  fs_args:
    project: my-project
  credentials: my_gcp_credentials
  load_args:
    zero_based: False
  save_args:
    zero_based: False
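The same catalog entry can also be exercised programmatically through a DataCatalog; a minimal sketch, assuming the file at data/01_raw/location.svm already exists:

from kedro.io import DataCatalog

catalog = DataCatalog.from_config(
    {
        "svm_dataset": {
            "type": "svmlight.SVMLightDataSet",
            "filepath": "data/01_raw/location.svm",
            "load_args": {"zero_based": False},
            "save_args": {"zero_based": False},
        }
    }
)
features, labels = catalog.load("svm_dataset")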
Example usage for the Python API:
from kedro.extras.datasets.svmlight import SVMLightDataSet
import numpy as np

# Features and labels.
data = (np.array([[0, 1], [2, 3.14159]]), np.array([7, 3]))

data_set = SVMLightDataSet(filepath="test.svm")
data_set.save(data)
reloaded_features, reloaded_labels = data_set.load()
assert (data[0] == reloaded_features).all()
assert (data[1] == reloaded_labels).all()
Attributes
Methods
exists() – Checks whether a data set's output already exists by calling the provided _exists() method.
from_config(name, config[, load_version, ...]) – Create a data set instance using the configuration provided.
load() – Loads data by delegation to the provided load method.
release() – Release any cached data.
resolve_load_version() – Compute the version the dataset should be loaded with.
resolve_save_version() – Compute the version the dataset should be saved with.
save(data) – Saves data by delegation to the provided save method.
- DEFAULT_LOAD_ARGS: Dict[str, Any] = {}¶
- DEFAULT_SAVE_ARGS: Dict[str, Any] = {}¶
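Both default argument dictionaries are empty; anything supplied as load_args or save_args is passed on to the underlying scikit-learn calls. A minimal sketch, assuming standard load_svmlight_file/dump_svmlight_file parameters such as n_features and zero_based:

from kedro.extras.datasets.svmlight import SVMLightDataSet

data_set = SVMLightDataSet(
    filepath="data/01_raw/location.svm",
    load_args={"n_features": 2, "zero_based": False},  # forwarded to load_svmlight_file
    save_args={"zero_based": False},                   # forwarded to dump_svmlight_file
)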
- exists()¶
Checks whether a data set’s output already exists by calling the provided _exists() method.
- Return type
bool
- Returns
Flag indicating whether the output already exists.
- Raises
DatasetError – when underlying exists method raises error.
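For example, exists() can guard a load in ad-hoc scripts; a minimal sketch with an illustrative path:

from kedro.extras.datasets.svmlight import SVMLightDataSet

data_set = SVMLightDataSet(filepath="data/01_raw/location.svm")
if data_set.exists():
    features, labels = data_set.load()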
- classmethod from_config(name, config, load_version=None, save_version=None)¶
Create a data set instance using the configuration provided.
- Parameters
name – Data set name.
config – Data set config dictionary.
load_version – Version string to be used for load operation if the data set is versioned. Has no effect on the data set if versioning was not enabled.
save_version – Version string to be used for save operation if the data set is versioned. Has no effect on the data set if versioning was not enabled.
- Returns
An instance of an AbstractDataset subclass.
- Raises
DatasetError – When the function fails to create the data set from its config.
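A minimal sketch of from_config, using a config dictionary that mirrors the YAML API example above (the dataset name "svm_dataset" is arbitrary):

from kedro.extras.datasets.svmlight import SVMLightDataSet

config = {
    "type": "svmlight.SVMLightDataSet",
    "filepath": "data/01_raw/location.svm",
    "load_args": {"zero_based": False},
    "save_args": {"zero_based": False},
}
data_set = SVMLightDataSet.from_config("svm_dataset", config)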
- load()¶
Loads data by delegation to the provided load method.
- Return type
TypeVar(_DO)
- Returns
Data returned by the provided load method.
- Raises
DatasetError – When underlying load method raises error.
- release()¶
Release any cached data.
- Raises
DatasetError – when underlying release method raises error.
- Return type
None
- resolve_load_version()¶
Compute the version the dataset should be loaded with.
- Return type
str | None
- resolve_save_version()¶
Compute the version the dataset should be saved with.
- Return type
str | None
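A minimal sketch of how these resolve for a versioned dataset, assuming the Version helper from kedro.io (the path is illustrative):

from kedro.io import Version
from kedro.extras.datasets.svmlight import SVMLightDataSet

data_set = SVMLightDataSet(
    filepath="data/01_raw/location.svm",
    version=Version(load=None, save=None),
)
# With no explicit save version, a generated timestamp-style string is returned.
print(data_set.resolve_save_version())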
- save(data)¶
Saves data by delegation to the provided save method.
- Parameters
data (TypeVar(_DI)) – the value to be saved by provided save method.
- Raises
DatasetError – when underlying save method raises error.
FileNotFoundError – when save method got file instead of dir, on Windows.
NotADirectoryError – when save method got file instead of dir, on Unix.
- Return type
None