kedro_datasets.spark.SparkHiveDataset
- class kedro_datasets.spark.SparkHiveDataset(*, database, table, write_mode='errorifexists', table_pk=None, save_args=None, metadata=None)
SparkHiveDataset loads and saves Spark dataframes stored on Hive. This dataset also handles some incompatible file types, such as partitioned parquet on Hive, which will not normally allow upserts to existing data without a complete replacement of the existing file/partition.

This dataset has some key assumptions:

- Schemas do not change during the pipeline run (defined PKs must be present for the duration of the pipeline).
- Tables are not being externally modified during upserts. The upsert method is NOT ATOMIC with respect to external changes to the target table while executing. The upsert methodology works by leveraging Spark DataFrame execution plan checkpointing.
Example usage for the YAML API:
hive_dataset:
  type: spark.SparkHiveDataset
  database: hive_database
  table: table_name
  write_mode: overwrite
Example usage for the Python API:
from pyspark.sql import SparkSession
from pyspark.sql.types import StructField, StringType, IntegerType, StructType

from kedro_datasets.spark import SparkHiveDataset

schema = StructType(
    [StructField("name", StringType(), True), StructField("age", IntegerType(), True)]
)
data = [("Alex", 31), ("Bob", 12), ("Clarke", 65), ("Dave", 29)]
spark_df = SparkSession.builder.getOrCreate().createDataFrame(data, schema)

dataset = SparkHiveDataset(
    database="test_database", table="test_table", write_mode="overwrite"
)
dataset.save(spark_df)
reloaded = dataset.load()
reloaded.take(4)
Methods

exists()
    Checks whether a dataset's output already exists by calling the provided _exists() method.

from_config(name, config[, load_version, ...])
    Create a dataset instance using the configuration provided.

load()
    Loads data by delegation to the provided load method.

release()
    Release any cached data.

save(data)
    Saves data by delegation to the provided save method.

to_config()
    Converts the dataset instance into a dictionary-based configuration for serialization.
- __init__(*, database, table, write_mode='errorifexists', table_pk=None, save_args=None, metadata=None)

Creates a new instance of SparkHiveDataset.

- Parameters:

  - database (str) – The name of the Hive database.
  - table (str) – The name of the table within the database.
  - write_mode (str) – insert, upsert or overwrite are supported.
  - table_pk (Optional[list[str]]) – If performing an upsert, this identifies the primary key columns used to resolve preexisting data. Required for write_mode="upsert".
  - save_args (Optional[dict[str, Any]]) – Optional mapping of any options, passed to DataFrameWriter.saveAsTable as kwargs. A key example is partitionBy, which allows data partitioning on a list of column names (see the sketch after this list). Other Hive options can be found here: https://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html#specifying-storage-format-for-hive-tables
  - metadata (Optional[dict[str, Any]]) – Any arbitrary metadata. This is ignored by Kedro, but may be consumed by users or external plugins.
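For instance, a minimal sketch of passing partitionBy through save_args; the database, table, and partition column names here are hypothetical:

from kedro_datasets.spark import SparkHiveDataset

# partitionBy is forwarded to DataFrameWriter.saveAsTable as a kwarg.
dataset = SparkHiveDataset(
    database="test_database",
    table="events",
    write_mode="overwrite",
    save_args={"partitionBy": ["year", "month"]},
)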
Note

For users leveraging the upsert functionality, a checkpoint directory must be set, e.g. using spark.sparkContext.setCheckpointDir("/path/to/dir") or directly in the Spark conf folder.
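A minimal sketch of an upsert run, assuming a hypothetical checkpoint path and the table and primary key names from the example above:

from pyspark.sql import SparkSession

from kedro_datasets.spark import SparkHiveDataset

spark = SparkSession.builder.enableHiveSupport().getOrCreate()
# Upserts rely on DataFrame checkpointing, so a checkpoint directory is required.
spark.sparkContext.setCheckpointDir("/tmp/spark_checkpoints")

dataset = SparkHiveDataset(
    database="test_database",
    table="test_table",
    write_mode="upsert",
    table_pk=["name"],  # primary key columns used to match preexisting rows
)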
- Raises:
DatasetError – Invalid configuration supplied
- exists()

Checks whether a dataset's output already exists by calling the provided _exists() method.

- Return type: bool
- Returns: Flag indicating whether the output already exists.
- Raises: DatasetError – when underlying exists method raises error.
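For example, reusing the hypothetical dataset and dataframe from the example above:

# Skip an expensive write if the table already exists.
if not dataset.exists():
    dataset.save(spark_df)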
- classmethod from_config(name, config, load_version=None, save_version=None)

Create a dataset instance using the configuration provided.

- Parameters:

  - name (str) – Data set name.
  - config (dict[str, Any]) – Data set config dictionary.
  - load_version (Optional[str]) – Version string to be used for load operation if the dataset is versioned. Has no effect on the dataset if versioning was not enabled.
  - save_version (Optional[str]) – Version string to be used for save operation if the dataset is versioned. Has no effect on the dataset if versioning was not enabled.

- Return type: AbstractDataset
- Returns: An instance of an AbstractDataset subclass.
- Raises: DatasetError – When the function fails to create the dataset from its config.
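A minimal sketch of creating the dataset from a config dictionary, mirroring the YAML entry above (the dataset name and config values are hypothetical):

from kedro_datasets.spark import SparkHiveDataset

config = {
    "type": "kedro_datasets.spark.SparkHiveDataset",
    "database": "hive_database",
    "table": "table_name",
    "write_mode": "overwrite",
}
dataset = SparkHiveDataset.from_config("hive_dataset", config)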
- load()

Loads data by delegation to the provided load method.

- Return type: DataFrame
- Returns: Data returned by the provided load method.
- Raises: DatasetError – When underlying load method raises error.
- release()

Release any cached data.

- Raises: DatasetError – when underlying release method raises error.
- Return type: None
- save(data)

Saves data by delegation to the provided save method.

- Parameters:
  - data (DataFrame) – the value to be saved by provided save method.
- Raises:
  - DatasetError – when underlying save method raises error.
  - FileNotFoundError – when save method got file instead of dir, on Windows.
  - NotADirectoryError – when save method got file instead of dir, on Unix.
- Return type: None
- to_config()
Converts the dataset instance into a dictionary-based configuration for serialization. Ensures that any subclass-specific details are handled, with additional logic for versioning and caching implemented for CachedDataset.
Adds a key for the dataset’s type using its module and class name and includes the initialization arguments.
For CachedDataset it extracts the underlying dataset’s configuration, handles the versioned flag and removes unnecessary metadata. It also ensures the embedded dataset’s configuration is appropriately flattened or transformed.
If the dataset has a version key, it sets the versioned flag in the configuration.
Removes the metadata key from the configuration if present.
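A minimal sketch of serializing a dataset back to its configuration; the exact keys in the output are an assumption, not a documented contract:

from kedro_datasets.spark import SparkHiveDataset

dataset = SparkHiveDataset(database="test_database", table="test_table")
config = dataset.to_config()
# config might look like:
# {"type": "kedro_datasets.spark.SparkHiveDataset",
#  "database": "test_database", "table": "test_table", ...}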