Databricks SQL Warehouse - Upload File to Table
Overview
Quickly upload a file from Platform to a SQL table in Databricks.
Recommended Setup
This Blueprint should be used immediately after downloading data from another source.
Although they are not required in order to connect, it is recommended that you provide the Catalog and the Schema that you will query. If you omit them, the connection falls back to the defaults, and the uploaded table will reside there.
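For context on how these settings scope a connection, here is a minimal sketch using the open-source databricks-sql-connector, which accepts optional catalog and schema arguments. All hostnames, paths, tokens, and object names below are placeholder assumptions, not values from this Blueprint.

```python
# A minimal sketch of how Catalog and Schema scope a connection, assuming
# the databricks-sql-connector (pip install databricks-sql-connector).
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # Databricks Server Host
    http_path="/sql/1.0/warehouses/abc123",                        # Warehouse HTTP Path
    access_token="dapiXXXXXXXX",                                   # Access Token
    catalog="my_catalog",  # omit to fall back to the default catalog (Hive Metastore)
    schema="my_schema",    # omit to fall back to the default schema
) as connection:
    with connection.cursor() as cursor:
        # Unqualified table names now resolve to my_catalog.my_schema.
        cursor.execute("SELECT current_catalog(), current_schema()")
        print(cursor.fetchone())
```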
The recommended approach is to provide the volume where the uploaded file will be staged before being copied into the target table. Platform removes the staged file after it has been successfully copied into the target. It is also recommended to use one volume per schema, though this is not strictly enforced. If the provided volume does not exist, Platform will create it.
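As an illustration of this stage-then-copy pattern, the hedged sketch below uses the databricks-sql-connector's ingestion commands (PUT, COPY INTO, REMOVE). This is not the Blueprint's actual implementation; every hostname, path, and object name is a placeholder assumption.

```python
# A sketch of staging a file in a volume, copying it into a table, then
# removing the staged copy -- mirroring the workflow described above.
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",
    http_path="/sql/1.0/warehouses/abc123",
    access_token="dapiXXXXXXXX",
    staging_allowed_local_path="/tmp",  # required to run PUT from a local path
) as connection:
    with connection.cursor() as cursor:
        stage = "/Volumes/my_catalog/my_schema/my_volume/orders.csv"
        # 1. Stage the local file in the volume.
        cursor.execute(f"PUT '/tmp/orders.csv' INTO '{stage}' OVERWRITE")
        # 2. Copy the staged file into the target table (assumed to exist).
        cursor.execute(
            f"""
            COPY INTO my_catalog.my_schema.orders
            FROM '{stage}'
            FILEFORMAT = CSV
            FORMAT_OPTIONS ('header' = 'true', 'inferSchema' = 'true')
            """
        )
        # 3. Remove the staged file after a successful copy.
        cursor.execute(f"REMOVE '{stage}'")
```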
Additionally, the match type you select greatly affects how this Blueprint works; see the sketch below and the Shipyard File Match Type variable for details.
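Here is a minimal sketch of the distinction between the two match types, shown with glob-style patterns via Python's fnmatch for illustration; the Blueprint's own matching logic (described as regex in the variable table below) may differ.

```python
# Exact Match finds one file whose name equals "Shipyard File Name";
# Glob Match finds every file whose name fits the pattern.
import fnmatch

files = ["orders.csv", "orders_2024.csv", "customers.csv"]

exact = [f for f in files if f == "orders.csv"]     # -> ['orders.csv']
globbed = fnmatch.filter(files, "orders*.csv")      # -> ['orders.csv', 'orders_2024.csv']
print(exact, globbed)
```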
Note: This Blueprint cannot upload a file from your local machine.
Variables
Name | Reference | Type | Required | Default | Options | Description |
---|---|---|---|---|---|---|
Access Token | DATABRICKS_SQL_ACCESS_TOKEN | Password | ✅ | - | - | The access token generated in Databricks for programmatic access |
Databricks Server Host | DATABRICKS_SQL_SERVER_HOST | Alphanumeric | ✅ | - | - | The URL address of the SQL warehouse |
Warehouse HTTP Path | DATABRICKS_SQL_HTTP_PATH | Alphanumeric | ✅ | - | - | The extended path for the SQL warehouse |
Catalog | DATABRICKS_SQL_CATALOG | Alphanumeric | ❌ | - | - | The optional catalog to connect to. If none is provided, this will default to the Hive Metastore |
Schema | DATABRICKS_SQL_SCHEMA | Alphanumeric | ❌ | - | - | The optional schema to connect to. If none is provided, the Blueprint will connect to the default schema |
Volume | DATABRICKS_SQL_VOLUME | Alphanumeric | ❌ | - | - | The name of the volume in which to stage the file |
Table Name | DATABRICKS_SQL_TABLE | Alphanumeric | ✅ | - | - | The table in Databricks to write to |
Data Types | DATABRICKS_SQL_DATATYPES | Alphanumeric | ❌ | - | - | The optional Spark datatypes to use in Databricks, provided in JSON format. If none are provided, the datatypes will be inferred |
Insert Method | DATABRICKS_SQL_INSERT_METHOD | Select | ✅ | append | Append: append Create or Replace: replace | Determines whether to append to an existing table or overwrite an existing table |
File Type | DATABRICKS_SQL_FILE_TYPE | Select | ✅ | csv | CSV: csv Parquet: parquet | The file type to load |
Shipyard File Match Type | DATABRICKS_SQL_MATCH_TYPE | Select | ✅ | exact_match | Exact Match: exact_match Glob Match: glob_match | Determines whether the text in "Shipyard File Name" will look for one file with an exact match, or multiple files using regex |
Shipyard Folder Name | DATABRICKS_SQL_FOLDER_NAME | Alphanumeric | ❌ | - | - | The optional name of the folder where the file in Shipyard is located |
Shipyard File Name | DATABRICKS_SQL_FILE_NAME | Alphanumeric | ✅ | - | - | The name of the file in Platform to load to Databricks |
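For the Data Types input, a value might look like the following; note that the exact JSON shape the Blueprint expects is an assumption here (a mapping of column name to Spark datatype), not something documented above.

```python
# A hypothetical Data Types value: column names mapped to Spark datatype names.
import json

datatypes = {
    "order_id": "IntegerType",
    "order_date": "DateType",
    "amount": "DoubleType",
}
# DATABRICKS_SQL_DATATYPES would be set to this JSON string:
print(json.dumps(datatypes))
# {"order_id": "IntegerType", "order_date": "DateType", "amount": "DoubleType"}
```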
YAML
Below is the YAML template for this Blueprint; it can be used in the Fleet YAML Editor.
```yaml
source:
  blueprint: Databricks SQL Warehouse - Upload File to Table
  inputs:
    DATABRICKS_SQL_ACCESS_TOKEN: null ## REQUIRED
    DATABRICKS_SQL_SERVER_HOST: null ## REQUIRED
    DATABRICKS_SQL_HTTP_PATH: null ## REQUIRED
    DATABRICKS_SQL_CATALOG: null
    DATABRICKS_SQL_SCHEMA: null
    DATABRICKS_SQL_VOLUME: null
    DATABRICKS_SQL_TABLE: null ## REQUIRED
    DATABRICKS_SQL_DATATYPES: null
    DATABRICKS_SQL_INSERT_METHOD: append ## REQUIRED
    DATABRICKS_SQL_FILE_TYPE: csv ## REQUIRED
    DATABRICKS_SQL_MATCH_TYPE: exact_match ## REQUIRED
    DATABRICKS_SQL_FOLDER_NAME: null
    DATABRICKS_SQL_FILE_NAME: null ## REQUIRED
  type: BLUEPRINT
  guardrails:
    retry_count: 1
    retry_wait: 0h0m0s
    runtime_cutoff: 1h0m0s
    exclude_exit_code_ranges:
      - '200'
      - '202'
      - '203'
      - '204'
      - '205'
      - '206'
      - '207'
      - '208'
      - '209'
      - '210'
      - '211'
      - '249'
```