How to handle CSV files in the Bronze layer without the extra layer

If my raw data is in CSV format and I want to store it in the Bronze layer as Delta tables, I end up with four layers: Raw + Bronze + Silver + Gold. Which approach should I consider?
A bit of an open question, but with respect to retaining the raw data in CSV: I would normally recommend keeping it, since storing these files is usually cheap relative to the value of being able to re-process them if problems arise, or for data audit/traceability purposes.
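In other words, the "Raw" zone is really just the landing folder the CSVs arrive in, and Bronze is the first Delta copy of them. A minimal PySpark sketch of that ingestion step, assuming hypothetical placeholder paths (raw_path, bronze_path) that you would replace with your own storage locations:

```python
from pyspark.sql import SparkSession

# On Databricks a `spark` session is already provided; this makes the
# sketch runnable elsewhere too.
spark = SparkSession.builder.getOrCreate()

# Placeholder paths -- substitute your own raw and Bronze locations.
raw_path = "abfss://raw@mystorageaccount.dfs.core.windows.net/sales/*.csv"
bronze_path = "abfss://bronze@mystorageaccount.dfs.core.windows.net/sales"

# Read the raw CSV files as-is. Schema inference is convenient for a
# sketch; in production you would normally supply an explicit schema.
df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(raw_path)
)

# Write to the Bronze layer as Delta, leaving the original CSV files
# untouched in the raw zone for re-processing and audit purposes.
df.write.format("delta").mode("append").save(bronze_path)
```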
I would normally take the approach of compressing the raw files after processing, perhaps tar-balling them as well, and then moving them to colder/cheaper storage.
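A minimal sketch of that compress-and-archive step using Python's standard tarfile module, assuming hypothetical mounted paths (raw_dir, archive_path) that stand in for your own layout:

```python
import tarfile
from pathlib import Path

# Placeholder paths -- adjust for your environment.
raw_dir = Path("/mnt/raw/sales/2024-01")
archive_path = Path("/mnt/raw-archive/sales-2024-01.tar.gz")

# Bundle all processed CSVs into a single gzip-compressed tarball.
with tarfile.open(archive_path, "w:gz") as tar:
    for csv_file in sorted(raw_dir.glob("*.csv")):
        tar.add(csv_file, arcname=csv_file.name)

# Optionally remove the originals once the archive has been verified.
# for csv_file in raw_dir.glob("*.csv"):
#     csv_file.unlink()
```

Moving the resulting archive to a cool or archive storage tier is then typically handled by your cloud provider's lifecycle rules (for example, Azure Blob Storage lifecycle management) rather than by Spark code.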