We are looking to see if there is a tool within the Foundry platform that will allow us to have a list of field descriptions so that, when the dataset builds, it can populate those descriptions automatically. Does this exist, and if so, what is the tool called?
Is there a tool available within Foundry that can automatically populate column descriptions? If so, what is it called?
There is 1 answer:
If you upgrade your Code Repository to version 1.184.0+, this feature is released and available from that point onwards.
To push output column descriptions, pass a new optional argument, `column_descriptions`, to `TransformOutput.write_dataframe()`. This argument should be a `dict` whose keys are column names and whose values are the column descriptions (up to 200 characters each, for stability reasons). The code automatically computes the intersection of the column names available on your `pyspark.sql.DataFrame` and the keys in the `dict` you provide, so it won't try to put descriptions on columns that don't exist. The code you use to run this process looks like this:
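The original snippet did not survive in this copy, so the following is a minimal sketch assuming the standard `transforms.api` decorator pattern; the dataset paths, column names, and descriptions are placeholders for illustration.

```python
from transforms.api import transform, Input, Output


@transform(
    my_output=Output("/path/to/output_dataset"),  # placeholder path
    my_input=Input("/path/to/input_dataset"),     # placeholder path
)
def my_compute_function(my_output, my_input):
    # Keys are column names, values are the descriptions to attach
    # (each up to 200 characters). Keys that don't match a column in
    # the DataFrame are ignored, per the behaviour described above.
    descriptions = {
        "id": "Unique identifier for each record",
        "created_at": "Timestamp at which the record was ingested",
    }

    my_output.write_dataframe(
        my_input.dataframe(),
        column_descriptions=descriptions,
    )
```

With this in place, the column descriptions are written alongside the output dataset on each build, so they stay in sync with the schema rather than being maintained by hand in the dataset's metadata page.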