Can you use a proxy to connect to HDFS using the Kite SDK?

I am attempting to create a proof-of-concept Hadoop instance, and have chosen to use the Kite SDK to interact with it (as well as to structure my data). When I set up a Hadoop instance on my local box, everything works correctly and my code connects without problems. However, when I set up Hadoop on an AWS server, I am no longer able to connect to it. I believe this is because of my network's security settings, and I suspect I will need to use a proxy to connect. Is this possible? If so, can you provide an example of how to do it? The Kite CLI has a proxy-user parameter that can be passed in, but when I look at the SDK's code, setting that value doesn't appear to do what I need.
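For context, a connection attempt with the Kite SDK boils down to a dataset URI that embeds the HDFS NameNode host and RPC port. A minimal sketch of how that URI is formed — the host name, port, path, and dataset name below are all placeholders, and the `datasetUri` helper is just for illustration:

```java
public class KiteUriExample {

    // Build a Kite "dataset:hdfs://host:port/path/name" URI that pins the
    // NameNode host and RPC port explicitly, rather than relying on
    // cluster-side defaults picked up from the classpath.
    static String datasetUri(String namenodeHost, int rpcPort,
                             String repoPath, String datasetName) {
        return String.format("dataset:hdfs://%s:%d%s/%s",
                namenodeHost, rpcPort, repoPath, datasetName);
    }

    public static void main(String[] args) {
        String uri = datasetUri("ec2-xx-xx-xx-xx.compute-1.amazonaws.com",
                8020, "/data", "events");
        System.out.println(uri);
        // With the Kite SDK on the classpath, the dataset would then be
        // opened with: Datasets.load(uri, GenericRecord.class);
        // The connection failure described above happens at that point,
        // when the client tries to reach the NameNode RPC port.
    }
}
```

Note that the URI only controls *which* host and port the client targets; it offers no hook for routing the connection through an intermediate proxy, which is what the question is asking about.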
137 views · Asked by Josh Edwards
1 answer
I never found a proxy setting for the Kite SDK, but I was able to work around the issue by finding a port that was not blocked by the firewall and reconfiguring Hadoop so that the NameNode listens on that port.
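For anyone trying the same workaround: this amounts to changing the NameNode's RPC endpoint in the cluster configuration. A sketch, assuming port 9000 happens to be reachable through the firewall (the host name and port here are placeholders, not values from the answer):

```xml
<!-- core-site.xml on the cluster: move the default filesystem
     (the NameNode RPC endpoint) to a port the firewall allows.
     Clients must then use this same host:port in their dataset URIs. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://ec2-host.example.com:9000</value>
</property>
```

After changing this, the NameNode (and anything that references the old address) needs to be restarted, and client-side URIs such as `dataset:hdfs://ec2-host.example.com:9000/...` must be updated to match.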