Use significant attributes only, or use full set of attributes to build J48 model after checking information gain?

Weka lets me check the information gain of each attribute over the full attribute set. Should I build my J48 model using only the significant attributes, or should I use the full set of attributes?
157 views · Asked by Guanhua Lee · 1 answer
In data mining, there is a three-way trade-off between the number of features you use, the accuracy you achieve, and the time it takes to generate a model. In theory, you would want to include every possible feature to boost accuracy; in practice, that guarantees lengthy model-generation times. Moreover, classifiers like J48 that output a human-readable decision tree become much less useful once the tree grows to thousands of nodes.
Depending on how many features you start out with, you may very well want to remove features that don't provide a large enough information gain. If you have a small number of features to begin with (e.g. fewer than 20), it might make sense just to keep all of them.
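If you want to inspect those per-attribute scores before deciding what to drop, here is a minimal sketch against WEKA's Java API. The dataset filename is a placeholder, and it assumes the class is the last attribute; adjust both for your own data:

    import weka.attributeSelection.InfoGainAttributeEval;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class InspectInfoGain {
        public static void main(String[] args) throws Exception {
            // "mydata.arff" is a placeholder; point this at your own dataset.
            Instances data = DataSource.read("mydata.arff");
            data.setClassIndex(data.numAttributes() - 1); // assumes class is last

            InfoGainAttributeEval eval = new InfoGainAttributeEval();
            eval.buildEvaluator(data);

            // Print the information gain of each non-class attribute.
            for (int i = 0; i < data.numAttributes(); i++) {
                if (i == data.classIndex()) continue;
                System.out.printf("%-30s %.4f%n",
                        data.attribute(i).name(), eval.evaluateAttribute(i));
            }
        }
    }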
If you do wish to limit the number of features you use, it is best to choose those with the highest information gain. It is also worth looking into techniques such as Principal Component Analysis (available in WEKA as PrincipalComponents) to reduce the dimensionality of your data.
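Putting it together, a hedged sketch that ranks attributes by information gain, keeps the top k, and trains J48 on the reduced set. The cutoff of 10 is an arbitrary assumption you should tune; swapping InfoGainAttributeEval for weka.attributeSelection.PrincipalComponents gives the PCA route mentioned above:

    import weka.attributeSelection.AttributeSelection;
    import weka.attributeSelection.InfoGainAttributeEval;
    import weka.attributeSelection.Ranker;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class TopKInfoGainJ48 {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("mydata.arff"); // placeholder file
            data.setClassIndex(data.numAttributes() - 1);

            // Rank attributes by information gain and keep the top 10
            // (10 is an arbitrary cutoff -- tune it for your data).
            Ranker ranker = new Ranker();
            ranker.setNumToSelect(10);

            AttributeSelection selector = new AttributeSelection();
            selector.setEvaluator(new InfoGainAttributeEval());
            selector.setSearch(ranker);
            selector.SelectAttributes(data);

            // Drop the low-ranked attributes and build the tree.
            Instances reduced = selector.reduceDimensionality(data);
            J48 tree = new J48();
            tree.buildClassifier(reduced);
            System.out.println(tree);
        }
    }

The same pipeline is available interactively in the WEKA Explorer's "Select attributes" tab, so you can compare the ranked list there before committing to a cutoff in code.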