Working with an inaccurate (incorrect) dataset


This is my problem description:

"According to the Survey on Household Income and Wealth, we need to find out the top 10% households with the most income and expenditures. However, we know that these collected data is not reliable due to many misstatements. Despite these misstatements, we have some features in the dataset which are certainly reliable. But these certain features are just a little part of information for each household wealth."

Unreliable data means that households lie to the government: they misstate their income and wealth in order to unfairly obtain more government services. These fraudulent statements in the original data will therefore lead to incorrect results and patterns.

Now, I have the following questions:

  • How should we deal with unreliable data in data science?
  • Is there any way to detect these misstatements and then report the top 10% richest households with better accuracy using machine learning algorithms?
  • How can we evaluate our errors in this study? Since we have an unlabeled dataset, should I look for labeling techniques, use unsupervised methods, or work with semi-supervised learning methods?
  • Is there any idea or application in machine learning that tries to improve the quality of collected data?

Please point me to any ideas or references that can help with this issue.

Thanks in advance.

1 Answer

Answered by Maksim Khaitovich

Q: How should we deal with unreliable data in data science?

A: Use feature engineering to fix unreliable data (apply transformations that turn unreliable features into something usable) or drop them completely; bad features can significantly decrease the quality of the model.
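A minimal sketch of both options, assuming a pandas DataFrame with hypothetical column names (`self_reported_income`, `self_reported_savings`) and a hypothetical file `households.csv`; the clipping quantiles are assumptions to tune, not prescriptions:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("households.csv")  # hypothetical file name

# Columns flagged as unreliable (hypothetical names)
unreliable_cols = ["self_reported_income", "self_reported_savings"]

# Option 1: drop the unreliable features completely
df_clean = df.drop(columns=unreliable_cols)

# Option 2: keep a transformed version instead of the raw values,
# e.g. clip extreme values and take the log to dampen exaggerated statements
income = df["self_reported_income"]
income = income.clip(lower=income.quantile(0.01), upper=income.quantile(0.99))
df_clean["income_log"] = np.log1p(income)
```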

Q: Is there any way to figure out these misstatements and then report the top 10% rich people with better accuracy using Machine Learning algorithms?

A: ML algorithms are not magic wands; they can't figure out anything unless you tell them what you are looking for. Can you describe what 'unreliable' means? If yes, you can, as I mentioned, use feature engineering or write code that fixes the data. Otherwise, no ML algorithm will be able to help you without a description of what exactly you want to achieve.
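One way to make 'unreliable' concrete, assuming the reliable features carry some signal about true income: fit a regressor on the reliable columns and flag households whose reported income falls far below what the model expects. This is only a sketch; the column names, the target name, and the 5% threshold are all assumptions:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("households.csv")  # hypothetical file name

# Features the survey considers certainly reliable (hypothetical names)
reliable_cols = ["dwelling_size_m2", "num_cars", "electricity_bill"]

# Predict reported income from the reliable features only
model = GradientBoostingRegressor(random_state=0)
model.fit(df[reliable_cols], df["reported_income"])

expected = model.predict(df[reliable_cols])
residual = df["reported_income"] - expected

# Households reporting much less than the reliable signals suggest are
# candidates for misstatement; the quantile threshold needs tuning
df["suspected_understatement"] = residual < residual.quantile(0.05)
```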

Q: Is there any idea or application in Machine Learning which tries to improve the quality of collected data?

A: I don't think so, simply because the question itself is too open-ended. What does 'the quality of the data' mean?

Generally, here are a couple of things for you to consider:

1) Spend some time googling feature engineering guides. They cover how to prepare your data for your ML algorithms, refine it, and fix it. Good data with good features can dramatically improve the results.

2) You don't need to use all of the features from the original data. Some features of the original dataset are meaningless, and you don't need to use them. Try running a gradient boosting machine or random forest classifier from scikit-learn on your dataset to perform classification (or regression, if that is your task). These algorithms also evaluate the importance of each feature of the original dataset. Some of your features will have extremely low importance, so you may want to drop them completely or try to combine the unimportant features into something more informative.
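A minimal sketch of the feature-importance idea with scikit-learn's random forest, assuming a hypothetical regression target `total_expenditure` and numeric features; in a real run you would first encode categorical columns and handle missing values:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("households.csv")          # hypothetical file name
X = df.drop(columns=["total_expenditure"])  # hypothetical target column
y = df["total_expenditure"]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)

# Rank features by importance; those near zero are candidates to drop
# or to combine into a more informative derived feature
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))
```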