I have an IoT device that updates an Azure Storage Table anytime one of its values changes. For example, if the fish tank temperature changes from 68 to 69, that gets logged. If the filter pump runs, that gets logged. When the little treasure chest opens and bubbles come out, that gets logged. This makes my tabular data look like this:
TimeStamp  Name                  Value
(time)     TreasureChestBubbles  2.8
(time)     TreasureChestBubbles  5
(time)     FilterPumpRunning     1
(time)     TreasureChestBubbles  3.5
(time)     FilterPumpRunning     0
(time)     WaterTemp             66
(time)     TreasureChestBubbles  -1   (indicating an error)
I want to create a model that predicts when my little treasure chest is going to fail.
I dumped all this data into an AutoML job and clicked go...and it failed miserably. Then I started reading the documentation. I found lots of documentation about setting up experiments, but very little about the exact structure of the data. It looks like my tabular data needs to have EVERY parameter in each row? So instead of a Name column, I'd need a TreasureChestBubblesValue column, a WaterTempValue column, a FilterPumpRunningValue column, etc.
TimeStamp  TreasureChestBubblesValue  WaterTempValue  ...  FilterPumpRunningValue
(time)     2.8                        67                   0
(time)     5                          67                   0
(time)     5                          66                   0
(time)     8.4                        66                   1
(time)     2.8                        67                   0
Does that sound correct? Or does the structure of the data not matter for AutoML so long as it's tabular?
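For what it's worth, here is a sketch of the long-to-wide reshape I'm describing, using pandas (timestamps and values are made up for illustration; the column names come from my tables above). The forward-fill step is what produces rows like the second example table, where each row carries the last known value of every sensor:

```python
import pandas as pd

# Hypothetical sample of the long-format event log
long_df = pd.DataFrame({
    "TimeStamp": pd.to_datetime([
        "2024-01-01 00:00", "2024-01-01 00:05", "2024-01-01 00:10",
        "2024-01-01 00:15", "2024-01-01 00:20",
    ]),
    "Name": ["WaterTemp", "TreasureChestBubbles", "FilterPumpRunning",
             "TreasureChestBubbles", "WaterTemp"],
    "Value": [67, 2.8, 1, 5, 66],
})

# Pivot long -> wide: one column per sensor name, indexed by timestamp
wide_df = long_df.pivot_table(index="TimeStamp", columns="Name", values="Value")

# Each event only populates one column per row, so forward-fill makes every
# row reflect the last known state of every sensor (earliest rows may still
# be NaN until a sensor reports its first value)
wide_df = wide_df.ffill()
```

After this, every row is a snapshot of the whole tank at that timestamp, which is the shape the wide table above has.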
Per this link: https://learn.microsoft.com/en-us/azure/machine-learning/concept-automl-forecasting-methods#how-automl-uses-your-data