One-time vs. iterative model training in Vowpal Wabbit with the --lrq option

I am using Vowpal Wabbit logistic regression with the low-rank quadratic option (--lrq) for CTR prediction. I have trained the model in two scenarios:

  1. Building the model in a single pass over all the data, with the command:

vw -d traning_data_all.vw --lrq ic2 --link logistic --loss_function logistic --l2 0.0000000360911 --l1 0.00000000103629 --learning_rate 0.3 --holdout_off -b 28 --noconstant -f final_model

  2. Breaking the training data into 20 chunks (day-wise) and building the model iteratively (with the -i and --save_resume options).

First step:

vw -d traning_data_day_1.vw --lrq ic2 --link logistic --loss_function logistic --l2 0.0000000360911 --l1 0.00000000103629 --learning_rate 0.3 --holdout_off -b 28 --noconstant -f model_1

And then

vw -d traning_data_day_2.vw --lrq ic2 --link logistic --loss_function logistic --l2 0.0000000360911 --l1 0.00000000103629 --learning_rate 0.3 --holdout_off -b 28 --noconstant --save_resume -i model_1 -f model_2

And so on, up to 20 iterations (a consolidated loop sketch is given below).
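For completeness, the whole chain can be written as a loop. This is only a sketch, assuming bash and the same day-wise file naming as above; note that it also passes --save_resume when saving model_1 in the first step, since, as far as I know, a model written without --save_resume does not carry the extra training state needed for resuming.

# Shared hyperparameters, identical to the single-pass command above.
OPTS="--lrq ic2 --link logistic --loss_function logistic --l2 0.0000000360911 --l1 0.00000000103629 --learning_rate 0.3 --holdout_off -b 28 --noconstant"

# Day 1: train from scratch, but save with --save_resume so the training state is kept.
vw -d traning_data_day_1.vw $OPTS --save_resume -f model_1

# Remaining days: resume from the previous day's model and save the updated one.
for day in $(seq 2 20); do
  vw -d traning_data_day_${day}.vw $OPTS --save_resume -i model_$((day-1)) -f model_${day}
done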

The 1st scenario works fine, but in the 2nd scenario the predictions tend towards exactly 1 or 0 (only) after 7-8 iterations. I need the 2nd scenario to work because I want to update the model frequently. The l1, l2, and learning_rate values were optimised with the vw-hypersearch script.

Please help me solve this issue. Am I missing something? I have also tried the --lrqdropout option.
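For reference, this is roughly how I check at which step the predictions start saturating: each intermediate model is scored on the same held-out data in test-only mode. This is only a sketch; validation_day.vw is a placeholder file name.

# Score every intermediate model on a fixed held-out file.
# -t disables learning, -i loads the saved model, -p writes one prediction per example.
for day in $(seq 1 20); do
  vw -d validation_day.vw -t -i model_${day} -p preds_after_day_${day}.txt --quiet
done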

