Below is the list of steps that should be performed as part of my scenario.
- Verify the file generated at the S3 location against the Redshift DB
- Verify the duplicates between the original script and the latest script
- Verify the newly added column in the latest script
- Verify that the old column data is intact in both the original script and the latest script
I have the feature below to cover these steps.
Feature: Add # of times PDP browsed to search and sort output files - User Story CDH-3311
@regression
Scenario Outline: Validation of file being generated at S3 location after job run
Given user has the json file with <country> and <brand>
Then user executes the Generic-extract-Batch job
Then user verifies the file is generated successfully at the S3 location
Then user verifies the data in Redshift db with the generated file
Then user verifies the duplicate data in the latest sql script
And user verifies the duplicate data in the original sql script
And user verifies the PDP_VIEWS column in the latest sql script
And user verifies <old column> data of the original script
And user compares it with the latest sql script
Examples:
| country | brand | old column |
| US | test1 | test6 |
| US | test2 | STORE |
| US | test3 | test7 |
| US | test4 | SALESUNITSCORE |
| US | test5 | TOTALSCORE |
Kindly verify that the outline adheres to best practices and is a correct representation of what needs to be done for the above-mentioned tests.
I'm not sure about the business flow, but make sure there are no spaces in the Examples table column names, so the last column should be old_column.
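For example, keeping the rest of the scenario unchanged, the affected step and the Examples table could look like the sketch below. Note that the placeholder in the step has to match the Examples header exactly; the country/brand/old-column values are just the sample values from your own table.

And user verifies <old_column> data of the original script
And user compares it with the latest sql script
Examples:
# No spaces in header names; the <old_column> placeholder above matches the last header exactly
| country | brand | old_column     |
| US      | test1 | test6          |
| US      | test2 | STORE          |
| US      | test3 | test7          |
| US      | test4 | SALESUNITSCORE |
| US      | test5 | TOTALSCORE     |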