e2e groupby conditional scenarios
itsmekumari committed Aug 25, 2023
1 parent 339807e commit eeb3c8b
Showing 7 changed files with 186 additions and 0 deletions.
153 changes: 153 additions & 0 deletions core-plugins/src/e2e-test/features/groupby/GroupByWithFile.feature
@@ -153,3 +153,156 @@ Feature: GroupBy - Verify File source to File sink data transfer using GroupBy a
Then Close the pipeline logs
Then Validate OUT record count of groupby is equal to IN record count of sink
Then Validate output file generated by file sink plugin "fileSinkTargetBucket" is equal to expected output file "groupByTest3OutputFile"

@GROUP_BY_TEST @FILE_SINK_TEST
Scenario: To verify complete flow of data extraction and transfer from File source to File sink with GroupBy plugin using MaxIf, AvgIf, SumIf and CountIf aggregates
Given Open Datafusion Project to configure pipeline
When Select plugin: "File" from the plugins list as: "Source"
When Expand Plugin group in the LHS plugins list: "Analytics"
When Select plugin: "Group By" from the plugins list as: "Analytics"
Then Connect plugins: "File" and "Group By" to establish connection
When Expand Plugin group in the LHS plugins list: "Sink"
When Select plugin: "File" from the plugins list as: "Sink"
Then Connect plugins: "Group By" and "File2" to establish connection
Then Navigate to the properties page of plugin: "File"
Then Enter input plugin property: "referenceName" with value: "FileReferenceName"
Then Enter input plugin property: "path" with value: "groupByTest"
Then Select dropdown plugin property: "format" with option value: "csv"
Then Click plugin property: "skipHeader"
Then Click on the Get Schema button
Then Verify the Output Schema matches the Expected Schema: "groupByCsvDataTypeFileSchema"
Then Validate "File" plugin properties
Then Close the Plugin Properties page
Then Navigate to the properties page of plugin: "Group By"
Then Select dropdown plugin property: "groupByFields" with option value: "groupByValidFirstField"
Then Press ESC key to close the unique fields dropdown
Then Select dropdown plugin property: "groupByFields" with option value: "groupByValidSecondField"
Then Press ESC key to close the unique fields dropdown
Then Enter GroupBy plugin Fields to be Aggregate "groupByFileAggregateMultipleSetFields1"
Then Click on the Get Schema button
Then Click on the Validate button
Then Close the Plugin Properties page
Then Navigate to the properties page of plugin: "File2"
Then Enter input plugin property: "referenceName" with value: "FileReferenceName"
Then Enter input plugin property: "path" with value: "fileSinkTargetBucket"
Then Replace input plugin property: "pathSuffix" with value: "yyyy-MM-dd-HH-mm-ss"
Then Select dropdown plugin property: "format" with option value: "tsv"
Then Validate "File2" plugin properties
Then Close the Plugin Properties page
Then Save the pipeline
Then Preview and run the pipeline
Then Wait till pipeline preview is in running state
Then Open and capture pipeline preview logs
Then Verify the preview run status of pipeline in the logs is "succeeded"
Then Close the pipeline logs
Then Close the preview
Then Deploy the pipeline
Then Run the Pipeline in Runtime
Then Wait till pipeline is in running state
Then Open and capture logs
Then Verify the pipeline status is "Succeeded"
Then Close the pipeline logs
Then Validate OUT record count of groupby is equal to IN record count of sink
Then Validate output file generated by file sink plugin "fileSinkTargetBucket" is equal to expected output file "groupByTest5OutputFile"
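
The aggregate set for this scenario comes from the new groupByFileAggregateMultipleSetFields1 property added further down in this commit, where each key packs field, function and condition as field#function#condition. As a rough illustration of the conditional-aggregate semantics being exercised (a sketch, not part of the commit; the sample prices are invented), in Java:

import java.util.List;
import java.util.function.DoublePredicate;

// Sketch of MaxIf/AvgIf/SumIf/CountIf over one group's "price" values.
public class ConditionalAggregateSketch {
    public static void main(String[] args) {
        List<Double> prices = List.of(0.35, 0.80, 1.99, 3.50); // invented sample values
        DoublePredicate keep = p -> p >= 0.50;                 // mirrors the condition "price>=0.50"
        double maxIf = prices.stream().mapToDouble(Double::doubleValue).filter(keep).max().orElse(Double.NaN);
        double avgIf = prices.stream().mapToDouble(Double::doubleValue).filter(keep).average().orElse(Double.NaN);
        double sumIf = prices.stream().mapToDouble(Double::doubleValue).filter(keep).sum();
        long countIf = prices.stream().mapToDouble(Double::doubleValue).filter(keep).count();
        System.out.printf("MaxIf=%.2f AvgIf=%.2f SumIf=%.2f CountIf=%d%n", maxIf, avgIf, sumIf, countIf);
    }
}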

@GROUP_BY_TEST @FILE_SINK_TEST
Scenario: To verify complete flow of data extraction and transfer from File source to File sink with GroupBy plugin using a set of aggregates
Given Open Datafusion Project to configure pipeline
When Select plugin: "File" from the plugins list as: "Source"
When Expand Plugin group in the LHS plugins list: "Analytics"
When Select plugin: "Group By" from the plugins list as: "Analytics"
Then Connect plugins: "File" and "Group By" to establish connection
When Expand Plugin group in the LHS plugins list: "Sink"
When Select plugin: "File" from the plugins list as: "Sink"
Then Connect plugins: "Group By" and "File2" to establish connection
Then Navigate to the properties page of plugin: "File"
Then Enter input plugin property: "referenceName" with value: "FileReferenceName"
Then Enter input plugin property: "path" with value: "groupByTest"
Then Select dropdown plugin property: "format" with option value: "csv"
Then Click plugin property: "skipHeader"
Then Click on the Get Schema button
Then Verify the Output Schema matches the Expected Schema: "groupByCsvDataTypeFileSchema"
Then Validate "File" plugin properties
Then Close the Plugin Properties page
Then Navigate to the properties page of plugin: "Group By"
Then Select dropdown plugin property: "groupByFields" with option value: "groupByValidFirstField"
Then Press ESC key to close the unique fields dropdown
Then Select dropdown plugin property: "groupByFields" with option value: "groupByValidSecondField"
Then Press ESC key to close the unique fields dropdown
Then Enter GroupBy plugin Fields to be Aggregate "groupByFileAggregateMultipleSetFields2"
Then Click on the Get Schema button
Then Click on the Validate button
Then Close the Plugin Properties page
Then Navigate to the properties page of plugin: "File2"
Then Enter input plugin property: "referenceName" with value: "FileReferenceName"
Then Enter input plugin property: "path" with value: "fileSinkTargetBucket"
Then Replace input plugin property: "pathSuffix" with value: "yyyy-MM-dd-HH-mm-ss"
Then Select dropdown plugin property: "format" with option value: "tsv"
Then Validate "File2" plugin properties
Then Close the Plugin Properties page
Then Save the pipeline
Then Preview and run the pipeline
Then Wait till pipeline preview is in running state
Then Open and capture pipeline preview logs
Then Verify the preview run status of pipeline in the logs is "succeeded"
Then Close the pipeline logs
Then Close the preview
Then Deploy the pipeline
Then Run the Pipeline in Runtime
Then Wait till pipeline is in running state
Then Open and capture logs
Then Verify the pipeline status is "Succeeded"
Then Close the pipeline logs
Then Validate OUT record count of groupby is equal to IN record count of sink
Then Validate output file generated by file sink plugin "fileSinkTargetBucket" is equal to expected output file "groupByTest6OutputFile"

@GROUP_BY_TEST @FILE_SINK_TEST
Scenario: To verify complete flow of data extraction and transfer from File source to File sink with GroupBy plugin using AnyIf and MinIf aggregates
Given Open Datafusion Project to configure pipeline
When Select plugin: "File" from the plugins list as: "Source"
When Expand Plugin group in the LHS plugins list: "Analytics"
When Select plugin: "Group By" from the plugins list as: "Analytics"
Then Connect plugins: "File" and "Group By" to establish connection
When Expand Plugin group in the LHS plugins list: "Sink"
When Select plugin: "File" from the plugins list as: "Sink"
Then Connect plugins: "Group By" and "File2" to establish connection
Then Navigate to the properties page of plugin: "File"
Then Enter input plugin property: "referenceName" with value: "FileReferenceName"
Then Enter input plugin property: "path" with value: "groupByTest"
Then Select dropdown plugin property: "format" with option value: "csv"
Then Click plugin property: "skipHeader"
Then Click on the Get Schema button
Then Verify the Output Schema matches the Expected Schema: "groupByCsvDataTypeFileSchema"
Then Validate "File" plugin properties
Then Close the Plugin Properties page
Then Navigate to the properties page of plugin: "Group By"
Then Select dropdown plugin property: "groupByFields" with option value: "groupByValidFirstField"
Then Press ESC key to close the unique fields dropdown
Then Select dropdown plugin property: "groupByFields" with option value: "groupByValidSecondField"
Then Press ESC key to close the unique fields dropdown
Then Enter GroupBy plugin Fields to be Aggregate "groupByFileAggregateMultipleSetFields3"
Then Click on the Get Schema button
Then Click on the Validate button
Then Close the Plugin Properties page
Then Navigate to the properties page of plugin: "File2"
Then Enter input plugin property: "referenceName" with value: "FileReferenceName"
Then Enter input plugin property: "path" with value: "fileSinkTargetBucket"
Then Replace input plugin property: "pathSuffix" with value: "yyyy-MM-dd-HH-mm-ss"
Then Select dropdown plugin property: "format" with option value: "tsv"
Then Validate "File2" plugin properties
Then Close the Plugin Properties page
Then Save the pipeline
Then Preview and run the pipeline
Then Wait till pipeline preview is in running state
Then Open and capture pipeline preview logs
Then Verify the preview run status of pipeline in the logs is "succeeded"
Then Close the pipeline logs
Then Close the preview
Then Deploy the pipeline
Then Run the Pipeline in Runtime
Then Wait till pipeline is in running state
Then Open and capture logs
Then Verify the pipeline status is "Succeeded"
Then Close the pipeline logs
Then Validate OUT record count of groupby is equal to IN record count of sink
Then Validate output file generated by file sink plugin "fileSinkTargetBucket" is equal to expected output file "groupByTest7OutputFile"
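
The last scenario pairs AnyIf with MinIf: AnyIf keeps some value of the field among the records matching the condition, MinIf the smallest such value. A minimal sketch under the same caveats (invented data, not the commit's code):

import java.util.List;
import java.util.Optional;

public class AnyIfMinIfSketch {
    public static void main(String[] args) {
        List<Double> prices = List.of(0.35, 0.80, 1.99); // invented sample values
        Optional<Double> anyIf = prices.stream().filter(p -> p > 0.6).findAny();             // condition "price>0.6"
        Optional<Double> minIf = prices.stream().filter(p -> p > 0.35).min(Double::compare); // condition "price>0.35"
        System.out.println("AnyIf=" + anyIf.orElse(null) + " MinIf=" + minIf.orElse(null));
    }
}
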
@@ -57,6 +57,9 @@ public static void enterAggregates(String jsonAggreegatesFields) {
ElementHelper.sendKeys(GroupByLocators.field(index), entry.getKey().split("#")[0]);
ElementHelper.selectDropdownOption(GroupByLocators.fieldFunction(index), CdfPluginPropertiesLocators.
locateDropdownListItem(entry.getKey().split("#")[1]));
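// Conditional functions (MaxIf, AvgIf, SumIf, CountIf, AnyIf, MinIf) carry a third "#"-separated segment: the condition.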
if (entry.getKey().split("#")[1].contains("If")) {
ElementHelper.sendKeys(GroupByLocators.fieldFunctionCondition(index), entry.getKey().split("#")[2]);
}
ElementHelper.sendKeys(GroupByLocators.fieldFunctionAlias(index), entry.getValue());
ElementHelper.clickOnElement(GroupByLocators.fieldAddRowButton(index));
index++;
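
The hunk above extends enterAggregates so that whenever the selected function name contains "If", the new condition input is filled from the third "#"-separated segment of the entry key. Standalone, the convention parses like this (a sketch using an example key from the new properties, not the commit's code):

public class AggregateKeyParser {
    public static void main(String[] args) {
        String key = "price#SumIf#price>=0.50"; // entry key from groupByFileAggregateMultipleSetFields1
        String[] parts = key.split("#");
        String field = parts[0];                // "price"
        String function = parts[1];             // "SumIf"
        // Only conditional functions carry the third segment, exactly what the new if-block reads.
        String condition = function.contains("If") ? parts[2] : "";
        System.out.println(field + " | " + function + " | " + condition);
    }
}
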
@@ -75,4 +75,9 @@ public static WebElement addFieldsRowButton(int row) {
String xpath = "//*[@data-cy='groupByFields']//*[@data-cy='" + row + "']//button[@data-cy='add-row']";
return SeleniumDriver.getDriver().findElement(By.xpath(xpath));
}

public static WebElement fieldFunctionCondition(int row) {
String xpath = "//div[@data-cy='aggregates']//div[@data-cy= '" + row + "']//input[@placeholder='condition']";
return SeleniumDriver.getDriver().findElement(By.xpath(xpath));
}
}
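
The new fieldFunctionCondition locator addresses the condition input of a single aggregates row through data-cy attributes (the stray space after "data-cy=" is harmless, since XPath ignores whitespace around the comparison). A print-only sketch of the XPath it builds for row 0, no WebDriver session needed:

public class LocatorXpathDemo {
    public static void main(String[] args) {
        int row = 0;
        String xpath = "//div[@data-cy='aggregates']//div[@data-cy= '" + row
                + "']//input[@placeholder='condition']";
        System.out.println(xpath);
        // -> //div[@data-cy='aggregates']//div[@data-cy= '0']//input[@placeholder='condition']
    }
}
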
10 changes: 10 additions & 0 deletions core-plugins/src/e2e-test/resources/pluginParameters.properties
@@ -211,6 +211,16 @@ groupByTest1OutputFile=e2e-tests/expected_outputs/CSV_GROUPBY_TEST1_Output.csv
groupByTest2OutputFile=e2e-tests/expected_outputs/CSV_GROUPBY_TEST2_Output.csv
groupByTest3OutputFile=e2e-tests/expected_outputs/CSV_GROUPBY_TEST3_Output.csv
groupByMacroOutputFile=e2e-tests/expected_outputs/CSV_GROUPBY_TEST4_Output.csv
groupByFileAggregateMultipleSetFields1=[{"key":"price#MaxIf#price>=0.50","value":"MaxIfPrice"}, \
{"key":"price#AvgIf#price>0.50","value":"AvgIfPrice"}, {"key":"price#SumIf#price>=0.50","value":"SumIfPrice"}, \
{"key":"price#CountIf#price>=0.50","value":"CountIfPrice"}]
groupByFileAggregateMultipleSetFields2=[{"key":"price#Max","value":"MaxPrice"},\
{"key":"price#Sum","value":"SumPrice"},{"key":"item#Count","value":"CountItem"}]
groupByFileAggregateMultipleSetFields3=[{"key":"price#AnyIf#price>0.6","value":"AnyIfPrice"}, \
{"key":"price#MinIf#price>0.35","value":"MinIfPrice"}]
groupByTest5OutputFile=e2e-tests/expected_outputs/CSV_GROUPBY_TEST5_Output.csv
groupByTest6OutputFile=e2e-tests/expected_outputs/CSV_GROUPBY_TEST6_Output.csv
groupByTest7OutputFile=e2e-tests/expected_outputs/CSV_GROUPBY_TEST7_Output.csv
## GROUPBY-PLUGIN-PROPERTIES-END

## JOINER-PLUGIN-PROPERTIES-START
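
Each groupByFileAggregateMultipleSetFields* entry is a JSON array of key/value objects that enterAggregates walks in order. The test framework's own JSON helper is not shown in this diff; assuming Gson, such a property could be turned into the ordered map like this (a sketch, not the commit's code):

import com.google.gson.JsonArray;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;
import java.util.LinkedHashMap;
import java.util.Map;

public class AggregatePropertyParser {
    public static void main(String[] args) {
        String json = "[{\"key\":\"price#AnyIf#price>0.6\",\"value\":\"AnyIfPrice\"},"
                + "{\"key\":\"price#MinIf#price>0.35\",\"value\":\"MinIfPrice\"}]";
        Map<String, String> aggregates = new LinkedHashMap<>(); // keeps the property's entry order
        JsonArray array = JsonParser.parseString(json).getAsJsonArray();
        for (int i = 0; i < array.size(); i++) {
            JsonObject entry = array.get(i).getAsJsonObject();
            aggregates.put(entry.get("key").getAsString(), entry.get("value").getAsString());
        }
        aggregates.forEach((k, v) -> System.out.println(k + " -> " + v));
    }
}
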
@@ -0,0 +1,5 @@
bob coffee 3.5 3.5 3.5 1
bob donut 2.8 1.5 1.15 3
alice cookie 1.4 0.8 0.7 2
alice tea 3.49 1.99 1.745 2
bob cofee 2.05 2.05 2.05 1
@@ -0,0 +1,5 @@
bob coffee 2 3.85 3.5
bob donut 4 3.25 1.5
alice cookie 2 1.4 0.8
alice tea 3 3.79 1.99
bob cofee 1 2.05 2.05
@@ -0,0 +1,5 @@
bob coffee 3.5 3.5
bob donut 0.45 0.8
alice cookie 0.6 0.8
alice tea 1.5 1.99
bob cofee 2.05 2.05
