An overview of our behaviour-driven development framework for testing hundreds of scenarios and cases with minimal intervention and effort.
Pilots and vehicles are the two most fundamental elements in the relay model. Because both are highly dynamic, a central rostering system with a holistic view of demand and supply across the network is needed. The algorithm that solves this for us is known as pilot auto-allocation.
The pilot auto-allocation algorithm executes every few minutes and assigns eligible pilots to incoming vehicles arriving in the next ‘x’ hour window. At this scale, manually setting up data for each scenario takes a lot of time and effort, and it can lead to misses when many assertions are involved.
For example, consider a pilot (P1) who is assigned to a vehicle (V1) going from Jaipur to Delhi. The expected start time of V1 is more than 45 minutes from the current time. Now, if V1 gets delayed and there is another vehicle (V2) going to Delhi before V1, then P1 should be removed from V1 and assigned to V2. All the updates need to be verified in the backend (database) and frontend (app and UI). Given the complexity arising from such dynamic variables, a robust backend automation framework for auto-allocation quality analysis becomes critical.
Behaviour-driven development is an extension of test-driven development. It is premised on the idea that software development should be led by both business interests and technical insight, and it is a collaboration framework for developers, testers and non-technical or business participants in a software project. BDD focuses on verifying behaviour instead of implementation: the focus is on test scenarios and business rules rather than on how the code is implemented.
Why we chose behaviour-driven development (BDD) over test-driven development (TDD)
BDD tests are written so that they read almost like plain sentences, whereas TDD tests are tied to a programming language. This means other stakeholders (business analysts, product managers etc.) can easily add test cases in BDD, which provides clarity on what is to be built from a business perspective. This collaboration was one of our primary reasons for choosing BDD over TDD. The examples below will help you understand this better.
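To illustrate, the delayed-vehicle reassignment case described earlier could be expressed as a Gherkin scenario along these lines (the step wording here is illustrative, not our exact feature file):

```gherkin
Feature: Pilot auto-allocation on vehicle delay

  Scenario: Reassign pilot when the assigned vehicle is delayed
    Given pilot "P1" is assigned to vehicle "V1" going from Jaipur to Delhi
    And the expected start time of "V1" is more than 45 minutes from now
    When "V1" gets delayed
    And another vehicle "V2" is going to Delhi before "V1"
    Then pilot "P1" should be removed from "V1"
    And pilot "P1" should be assigned to "V2"
```

Anyone on the team can read, review or extend a scenario like this without touching the underlying Java step definitions.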
Our auto allocation algorithm has more than 100 business rules which are executed on each run. Also, new scenarios can impact existing rules. Hence, our objective was to build a framework which:
The diagram below shows the various modules of the framework and their interactions with each other.
This package has all the classes containing step definitions for the feature files. We have divided our step files into three parts, namely PilotSteps, VehicleSteps and CommonSteps; steps are defined in one of these files depending on their operation. Step definitions (methods) contain code to perform operations such as updating the database or making API calls, along with assertions to verify the API responses. Below is one of the methods of PilotSteps that updates the assigned vehicle.
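The shape of such a step definition is sketched below. In the real framework the method carries a Cucumber annotation and persists the change to MySQL through Hibernate; here an in-memory map stands in for the database so the sketch is self-contained, and all names (`updateAssignedVehicle`, `pilotToVehicle`) are illustrative only:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a PilotSteps-style step definition.
// In the real framework this method would be annotated with Cucumber's
// @When and write to MySQL via Hibernate; an in-memory map stands in
// for the database here.
public class PilotSteps {

    private final Map<String, String> pilotToVehicle = new HashMap<>();

    // Step: pilot {pilotId} is (re)assigned to vehicle {vehicleId}
    public void updateAssignedVehicle(String pilotId, String vehicleId) {
        pilotToVehicle.put(pilotId, vehicleId);
    }

    // Step: verify the pilot's current assignment in the backend
    public String assignedVehicleOf(String pilotId) {
        return pilotToVehicle.get(pilotId);
    }

    public static void main(String[] args) {
        PilotSteps steps = new PilotSteps();
        steps.updateAssignedVehicle("P1", "V1");
        // V1 is delayed, so the algorithm moves P1 to V2
        steps.updateAssignedVehicle("P1", "V2");
        System.out.println(steps.assignedVehicleOf("P1"));
    }
}
```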
This package has common utilities to make HTTP calls to the server (methods such as a POST call with a JSON body or a POST call with parameters) and methods containing the assertion logic. All these reusable methods live in a common repository that is used across different projects: assertion helpers, DB connections and methods to make API calls with Rest-Assured are all present there. Below are a few reusable HTTP methods that are part of the common repository.
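A minimal sketch of what such reusable POST helpers look like, written here with Java's built-in `java.net.http` types so the example is self-contained; the common repository actually uses Rest-Assured, and the class name, URL and endpoint below are hypothetical:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Hypothetical sketch of reusable HTTP helpers; the real common
// repository builds these calls with Rest-Assured.
public class HttpUtil {

    // POST call with a JSON body
    public static HttpRequest postJson(String url, String jsonBody) {
        return HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
    }

    // POST call with form-encoded parameters in the body
    public static HttpRequest postForm(String url, String formParams) {
        return HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(formParams))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = postJson("http://localhost:8080/api/pilots",
                "{\"pilotId\":\"P1\"}");
        System.out.println(req.method() + " " + req.uri());
    }
}
```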
It also has the CucumberRunnerUtil file, which holds the Cucumber configuration parameters. These parameters mark which scenarios will be executed in a given run, and the file also specifies the paths to the feature files and step definition files as well as the test result file format. In addition, it contains TestNG annotations to perform actions as per the flow; for example, methods under @BeforeSuite and @AfterSuite are used to set up the test data in the database before each test execution and to close the Hibernate connection respectively.
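A runner wired this way looks roughly like the sketch below; the paths, package name, tag and plugin values are illustrative, not our actual settings:

```java
import io.cucumber.testng.AbstractTestNGCucumberTests;
import io.cucumber.testng.CucumberOptions;
import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;

// Hypothetical sketch of the CucumberRunnerUtil configuration.
@CucumberOptions(
        features = "src/test/resources/features",  // path to feature files
        glue = "com.example.steps",                // step definition package
        tags = "@allbreach",                       // which scenarios run
        plugin = {"json:target/cucumber.json"}     // test result file format
)
public class CucumberRunnerUtil extends AbstractTestNGCucumberTests {

    @BeforeSuite
    public void setUpTestData() {
        // seed the database with the test dataset before execution
    }

    @AfterSuite
    public void tearDown() {
        // close the Hibernate connection
    }
}
```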
This package contains all the feature files. Each feature file covers an independent module/feature and all of its scenarios (test cases), and each scenario has steps to be executed in sequence.
This is the first file that gets invoked by Maven. The execution environment (local or staging) is passed from this file to the CucumberRunnerUtil file, from where all scenarios are executed.
Hibernate is used to connect to the MySQL database and perform operations. Each step uses it to create test data for the scenario: we can directly change vehicle times and pilot states before a scenario executes, which is faster than hitting an API that, in turn, makes the changes in the database. The dataset is much smaller than our production dataset and contains only the data our tests require.
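A test-data setup step of this kind might look roughly like the following Hibernate fragment; the entity and field names (`Vehicle`, `expectedStartTime`, `code`) are illustrative, not our actual schema:

```java
import java.time.LocalDateTime;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

// Hypothetical sketch of test-data setup through Hibernate.
public class TestDataUtil {

    private final SessionFactory sessionFactory;

    public TestDataUtil(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // Directly shift a vehicle's expected start time before a scenario runs,
    // instead of going through an API that would update the database for us.
    public void delayVehicle(String vehicleCode, long minutes) {
        try (Session session = sessionFactory.openSession()) {
            Transaction tx = session.beginTransaction();
            session.createQuery(
                    "update Vehicle v set v.expectedStartTime = :time where v.code = :code")
                .setParameter("time", LocalDateTime.now().plusMinutes(minutes))
                .setParameter("code", vehicleCode)
                .executeUpdate();
            tx.commit();
        }
    }
}
```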
Here, we used masterthought’s open-source plugin. It gives test results at various levels: scenarios, tags, steps and features.
The environment in which tests are executed, as well as which tests run, is parameterized; these values are passed from the command line. Cucumber provides the functionality to execute tests via tags, which can be applied at the feature level or the scenario level. The code snippet below has the @allbreach tag at the feature level and the @delaybreach tag at the scenario level.
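A tagged feature file has roughly this shape (the tags are ours; the scenario text is illustrative):

```gherkin
@allbreach
Feature: Auto-allocation breach handling

  @delaybreach
  Scenario: Reassign pilot when the assigned vehicle breaches its delay threshold
    Given pilot "P1" is assigned to vehicle "V1"
    When "V1" gets delayed beyond the allowed window
    Then pilot "P1" should be reassigned to the next eligible vehicle
```

A subset of scenarios can then be selected from the command line, for example with `mvn test -Dcucumber.filter.tags="@delaybreach"` (the exact property name depends on the Cucumber version in use).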
Hibernate to make changes in database
Cucumber for Behaviour-driven development
Maven as a build tool
TestNG for test execution and assertions
Rest-Assured for testing REST APIs
PicoContainer for dependency injection
Additionally, to execute our growing automation suite without flaky or non-deterministic behaviour, we set up all the backend servers on a single machine, similar to the hermetic servers used at Google. This avoids false positives, since no network access is required.
We measured impact through the change in the number of issues per release, which went down drastically after implementing the automated integration tests. 92% of our releases go live without any high- or medium-priority issues on production; the remaining 8% of releases have bugs due to missed corner cases and the like. We are working on JaCoCo to measure combined code coverage from manual and automated testing and make all releases bug-free.
Our team ships about two releases to production every week, and this automation framework has helped us ship code faster and with more confidence. Code refactors and other changes that only require regression testing are no longer tested manually: if all automated tests pass on Jenkins, we are good to go live. Testers now get more time to test new features or add more automated tests to the framework, time that previously went into regression testing.