Testing SDKs at Optimizely (Part 2)

Ali Abbas Rizvi
Engineers @ Optimizely
3 min read · Dec 7, 2020


Last time we talked about how we have applications wrapping our SDKs and how we use Cucumber to run tests written in Gherkin syntax against these applications.

As we build new functionality into our SDKs, we need to instruct the test runner which tests to run and which to skip, since a given piece of functionality may not yet exist in the SDK being tested.

Enter tags in Cucumber. Tags let us organize our tests, and we can then pass a compiled list of tags to run against each SDK.

So, in our previous example, we can apply a tag (say ACTIVATE_API) to our test. Now it will look something like this:

@ACTIVATE_API
Feature: Activate API

  Background:
    Given the datafile is "ab_experiments.json"

  Scenario: Activate API test with user ID
    When activate is called with arguments
      """
      experiment_key: my_experiment
      user_id: my_user
      """
    Then the result should be "variation_a"
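With tags in place, the runner can restrict a run to a given tag; with Cucumber's tag filter that looks something like cucumber --tags "@ACTIVATE_API" (the exact invocation varies between Cucumber implementations).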

Similarly, all tests in our test suite are tagged. Ahead of running the tests, all we need to do is determine which tags are applicable. To do that, we use Optimizely itself.

At Optimizely, we have a strong culture of using our own product. That allows us to assess the overall quality of the product and also understand what our customers go through on a daily basis.

In this particular use case, each tag corresponds to a feature in the Optimizely feature flagging system.

We have also defined audiences corresponding to each of our clients (SDKs and services).

For each individual feature, we then manage rollouts for each of these audiences (clients). For example, for the DATAFILE_ACCESSOR feature, we can manage the rollout in the UI and see which clients the tests are supposed to run for. A typical setup looks something like this:
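Rendered as text, such a rollout configuration might look like the sketch below (client names other than Agent are illustrative, not our exact list):

DATAFILE_ACCESSOR rollout
  Agent         0%
  Android     100%
  C#          100%
  Java        100%
  JavaScript  100%
  Python      100%
  Ruby        100%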

In this example, the tests for the DATAFILE_ACCESSOR feature are not supposed to run for Agent (0%) but should run for all other clients (set at 100%).

To determine the applicable tags, the test runner uses our JavaScript SDK's getEnabledFeatures API. The SDK name is passed in as an attribute (which is then used to check whether the audience conditions are met), and that determines whether a given tag should be executed for that SDK. The API returns the list of all enabled tags, and the runner then executes only the tests corresponding to those tags.
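As a rough sketch of that flow, assuming a hypothetical SDK key, user ID, and an "sdk" attribute name (none of which are taken from our actual configuration), the runner's tag lookup could look like this:

// A minimal sketch of deriving the tag list with the Optimizely
// JavaScript SDK. The SDK key, user ID, and "sdk" attribute are
// illustrative assumptions.
const optimizelySdk = require('@optimizely/optimizely-sdk');

const optimizely = optimizelySdk.createInstance({
  sdkKey: 'YOUR_SDK_KEY', // hypothetical testing-project key
});

async function tagsForClient(sdkName) {
  // Wait until the datafile has been fetched.
  await optimizely.onReady();
  // Each enabled feature key corresponds to a Cucumber tag. The SDK
  // name is passed as an attribute and evaluated against the audience
  // conditions defined for each feature.
  return optimizely.getEnabledFeatures('test-runner', { sdk: sdkName });
}

tagsForClient('python').then((tags) => {
  // Turn the feature keys into a Cucumber tag expression,
  // e.g. "@ACTIVATE_API or @DATAFILE_ACCESSOR".
  console.log(tags.map((tag) => `@${tag}`).join(' or '));
});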

This setup allows us to write tests once and then manage from the Optimizely UI which clients they run against. As engineers introduce functionality in an SDK, all they need to do is log into Optimizely, enable the tests for the feature, and see how they perform. They can just as easily roll back functionality by disabling the tests for that particular SDK.

This also makes it easy to share a feature's progress with the rest of the organization, as anyone can open the feature in the UI and see which SDKs are ready and which are not.

In part 1 of this blog, we discussed two problems we were trying to resolve:

  • Releasing SDKs was time-consuming because it required a great deal of manual testing.
  • Issues were caught late in the development cycle.

Because of our setup, both of these issues are resolved:

  • The suite of tests is exhaustive, so a successful run means the SDK is good to release. A typical run of all tests takes less than 5 minutes.
  • We run the tests on every commit to an SDK, so we catch issues before any code is merged.

We hope you find this use case helpful. Reach out to me if you want to learn more about our setup or are interested in discussing similar use cases.
