Calling SpecFlow steps from API controllers/services

Comments

  • Andreas Willich

    Those are unusual requirements and I would like to understand them better to be able to help you.

    Why do you need to call the steps from an API controller? Is it simply because you want to reuse the automation code there?

    Or are you using Gherkin as a scripting language within your application?

  • Jiri Novotny

    Yes, the requirements are unusual. In the target test project there are 100+ feature files with 20k end-to-end tests using SpecFlow and Selenium to control and thoroughly test a web application. There are hundreds, maybe thousands, of unique step methods in the binding classes. You can probably imagine how much effort was invested in building these tests. Our intention is to reuse that automation code. In addition, it would not make sense to build something new alongside it, as the two testing codebases would desynchronize very easily over time.

    When we wrote the automation code, we did not know that one day an additional web API would be needed to reuse the existing steps that test the web application. The product itself is not only a web application: it stretches from hardware units that produce data, which gets reshaped and processed several times until it lands in the DB that the tested web app connects to. Up till now we were able to mimic the behavior of the HW and data processing with reference data or substitutes.

    Our CI in Azure enabled us to build a pipeline that can automate the HW and data processing through some APIs as well. The Azure CI will contain scenarios (not SpecFlow or Gherkin, unfortunately) that communicate with each subsystem, including our automated SpecFlow tests for the web app, to test the system end to end.

    Our API will hold active test sessions, each identifiable through a GUID in the API request. For a given test session it should be able to run certain step method(s) from the binding classes, depending on the endpoint that gets called. Therefore we need to feed the binding classes and methods with the data they need for seamless execution.
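    To make this more concrete, the rough shape of an endpoint we have in mind looks like the sketch below. All type names (ITestSessionStore, TestSession, LoginRequest) are placeholders for illustration, not existing code:

        using System;
        using Microsoft.AspNetCore.Mvc;

        // Placeholder abstractions: a store of active test sessions keyed by GUID,
        // and a session object exposing the shared web-app automation code.
        public interface ITestSessionStore
        {
            TestSession Get(Guid sessionId);
        }

        public class TestSession
        {
            public void LoginAs(string userName, string password)
            {
                // ... call into the same automation code the SpecFlow bindings use ...
            }
        }

        public class LoginRequest
        {
            public string UserName { get; set; }
            public string Password { get; set; }
        }

        [ApiController]
        [Route("api/test-sessions")]
        public class TestSessionController : ControllerBase
        {
            private readonly ITestSessionStore _sessions;

            public TestSessionController(ITestSessionStore sessions)
            {
                _sessions = sessions;
            }

            // One endpoint per reusable automation action; the GUID identifies
            // the active test session the action should run against.
            [HttpPost("{sessionId:guid}/login")]
            public IActionResult Login(Guid sessionId, [FromBody] LoginRequest request)
            {
                var session = _sessions.Get(sessionId);
                session.LoginAs(request.UserName, request.Password);
                return Ok();
            }
        }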

    I hope this gives you an idea of what we are working on. For now, our idea is to modify our binding classes to take our "own" scenario context through context injection (as described in the docs). This context will have a public constructor and is therefore also injectable through the .NET Core DI container. We still need to verify whether this is sufficient.
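    As a minimal sketch of what we mean (AppTestContext and the step class are placeholder names):

        using System;
        using TechTalk.SpecFlow;

        // Our "own" context class: a plain POCO with a public constructor, so it can
        // be created by SpecFlow's context injection as well as registered in the
        // ASP.NET Core DI container.
        public class AppTestContext
        {
            public string CurrentUser { get; set; }
            public Uri BaseUrl { get; set; }
        }

        [Binding]
        public class LoginSteps
        {
            private readonly AppTestContext _context;

            // SpecFlow creates and injects AppTestContext per scenario; when the
            // automation is driven from the API instead, the same class would be
            // supplied by the .NET Core container.
            public LoginSteps(AppTestContext context)
            {
                _context = context;
            }

            [When(@"I log in as ""(.*)""")]
            public void WhenILogInAs(string userName)
            {
                _context.CurrentUser = userName;
                // ... Selenium automation that performs the login ...
            }
        }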

    What we also stumbled upon is that the SpecFlow DI container does not support registration by convention. So we had trouble abstracting the scenario context away, i.e. depending on an interface in the binding class and having the interface-to-implementation registration "somewhere" in the code. Can we add custom registrations? Where should such code be placed?
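    For example, what we mean by a custom registration is something like the hook below (interface and class names are again placeholders); we are just not sure whether this is the intended place for it:

        using BoDi;
        using TechTalk.SpecFlow;

        public interface ITestSessionContext
        {
            string CurrentUser { get; set; }
        }

        public class TestSessionContext : ITestSessionContext
        {
            public string CurrentUser { get; set; }
        }

        [Binding]
        public class DependencyRegistrations
        {
            private readonly IObjectContainer _objectContainer;

            public DependencyRegistrations(IObjectContainer objectContainer)
            {
                _objectContainer = objectContainer;
            }

            [BeforeScenario]
            public void RegisterDependencies()
            {
                // Map the interface the binding classes depend on to a concrete
                // implementation in SpecFlow's (BoDi) scenario container.
                _objectContainer.RegisterTypeAs<TestSessionContext, ITestSessionContext>();
            }
        }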

    Your advice will be much appreciated. Thank you.

  • Andreas Willich

    Ok, I have bad news.

    Calling step methods from somewhere else was never part of SpecFlow's design. That's why you are running into these issues with a custom TestExecutionEngine and internal classes.

    In general, we recommend using the so-called driver pattern. With this, you move nearly everything out of your bindings into separate classes, and these can be called from anywhere.

    If you also avoid the ScenarioContext dictionary and use context injection (https://docs.specflow.org/projects/specflow/en/latest/Bindings/Context-Injection.html#) to store your state, you remove this dependency on SpecFlow from your automation code as well.
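    A rough sketch of what that looks like (type names are only for illustration):

        using TechTalk.SpecFlow;

        // The driver holds the actual automation logic and has no dependency on
        // SpecFlow, so it can be called from step bindings, an API controller or
        // anywhere else.
        public class LoginDriver
        {
            public void LoginAs(string userName, string password)
            {
                // ... Selenium calls that perform the login ...
            }
        }

        [Binding]
        public class LoginSteps
        {
            private readonly LoginDriver _driver;

            // SpecFlow resolves the driver via context injection; the step
            // definition becomes a thin wrapper around the driver call.
            public LoginSteps(LoginDriver driver)
            {
                _driver = driver;
            }

            [When(@"I log in as ""(.*)"" with password ""(.*)""")]
            public void WhenILogInAs(string userName, string password)
            {
                _driver.LoginAs(userName, password);
            }
        }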

    We are currently writing documentation for it, but here is a good blog post about the Driver pattern in the meantime: http://leitner.io/2015/11/14/driver-pattern-empowers-your-specflow-step-definitions/

    But all this means that, in your case, you will need to refactor a lot of code.
