
[SpecFlow+ LivingDoc] Ability to see historical test runs in Specflow + Living Doc

Comments

21 comments

  • Mario Steiner

    Hi Tim, thanks a lot for your request. Do you have an example of how you would imagine the view of the historical test runs? Are you using any other reporting tool for that right now, and are you sharing the information with people inside or outside of your team?

    0
  • Reece, Tim

    Hi Mario

    I currently have a series of functions that copy test details and results (feature, scenario, and step) into nested dictionaries and then serialize these into JSON, XML, and HTML files at the end of each test run. The results are copied into a shared folder with a hierarchy of test run/company/browser. We are using a standalone Azure DevOps Server, with build/release pipelines for the installation and the Visual Studio Test task for the test run. The tests themselves are written with MSTest, Selenium, and SpecFlow.
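
    As a rough illustration of the kind of export described above, here is a minimal Python sketch; the data shape, folder names, and file names are all invented for illustration, not Tim's actual code:

```python
import json
from pathlib import Path

# Hypothetical nested-dictionary shape: feature -> scenario -> result details.
results = {
    "Login.feature": {
        "Valid login": {"status": "passed", "steps": 4},
        "Locked account": {"status": "failed", "steps": 3},
    }
}

def export_run(results, root, run_id, company, browser):
    """Serialize one run's results as JSON under root/run_id/company/browser."""
    folder = Path(root) / run_id / company / browser
    folder.mkdir(parents=True, exist_ok=True)
    out = folder / "results.json"
    out.write_text(json.dumps(results, indent=2))
    return out

# Example: write the results for one run/company/browser combination.
report = export_run(results, "shared-results", "run-0001", "AcmeCo", "chrome")
print(report.name)  # results.json
```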

    I then have a Python page on our DevOps server that scrapes the most recent folders and displays the summary results.

    This page links to the individual results from the HTML file created within each test run, so they can be viewed individually.

    It's very procedural and very tailored to our environment, but it gives the team and managers a real-time overview of where the tests/system are at. The page shows the release branch only. I did have results shown for the nightly trunk build we do, but when the last third-party reporting tool I was using broke, I began writing my own reports as described above and have not got around to including those builds again. They used to sit across the top of the page with a very brief result and a link, and only showed the last five days' worth in one row.
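
    At its core, a scraper like the one described might simply pick the newest run folders; a minimal sketch, assuming run folders whose names sort chronologically (the folder names below are invented):

```python
from pathlib import Path

def latest_runs(root, keep=5):
    """Return the newest `keep` run folders, assuming folder names
    sort chronologically (e.g. date-stamped run IDs)."""
    runs = [p for p in Path(root).iterdir() if p.is_dir()]
    return sorted(runs, key=lambda p: p.name, reverse=True)[:keep]

# Demo with a throwaway hierarchy of date-stamped run folders.
root = Path("demo-results")
for day in ("2021-05-10", "2021-05-11", "2021-05-12"):
    (root / day).mkdir(parents=True, exist_ok=True)

print([p.name for p in latest_runs(root, keep=2)])  # ['2021-05-12', '2021-05-11']
```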

    0
  • Mario Steiner

    Very interesting, thanks a lot for this detailed example and explanation. Which reporting tool did you use before? Are the 5 days sufficient for you, or do you need to look back further into the past, e.g. the last 30 days, 60 days, ...? Do you need to report historical test runs, and/or are you using them for analysis and code quality improvements?

    0
  • Reece, Tim

    Hi Mario

    Five days of history is usually plenty; one or two is normal, but I have on occasion gone back a few weeks to track down when things went wrong. In those cases it was to find when a test had missed a bug, and why. It is also a nice way to show managers the growth of the system over time, and the expanding coverage and improvements the tests provide. It's a good advertisement for my job :)

    As I said, I am lucky in that we store results in a flat file system, so they are easily accessed.

    I was using Extent Reports, which was a bit of a hassle to begin with and dragged in a lot of additional NuGet packages that started to become very unwieldy. The overhead became too much after some update caused hassles elsewhere in the system, so I abandoned it when I noticed you had SpecFlow+ LivingDoc. There was another BDD reporting tool I'd used earlier which was far superior, but it had been abandoned by its developers, which was a shame.

    I guess their strength was that both tools created standalone HTML files that I could then copy to our results folders along with my own results, which led me to doing it the way I have. Perhaps if you could do something like that with LivingDoc, I could be persuaded to use it again. People can then do with the results as they please, as I understand everyone's development/test pipeline will differ a lot?

    0
  • Reece, Tim

    Having just looked through the other feature requests, it appears these two are asking for similar things.

    https://support.specflow.org/hc/en-us/community/posts/360014231838--SpecFlow-LivingDoc-Have-Specflow-reports-auto-generate-when-running-test-from-VS-IDE

    https://support.specflow.org/hc/en-us/community/posts/360014202838--LivingDoc-CLI-tool-Generate-report-from-multiple-input-json-files-from-different-executions-e-g-different-browsers-different-tags

    Just the ability to (automatically) download an HTML file of the test report at the conclusion of the tests would be all most people need, I think.

    0
  • Mario Steiner

    Can you remember the name of the abandoned BDD reporting tool? Yes, you are right, the development/testing pipelines are mostly quite individual. We are currently analyzing all the great feedback we received recently and thinking about how we can incorporate it into our product roadmap. We will keep you updated.

    0
  • Karol Czechowski

    Hi guys, I can share my approach to historic test results and how I would like to see it handled by LivingDoc in the future.
    Currently, each pipeline with tests in Azure DevOps generates an HTML report (from SpecFlow+ Runner) which is copied to Blob Storage, and each report has a unique name. This is how I can go through the historic test results. I do the same with LivingDoc in parallel; however, AFAIK there is no way to create a custom report name, so the report is overwritten each time it is generated and I am not able to see the old files. In conclusion, if there were a way to generate LivingDoc.html with a unique name, similar to SpecFlow+ Runner, that would solve the issue (e.g. outputName="LivingDoc_{unique_guid}.html").

    0
  • Mario Steiner

    Hi Karol, thanks for your feedback and for sharing your workflow for historic test results with us. It is possible to manually rename your report (name of the LivingDoc) with the following CLI command:

    livingdoc feature-folder C:\Work\MyProject.Specs --output C:\Temp\MyReport.html

    This would at least allow renaming your LivingDoc to e.g. LivingDoc17052021.

    Link to the docs: Using the CLI tool — SpecFlow+ LivingDoc documentation

    we are currently evaluating our options on how to provide an option for historic test results in the future. We will keep you updated.
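
    For pipelines that script this rename, one way to build a date-stamped invocation of the CLI command above is sketched below in Python. The paths are hypothetical, and the command is only assembled and printed here, not executed:

```python
from datetime import date

# Hypothetical paths; substitute your own feature folder and output location.
feature_folder = r"C:\Work\MyProject.Specs"
output = rf"C:\Temp\LivingDoc_{date.today():%d%m%Y}.html"

# Assemble the livingdoc CLI invocation with a date-suffixed output name.
cmd = ["livingdoc", "feature-folder", feature_folder, "--output", output]
print(" ".join(cmd))
```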

    0
  • Karol Czechowski

    Hi Mario, your hint works perfectly for me! Now each run creates a report with the suffix:

     LivingDoc_$(Build.BuildNumber).html

    which allows me to have unique reports stored in my Blob Storage.
    I create a direct URL to such a file and display it in the "Extensions" tab in the pipeline summary view.
    Thanks!
    P.S. Now I wait for the documentation update to implement Test Output in LD reports ;)

    1
  • Andreas Willich

    Ali Mollahosseini is working on them right now. Should be available this week.

    2
  • Raman Tsitou

    Looks like the feature is still not available :(

    0
  • Andreas Willich

    Raman Tsitou I meant that Ali is working on the Documentation. We are not working on this feature. You can find our roadmap at https://docs.specflow.org/en/latest/roadmap.html

    0
  • Singh, SK (Shravan)

    Hi,

    I was also looking at a similar feature this week to replace our existing in-house custom tool.

    Requirement: to be able to demonstrate to users (testers) since which date and/or which build a given feature file has been failing. We need this information because we run a nightly regression of over 7K test scenarios, and it is sometimes handy to know which commit could be the possible root cause of the failures.

    This information is also useful to track long-pending test failures when working with multiple Agile teams, and to get them prioritized accordingly.

    Example Below:

    Snapshot from the custom tool:

    Snapshot from LivingDoc when multiple TestExecution*.json files were combined:

    Note: knowing only that a given scenario has failed X times may not be enough information for debugging.
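
    The "failing since which build" requirement can be sketched as a small pass over per-build results. The data shape and build numbers below are invented for illustration; in practice the statuses would come from the combined TestExecution*.json files:

```python
# Hypothetical per-build results: build number -> {scenario: status}.
runs = {
    "2024.1.100": {"Login": "passed", "Checkout": "passed"},
    "2024.1.101": {"Login": "failed", "Checkout": "passed"},
    "2024.1.102": {"Login": "failed", "Checkout": "failed"},
}

def failing_since(runs):
    """For each scenario failing in the latest build, find the first build
    of its current unbroken failure streak."""
    builds = sorted(runs)
    latest = runs[builds[-1]]
    since = {}
    for scenario, status in latest.items():
        if status != "failed":
            continue
        first = builds[-1]
        for build in reversed(builds[:-1]):
            if runs[build].get(scenario) == "failed":
                first = build  # streak extends back to this build
            else:
                break          # streak broken; stop walking back
        since[scenario] = first
    return since

print(failing_since(runs))  # {'Login': '2024.1.101', 'Checkout': '2024.1.102'}
```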

     

    0
  • Mario Steiner

    Hi Shravan, thanks for taking the time to leave your request; it sounds interesting! Would you mind sharing more insights into your current collaboration workflow?

    How are you sharing the report with the testers right now? How do you know that the failing feature files have been addressed by the testers, and on average how long does it take them to fix them?

    Looking forward to your reply, Mario

    0
  • Singh, SK (Shravan)

    Hello Mario,

    Our way of working (WoW) is as below.

    Organization: we follow a Scrum-of-Scrums approach. In our department, we have more than five Agile teams. Often more than one Agile team works on an Epic/Feature, so there can be cross-dependencies between code. To ensure better collaboration, we follow the daily cycle below.

    1. A developer/tester works on a feature file and the target code and commits them to the central repo.
    2. Every night at the scheduled time a new Azure Release is created. This release has the latest code (code from multiple teams gets integrated), which is deployed on servers; post-deployment we run the regression suite via an Azure DevOps pipeline.
    3. Once the pipeline, including the test run, is complete, we call APIs to build a custom report detailing the deployment status, the number of failures for the release, etc.
    4. This report is currently shared via email on a schedule with all users.
    5. If we want to see detailed results (failures), we open the custom tool (snapshot in a previous comment). Since all features are tagged to teams, we know the feature owners, and once a failure is picked up for a fix by a tester, he/she can claim it in the tool.
    6. This update is then also available to other users when they open the custom tool for the release.
    7. The custom tool not only shows the failures but also has a button showing which release each failure has been occurring since (history), plus links to Azure DevOps detailing the errors for each failure (since all failures are specific to a given release).
    8. We also maintain Pickle documentation per audit and business requirements.
    9. Since this all involves multiple tools, we thought to replace and centralize most, if not all, of these features via LivingDoc and host it for users every day.

    Hope this clears your query.

    Regards

    Shravan

    0
  • Mario Steiner

    Hi Shravan,

    very helpful, thanks a lot. I have some follow-up questions:

    ad 5) Are you tagging the teams directly in the feature files using the @ function, for example @TeamX? What does claiming a failure in the custom tool look like, e.g. a checkbox indicating that XYZ is working on it?

    ad 9) Are there any other tools or functionality besides the mentioned custom tool and Pickle reports that you would like to centralize? Is it required that the new tool runs on-premises, or would a secure, cloud-based tool also be an option?

    Are Business/BA/Product Owners also interested in the daily release reports? What does your timeline look like for the replacement of your current tool stack?

    Kind regards,

    Mario

    0
  • Singh, SK (Shravan)

    Hello Mario,

    Find my response below

    Q1: Are you tagging the teams directly in the feature files using the @ function, for example @TeamX? What does claiming a failure in the custom tool look like, e.g. a checkbox indicating that XYZ is working on it?

    We currently use the @ function for team owners. To support claiming a feature/scenario failure by a tester, we use the functionality below.

    A user has to click the 'Claim' button and select the checkbox when the failure is resolved. This is also maintained in history, so the same can be traced later if it is unclaimed or the issue reoccurs after several successful runs.

    Q2: Are there any other tools or functionality besides the mentioned custom tool and Pickle reports that you would like to centralize? Is it required that the new tool runs on-premises, or would a secure, cloud-based tool also be an option?

    Yes, we send daily mailers for the nightly run results, which include the release name, build number, number of failures, pass/fail percentage, etc., to get more attention from the teams, including management. Our current setup is on-premises, but a cloud-based solution compatible with the Microsoft suite would be future-ready. :)

    Q3: Are Business/BA/Product Owners also interested in the daily release reports? What does your timeline look like for the replacement of your current tool stack?

    Business folks are primarily interested in the documentation, so it is desirable for them to be able to add/link documents in Microsoft-recognized formats (Word/Visio) directly to a feature file (without much technical know-how) as part of the BDD and TDD approach.

    Moreover, when a release goes to production, for now we attach the LivingDoc as a test report for the previous day's regression results.

    As for timelines: since we already have a working tool, this is not a burning issue for now, but from a maintainability perspective it needs more attention at times.

    I understand I am discussing lots of new features, but to officially make the LivingDoc live, for now we will need help showcasing the scenario/feature failure history for our nightly run via LivingDoc.

    Hope this gives a fair idea of how one department follows its WoW in a complex environment. Looking forward to seeing some, if not all, of these feature requests in a future roadmap.

     

    0
  • Mario Steiner

    Hi Shravan,

    again very helpful, thanks a lot for sharing all this detailed information with us.

    We've been learning a lot over the last months and are still trying to better understand the daily challenges and complex environments of teams like yours.

    In order to enable better cross-organizational collaboration and to help teams be more productive, we are currently collecting and evaluating different product options and areas of improvement.

    Therefore, I can't promise anything right now, but we will deeply consider your suggestions and try to keep you updated.

    0
  • Singh, SK (Shravan)

    Sure,

    Thanks for the heads-up, but showcasing failure history and its relevant details is worth considering as a feature, as it helps with root-cause analysis (RCA) of complex issues.

    Regards, Shravan

    1
  • Janine Roe

    Karol Czechowski how exactly did you perform that build step of saving the report to Blob Storage and creating a direct URL in the Extensions tab of the pipeline summary view? I'd love to be able to do the same.

    1
  • Karol Czechowski

    Janine Roe try this:

    Task 1: Azure file copy
    Source: Path to the folder where HTML report is being stored after test run on agent
    Blob prefix: $(Build.DefinitionName)
    Container Name: {someName}

    Task 2: PowerShell script
    Here you find the HTML file and create a blob URL for the Extensions tab:

    # Locate the HTML report produced by the test run on the agent
    $path = "$(System.DefaultWorkingDirectory)/{yourProjectPathToTheFolderWithHtmlReport}/"
    $htmlReportFileName = Get-ChildItem -Path $path -Name "TestReport_*.html"
    Write-Output $htmlReportFileName

    # Write a small HTML file containing a link to the blob, then attach it
    # to the pipeline summary so it appears as a tab
    $UrlFile = '$(System.DefaultWorkingDirectory)\My_Temp_Html_File.html'
    $UrlLink = '<a href="https://{yourBlobAddress}/{someName}/$(Build.DefinitionName)/'+$htmlReportFileName+'">SpecFlow+ Runner Test Report</a>'
    Add-Content -Path $UrlFile -Value $UrlLink
    Write-Output "##vso[task.addattachment type=Distributedtask.Core.Summary;name=SpecFlow Runner Test Report;]$UrlFile"

    1
