github atlassian/dc-app-performance-toolkit release-3.0.0
Release 3.0.0


Version 3.0.0

Added

  • locust executor and Locust scripts for Jira and Confluence (default load executor is still JMeter).
  • Official Docker container with the Toolkit.
  • Instructions on how to set up and run the Toolkit on an EC2 instance in a Docker container.
  • Automatic Base URL update after database restore.
  • Node count information in results_summary.log.
  • Dataset information in results_summary.log.
  • Check that the script is not running on the NFS server for Bitbucket.
  • Check the product language before the start.
  • Check that Collaborative Editing is enabled for Confluence.
  • Check if the script is running on a bastion host.
  • Extra logging into bzt.log.

Fixed

  • Bug when Jira DC has more than 8000 projects.
  • Utility functions were removed from conftest.py.
  • Error handling in case the product version was not found.
  • Error handling in case the product URL is invalid.
  • Bug with an infinite loop on user creation for Confluence.
  • Bug with random project selection for Jira.
  • Bug when JMeter script fails on Bitbucket with postfix.

Changed

  • print_time decorator was refactored for better readability and simplicity.
  • Do not override the license on database restore.
  • bzt version bumped to 1.14.2.
  • ChromeDriver version bumped to 83.0.4103.39 to support Chrome browser version 83.
  • Downstream library versions bumped.
  • Skip the whole test if login fails.

Upgrade instructions

  • git pull from the master branch
  • activate the virtual env for the toolkit (see README.md for details)
  • pip install -r requirements.txt

Locust executor

Release 3.0.0 adds a new locust executor and Locust scripts for Jira and Confluence. Locust (https://locust.io) is an open-source performance tool based on the Python requests library. The benefits of the locust executor over the jmeter executor are simplicity, extensibility, and debugging flexibility: Locust scripts are plain Python code that is straightforward to run, debug, and extend.
The jmeter executor remains the default in the jira.yml and confluence.yml files. If you want to give Locust a try, or find the JMeter UI too complicated for debugging issues or creating new app-specific actions, just change the load_executor value to locust in the .yml configuration file.
More details about Locust can be found in the README.md files.
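For example, switching Jira load generation to Locust is a one-line change in jira.yml. The fragment below is illustrative: only the load_executor key matters, and the surrounding structure is abbreviated (check the shipped jira.yml for the exact layout):

```yaml
# jira.yml (fragment)
settings:
  env:
    # ... other settings unchanged ...
    load_executor: locust   # default is jmeter
```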

App-specific actions change required

The DCAPT framework uses the print_timing decorator to measure timings of all Selenium actions and sub-actions.
In release 3.0.0 the print_timing decorator was refactored for better readability and convenience of use. The decorator's signature changed, so if you have app-specific actions created for a previous version of the toolkit, you need to update them as shown in the example below. Also, there is no longer any need to pass interaction into every selenium webdriver step.
Old decorator signature and usage example:

def custom_action(webdriver, datasets):
    page = BasePage(webdriver)
    @print_timing
    def measure(webdriver, interaction):
        @print_timing
        def measure(webdriver, interaction):
            page.go_to_url(f"{JIRA_SETTINGS.server_url}/plugin/report")
            page.wait_until_visible((By.ID, 'report_app_element_id'), interaction)

        measure(webdriver, 'selenium_app_custom_action:view_report')

        @print_timing
        def measure(webdriver, interaction):
            page.go_to_url(f"{JIRA_SETTINGS.server_url}/plugin/dashboard")
            page.wait_until_visible((By.ID, 'dashboard_app_element_id'), interaction)

        measure(webdriver, 'selenium_app_custom_action:view_dashboard')
    measure(webdriver, 'selenium_app_custom_action')

New decorator signature and usage example:

# Imports typically used by app-specific modules in the toolkit
# (verify the module paths against your version of the toolkit):
from selenium.webdriver.common.by import By

from selenium_ui.base_page import BasePage
from selenium_ui.conftest import print_timing
from util.conf import JIRA_SETTINGS


def custom_action(webdriver, datasets):
    page = BasePage(webdriver)

    @print_timing("selenium_app_custom_action")
    def measure():

        @print_timing("selenium_app_custom_action:view_report")
        def sub_measure():
            page.go_to_url(f"{JIRA_SETTINGS.server_url}/plugin/report")
            page.wait_until_visible((By.ID, 'report_app_element_id'))
        sub_measure()

        @print_timing("selenium_app_custom_action:view_dashboard")
        def sub_measure():
            page.go_to_url(f"{JIRA_SETTINGS.server_url}/plugin/dashboard")
            page.wait_until_visible((By.ID, 'dashboard_app_element_id'))
        sub_measure()
    measure()
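To clarify what changed: the new print_timing is a parameterized decorator, i.e. it receives the interaction name when the decorator is applied rather than through the wrapped function's arguments. A minimal sketch of such a decorator (illustrative only, not the toolkit's actual implementation) could look like this:

```python
import functools
import time


def print_timing(interaction):
    """Parameterized decorator: takes the interaction name up front,
    so the wrapped function needs no extra timing arguments."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            result = func(*args, **kwargs)
            elapsed_ms = (time.monotonic() - start) * 1000
            # Report the measured step under its interaction name.
            print(f"{interaction}: {elapsed_ms:.0f} ms")
            return result
        return wrapper
    return decorator
```

This shape explains the new call pattern above: `@print_timing("selenium_app_custom_action")` first evaluates `print_timing(...)` to get the actual decorator, which then wraps the zero-argument `measure` function.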
