GoLand 2021.2 Help

Run tests

Quick way

If your tests don't require any specific actions before they start and you don't want to configure additional options, you can run them in one of the following ways:

  • To run all tests in a file, place the caret anywhere in that file; to run a single test, place the caret at the test method. Then press Ctrl+Shift+F10. Alternatively, click the gutter icon next to the test method.

    The gutter icon changes depending on the state of your test:

    • The Run gutter icon marks new tests.

    • The Run test gutter icon marks successful tests.

    • The Rerun gutter icon marks failed tests.

  • To run all tests in a folder, select the folder in the Project tool window and press Ctrl+Shift+F10, or select Run Tests in 'folder' from the context menu.

    Running a test using the gutter icon

Customizable way

When you run a test, GoLand creates a temporary run configuration. You can save temporary run configurations, change their settings, and share them with other members of your team. For more information, refer to Run/debug configurations.

  1. Create a new run configuration or save a temporary one.

  2. From the list on the main toolbar, select the configuration you want to run.

  3. Click the Run button or press Shift+F10.

    Running a run/debug configuration for tests

After GoLand finishes running your tests, it shows the results in the Run tool window on the Test Runner tab. For more information on how to analyze test results, refer to Explore test results.

Running all tests in a folder, stopping, and rerunning a single test

Run tests with test flags

You can run tests with test flags like -race, -failfast, -short, and others. Check other flags in the Go documentation at pkg.go.dev.

  1. Navigate to Run | Edit Configurations.

  2. Select the run/debug configuration that you use to run your application or your tests. In the Go tool arguments field, specify the flags that you plan to use:

    • -race: enables data race detection. Supported only on linux/amd64, freebsd/amd64, darwin/amd64, windows/amd64, linux/ppc64le, and linux/arm64 (only for 48-bit VMA).

    • -test.failfast: stops running new tests after the first test failure.

    • -test.short: shortens the run time of long-running tests.

    • -test.benchmem: prints memory allocation statistics for benchmarks.

    Run tests with test flags

Run tests before commit

To check that your changes won't break the code before you commit them, run tests as a pre-commit check.

Set up test configuration

  1. Open the Commit tool window.

  2. Click Show Commit Options (the Settings button). In the menu, click Choose configuration near Run Tests and select the configuration that you want to run.

    Pre-commit checks menu

After you have set up the test configuration, the specified tests will run every time you make a commit.

Non-modal commit dialog running a test

Stop tests

Use the following options on the Run toolbar of the Test Runner tab:

  • Click the Stop button or press Ctrl+F2 to terminate the process immediately.

Stop running tests

Rerun tests

Rerun a single test

  • Right-click a test on the Test Runner tab of the Run tool window and select Run 'test name'.

Rerun all tests in a session

  • Click the Run button on the Run toolbar or press Ctrl+F5 to rerun all tests in a session.

Rerun failed tests

  • Click the Rerun Failed Tests button on the Run toolbar to rerun only failed tests.

    Hold Shift and click the Rerun Failed Tests button to choose whether you want to Run the tests again or Debug them.

    You can also configure the IDE to rerun tests that were ignored or not started during the previous test run together with the failed ones. Click the Settings button on the Test Runner toolbar and enable the Include Non-Started Tests into Rerun Failed option.

Rerun tests automatically

In GoLand, you can enable the autotest-like runner: any test in the current run configuration restarts automatically after you change the related source code.

  • Click Toggle auto-test on the Run toolbar to enable the autotest-like runner.

Debug failed tests

If you don't know why a test fails, you can debug it.

  1. In the editor, click the gutter on the line where you want to set a breakpoint.

    There are different types of breakpoints that you can use depending on where you want to suspend the program. For more information, refer to Breakpoints.

  2. Right-click the Rerun gutter icon next to the failed test and select Debug 'test name'.

    The failed test is rerun in debug mode. When execution reaches the breakpoint, the test is suspended, allowing you to examine its current state.

    You can step through the test to analyze its execution in detail.

    Debugging a test using the gutter icon

Run a run/debug configuration for tests

To run a run/debug configuration for a test, you must create the run/debug configuration. Read about creating a run/debug configuration for tests in Run/debug configuration templates for tests.

  1. From the Run/Debug Configurations list, select the configuration that you want to run.

  2. Click the Run button.

Run a run/debug configuration for tests

Run tests from the gutter

When you run a test from the gutter, you create a temporary run/debug configuration. To save this configuration, navigate to Run | Edit Configurations, select the grayed-out item in the configurations list, and click the Save Configuration button.

  1. Click the Run Test icon in the gutter.

  2. Select Run <configuration_name>.

    Run a test from the gutter menu

Run tests from the context menu

  • Right-click a test file or a directory with test files and select Run | Go test <object_name> (for directories) or Run <object_name> (for files).

    Run a test from the context menu

Productivity tips

Run individual table tests

  • You can run individual table tests by using the Run icon in the gutter. Also, you can navigate to an individual table test from the Run tool window.

    The current support of table tests has the following limitations:

    • The test data variable must be a slice, an array, or a map. It must be defined in the same function as the t.Run call and must not be used after initialization (except for the range clause in the for loop).

    • The individual test data entry must be a struct literal. Loop variables used in a subtest name expression must not be used before the t.Run call.

    • A subtest name expression can be a test data string field, a concatenation of test data string fields, or a fmt.Sprintf() call with %s and %d verbs.

      For example, in the following code snippet, fmt.Sprintf("%s in %s", tc.gmt, tc.loc) is a subtest name expression.

      for _, tc := range testCases {
          t.Run(fmt.Sprintf("%s in %s", tc.gmt, tc.loc), func(t *testing.T) {
              loc, err := time.LoadLocation(tc.loc)
              if err != nil {
                  t.Fatal("could not load location")
              }
              gmt, _ := time.Parse("15:04", tc.gmt)
              if got := gmt.In(loc).Format("15:04"); got != tc.want {
                  t.Errorf("got %s; want %s", got, tc.want)
              }
          })
      }
    Run individual table tests
Last modified: 14 September 2021