# test-samurai.nvim

A Neovim plugin to run tests across multiple languages and frameworks with a unified UX and an extensible runner architecture.

## Requirements

- Neovim >= 0.11.4
- Lua
- Test runners are provided as separate Lua modules

## Installation (Lazy.nvim)

Use the GitHub repository. The example below shows how to add the Go runner as a dependency and configure it in `setup()`:

```lua
{
  "m13r/test-samurai.nvim",
  dependencies = {
    "m13r/test-samurai-go-runner",
    -- further samurai runners
  },
  config = function()
    require("test-samurai").setup({
      runner_modules = {
        "test-samurai-go-runner",
        -- further samurai runners
      },
    })
  end,
}
```

## Configuration

### Runner modules (required)

test-samurai does not ship with any built-in runners. You must explicitly configure the runners you want to use:

```lua
require("test-samurai").setup({
  runner_modules = {
    "my-runners.go",
    "my-runners.js",
  },
})
```

If no runner matches the current test file, test-samurai will show:

```
[test-samurai] no runner installed for this kind of test
```

## Commands and keymaps

- `TSamNearest` -> `<leader>tn`
- `TSamFile` -> `<leader>tf`
- `TSamAll` -> `<leader>ta`
- `TSamLast` -> `<leader>tl`
- `TSamFailedOnly` -> `<leader>te`
- `TSamShowOutput` -> `<leader>to`
- Help: `:help test-samurai`

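If the default keys clash with your setup, the commands above can also be bound manually with the standard Neovim keymap API. A minimal sketch, assuming you only want different triggers for the same commands (the keys below are illustrative, and whether the plugin's built-in defaults can be disabled is not documented here):

```lua
-- Bind test-samurai commands to custom keys (example key choices).
vim.keymap.set("n", "<leader>rn", "<cmd>TSamNearest<CR>", { desc = "Run nearest test" })
vim.keymap.set("n", "<leader>rf", "<cmd>TSamFile<CR>", { desc = "Run tests in current file" })
vim.keymap.set("n", "<leader>ro", "<cmd>TSamShowOutput<CR>", { desc = "Reopen test output" })
```
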
Additional keymaps:

- Listing navigation:
  - `<leader>fn` -> [F]ind [N]ext failed test in the listing (wraps to the first, opens the Detail-Float, also works in the Detail-Float)
  - `<leader>fp` -> [F]ind [P]revious failed test in the listing (wraps to the last, opens the Detail-Float, also works in the Detail-Float)
  - `<leader>ff` -> [F]ind [F]irst list entry (opens the Detail-Float, also works in the Detail-Float)
  - `<leader>o` -> jump to the test location
  - `<leader>qn` -> close the testing floats and jump to the first quickfix entry
- Listing filters:
  - `<leader>sf` -> filter the listing to `[ FAIL ] - ...` entries
  - `<leader>ss` -> filter the listing to `[ SKIP ] - ...` entries
  - `<leader>sa` -> clear the listing filter and show all entries
- Listing actions:
  - `<leader>tt` -> run the test under the cursor in the listing
  - `<leader>cb` -> break the test command onto multiple lines (clears search highlight)
  - `<leader>cj` -> join the test command onto a single line
  - `?` -> show help for the TSam commands and standard keymaps in the Detail-Float

Before running any test command, test-samurai runs `:wall` to save all buffers.

## Output UI

- Output is shown in a floating container called **Testing-Float**.
- The **Test-Listing-Float** is the left subwindow and shows the test result list.
- The **Detail-Float** is the right subwindow and shows detailed output for a selected test.
- After `TSamNearest`, `TSamFile`, `TSamAll`, `TSamFailedOnly`, etc., the UI opens in listing mode (only the **Test-Listing-Float** is visible).
- Press `<cr>` on a `[ FAIL ] ...` line in the listing to open/update the **Detail-Float** as a 20/80 split (left 20% listing, right 80% detail).
- ANSI color translation is only applied in the **Detail-Float**; the **Test-Listing-Float** shows raw text without ANSI translation.
- `<esc><esc>` hides the floating window and restores the cursor position; `TSamShowOutput` reopens it.
- If no output is captured for a test, the **Detail-Float** shows `No output captured`.
- Summary lines (`TOTAL`/`DURATION`) are appended to the listing output, including for `TSamLast`.

## Runner architecture

Runners are standalone Lua modules. Every runner module is expected to implement the full interface so that every command and keymap works. All functions are required (including those that were previously optional), and listing output must be streamed.

Required functions (a minimal skeleton is sketched after this list):

- `is_test_file`
- `find_nearest`
- `build_command`
- `build_file_command`
- `build_all_command`
- `build_failed_command`
- `parse_results`
- `output_parser` (must stream listing output via `on_line`)
- `parse_test_output`
- `collect_failed_locations`

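For orientation only, a runner module might be shaped like the skeleton below. The function names come from the list above, but the argument names and bodies are assumptions made for illustration; the actual contract is defined by the plugin's runner development guidelines and `:help test-samurai`:

```lua
-- Hypothetical runner skeleton (e.g. lua/my-runners/go.lua).
-- Only the function names are taken from this README; the signatures and
-- bodies are placeholders, not the plugin's actual contract.
local M = {}

-- Decide whether this runner handles the given file.
function M.is_test_file(path)
  return path:match("_test%.go$") ~= nil
end

-- Locate the test nearest to the cursor.
function M.find_nearest(bufnr, cursor) end

-- Build the shell commands for the different run modes.
function M.build_command(nearest) end        -- nearest test
function M.build_file_command(path) end      -- whole file
function M.build_all_command() end           -- whole project
function M.build_failed_command(failed) end  -- failed tests only

-- Turn raw runner output into listing and detail data.
function M.parse_results(output) end
function M.output_parser(on_line) end        -- must stream listing lines via on_line
function M.parse_test_output(output) end
function M.collect_failed_locations(results) end

return M
```
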
No runner exists for your environment? No problem: use `runners-agents.md` to guide an AI-assisted runner implementation tailored to your stack.

## Known runners

- [`m13r/test-samurai-go-runner`](https://gitea.mschirmer.com/m13r/test-samurai-go-runner)
- [`m13r/test-samurai-jest-runner`](https://gitea.mschirmer.com/m13r/test-samurai-jest-runner)
- [`m13r/test-samurai-mocha-runner`](https://gitea.mschirmer.com/m13r/test-samurai-mocha-runner)
- [`m13r/test-samurai-vitest-runner`](https://gitea.mschirmer.com/m13r/test-samurai-vitest-runner)

## Development

Runner development guidelines, including required data formats for keymaps, tests (`run_test.sh`), Gitea CI (Neovim AppImage on ARM runners), and framework-agnostic best practices (naming conventions, TSamNearest priority, reporter payloads, failed-only behavior), are documented in `runner-agents.md`.

Tests are written with `plenary.nvim` / `busted`. Mocks and stubs are allowed.

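A spec in that style typically looks like the sketch below; the spec file name and the assertion are made up for illustration:

```lua
-- spec/setup_spec.lua (hypothetical example of a plenary/busted spec)
describe("test-samurai", function()
  it("exposes a setup function", function()
    local samurai = require("test-samurai")
    assert.is_function(samurai.setup)
  end)
end)
```
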
Run tests:

```sh
bash run_test.sh
```