ssshtest - **s**tupid **s**imple (ba)**sh** **test**ing
=======================================================

[![Build Status](https://travis-ci.org/ryanlayer/ssshtest.svg?branch=master)](https://travis-ci.org/ryanlayer/ssshtest)
[![Docs](https://img.shields.io/badge/docs-latest-blue.svg)](http://ryanlayer.github.io/ssshtest/)

![example output](https://raw.githubusercontent.com/ryanlayer/ssshtest/master/docs/screenshot.png)

`ssshtest` is designed to be practical and easy to use. To use `ssshtest` in your project, just source it in your test file:

```
test -e ssshtest || wget -q https://raw.githubusercontent.com/ryanlayer/ssshtest/master/ssshtest
. ssshtest
```

Then write some tests:

```
run test_for_success python -c "print('zzz: example success')"
assert_in_stdout "zzz"

run test_for_stderr python -c "import sys; sys.stderr.write('zzz: example failure')"
assert_in_stderr "xxx"
```

Then simply run the bash file that contains those lines:

```
$ bash mytests.sh
```

To run only certain tests, use:

```
$ bash mytests.sh test_for_success test_42
```

For an example of how to use **travis** continuous integration tests, see our [.travis.yml](https://github.com/ryanlayer/ssshtest/blob/master/.travis.yml).

Functions
=========

run (2)
-------

Run a block of code.
This must precede any of the testing functions below.

### Arguments
+ name for the test
+ code to run

assert_equal (2)
----------------

Assert that two things are equal. This does a string comparison, so `assert_equal "2" " 2"` will fail.

### Arguments
+ observed
+ expected

```
assert_equal 42 $((21 + 21))
```

assert_stdout (0)
-----------------

Assert that stdout is not empty.

```
run test_stdout python -c "print('zzz: example success')"
assert_stdout
```

assert_in_stdout (1)
--------------------

Assert that stdout contains this text.

```
run test_in_stdout python -c "print('zzz: example success')"
assert_in_stdout "zzz"
```

### Arguments
+ text to match

assert_no_stdout (0)
--------------------

Assert that stdout is empty.

```
run test_empty_stdout python -c "import sys; sys.stderr.write('aaa')"
assert_no_stdout
```

assert_stderr (0)
-----------------

Assert that stderr is not empty.

```
run test_stderr python -c "import sys; sys.stderr.write('zzz: example success')"
assert_stderr
```

assert_in_stderr (1)
--------------------

Assert that stderr contains this text.

### Arguments
+ text to match

```
run test_in_stderr python -c "import sys; sys.stderr.write('zzz: example success')"
assert_in_stderr "zzz"
```

assert_no_stderr (0)
--------------------

Assert that stderr is empty.

```
run test_no_stderr python -c "print('aaa')"
assert_no_stderr
```

assert_exit_code (1)
--------------------

Assert that the program exited with a particular code.

### Arguments
+ exit code

```
run test_exit_code python -c "import sys; sys.exit(33)"
assert_exit_code 33
```

Variables
=========

STOP_ON_FAIL
------------

Set `STOP_ON_FAIL=1` after sourcing `ssshtest` to stop on the first error.
The default is to continue running.

STDOUT_FILE
-----------

`$STDOUT_FILE` is a file containing the stdout from the last `run` command.

STDERR_FILE
-----------

`$STDERR_FILE` is a file containing the stderr from the last `run` command.

`$test_name`
------------

`ssshtest` allows you to run a subset of the tests in your test file, and an environment variable is set for each test that is set to run. You can use this variable to prevent lines associated with a test from running when the test is not set to run.

For example, the following test creates the file `test.txt`:

```
rm -f test.txt
run write_test \
    python -c 'f=open("test.txt","w");f.write("test\n");f.close()'
assert_equal "test" $( cat test.txt )
```

If this test is not run, the file is not created, and when the shell attempts to resolve `$( cat test.txt )` the error `cat: test.txt: No such file or directory` occurs. You can prevent this error by putting a test-specific guard around the assert:

```
rm -f test.txt
run write_test \
    python -c 'f=open("test.txt","w");f.write("test\n");f.close()'
if [ $write_test ]; then
    assert_equal "test" $( cat test.txt )
fi
```

LICENSE
=======

MIT LICENSE