New Upstream Release - python-rq

Ready changes

Summary

Merged new upstream version: 1.15.1 (was: 1.15).

Resulting package

Built on 2023-06-24T08:35 (took 5m33s)

The resulting binary package can be installed (if you have the apt repository enabled) by running:

apt install -t fresh-releases python3-rq
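
The fresh-releases suite is served from the Debian Janitor's apt archive. If that archive is not yet enabled on the system, it needs to be added as an apt source first. The sketch below shows the general shape; the archive URL and keyring path are placeholders (assumptions, not taken from this page), so substitute the values documented by the Janitor service:

# Hypothetical archive URL and keyring path -- replace with the real values
echo 'deb [signed-by=/usr/share/keyrings/debian-janitor.gpg] https://example.invalid/debian fresh-releases main' | sudo tee /etc/apt/sources.list.d/fresh-releases.list
sudo apt update
sudo apt install -t fresh-releases python3-rq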

Lintian Result

Diff

diff --git a/.coveragerc b/.coveragerc
deleted file mode 100644
index 2838512..0000000
--- a/.coveragerc
+++ /dev/null
@@ -1,13 +0,0 @@
-[run]
-source = rq
-omit =
-    rq/contrib/legacy.py
-    rq/local.py
-    rq/tests/*
-    tests/*
-
-[report]
-exclude_lines =
-    if __name__ == .__main__.:
-    if TYPE_CHECKING:
-    pragma: no cover
diff --git a/.github/FUNDING.yml b/.github/FUNDING.yml
deleted file mode 100644
index 4518858..0000000
--- a/.github/FUNDING.yml
+++ /dev/null
@@ -1,9 +0,0 @@
-# These are supported funding model platforms
-
-github: [selwin]
-patreon: # Replace with a single Patreon username
-open_collective: # Replace with a single Open Collective username
-ko_fi: # Replace with a single Ko-fi username
-tidelift: "pypi/rq"
-community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
-custom: # Replace with a single custom sponsorship URL
diff --git a/.github/dependabot.yml b/.github/dependabot.yml
deleted file mode 100644
index 4bb40d8..0000000
--- a/.github/dependabot.yml
+++ /dev/null
@@ -1,12 +0,0 @@
-version: 2
-updates:
-- package-ecosystem: pip
-  directory: "/"
-  schedule:
-    interval: daily
-  open-pull-requests-limit: 10
-- package-ecosystem: "github-actions"
-  directory: "/"
-  schedule: 
-   interval: daily
-  open-pull-requests-limit: 10
diff --git a/.github/workflows/codeql.yaml b/.github/workflows/codeql.yaml
deleted file mode 100644
index 07b94a0..0000000
--- a/.github/workflows/codeql.yaml
+++ /dev/null
@@ -1,44 +0,0 @@
-name: "Code Scanning - Action"
-
-on:
-  pull_request:
-  push:
-
-jobs:
-  CodeQL-Build:
-    # CodeQL runs on ubuntu-latest, windows-latest, and macos-latest
-    runs-on: ubuntu-20.04
-
-    permissions:
-      # required for all workflows
-      security-events: write
-
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@v3
-
-      # Initializes the CodeQL tools for scanning.
-      - name: Initialize CodeQL
-        uses: github/codeql-action/init@v2
-        # Override language selection by uncommenting this and choosing your languages
-        # with:
-        #   languages: go, javascript, csharp, python, cpp, java
-
-      # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
-      # If this step fails, then you should remove it and run the build manually (see below).
-      - name: Autobuild
-        uses: github/codeql-action/autobuild@v2
-
-      # ℹ️ Command-line programs to run using the OS shell.
-      # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
-
-      # ✏️ If the Autobuild fails above, remove it and uncomment the following
-      #    three lines and modify them (or add more) to build your code if your
-      #    project uses a compiled language
-
-      #- run: |
-      #     make bootstrap
-      #     make release
-
-      - name: Perform CodeQL Analysis
-        uses: github/codeql-action/analyze@v2
diff --git a/.github/workflows/dependencies.yml b/.github/workflows/dependencies.yml
deleted file mode 100644
index b1807f7..0000000
--- a/.github/workflows/dependencies.yml
+++ /dev/null
@@ -1,109 +0,0 @@
-name: Dependencies
-
-on:
-  schedule:
-    # View https://docs.github.com/en/actions/reference/events-that-trigger-workflows#schedule
-    - cron: '0 12 * * *'
-  workflow_dispatch:
-
-jobs:
-  build:
-    if: github.repository == 'rq/rq'
-    name: Python${{ matrix.python-version }}/Redis${{ matrix.redis-version }}/redis-py${{ matrix.redis-py-version }}
-    runs-on: ubuntu-20.04
-    strategy:
-      matrix:
-        python-version: ["3.6", "3.7", "3.8.3", "3.9", "3.10", "3.11"]
-        redis-version: [3, 4, 5, 6, 7]
-        redis-py-version: [3.5.0]
-
-    steps:
-    - uses: actions/checkout@v3
-
-    - name: Set up Python ${{ matrix.python-version }}
-      uses: actions/setup-python@v4.6.1
-      with:
-        python-version: ${{ matrix.python-version }}
-
-    - name: Start Redis
-      uses: supercharge/redis-github-action@1.5.0
-      with:
-        redis-version: ${{ matrix.redis-version }}
-
-    - name: Install dependencies
-      run: |
-        python -m pip install --upgrade pip
-        pip install redis==${{ matrix.redis-py-version }}
-        pip install -r requirements.txt -r dev-requirements.txt
-        pip install -e .
-
-    - name: Test with pytest
-      run: |
-        RUN_SLOW_TESTS_TOO=1 pytest --durations=5
-
-  dependency-build:
-    name: Check development branches of dependencies
-    runs-on: ubuntu-latest
-    needs: build
-    if: success()
-
-    strategy:
-      matrix:
-        python-version: ["3.6", "3.7", "3.8.3", "3.9", "3.10", "3.11"]
-        redis-version: [3, 4, 5, 6, 7]
-
-    steps:
-    - uses: actions/checkout@v3
-
-    - name: Set up Python ${{ matrix.python-version }}
-      uses: actions/setup-python@v4.6.1
-      with:
-        python-version: ${{ matrix.python-version }}
-
-    - name: Start Redis
-      uses: supercharge/redis-github-action@1.5.0
-      with:
-        redis-version: ${{ matrix.redis-version }}
-
-    - name: Install dependencies
-      run: |
-        python -m pip install --upgrade pip
-        pip install git+https://github.com/redis/redis-py
-        pip install git+https://github.com/pallets/click
-        pip install -r dev-requirements.txt
-        pip install -e .
-
-    - name: Test with pytest
-      run: RUN_SLOW_TESTS_TOO=1 pytest --durations=5 > log.txt 2>&1
-
-    - uses: actions/upload-artifact@v3
-      with:
-        name: dependencies-error
-        path: log.txt
-      if: failure()
-
-  issue:
-    name: Create failure issue
-    runs-on: ubuntu-latest
-
-    if: failure()
-    needs: dependency-build
-
-    steps:
-    - uses: actions/download-artifact@v3
-      with:
-        name: dependencies-error
-        path: .
-
-    - name: Create failure issue
-      run: |
-        if [[ "$(curl --url https://api.github.com/repos/${{ github.repository }}/issues?creator=github-actions --request GET)" != *"\""* ]]
-          then curl --request POST \
-                    --url https://api.github.com/repos/${{ github.repository }}/issues \
-                    --header 'authorization: Bearer ${{ secrets.GITHUB_TOKEN }}' \
-                    --header 'content-type: application/json' \
-                    --data "{
-                        \"title\": \"RQ maybe may not work with dependencies in the future\",
-                        \"body\": \"This issue was automatically created by the GitHub Action workflow **${{ github.workflow }}**. \n\n View log: \n\n \`\`\` \n $(cat log.txt | while read line; do echo -n "$line\n"; done | sed -r 's/"/\\"/g') \n \`\`\`\"
-                      }"
-        fi
diff --git a/.github/workflows/docker.yml b/.github/workflows/docker.yml
deleted file mode 100644
index 7a9dda6..0000000
--- a/.github/workflows/docker.yml
+++ /dev/null
@@ -1,55 +0,0 @@
-name: Docker
-
-on:
-  push:
-    branches: [ master ]
-    tags: [ '*' ]
-  workflow_dispatch:
-
-
-permissions:
-  contents: read write # to fetch code (actions/checkout)
-  packages: write
-
-jobs:
-  push:
-    if: github.repository == 'rq/rq'
-    runs-on: ubuntu-20.04
-
-    steps:
-      - uses: actions/checkout@v3
-
-      - name: Push
-        run: |
-          # Parse Version
-          # "master" -> "master"
-          # "v1.2.3" -> "1.2.3", "1.2", "1", "latest"
-
-          VERSIONS=$(echo "${{ github.ref }}" | sed -e 's,.*/\(.*\),\1,')
-
-          [[ "${{ github.ref }}" == "refs/tags/"* ]] && {
-            VERSIONS=$(echo $VERSIONS | sed -e 's/^v//')
-            i="$VERSIONS"
-            while [[ "$i" == *"."* ]]
-              do i="$(echo "$i" | sed 's/\(.*\)\..*/\1/g')"
-                 VERSIONS="$VERSIONS $i"
-            done
-            VERSIONS="$VERSIONS latest"
-          }
-          
-          echo Building with tags: $VERSIONS
-
-          # Login to registries
-          echo "${{ secrets.GITHUB_TOKEN }}" | docker login docker.pkg.github.com -u ${{ github.actor }} --password-stdin
-          echo "${{ secrets.DOCKER_TOKEN }}" | docker login -u selwin --password-stdin
-
-          # Build image
-          docker build . --tag worker
-
-          # Tag and Push
-          for VERSION in $VERSIONS
-            do docker tag worker redisqueue/worker:$VERSION
-               docker push redisqueue/worker:$VERSION
-               docker tag worker docker.pkg.github.com/rq/rq/worker:$VERSION
-               docker push docker.pkg.github.com/rq/rq/worker:$VERSION
-          done
diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml
deleted file mode 100644
index f53343b..0000000
--- a/.github/workflows/lint.yml
+++ /dev/null
@@ -1,37 +0,0 @@
-name: Lint rq
-
-on:
-  push:
-    branches: [ master ]
-  pull_request:
-    branches: [ master ]
-
-permissions:
-  contents: read
-
-jobs:
-  lint:
-    name: Lint
-    runs-on: ubuntu-20.04
-
-    steps:
-    - uses: actions/checkout@v3
-
-    - name: Set up Python
-      uses: actions/setup-python@v4.6.1
-      with:
-        python-version: 3.8
-
-    - name: Install dependencies
-      run: |
-        python -m pip install --upgrade pip
-        pip install black ruff
-
-    - name: Lint with black
-      run: |
-        black --check --skip-string-normalization --line-length 120 rq tests
-
-    - name: Lint with ruff
-      run: |
-        # stop the build if there are Python syntax errors.
-        ruff check --show-source rq tests
diff --git a/.github/workflows/workflow.yml b/.github/workflows/workflow.yml
deleted file mode 100644
index 1de525f..0000000
--- a/.github/workflows/workflow.yml
+++ /dev/null
@@ -1,107 +0,0 @@
-name: Test
-
-on:
-  push:
-    branches: [ master ]
-  pull_request:
-    branches: [ master ]
-
-permissions:
-  contents: read # to fetch code (actions/checkout)
-
-jobs:
-  ssl-test:
-    name: Run SSL tests
-    runs-on: ubuntu-20.04
-    steps:
-    - uses: actions/checkout@v3
-    - name: Build the Docker test image for tox
-      uses: docker/build-push-action@v4
-      with:
-        file: tests/Dockerfile
-        tags: rqtest-image:latest
-        push: false
-    - name: Launch tox SSL env only (will only run SSL specific tests)
-      uses: addnab/docker-run-action@v3
-      with:
-        image: rqtest-image:latest
-        run: stunnel & redis-server & RUN_SSL_TESTS=1 tox run -e ssl
-
-  test:
-    name: Python${{ matrix.python-version }}/Redis${{ matrix.redis-version }}/redis-py${{ matrix.redis-py-version }}
-    runs-on: ubuntu-20.04
-    timeout-minutes: 10
-    strategy:
-      matrix:
-        python-version: ["3.7", "3.8.3", "3.9", "3.10", "3.11"]
-        redis-version: [3, 4, 5, 6, 7]
-        redis-py-version: [3.5.0]
-
-    steps:
-    - uses: actions/checkout@v3
-
-    - name: Set up Python ${{ matrix.python-version }}
-      uses: actions/setup-python@v4.6.1
-      with:
-        python-version: ${{ matrix.python-version }}
-
-    - name: Start Redis
-      uses: supercharge/redis-github-action@1.5.0
-      with:
-        redis-version: ${{ matrix.redis-version }}
-
-    - name: Install dependencies
-      run: |
-        python -m pip install --upgrade pip
-        pip install redis==${{ matrix.redis-py-version }}
-        pip install -r requirements.txt -r dev-requirements.txt
-        pip install -e .
-
-    - name: Test with pytest
-      run: |
-        RUN_SLOW_TESTS_TOO=1 pytest --cov=rq --cov-config=.coveragerc --cov-report=xml --durations=5
-
-    - name: Upload coverage to Codecov
-      uses: codecov/codecov-action@v3
-      with:
-        file: ./coverage.xml
-        fail_ci_if_error: false
-  test-python-36:
-    name: Python${{ matrix.python-version }}/Redis${{ matrix.redis-version }}/redis-py${{ matrix.redis-py-version }}
-    runs-on: ubuntu-20.04
-    timeout-minutes: 10
-    strategy:
-      matrix:
-        python-version: ["3.6"]
-        redis-version: [3, 4, 5, 6, 7]
-        redis-py-version: [3.5.0]
-
-    steps:
-    - uses: actions/checkout@v3
-
-    - name: Set up Python ${{ matrix.python-version }}
-      uses: actions/setup-python@v4.6.1
-      with:
-        python-version: ${{ matrix.python-version }}
-
-    - name: Start Redis
-      uses: supercharge/redis-github-action@1.5.0
-      with:
-        redis-version: ${{ matrix.redis-version }}
-
-    - name: Install dependencies
-      run: |
-        python -m pip install --upgrade pip
-        pip install redis==${{ matrix.redis-py-version }}
-        pip install -r requirements.txt -r dev-requirements-36.txt
-        pip install -e .
-
-    - name: Test with pytest
-      run: |
-        RUN_SLOW_TESTS_TOO=1 pytest --cov=rq --cov-config=.coveragerc --cov-report=xml --durations=5
-
-    - name: Upload coverage to Codecov
-      uses: codecov/codecov-action@v3
-      with:
-        file: ./coverage.xml
-        fail_ci_if_error: false
diff --git a/.gitignore b/.gitignore
deleted file mode 100644
index 262d20b..0000000
--- a/.gitignore
+++ /dev/null
@@ -1,24 +0,0 @@
-*.pyc
-*.egg-info
-
-.DS_Store
-
-/dump.rdb
-/.direnv
-/.envrc
-/.tox
-/dist
-/build
-.tox
-.pytest_cache/
-.vagrant
-Vagrantfile
-.idea/
-.coverage*
-/.cache
-
-Gemfile
-Gemfile.lock
-_site/
-.venv/
-.vscode/
\ No newline at end of file
diff --git a/.mailmap b/.mailmap
deleted file mode 100644
index ee139e8..0000000
--- a/.mailmap
+++ /dev/null
@@ -1,6 +0,0 @@
-Cal Leeming <cal@iops.io> <cal.leeming@simplicitymedialtd.co.uk>
-Mark LaPerriere <marklap@gmail.com> <mark.a.laperriere@disney.com>
-Selwin Ong <selwin.ong@gmail.com> <selwin@ui.co.id>
-Vincent Driessen <me@nvie.com> <vincent@3rdcloud.com>
-Vincent Driessen <me@nvie.com> <vincent@datafox.nl>
-zhangliyong <lyzhang87@gmail.com> <zhangliyong@umeng.com>
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
deleted file mode 100644
index d45026b..0000000
--- a/.pre-commit-config.yaml
+++ /dev/null
@@ -1,9 +0,0 @@
-repos:
-  - repo: https://github.com/psf/black
-    rev: 23.3.0
-    hooks:
-        - id: black
-  - repo: https://github.com/charliermarsh/ruff-pre-commit
-    rev: "v0.0.267"
-    hooks:
-      - id: ruff
diff --git a/CHANGES.md b/CHANGES.md
deleted file mode 100644
index a8f0d24..0000000
--- a/CHANGES.md
+++ /dev/null
@@ -1,728 +0,0 @@
-### RQ 1.15 (2023-05-24)
-* Added `Callback(on_stopped='my_callback)`. Thanks @eswolinsky3241!
-* `Callback` now accepts dotted path to function as input. Thanks @rishabh-ranjan!
-* `queue.enqueue_many()` now supports job dependencies. Thanks @eswolinsky3241!
-* `rq worker` CLI script now configures logging based on `DICT_CONFIG` key present in config file. Thanks @juur!
-* Whenever possible, `Worker` now uses `lmove()` to implement [reliable queue pattern](https://redis.io/commands/lmove/). Thanks @selwin!
-* `Scheduler` should only release locks that it successfully acquires. Thanks @xzander!
-* Fixes crashes that may happen by changes to `as_text()` function in v1.14. Thanks @tchapi!
-* Various linting, CI and code quality improvements. Thanks @robhudson!
-
-### RQ 1.14.1 (2023-05-05)
-* Fixes a crash that happens if Redis connection uses SSL. Thanks @tchapi!
-* Fixes a crash if `job.meta()` is loaded using the wrong serializer. Thanks @gabriels1234!
-
-### RQ 1.14.0 (2023-05-01)
-* Added `WorkerPool` (beta) that manages multiple workers in a single CLI. Thanks @selwin!
-* Added a new `Callback` class that allows more flexibility in declaring job callbacks. Thanks @ronlut!
-* Fixed a regression where jobs with unserializable return value crashes RQ. Thanks @tchapi!
-* Added `--dequeue-strategy` option to RQ's CLI. Thanks @ccrvlh!
-* Added `--max-idle-time` option to RQ's worker CLI. Thanks @ronlut!
-* Added `--maintenance-interval` option to RQ's worker CLI. Thanks @ronlut!
-* Fixed RQ usage in Windows as well as various other refactorings. Thanks @ccrvlh!
-* Show more info on `rq info` CLI command. Thanks @iggeehu!
-* `queue.enqueue_jobs()` now properly account for job dependencies. Thanks @sim6!
-* `TimerDeathPenalty` now properly handles negative/infinite timeout. Thanks @marqueurs404!
-
-### RQ 1.13.0 (2023-02-19)
-* Added `work_horse_killed_handler` argument to `Worker`. Thanks @ronlut!
-* Fixed an issue where results aren't properly persisted on synchronous jobs. Thanks @selwin!
-* Fixed a bug where job results are not properly persisted when `result_ttl` is `-1`. Thanks @sim6!
-* Various documentation and logging fixes. Thanks @lowercase00!
-* Improve Redis connection reliability. Thanks @lowercase00!
-* Scheduler reliability improvements. Thanks @OlegZv and @lowercase00!
-* Fixed a bug where `dequeue_timeout` ignores `worker_ttl`. Thanks @ronlut!
-* Use `job.return_value()` instead of `job.result` when processing callbacks. Thanks @selwin!
-* Various internal refactorings to make `Worker` code more easily extendable. Thanks @lowercase00!
-* RQ's source code is now black formatted. Thanks @aparcar!
-
-### RQ 1.12.0 (2023-01-15)
-* RQ now stores multiple job execution results. This feature is only available on Redis >= 5.0 Redis Streams. Please refer to [the docs](https://python-rq.org/docs/results/) for more info. Thanks @selwin! 
-* Improve performance when enqueueing many jobs at once. Thanks @rggjan!
-* Redis server version is now cached in connection object. Thanks @odarbelaeze!
-* Properly handle `at_front` argument when jobs are scheduled. Thanks @gabriels1234!
-* Add type hints to RQ's code base. Thanks @lowercase00!
-* Fixed a bug where exceptions are logged twice. Thanks @selwin!
-* Don't delete `job.worker_name` after job is finished. Thanks @eswolinsky3241!
-
-### RQ 1.11.1 (2022-09-25)
-* `queue.enqueue_many()` now supports `on_success` and on `on_failure` arguments. Thanks @y4n9squared!
-* You can now pass `enqueue_at_front` to `Dependency()` objects to put dependent jobs at the front when they are enqueued. Thanks @jtfidje!
-* Fixed a bug where workers may wrongly acquire scheduler locks. Thanks @milesjwinter!
-* Jobs should not be enqueued if any one of it's dependencies is canceled. Thanks @selwin!
-* Fixed a bug when handling jobs that have been stopped. Thanks @ronlut!
-* Fixed a bug in handling Redis connections that don't allow `SETNAME` command. Thanks @yilmaz-burak!
-
-### RQ 1.11 (2022-07-31)
-* This will be the last RQ version that supports Python 3.5.
-* Allow jobs to be enqueued even when their dependencies fail via `Dependency(allow_failure=True)`. Thanks @mattchan-tencent, @caffeinatedMike and @selwin!
-* When stopped jobs are deleted, they should also be removed from FailedJobRegistry. Thanks @selwin!
-* `job.requeue()` now supports `at_front()` argument. Thanks @buroa!
-* Added ssl support for sentinel connections. Thanks @nevious!
-* `SimpleWorker` now works better on Windows. Thanks @caffeinatedMike!
-* Added `on_failure` and `on_success` arguments to @job decorator. Thanks @nepta1998!
-* Fixed a bug in dependency handling. Thanks @th3hamm0r!
-* Minor fixes and optimizations by @xavfernandez, @olaure, @kusaku.
-
-### RQ 1.10.1 (2021-12-07)
-* **BACKWARDS INCOMPATIBLE**: synchronous execution of jobs now correctly mimics async job execution. Exception is no longer raised when a job fails, job status will now be correctly set to `FAILED` and failure callbacks are now properly called when job is run synchronously. Thanks @ericman93!
-* Fixes a bug that could cause job keys to be left over when `result_ttl=0`. Thanks @selwin!
-* Allow `ssl_cert_reqs` argument to be passed to Redis. Thanks @mgcdanny!
-* Better compatibility with Python 3.10. Thanks @rpkak!
-* `job.cancel()` should also remove itself from registries. Thanks @joshcoden!
-* Pubsub threads are now launched in `daemon` mode. Thanks @mik3y!
-
-### RQ 1.10.0 (2021-09-09)
-* You can now enqueue jobs from CLI. Docs [here](https://python-rq.org/docs/#cli-enqueueing). Thanks @rpkak!
-* Added a new `CanceledJobRegistry` to keep track of canceled jobs. Thanks @selwin!
-* Added custom serializer support to various places in RQ. Thanks @joshcoden!
-* `cancel_job(job_id, enqueue_dependents=True)` allows you to cancel a job while enqueueing its dependents. Thanks @joshcoden!
-* Added `job.get_meta()` to fetch fresh meta value directly from Redis. Thanks @aparcar!
-* Fixes a race condition that could cause jobs to be incorrectly added to FailedJobRegistry. Thanks @selwin!
-* Requeueing a job now clears `job.exc_info`. Thanks @selwin!
-* Repo infrastructure improvements by @rpkak.
-* Other minor fixes by @cesarferradas and @bbayles.
-
-### RQ 1.9.0 (2021-06-30)
-* Added success and failure callbacks. You can now do `queue.enqueue(foo, on_success=do_this, on_failure=do_that)`. Thanks @selwin!
-* Added `queue.enqueue_many()` to enqueue many jobs in one go. Thanks @joshcoden!
-* Various improvements to CLI commands. Thanks @rpkak!
-* Minor logging improvements. Thanks @clavigne and @natbusa!
-
-### RQ 1.8.1 (2021-05-17)
-* Jobs that fail due to hard shutdowns are now retried. Thanks @selwin!
-* `Scheduler` now works with custom serializers. Thanks @alella!
-* Added support for click 8.0. Thanks @rpkak!
-* Enqueueing static methods are now supported. Thanks @pwws!
-* Job exceptions no longer get printed twice. Thanks @petrem!
-
-### RQ 1.8.0 (2021-03-31)
-* You can now declare multiple job dependencies. Thanks @skieffer and @thomasmatecki for laying the groundwork for multi dependency support in RQ.
-* Added `RoundRobinWorker` and `RandomWorker` classes to control how jobs are dequeued from multiple queues. Thanks @bielcardona!
-* Added `--serializer` option to `rq worker` CLI. Thanks @f0cker!
-* Added support for running asyncio tasks. Thanks @MyrikLD!
-* Added a new `STOPPED` job status so that you can differentiate between failed and manually stopped jobs. Thanks @dralley!
-* Fixed a serialization bug when used with job dependency feature. Thanks @jtfidje!
-* `clean_worker_registry()` now works in batches of 1,000 jobs to prevent modifying too many keys at once. Thanks @AxeOfMen and @TheSneak!
-* Workers will now wait and try to reconnect in case of Redis connection errors. Thanks @Asrst! 
-
-### RQ 1.7.0 (2020-11-29)
-* Added `job.worker_name` attribute that tells you which worker is executing a job. Thanks @selwin!
-* Added `send_stop_job_command()` that tells a worker to stop executing a job. Thanks @selwin!
-* Added `JSONSerializer` as an alternative to the default `pickle` based serializer. Thanks @JackBoreczky!
-* Fixes `RQScheduler` running on Redis with `ssl=True`. Thanks @BobReid!
-
-### RQ 1.6.1 (2020-11-08)
-* Worker now properly releases scheduler lock when run in burst mode. Thanks @selwin!
-
-### RQ 1.6.0 (2020-11-08)
-* Workers now listen to external commands via pubsub. The first two features taking advantage of this infrastructure are `send_shutdown_command()` and `send_kill_horse_command()`. Thanks @selwin!
-* Added `job.last_heartbeat` property that's periodically updated when job is running. Thanks @theambient!
-* Now horses are killed by their parent group. This helps in cleanly killing all related processes if job uses multiprocessing. Thanks @theambient!
-* Fixed scheduler usage with Redis connections that uses custom parser classes. Thanks @selwin!
-* Scheduler now enqueue jobs in batches to prevent lock timeouts. Thanks @nikkonrom!
-* Scheduler now follows RQ worker's logging configuration. Thanks @christopher-dG!
-
-### RQ 1.5.2 (2020-09-10)
-* Scheduler now uses the class of connection that's used. Thanks @pacahon!
-* Fixes a bug that puts retried jobs in `FailedJobRegistry`. Thanks @selwin!
-* Fixed a deprecated import. Thanks @elmaghallawy!
-
-### RQ 1.5.1 (2020-08-21)
-* Fixes for Redis server version parsing. Thanks @selwin!
-* Retries can now be set through @job decorator. Thanks @nerok!
-* Log messages below logging.ERROR is now sent to stdout. Thanks @selwin!
-* Better logger name for RQScheduler. Thanks @atainter!
-* Better handling of exceptions thrown by horses. Thanks @theambient! 
-
-### RQ 1.5.0 (2020-07-26)
-* Failed jobs can now be retried. Thanks @selwin!
-* Fixed scheduler on Python > 3.8.0. Thanks @selwin!
-* RQ is now aware of which version of Redis server it's running on. Thanks @aparcar!
-* RQ now uses `hset()` on redis-py >= 3.5.0. Thanks @aparcar!
-* Fix incorrect worker timeout calculation in SimpleWorker.execute_job(). Thanks @davidmurray!
-* Make horse handling logic more robust. Thanks @wevsty!
-
-### RQ 1.4.3 (2020-06-28)
-* Added `job.get_position()` and `queue.get_job_position()`. Thanks @aparcar!
-* Longer TTLs for worker keys to prevent them from expiring inside the worker lifecycle. Thanks @selwin!
-* Long job args/kwargs are now truncated during logging. Thanks @JhonnyBn!
-* `job.requeue()` now returns the modified job. Thanks @ericatkin!
-
-### RQ 1.4.2 (2020-05-26)
-* Reverted changes to `hmset` command which causes workers on Redis server < 4 to crash. Thanks @selwin!
-* Merged in more groundwork to enable jobs with multiple dependencies. Thanks @thomasmatecki!
-
-### RQ 1.4.1 (2020-05-16)
-* Default serializer now uses `pickle.HIGHEST_PROTOCOL` for backward compatibility reasons. Thanks @bbayles!
-* Avoid deprecation warnings on redis-py >= 3.5.0. Thanks @bbayles!
-
-### RQ 1.4.0 (2020-05-13)
-* Custom serializer is now supported. Thanks @solababs!
-* `delay()` now accepts `job_id` argument. Thanks @grayshirt!
-* Fixed a bug that may cause early termination of scheduled or requeued jobs. Thanks @rmartin48!
-* When a job is scheduled, always add queue name to a set containing active RQ queue names. Thanks @mdawar!
-* Added `--sentry-ca-certs` and `--sentry-debug` parameters to `rq worker` CLI. Thanks @kichawa!
-* Jobs cleaned up by `StartedJobRegistry` are given an exception info. Thanks @selwin!
-* Python 2.7 is no longer supported. Thanks @selwin!
-
-### RQ 1.3.0 (2020-03-09)
-* Support for infinite job timeout. Thanks @theY4Kman!
-* Added `__main__` file so you can now do `python -m rq.cli`. Thanks @bbayles!
-* Fixes an issue that may cause zombie processes. Thanks @wevsty!
-* `job_id` is now passed to logger during failed jobs. Thanks @smaccona!
-* `queue.enqueue_at()` and `queue.enqueue_in()` now supports explicit `args` and `kwargs` function invocation. Thanks @selwin!
-
-### RQ 1.2.2 (2020-01-31)
-* `Job.fetch()` now properly handles unpickleable return values. Thanks @selwin!
-
-### RQ 1.2.1 (2020-01-31)
-* `enqueue_at()` and `enqueue_in()` now sets job status to `scheduled`. Thanks @coolhacker170597!
-* Failed jobs data are now automatically expired by Redis. Thanks @selwin!
-* Fixes `RQScheduler` logging configuration. Thanks @FlorianPerucki!
-
-### RQ 1.2.0 (2020-01-04)
-* This release also contains an alpha version of RQ's builtin job scheduling mechanism. Thanks @selwin!
-* Various internal API changes in preparation to support multiple job dependencies. Thanks @thomasmatecki!
-* `--verbose` or `--quiet` CLI arguments should override `--logging-level`. Thanks @zyt312074545!
-* Fixes a bug in `rq info` where it doesn't show workers for empty queues. Thanks @zyt312074545!
-* Fixed `queue.enqueue_dependents()` on custom `Queue` classes. Thanks @van-ess0!
-* `RQ` and Python versions are now stored in job metadata. Thanks @eoranged!
-* Added `failure_ttl` argument to job decorator. Thanks @pax0r!
-
-### RQ 1.1.0 (2019-07-20)
-
-- Added `max_jobs` to `Worker.work` and `--max-jobs` to `rq worker` CLI. Thanks @perobertson!
-- Passing `--disable-job-desc-logging` to `rq worker` now does what it's supposed to do. Thanks @janierdavila!
-- `StartedJobRegistry` now properly handles jobs with infinite timeout. Thanks @macintoshpie!
-- `rq info` CLI command now cleans up registries when it first runs. Thanks @selwin!
-- Replaced the use of `procname` with `setproctitle`. Thanks @j178! 
-
-
-### 1.0 (2019-04-06)
-Backward incompatible changes:
-
-- `job.status` has been removed. Use `job.get_status()` and `job.set_status()` instead. Thanks @selwin!
-
-- `FailedQueue` has been replaced with `FailedJobRegistry`:
-  * `get_failed_queue()` function has been removed. Please use `FailedJobRegistry(queue=queue)` instead.
-  * `move_to_failed_queue()` has been removed.
-  * RQ now provides a mechanism to automatically cleanup failed jobs. By default, failed jobs are kept for 1 year.
-  * Thanks @selwin!
-
-- RQ's custom job exception handling mechanism has also changed slightly:
-  * RQ's default exception handling mechanism (moving jobs to `FailedJobRegistry`) can be disabled by doing `Worker(disable_default_exception_handler=True)`.
-  * Custom exception handlers are no longer executed in reverse order.
-  * Thanks @selwin!
-
-- `Worker` names are now randomized. Thanks @selwin!
-
-- `timeout` argument on `queue.enqueue()` has been deprecated in favor of `job_timeout`. Thanks @selwin!
-
-- Sentry integration has been reworked:
-  * RQ now uses the new [sentry-sdk](https://pypi.org/project/sentry-sdk/) in place of the deprecated [Raven](https://pypi.org/project/raven/) library
-  * RQ will look for the more explicit `RQ_SENTRY_DSN` environment variable instead of `SENTRY_DSN` before instantiating Sentry integration
-  * Thanks @selwin!
-
-- Fixed `Worker.total_working_time` accounting bug. Thanks @selwin!
-
-
-### 0.13.0 (2018-12-11)
-- Compatibility with Redis 3.0. Thanks @dash-rai!
-- Added `job_timeout` argument to `queue.enqueue()`. This argument will eventually replace `timeout` argument. Thanks @selwin!
-- Added `job_id` argument to `BaseDeathPenalty` class. Thanks @loopbio!
-- Fixed a bug which causes long running jobs to timeout under `SimpleWorker`. Thanks @selwin!
-- You can now override worker's name from config file. Thanks @houqp!
-- Horses will now return exit code 1 if they don't terminate properly (e.g when Redis connection is lost). Thanks @selwin!
-- Added `date_format` and `log_format` arguments to `Worker` and `rq worker` CLI. Thanks @shikharsg!
-
-
-### 0.12.0 (2018-07-14)
-- Added support for Python 3.7. Since `async` is a keyword in Python 3.7,
-`Queue(async=False)` has been changed to `Queue(is_async=False)`. The `async`
-keyword argument will still work, but raises a `DeprecationWarning`. Thanks @dchevell!
-
-
-### 0.11.0 (2018-06-01)
-- `Worker` now periodically sends heartbeats and checks whether child process is still alive while performing long running jobs. Thanks @Kriechi!
-- `Job.create` now accepts `timeout` in string format (e.g `1h`). Thanks @theodesp!
-- `worker.main_work_horse()` should exit with return code `0` even if job execution fails. Thanks @selwin!
-- `job.delete(delete_dependents=True)` will delete job along with its dependents. Thanks @olingerc!
-- Other minor fixes and documentation updates.
-
-
-### 0.10.0
-- `@job` decorator now accepts `description`, `meta`, `at_front` and `depends_on` kwargs. Thanks @jlucas91 and @nlyubchich!
-- Added the capability to fetch workers by queue using `Worker.all(queue=queue)` and `Worker.count(queue=queue)`.
-- Improved RQ's default logging configuration. Thanks @samuelcolvin!
-- `job.data` and `job.exc_info` are now stored in compressed format in Redis.
-
-
-### 0.9.2
-- Fixed an issue where `worker.refresh()` may fail when `birth_date` is not set. Thanks @vanife!
-
-
-### 0.9.1
-- Fixed an issue where `worker.refresh()` may fail when upgrading from previous versions of RQ.
-
-
-### 0.9.0
-- `Worker` statistics! `Worker` now keeps track of `last_heartbeat`, `successful_job_count`, `failed_job_count` and `total_working_time`. Thanks @selwin!
-- `Worker` now sends heartbeat during suspension check. Thanks @theodesp!
-- Added `queue.delete()` method to delete `Queue` objects entirely from Redis. Thanks @theodesp!
-- More robust exception string decoding. Thanks @stylight!
-- Added `--logging-level` option to command line scripts. Thanks @jiajunhuang!
-- Added millisecond precision to job timestamps. Thanks @samuelcolvin!
-- Python 2.6 is no longer supported. Thanks @samuelcolvin!
-
-
-### 0.8.2
-- Fixed an issue where `job.save()` may fail with unpickleable return value.
-
-
-### 0.8.1
-- Replace `job.id` with `Job` instance in local `_job_stack `. Thanks @katichev!
-- `job.save()` no longer implicitly calls `job.cleanup()`. Thanks @katichev!
-- Properly catch `StopRequested` `worker.heartbeat()`. Thanks @fate0!
-- You can now pass in timeout in days. Thanks @yaniv-g!
-- The core logic of sending job to `FailedQueue` has been moved to `rq.handlers.move_to_failed_queue`. Thanks @yaniv-g!
-- RQ cli commands now accept `--path` parameter. Thanks @kirill and @sjtbham!
-- Make `job.dependency` slightly more efficient. Thanks @liangsijian!
-- `FailedQueue` now returns jobs with the correct class. Thanks @amjith!
-
-
-### 0.8.0
-- Refactored APIs to allow custom `Connection`, `Job`, `Worker` and `Queue` classes via CLI. Thanks @jezdez!
-- `job.delete()` now properly cleans itself from job registries. Thanks @selwin!
-- `Worker` should no longer overwrite `job.meta`. Thanks @WeatherGod!
-- `job.save_meta()` can now be used to persist custom job data. Thanks @katichev!
-- Added Redis Sentinel support. Thanks @strawposter!
-- Make `Worker.find_by_key()` more efficient. Thanks @selwin!
-- You can now specify job `timeout` using strings such as `queue.enqueue(foo, timeout='1m')`. Thanks @luojiebin!
-- Better unicode handling. Thanks @myme5261314 and @jaywink!
-- Sentry should default to HTTP transport. Thanks @Atala!
-- Improve `HerokuWorker` termination logic. Thanks @samuelcolvin!
-
-
-### 0.7.1
-- Fixes a bug that prevents fetching jobs from `FailedQueue` (#765). Thanks @jsurloppe!
-- Fixes race condition when enqueueing jobs with dependency (#742). Thanks @th3hamm0r!
-- Skip a test that requires Linux signals on MacOS (#763). Thanks @jezdez!
-- `enqueue_job` should use Redis pipeline when available (#761). Thanks mtdewulf!
-
-
-### 0.7.0
-- Better support for Heroku workers (#584, #715)
-- Support for connecting using a custom connection class (#741)
-- Fix: connection stack in default worker (#479, #641)
-- Fix: `fetch_job` now checks that a job requested actually comes from the
-  intended queue (#728, #733)
-- Fix: Properly raise exception if a job dependency does not exist (#747)
-- Fix: Job status not updated when horse dies unexpectedly (#710)
-- Fix: `request_force_stop_sigrtmin` failing for Python 3 (#727)
-- Fix `Job.cancel()` method on failed queue (#707)
-- Python 3.5 compatibility improvements (#729)
-- Improved signal name lookup (#722)
-
-
-### 0.6.0
-- Jobs that depend on job with result_ttl == 0 are now properly enqueued.
-- `cancel_job` now works properly. Thanks @jlopex!
-- Jobs that execute successfully now no longer tries to remove itself from queue. Thanks @amyangfei!
-- Worker now properly logs Falsy return values. Thanks @liorsbg!
-- `Worker.work()` now accepts `logging_level` argument. Thanks @jlopex!
-- Logging related fixes by @redbaron4 and @butla!
-- `@job` decorator now accepts `ttl` argument. Thanks @javimb!
-- `Worker.__init__` now accepts `queue_class` keyword argument. Thanks @antoineleclair!
-- `Worker` now saves warm shutdown time. You can access this property from `worker.shutdown_requested_date`. Thanks @olingerc!
-- Synchronous queues now properly sets completed job status as finished. Thanks @ecarreras!
-- `Worker` now correctly deletes `current_job_id` after failed job execution. Thanks @olingerc!
-- `Job.create()` and `queue.enqueue_call()` now accepts `meta` argument. Thanks @tornstrom!
-- Added `job.started_at` property. Thanks @samuelcolvin!
-- Cleaned up the implementation of `job.cancel()` and `job.delete()`. Thanks @glaslos!
-- `Worker.execute_job()` now exports `RQ_WORKER_ID` and `RQ_JOB_ID` to OS environment variables. Thanks @mgk!
-- `rqinfo` now accepts `--config` option. Thanks @kfrendrich!
-- `Worker` class now has `request_force_stop()` and `request_stop()` methods that can be overridden by custom worker classes. Thanks @samuelcolvin!
-- Other minor fixes by @VicarEscaped, @kampfschlaefer, @ccurvey, @zfz, @antoineleclair,
-  @orangain, @nicksnell, @SkyLothar, @ahxxm and @horida.
-
-
-### 0.5.6
-
-- Job results are now logged on `DEBUG` level. Thanks @tbaugis!
-- Modified `patch_connection` so Redis connection can be easily mocked
-- Customer exception handlers are now called if Redis connection is lost. Thanks @jlopex!
-- Jobs can now depend on jobs in a different queue. Thanks @jlopex!
-
-
-### 0.5.5 (2015-08-25)
-
-- Add support for `--exception-handler` command line flag
-- Fix compatibility with click>=5.0
-- Fix maximum recursion depth problem for very large queues that contain jobs
-  that all fail
-
-
-### 0.5.4
-
-(July 8th, 2015)
-
-- Fix compatibility with raven>=5.4.0
-
-
-### 0.5.3
-
-(June 3rd, 2015)
-
-- Better API for instantiating Workers. Thanks @RyanMTB!
-- Better support for unicode kwargs. Thanks @nealtodd and @brownstein!
-- Workers now automatically cleans up job registries every hour
-- Jobs in `FailedQueue` now have their statuses set properly
-- `enqueue_call()` no longer ignores `ttl`. Thanks @mbodock!
-- Improved logging. Thanks @trevorprater!
-
-
-### 0.5.2
-
-(April 14th, 2015)
-
-- Support SSL connection to Redis (requires redis-py>=2.10)
-- Fix to prevent deep call stacks with large queues
-
-
-### 0.5.1
-
-(March 9th, 2015)
-
-- Resolve performance issue when queues contain many jobs
-- Restore the ability to specify connection params in config
-- Record `birth_date` and `death_date` on Worker
-- Add support for SSL URLs in Redis (and `REDIS_SSL` config option)
-- Fix encoding issues with non-ASCII characters in function arguments
-- Fix Redis transaction management issue with job dependencies
-
-
-### 0.5.0
-(Jan 30th, 2015)
-
-- RQ workers can now be paused and resumed using `rq suspend` and
-  `rq resume` commands. Thanks Jonathan Tushman!
-- Jobs that are being performed are now stored in `StartedJobRegistry`
-  for monitoring purposes. This also prevents currently active jobs from
-  being orphaned/lost in the case of hard shutdowns.
-- You can now monitor finished jobs by checking `FinishedJobRegistry`.
-  Thanks Nic Cope for helping!
-- Jobs with unmet dependencies are now created with `deferred` as their
-  status. You can monitor deferred jobs by checking `DeferredJobRegistry`.
-- It is now possible to enqueue a job at the beginning of queue using
-  `queue.enqueue(func, at_front=True)`. Thanks Travis Johnson!
-- Command line scripts have all been refactored to use `click`. Thanks Lyon Zhang!
-- Added a new `SimpleWorker` that does not fork when executing jobs.
-  Useful for testing purposes. Thanks Cal Leeming!
-- Added `--queue-class` and `--job-class` arguments to `rqworker` script.
-  Thanks David Bonner!
-- Many other minor bug fixes and enhancements.
-
-
-### 0.4.6
-(May 21st, 2014)
-
-- Raise a warning when RQ workers are used with Sentry DSNs using
-  asynchronous transports.  Thanks Wei, Selwin & Toms!
-
-
-### 0.4.5
-(May 8th, 2014)
-
-- Fix where rqworker broke on Python 2.6. Thanks, Marko!
-
-
-### 0.4.4
-(May 7th, 2014)
-
-- Properly declare redis dependency.
-- Fix a NameError regression that was introduced in 0.4.3.
-
-
-### 0.4.3
-(May 6th, 2014)
-
-- Make job and queue classes overridable. Thanks, Marko!
-- Don't require connection for @job decorator at definition time. Thanks, Sasha!
-- Syntactic code cleanup.
-
-
-### 0.4.2
-(April 28th, 2014)
-
-- Add missing depends_on kwarg to @job decorator.  Thanks, Sasha!
-
-
-### 0.4.1
-(April 22nd, 2014)
-
-- Fix bug where RQ 0.4 workers could not unpickle/process jobs from RQ < 0.4.
-
-
-### 0.4.0
-(April 22nd, 2014)
-
-- Emptying the failed queue from the command line is now as simple as running
-  `rqinfo -X` or `rqinfo --empty-failed-queue`.
-
-- Job data is unpickled lazily. Thanks, Malthe!
-
-- Removed dependency on the `times` library. Thanks, Malthe!
-
-- Job dependencies!  Thanks, Selwin.
-
-- Custom worker classes, via the `--worker-class=path.to.MyClass` command line
-  argument.  Thanks, Selwin.
-
-- `Queue.all()` and `rqinfo` now report empty queues, too.  Thanks, Rob!
-
-- Fixed a performance issue in `Queue.all()` when issued in large Redis DBs.
-  Thanks, Rob!
-
-- Birth and death dates are now stored as proper datetimes, not timestamps.
-
-- Ability to provide a custom job description (instead of using the default
-  function invocation hint).  Thanks, İbrahim.
-
-- Fix: temporary key for the compact queue is now randomly generated, which
-  should avoid name clashes for concurrent compact actions.
-
-- Fix: `Queue.empty()` now correctly deletes job hashes from Redis.
-
-
-### 0.3.13
-(December 17th, 2013)
-
-- Bug fix where the worker crashes on jobs that have their timeout explicitly
-  removed.  Thanks for reporting, @algrs.
-
-
-### 0.3.12
-(December 16th, 2013)
-
-- Bug fix where a worker could time out before the job was done, removing it
-  from any monitor overviews (#288).
-
-
-### 0.3.11
-(August 23th, 2013)
-
-- Some more fixes in command line scripts for Python 3
-
-
-### 0.3.10
-(August 20th, 2013)
-
-- Bug fix in setup.py
-
-
-### 0.3.9
-(August 20th, 2013)
-
-- Python 3 compatibility (Thanks, Alex!)
-
-- Minor bug fix where Sentry would break when func cannot be imported
-
-
-### 0.3.8
-(June 17th, 2013)
-
-- `rqworker` and `rqinfo` have a  `--url` argument to connect to a Redis url.
-
-- `rqworker` and `rqinfo` have a `--socket` option to connect to a Redis server
-  through a Unix socket.
-
-- `rqworker` reads `SENTRY_DSN` from the environment, unless specifically
-  provided on the command line.
-
-- `Queue` has a new API that supports paging `get_jobs(3, 7)`, which will
-  return at most 7 jobs, starting from the 3rd.
-
-
-### 0.3.7
-(February 26th, 2013)
-
-- Fixed bug where workers would not execute builtin functions properly.
-
-
-### 0.3.6
-(February 18th, 2013)
-
-- Worker registrations now expire.  This should prevent `rqinfo` from reporting
-  about ghosted workers.  (Thanks, @yaniv-aknin!)
-
-- `rqworker` will automatically clean up ghosted worker registrations from
-  pre-0.3.6 runs.
-
-- `rqworker` grew a `-q` flag, to be more silent (only warnings/errors are shown)
-
-
-### 0.3.5
-(February 6th, 2013)
-
-- `ended_at` is now recorded for normally finished jobs, too.  (Previously only
-  for failed jobs.)
-
-- Adds support for both `Redis` and `StrictRedis` connection types
-
-- Makes `StrictRedis` the default connection type if none is explicitly provided
-
-
-### 0.3.4
-(January 23rd, 2013)
-
-- Restore compatibility with Python 2.6.
-
-
-### 0.3.3
-(January 18th, 2013)
-
-- Fix bug where work was lost due to silently ignored unpickle errors.
-
-- Jobs can now access the current `Job` instance from within.  Relevant
-  documentation [here](http://python-rq.org/docs/jobs/).
-
-- Custom properties can be set by modifying the `job.meta` dict.  Relevant
-  documentation [here](http://python-rq.org/docs/jobs/).
-
-- Custom properties can be set by modifying the `job.meta` dict.  Relevant
-  documentation [here](http://python-rq.org/docs/jobs/).
-
-- `rqworker` now has an optional `--password` flag.
-
-- Remove `logbook` dependency (in favor of `logging`)
-
-
-### 0.3.2
-(September 3rd, 2012)
-
-- Fixes broken `rqinfo` command.
-
-- Improve compatibility with Python < 2.7.
-
-
-
-### 0.3.1
-(August 30th, 2012)
-
-- `.enqueue()` now takes a `result_ttl` keyword argument that can be used to
-  change the expiration time of results.
-
-- Queue constructor now takes an optional `async=False` argument to bypass the
-  worker (for testing purposes).
-
-- Jobs now carry status information.  To get job status information, like
-  whether a job is queued, finished, or failed, use the property `status`, or
-  one of the new boolean accessor properties `is_queued`, `is_finished` or
-  `is_failed`.
-
-- Jobs return values are always stored explicitly, even if they have to
-  explicit return value or return `None` (with given TTL of course).  This
-  makes it possible to distinguish between a job that explicitly returned
-  `None` and a job that isn't finished yet (see `status` property).
-
-- Custom exception handlers can now be configured in addition to, or to fully
-  replace, moving failed jobs to the failed queue.  Relevant documentation
-  [here](http://python-rq.org/docs/exceptions/) and
-  [here](http://python-rq.org/patterns/sentry/).
-
-- `rqworker` now supports passing in configuration files instead of the
-  many command line options: `rqworker -c settings` will source
-  `settings.py`.
-
-- `rqworker` now supports one-flag setup to enable Sentry as its exception
-  handler: `rqworker --sentry-dsn="http://public:secret@example.com/1"`
-  Alternatively, you can use a settings file and configure `SENTRY_DSN
-  = 'http://public:secret@example.com/1'` instead.
-
-
-### 0.3.0
-(August 5th, 2012)
-
-- Reliability improvements
-
-    - Warm shutdown now exits immediately when Ctrl+C is pressed and worker is idle
-    - Worker does not leak worker registrations anymore when stopped gracefully
-
-- `.enqueue()` does not consume the `timeout` kwarg anymore.  Instead, to pass
-  RQ a timeout value while enqueueing a function, use the explicit invocation
-  instead:
-
-      ```python
-      q.enqueue(do_something, args=(1, 2), kwargs={'a': 1}, timeout=30)
-      ```
-
-- Add a `@job` decorator, which can be used to do Celery-style delayed
-  invocations:
-
-      ```python
-      from redis import StrictRedis
-      from rq.decorators import job
-
-      # Connect to Redis
-      redis = StrictRedis()
-
-      @job('high', timeout=10, connection=redis)
-      def some_work(x, y):
-          return x + y
-      ```
-
-  Then, in another module, you can call `some_work`:
-
-      ```python
-      from foo.bar import some_work
-
-      some_work.delay(2, 3)
-      ```
-
-
-### 0.2.2
-(August 1st, 2012)
-
-- Fix bug where return values that couldn't be pickled crashed the worker
-
-
-### 0.2.1
-(July 20th, 2012)
-
-- Fix important bug where result data wasn't restored from Redis correctly
-  (affected non-string results only).
-
-
-### 0.2.0
-(July 18th, 2012)
-
-- `q.enqueue()` accepts instance methods now, too.  Objects will be pickle'd
-  along with the instance method, so beware.
-- `q.enqueue()` accepts string specification of functions now, too.  Example:
-  `q.enqueue("my.math.lib.fibonacci", 5)`.  Useful if the worker and the
-  submitter of work don't share code bases.
-- Job can be assigned custom attrs and they will be pickle'd along with the
-  rest of the job's attrs.  Can be used when writing RQ extensions.
-- Workers can now accept explicit connections, like Queues.
-- Various bug fixes.
-
-
-### 0.1.2
-(May 15, 2012)
-
-- Fix broken PyPI deployment.
-
-
-### 0.1.1
-(May 14, 2012)
-
-- Thread-safety by using context locals
-- Register scripts as console_scripts, for better portability
-- Various bugfixes.
-
-
-### 0.1.0:
-(March 28, 2012)
-
-- Initially released version.
diff --git a/Dockerfile b/Dockerfile
deleted file mode 100644
index 231549d..0000000
--- a/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM python:3.8
-
-WORKDIR /root
-
-COPY . /tmp/rq
-
-RUN pip install /tmp/rq
-
-RUN rm -r /tmp/rq
-
-ENTRYPOINT ["rq", "worker"]
diff --git a/Makefile b/Makefile
deleted file mode 100644
index a3ad247..0000000
--- a/Makefile
+++ /dev/null
@@ -1,21 +0,0 @@
-all:
-	@grep -Ee '^[a-z].*:' Makefile | cut -d: -f1 | grep -vF all
-
-clean:
-	rm -rf build/ dist/
-
-test:
-	docker build -f tests/Dockerfile . -t rqtest && docker run -it --rm rqtest
-
-release: clean
-	# Check if latest tag is the current head we're releasing
-	echo "Latest tag = $$(git tag | sort -nr | head -n1)"
-	echo "HEAD SHA       = $$(git sha head)"
-	echo "Latest tag SHA = $$(git tag | sort -nr | head -n1 | xargs git sha)"
-	@test "$$(git sha head)" = "$$(git tag | sort -nr | head -n1 | xargs git sha)"
-	make force_release
-
-force_release: clean
-	git push --tags
-	python setup.py sdist bdist_wheel
-	twine upload dist/*
diff --git a/PKG-INFO b/PKG-INFO
new file mode 100644
index 0000000..e88c319
--- /dev/null
+++ b/PKG-INFO
@@ -0,0 +1,39 @@
+Metadata-Version: 2.1
+Name: rq
+Version: 1.15.1
+Summary: RQ is a simple, lightweight, library for creating background jobs, and processing them.
+Home-page: https://github.com/nvie/rq/
+Author: Vincent Driessen
+Author-email: vincent@3rdcloud.com
+License: BSD
+Platform: any
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: Intended Audience :: End Users/Desktop
+Classifier: Intended Audience :: Information Technology
+Classifier: Intended Audience :: Science/Research
+Classifier: Intended Audience :: System Administrators
+Classifier: License :: OSI Approved :: BSD License
+Classifier: Operating System :: POSIX
+Classifier: Operating System :: MacOS
+Classifier: Operating System :: Unix
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
+Classifier: Programming Language :: Python :: 3.11
+Classifier: Topic :: Software Development :: Libraries :: Python Modules
+Classifier: Topic :: Internet
+Classifier: Topic :: Scientific/Engineering
+Classifier: Topic :: System :: Distributed Computing
+Classifier: Topic :: System :: Systems Administration
+Classifier: Topic :: System :: Monitoring
+Requires-Python: >=3.6
+License-File: LICENSE
+
+
+rq is a simple, lightweight, library for creating background jobs, and
+processing them.
diff --git a/codecov.yml b/codecov.yml
deleted file mode 100644
index 6e566ad..0000000
--- a/codecov.yml
+++ /dev/null
@@ -1,3 +0,0 @@
-ignore:
-  - setup.py
-  - "*/tests/*"
diff --git a/debian/changelog b/debian/changelog
index ffe2265..8905959 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+python-rq (1.15.1-1) UNRELEASED; urgency=low
+
+  * New upstream release.
+
+ -- Debian Janitor <janitor@jelmer.uk>  Sat, 24 Jun 2023 08:31:25 -0000
+
 python-rq (1.15-1) unstable; urgency=medium
 
   * Team upload
diff --git a/dev-requirements-36.txt b/dev-requirements-36.txt
deleted file mode 100644
index 1bf35b6..0000000
--- a/dev-requirements-36.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-packaging==21.3
-coverage==6.2
-psutil
-pytest
-pytest-cov
-sentry-sdk
\ No newline at end of file
diff --git a/dev-requirements.txt b/dev-requirements.txt
deleted file mode 100644
index cbe9464..0000000
--- a/dev-requirements.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-packaging
-coverage
-psutil
-pytest
-pytest-cov
-sentry-sdk
diff --git a/docs/CNAME b/docs/CNAME
deleted file mode 100644
index 7f907ac..0000000
--- a/docs/CNAME
+++ /dev/null
@@ -1 +0,0 @@
-python-rq.org
diff --git a/docs/_config.yml b/docs/_config.yml
deleted file mode 100644
index cf448f4..0000000
--- a/docs/_config.yml
+++ /dev/null
@@ -1,58 +0,0 @@
-baseurl: /
-exclude: design
-permalink: pretty
-
-navigation:
-- text: Home
-  url: /
-- text: Docs
-  url: /docs/
-  subs:
-  - text: Queues
-    url: /docs/
-  - text: Workers
-    url: /docs/workers/
-  - text: Results
-    url: /docs/results/
-  - text: Jobs
-    url: /docs/jobs/
-  - text: Exceptions & Retries
-    url: /docs/exceptions/
-  - text: Scheduling Jobs
-    url: /docs/scheduling/
-  - text: Job Registries
-    url: /docs/job_registries/
-  - text: Monitoring
-    url: /docs/monitoring/
-  - text: Connections
-    url: /docs/connections/  
-  - text: Testing
-    url: /docs/testing/
-- text: Patterns
-  url: /patterns/
-  subs:
-  - text: Heroku
-    url: /patterns/
-  - text: Django
-    url: /patterns/django/
-  - text: Sentry
-    url: /patterns/sentry/
-  - text: Supervisor
-    url: /patterns/supervisor/
-  - text: Systemd
-    url: /patterns/systemd/
-- text: Contributing
-  url: /contrib/
-  subs:
-  - text: Internals
-    url: /contrib/
-  - text: GitHub
-    url: /contrib/github/
-  - text: Documentation
-    url: /contrib/docs/
-  - text: Testing
-    url: /contrib/testing/
-  - text: Vagrant
-    url: /contrib/vagrant/
-- text: Chat
-  url: /chat/
diff --git a/docs/_includes/forward.html b/docs/_includes/forward.html
deleted file mode 100644
index 9c8cdd6..0000000
--- a/docs/_includes/forward.html
+++ /dev/null
@@ -1,6 +0,0 @@
-<script type="text/javascript">
-    // Auto-forward for incoming links on nvie.com
-    if ("nvie.com" === document.location.hostname) {
-        document.location = 'http://python-rq.org';
-    }
-</script>
diff --git a/docs/_includes/ga_tracking.html b/docs/_includes/ga_tracking.html
deleted file mode 100644
index f0c8fb5..0000000
--- a/docs/_includes/ga_tracking.html
+++ /dev/null
@@ -1,20 +0,0 @@
-<script type="text/javascript">
-
-  var _gaq = _gaq || [];
-  _gaq.push(['_setAccount', 'UA-27167945-1']);
-  _gaq.push(['_trackPageview']);
-
-  (function() {
-    var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
-    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
-    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
-  })();
-
-</script>
-
-<script src="https://cdnjs.cloudflare.com/ajax/libs/anchor-js/4.2.0/anchor.min.js"></script>
-<script>
-document.addEventListener("DOMContentLoaded", function(event) {
-  anchors.add();
-});
-</script>
diff --git a/docs/_layouts/chat.html b/docs/_layouts/chat.html
deleted file mode 100644
index 59e769e..0000000
--- a/docs/_layouts/chat.html
+++ /dev/null
@@ -1,16 +0,0 @@
----
-layout: default
----
-<div class="subnav">
-    <ul class="inline">
-    {% for link in site.navigation %}
-        {% if link.url == "/chat/" %}
-            {% for sublink in link.subs %}
-                <li><a href="{{ sublink.url }}">{{ sublink.text }}</a></li>
-            {% endfor %}
-        {% endif %}
-    {% endfor %}
-    </ul>
-</div>
-
-{{ content }}
diff --git a/docs/_layouts/contrib.html b/docs/_layouts/contrib.html
deleted file mode 100644
index 075698d..0000000
--- a/docs/_layouts/contrib.html
+++ /dev/null
@@ -1,16 +0,0 @@
----
-layout: default
----
-<div class="subnav">
-    <ul class="inline">
-    {% for link in site.navigation %}
-        {% if link.url == "/contrib/" %}
-            {% for sublink in link.subs %}
-                <li><a href="{{ sublink.url }}">{{ sublink.text }}</a></li>
-            {% endfor %}
-        {% endif %}
-    {% endfor %}
-    </ul>
-</div>
-
-{{ content }}
diff --git a/docs/_layouts/default.html b/docs/_layouts/default.html
deleted file mode 100644
index afe08f1..0000000
--- a/docs/_layouts/default.html
+++ /dev/null
@@ -1,38 +0,0 @@
-<!DOCTYPE html>
-<base href="{{ site.baseurl }}" />
-<html lang="en">
-  <head>
-    <meta charset="UTF-8">
-    <title>{{ page.title }}</title>
-    <meta content="width=600" name="viewport">
-    <meta content="all" name="robots">
-    <link href="http://fonts.googleapis.com/css?family=Lato:light,regular,regularitalic,lightitalic,bold&amp;v1" media="all" rel="stylesheet" type="text/css">
-    <link href='http://fonts.googleapis.com/css?family=Droid+Sans+Mono' media="all" rel='stylesheet' type='text/css'>
-    <link href="/css/screen.css" media="screen" rel="stylesheet" type="text/css">
-    <link href="/css/syntax.css" media="screen" rel="stylesheet" type="text/css">
-    <link href="/favicon.png" rel="icon" type="image/png">
-  </head>
-  <body>
-    <header>
-        <a href="http://git.io/rq"><img class="nomargin" style="position: absolute; top: 0; right: 0; border: 0;" src="https://s3.amazonaws.com/github/ribbons/forkme_right_orange_ff7600.png" alt="Fork me on GitHub"></a>
-
-        <ul class="inline">
-        {% for link in site.navigation %}
-            <li><a href="{{ link.url }}">{{ link.text }}</a></li>
-        {% endfor %}
-        </ul>
-    </header>
-
-    <section class="container">
-      {{ content }}
-    </section>
-
-    <footer>
-        <p>RQ is written by <a href="http://nvie.com/about">Vincent Driessen</a>.</p>
-        <p>It is open sourced under the terms of the <a href="https://raw.github.com/nvie/rq/master/LICENSE">BSD license</a>.</p>
-    </footer>
-
-    {% include forward.html %}
-    {% include ga_tracking.html %}
-  </body>
-</html>
diff --git a/docs/_layouts/docs.html b/docs/_layouts/docs.html
deleted file mode 100644
index 5687329..0000000
--- a/docs/_layouts/docs.html
+++ /dev/null
@@ -1,16 +0,0 @@
----
-layout: default
----
-<div class="subnav">
-    <ul class="inline">
-    {% for link in site.navigation %}
-        {% if link.url == "/docs/" %}
-            {% for sublink in link.subs %}
-                <li><a href="{{ sublink.url }}">{{ sublink.text }}</a></li>
-            {% endfor %}
-        {% endif %}
-    {% endfor %}
-    </ul>
-</div>
-
-{{ content }}
diff --git a/docs/_layouts/patterns.html b/docs/_layouts/patterns.html
deleted file mode 100644
index 99de008..0000000
--- a/docs/_layouts/patterns.html
+++ /dev/null
@@ -1,16 +0,0 @@
----
-layout: default
----
-<div class="subnav">
-    <ul class="inline">
-    {% for link in site.navigation %}
-        {% if link.url == "/patterns/" %}
-            {% for sublink in link.subs %}
-                <li><a href="{{ sublink.url }}">{{ sublink.text }}</a></li>
-            {% endfor %}
-        {% endif %}
-    {% endfor %}
-    </ul>
-</div>
-
-{{ content }}
diff --git a/docs/chat/index.md b/docs/chat/index.md
deleted file mode 100644
index df37f94..0000000
--- a/docs/chat/index.md
+++ /dev/null
@@ -1,6 +0,0 @@
----
-title: "RQ Discord"
-layout: chat
----
-
-Join our discord [here](https://discord.gg/pYannYntWH){:target="_blank" rel="noopener noreferrer"} if you need help or want to chat about contributions or what should come next in RQ.
diff --git a/docs/contrib/docs.md b/docs/contrib/docs.md
deleted file mode 100644
index 52c5da5..0000000
--- a/docs/contrib/docs.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: "Documentation"
-layout: contrib
----
-
-### Running docs locally
-
-To build the docs, run [jekyll](http://jekyllrb.com/):
-
-```
-jekyll serve
-```
-
-If you'd rather use Vagrant, see [these instructions][v].
-
-[v]: {{site.baseurl}}contrib/vagrant/
diff --git a/docs/contrib/github.md b/docs/contrib/github.md
deleted file mode 100644
index 7445fc0..0000000
--- a/docs/contrib/github.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: "Contributing to RQ"
-layout: contrib
----
-
-If you'd like to contribute to RQ, simply [fork](https://github.com/rq/rq)
-the project on GitHub and submit a pull request.
-
-Please bear in mind the philosophy behind RQ: it should remain small and
-simple rather than packed with features, and it should value insightfulness
-over performance.
diff --git a/docs/contrib/index.md b/docs/contrib/index.md
deleted file mode 100644
index 6925f4e..0000000
--- a/docs/contrib/index.md
+++ /dev/null
@@ -1,66 +0,0 @@
----
-title: "RQ: Simple job queues for Python"
-layout: contrib
----
-
-This document describes how RQ works internally when enqueueing or dequeueing.
-
-
-## Enqueueing internals
-
-Whenever a function call gets enqueued, RQ does two things:
-
-* It creates a job instance representing the delayed function call and persists
-  it in a Redis [hash][h]; and
-* It pushes the given job's ID onto the requested Redis queue.
-
-All jobs are stored in Redis under the `rq:job:` prefix, for example:
-
-    rq:job:55528e58-9cac-4e05-b444-8eded32e76a1
-
-The keys of such a job [hash][h] are:
-
-    created_at  => '2012-02-13 14:35:16+0000'
-    enqueued_at => '2012-02-13 14:35:16+0000'
-    origin      => 'default'
-    data        => <pickled representation of the function call>
-    description => "count_words_at_url('http://nvie.com')"
-
-Depending on whether the job ran successfully or failed, the
-following keys are available, too:
-
-    ended_at    => '2012-02-13 14:41:33+0000'
-    result      => <pickled return value>
-    exc_info    => <exception information>
-
-[h]: http://redis.io/topics/data-types#hashes
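-
-For example, here is a minimal sketch of peeking at such a hash directly with
-redis-py (the job id below is just the example id from above; substitute one
-from your own queue):
-
-```python
-from redis import Redis
-
-redis = Redis()
-key = 'rq:job:55528e58-9cac-4e05-b444-8eded32e76a1'  # example job id
-for field, value in redis.hgetall(key).items():
-    print(field.decode(), value)
-```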
-
-
-## Dequeueing internals
-
-Whenever a dequeue is requested, an RQ worker does the following:
-
-* It pops a job ID from the queue, and fetches the job data belonging to that
-  job ID;
-* It starts executing the function call.
-* If the job succeeds, its return value is written to the `result` hash key and
-  the hash itself is expired after 500 seconds; or
-* If the job fails, the exception information is written to the `exc_info`
-  hash key and the job ID is pushed onto the `failed` queue.
-
-
-## Cancelling jobs
-
-Any job ID that is encountered by a worker for which no job hash is found in
-Redis is simply ignored.  This makes it easy to cancel jobs by simply removing
-the job hash.  In Python:
-
-```python
-from rq import cancel_job
-cancel_job('2eafc1e6-48c2-464b-a0ff-88fd199d039c')
-```
-
-Note that it is irrelevant on which queue the job resides.  When a worker
-eventually pops the job ID from the queue and notes that the job hash does not
-exist (anymore), it simply discards the job ID and continues with the next.
-
diff --git a/docs/contrib/testing.md b/docs/contrib/testing.md
deleted file mode 100644
index 3264e43..0000000
--- a/docs/contrib/testing.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-title: "Testing"
-layout: contrib
----
-
-### Testing RQ locally
-
-To run tests locally you can use `tox`, which will run the tests with all supported Python versions (3.6 - 3.11)
-
-```
-tox
-```
-
-Bear in mind that you need to have all those versions installed in your local environment for that to work.
-
-### Testing with Pytest directly
-
-For a faster and simpler testing alternative you can just run `pytest` directly.
-
-```sh
-pytest .
-```
-
-It should automatically pick up the `tests` directory and run the test suite.
-Bear in mind that some tests may be skipped in your local environment, so make sure to look at which tests are being skipped.
-
-
-### Skipped Tests
-
-Apart from tests skipped because of the interpreter (e.g. `PyPy`) or the operating system, slow tests are also skipped by default, but are run in the GitHub CI/CD workflow.
-To include slow tests in your local environment, use the `RUN_SLOW_TESTS_TOO=1` environment variable:
-
-```sh
-RUN_SLOW_TESTS_TOO=1 pytest .
-```
-
-If you want to analyze the coverage reports, you can use the `--cov` argument to `pytest`. By adding `--cov-report`, you also have some flexibility in terms of the report output format:
-
-```sh
-RUN_SLOW_TESTS_TOO=1 pytest --cov=rq --cov-config=.coveragerc --cov-report={{report_format}} --durations=5
-```
-
-Replace `report_format` with the desired output format (`term` / `html` / `xml`).
-
-### Using Vagrant
-
-If you'd rather use Vagrant, see [these instructions][v].
-
-[v]: {{site.baseurl}}contrib/vagrant/
diff --git a/docs/contrib/vagrant.md b/docs/contrib/vagrant.md
deleted file mode 100644
index c114c7c..0000000
--- a/docs/contrib/vagrant.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title: "Using Vagrant"
-layout: contrib
----
-
-If you don't feel like installing dependencies on your main development
-machine, you can use [Vagrant](https://www.vagrantup.com/).  Here's how you run
-your tests and build the documentation on Vagrant.
-
-
-### Running tests in Vagrant
-
-To create a working Vagrant environment, use the following:
-
-```
-vagrant init ubuntu/trusty64
-vagrant up
-vagrant ssh -- "sudo apt-get -y install redis-server python-dev python-pip"
-vagrant ssh -- "sudo pip install --no-input redis hiredis mock"
-vagrant ssh -- "(cd /vagrant; ./run_tests)"
-```
-
-
-### Running docs on Vagrant
-
-```
-vagrant init ubuntu/trusty64
-vagrant up
-vagrant ssh -- "sudo apt-get -y install ruby-dev nodejs"
-vagrant ssh -- "sudo gem install jekyll"
-vagrant ssh -- "(cd /vagrant; jekyll serve)"
-```
-
-You'll also need to add a port forwarding entry to your `Vagrantfile`:
-
-```
-config.vm.network "forwarded_port", guest: 4000, host: 4001
-```
-
-Then you can access the docs at:
-
-```
-http://127.0.0.1:4001
-```
-
-You may also need to forcibly kill Jekyll after pressing Ctrl+C:
-
-```
-vagrant ssh -- "sudo killall -9 jekyll"
-```
diff --git a/docs/css/reset.css b/docs/css/reset.css
deleted file mode 100644
index e29c0f5..0000000
--- a/docs/css/reset.css
+++ /dev/null
@@ -1,48 +0,0 @@
-/* http://meyerweb.com/eric/tools/css/reset/ 
-   v2.0 | 20110126
-   License: none (public domain)
-*/
-
-html, body, div, span, applet, object, iframe,
-h1, h2, h3, h4, h5, h6, p, blockquote, pre,
-a, abbr, acronym, address, big, cite, code,
-del, dfn, em, img, ins, kbd, q, s, samp,
-small, strike, strong, sub, sup, tt, var,
-b, u, i, center,
-dl, dt, dd, ol, ul, li,
-fieldset, form, label, legend,
-table, caption, tbody, tfoot, thead, tr, th, td,
-article, aside, canvas, details, embed, 
-figure, figcaption, footer, header, hgroup, 
-menu, nav, output, ruby, section, summary,
-time, mark, audio, video {
-	margin: 0;
-	padding: 0;
-	border: 0;
-	font-size: 100%;
-	font: inherit;
-	vertical-align: baseline;
-}
-/* HTML5 display-role reset for older browsers */
-article, aside, details, figcaption, figure, 
-footer, header, hgroup, menu, nav, section {
-	display: block;
-}
-body {
-	line-height: 1;
-}
-ol, ul {
-	list-style: none;
-}
-blockquote, q {
-	quotes: none;
-}
-blockquote:before, blockquote:after,
-q:before, q:after {
-	content: '';
-	content: none;
-}
-table {
-	border-collapse: collapse;
-	border-spacing: 0;
-}
diff --git a/docs/css/screen.css b/docs/css/screen.css
deleted file mode 100644
index 5be991c..0000000
--- a/docs/css/screen.css
+++ /dev/null
@@ -1,300 +0,0 @@
-@import url("reset.css");
-
-body {
-    background: #DBE0DF url(../img/bg.png) 50% 0 repeat-y !important;
-    height: 100%;
-    font-family: system-ui, -apple-system, sans-serif;
-    font-size: 1rem;
-    line-height: 1.55;
-    padding: 0 30px 80px;
-}
-
-header {
-    background: url(../img/ribbon.png) no-repeat 50% 0;
-    max-width: 630px;
-    width: 100%;
-    text-align: center;
-    padding: 240px 0 1em 0;
-    border-bottom: 1px dashed #e1e1e1;
-    margin: 0 auto 2em auto;
-}
-
-li {
-    padding-bottom: 5px;
-}
-
-ul.inline {
-    list-style-type: none;
-    margin: 0;
-    padding: 0;
-}
-
-ul.inline li {
-    display: inline;
-    margin: 0 10px;
-}
-
-.subnav ul.inline li {
-    margin: 0 6px;
-}
-
-header a {
-    color: #3a3a3a;
-    border: 0;
-    font-size: 110%;
-    font-weight: 600;
-    text-decoration: none;
-    transition: color linear 0.1s;
-    -webkit-transition: color linear 0.1s;
-    -moz-transition: color linear 0.1s;
-}
-
-header a:hover {
-    border-bottom-color: rgba(0, 0, 0, 0.1);
-    color: rgba(0, 0, 0, 0.4);
-}
-
-.subnav {
-    text-align: center;
-    font-size: 94%;
-    margin: -3em auto 2em auto;
-}
-
-.subnav li {
-    background-color: white;
-    padding: 0 4px;
-}
-
-.subnav a {
-    text-decoration: none;
-    white-space: nowrap;
-}
-
-.container {
-    margin: 0 auto;
-    max-width: 630px;
-    width: 100%;
-}
-
-footer {
-    margin: 2em auto;
-    max-width: 430px;
-    width: 100%;
-    border-top: 1px dashed #e1e1e1;
-    padding-top: 1em;
-}
-
-footer p {
-    text-align: center;
-    font-size: 90%;
-    font-style: italic;
-    margin-bottom: 0;
-}
-
-footer a {
-    font-weight: 400;
-}
-
-pre,
-pre.highlight {
-    margin: 0 0 1em 1em;
-    padding: 1em 1.8em;
-    color: #222;
-    border-bottom: 1px solid #ccc;
-    border-right: 1px solid #ccc;
-    background: #F3F3F0 url(../img/bq.png) top left no-repeat;
-    line-height: 1.15em;
-    overflow: auto;
-}
-
-code {
-    font-family: "Droid Sans Mono", SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace;
-    font-weight: 400;
-    font-size: 80%;
-    line-height: 0.5em;
-    border: 1px solid #efeaea;
-    padding: 0.2em 0.4em;
-}
-
-pre code {
-    border: none;
-    padding: 0;
-}
-
-h1 {
-    font-size: 280%;
-    font-weight: 400;
-}
-
-.ir {
-    display: block;
-    border: 0;
-    text-indent: -999em;
-    overflow: hidden;
-    background-color: transparent;
-    background-repeat: no-repeat;
-    text-align: left;
-    direction: ltr;
-}
-
-.ir br {
-    display: none;
-}
-
-h1#logo {
-    margin: 0 auto;
-    width: 305px;
-    height: 186px;
-    background-image: url(../img/logo2.png);
-}
-
-/*
-h1:hover:after
-{
-    color: rgba(0, 0, 0, 0.3);
-    content: attr(title);
-    font-size: 60%;
-    font-weight: 300;
-    margin: 0 0 0 0.5em;
-}
-*/
-
-h2 {
-    font-size: 200%;
-    font-weight: 400;
-    margin: 0 0 0.4em;
-}
-
-h3 {
-    font-size: 135%;
-    font-weight: 400;
-    margin: 0 0 0.25em;
-}
-
-p {
-    color: rgba(0, 0, 0, 0.7);
-    margin: 0 0 1em;
-}
-
-p:last-child {
-    margin-bottom: 0;
-}
-
-img {
-    border-radius: 4px;
-    float: left;
-    margin: 6px 12px 15px 0;
-    -moz-border-radius: 4px;
-    -webkit-border-radius: 4px;
-}
-
-.nomargin {
-    margin: 0;
-}
-
-a {
-    border-bottom: 1px solid rgba(65, 131, 196, 0.1);
-    color: rgb(65, 131, 196);
-    font-weight: 600;
-    text-decoration: none;
-    transition: color linear 0.1s;
-    -webkit-transition: color linear 0.1s;
-    -moz-transition: color linear 0.1s;
-}
-
-a:hover {
-    border-bottom-color: rgba(0, 0, 0, 0.1);
-    color: rgba(0, 0, 0, 0.4);
-}
-
-em {
-    font-style: italic;
-}
-
-strong {
-    font-weight: 600;
-}
-
-acronym {
-    border-bottom: 1px dotted rgba(0, 0, 0, 0.1);
-    cursor: help;
-}
-
-blockquote {
-    font-style: italic;
-    padding: 1em;
-}
-
-ul {
-    list-style: circle;
-    margin: 0 0 1em 2em;
-    color: rgba(0, 0, 0, 0.7);
-}
-
-li {
-    font-size: 100%;
-}
-
-ol {
-    list-style-type: decimal;
-    margin: 0 0 1em 2em;
-    color: rgba(0, 0, 0, 0.7);
-}
-
-li {
-    font-size: 100%;
-}
-
-.warning {
-    position: relative;
-    padding: 7px 15px;
-    margin-bottom: 18px;
-    color: #404040;
-    background-color: #eedc94;
-    background-repeat: repeat-x;
-    background-image: -khtml-gradient(linear, left top, left bottom, from(#fceec1), to(#eedc94));
-    background-image: -moz-linear-gradient(top, #fceec1, #eedc94);
-    background-image: -ms-linear-gradient(top, #fceec1, #eedc94);
-    background-image: -webkit-gradient(linear, left top, left bottom, color-stop(0%, #fceec1), color-stop(100%, #eedc94));
-    background-image: -webkit-linear-gradient(top, #fceec1, #eedc94);
-    background-image: -o-linear-gradient(top, #fceec1, #eedc94);
-    background-image: linear-gradient(top, #fceec1, #eedc94);
-    filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#fceec1', endColorstr='#eedc94', GradientType=0);
-    text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25);
-    border-color: #eedc94 #eedc94 #e4c652;
-    border-color: rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.25);
-    text-shadow: 0 1px 0 rgba(255, 255, 255, 0.5);
-    border-width: 1px;
-    border-style: solid;
-    -webkit-border-radius: 4px;
-    -moz-border-radius: 4px;
-    border-radius: 4px;
-    -webkit-box-shadow: inset 0 1px 0 rgba(255, 255, 255, 0.25);
-    -moz-box-shadow: inset 0 1px 0 rgba(255, 255, 255, 0.25);
-    box-shadow: inset 0 1px 0 rgba(255, 255, 255, 0.25);
-}
-
-.alert-message .close {
-    *margin-top: 3px;
-    /* IE7 spacing */
-}
-
-/*
-@media screen and (max-width: 1400px)
-{
-    body
-    {
-        padding-bottom: 60px;
-        padding-top: 60px;
-    }
-}
-
-@media screen and (max-width: 600px)
-{
-    body
-    {
-        padding-bottom: 40px;
-        padding-top: 30px;
-    }
-}
-*/
\ No newline at end of file
diff --git a/docs/css/syntax.css b/docs/css/syntax.css
deleted file mode 100644
index c82ff1f..0000000
--- a/docs/css/syntax.css
+++ /dev/null
@@ -1,61 +0,0 @@
-.highlight  { background: #ffffff; }
-.highlight .c { color: #999988; } /* Comment */
-.highlight .err { color: #a61717; background-color: #e3d2d2 } /* Error */
-.highlight .k { font-weight: bold; color: #555555; } /* Keyword */
-.highlight .kn { font-weight: bold; color: #555555; } /* Keyword */
-.highlight .o { font-weight: bold; color: #555555; } /* Operator */
-.highlight .cm { color: #999988; } /* Comment.Multiline */
-.highlight .cp { color: #999999; font-weight: bold } /* Comment.Preproc */
-.highlight .c1 { color: #999988; } /* Comment.Single */
-.highlight .cs { color: #999999; font-weight: bold; } /* Comment.Special */
-.highlight .gd { color: #000000; background-color: #ffdddd } /* Generic.Deleted */
-.highlight .gd .x { color: #000000; background-color: #ffaaaa } /* Generic.Deleted.Specific */
-.highlight .ge {} /* Generic.Emph */
-.highlight .gr { color: #aa0000 } /* Generic.Error */
-.highlight .gh { color: #999999 } /* Generic.Heading */
-.highlight .gi { color: #000000; background-color: #ddffdd } /* Generic.Inserted */
-.highlight .gi .x { color: #000000; background-color: #aaffaa } /* Generic.Inserted.Specific */
-.highlight .go { color: #888888 } /* Generic.Output */
-.highlight .gp { color: #555555 } /* Generic.Prompt */
-.highlight .gs { font-weight: bold } /* Generic.Strong */
-.highlight .gu { color: #aaaaaa } /* Generic.Subheading */
-.highlight .gt { color: #aa0000 } /* Generic.Traceback */
-.highlight .kc { font-weight: bold } /* Keyword.Constant */
-.highlight .kd { font-weight: bold } /* Keyword.Declaration */
-.highlight .kp { font-weight: bold } /* Keyword.Pseudo */
-.highlight .kr { font-weight: bold } /* Keyword.Reserved */
-.highlight .kt { color: #445588; font-weight: bold } /* Keyword.Type */
-.highlight .m { color: #009999 } /* Literal.Number */
-.highlight .s { color: #d14 } /* Literal.String */
-.highlight .na { color: #008080 } /* Name.Attribute */
-.highlight .nb { color: #0086B3 } /* Name.Builtin */
-.highlight .nc { color: #445588; font-weight: bold } /* Name.Class */
-.highlight .no { color: #008080 } /* Name.Constant */
-.highlight .ni { color: #800080 } /* Name.Entity */
-.highlight .ne { color: #aa0000; font-weight: bold } /* Name.Exception */
-.highlight .nf { color: #aa0000; font-weight: bold } /* Name.Function */
-.highlight .nn { color: #555555 } /* Name.Namespace */
-.highlight .nt { color: #000080 } /* Name.Tag */
-.highlight .nv { color: #008080 } /* Name.Variable */
-.highlight .ow { font-weight: bold } /* Operator.Word */
-.highlight .w { color: #bbbbbb } /* Text.Whitespace */
-.highlight .mf { color: #009999 } /* Literal.Number.Float */
-.highlight .mh { color: #009999 } /* Literal.Number.Hex */
-.highlight .mi { color: #009999 } /* Literal.Number.Integer */
-.highlight .mo { color: #009999 } /* Literal.Number.Oct */
-.highlight .sb { color: #d14 } /* Literal.String.Backtick */
-.highlight .sc { color: #d14 } /* Literal.String.Char */
-.highlight .sd { color: #d14 } /* Literal.String.Doc */
-.highlight .s2 { color: #d14 } /* Literal.String.Double */
-.highlight .se { color: #d14 } /* Literal.String.Escape */
-.highlight .sh { color: #d14 } /* Literal.String.Heredoc */
-.highlight .si { color: #d14 } /* Literal.String.Interpol */
-.highlight .sx { color: #d14 } /* Literal.String.Other */
-.highlight .sr { color: #009926 } /* Literal.String.Regex */
-.highlight .s1 { color: #d14 } /* Literal.String.Single */
-.highlight .ss { color: #990073 } /* Literal.String.Symbol */
-.highlight .bp { color: #999999 } /* Name.Builtin.Pseudo */
-.highlight .vc { color: #008080 } /* Name.Variable.Class */
-.highlight .vg { color: #008080 } /* Name.Variable.Global */
-.highlight .vi { color: #008080 } /* Name.Variable.Instance */
-.highlight .il { color: #009999 } /* Literal.Number.Integer.Long */
diff --git a/docs/design/favicon.psd b/docs/design/favicon.psd
deleted file mode 100644
index 9674bb8..0000000
Binary files a/docs/design/favicon.psd and /dev/null differ
diff --git a/docs/design/rq-logo.psd b/docs/design/rq-logo.psd
deleted file mode 100644
index 52a60c7..0000000
Binary files a/docs/design/rq-logo.psd and /dev/null differ
diff --git a/docs/docs/connections.md b/docs/docs/connections.md
deleted file mode 100644
index 4382cef..0000000
--- a/docs/docs/connections.md
+++ /dev/null
@@ -1,155 +0,0 @@
----
-title: "RQ: Connections"
-layout: docs
----
-
-### The connection parameter
-
-Each RQ object (queues, workers, jobs) has a `connection` keyword
-argument that can be passed to the constructor - this is the recommended way of handling connections.
-
-```python
-from redis import Redis
-from rq import Queue
-
-redis = Redis('localhost', 6789)
-q = Queue(connection=redis)
-```
-
-This pattern allows for different connections to be passed to different objects:
-
-```python
-from rq import Queue
-from redis import Redis
-
-conn1 = Redis('localhost', 6379)
-conn2 = Redis('remote.host.org', 9836)
-
-q1 = Queue('foo', connection=conn1)
-q2 = Queue('bar', connection=conn2)
-```
-
-Every job that is enqueued on a queue will know what connection it belongs to.
-The same goes for the workers.
-
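-As a minimal sketch (reusing `conn1` from above), a worker can be given its
-connection explicitly in the same way:
-
-```python
-from redis import Redis
-from rq import Queue, Worker
-
-conn1 = Redis('localhost', 6379)
-q1 = Queue('foo', connection=conn1)
-
-# The worker consumes 'foo' using the same explicit connection
-worker = Worker([q1], connection=conn1)
-# worker.work()  # start processing jobs (blocking call)
-```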
-
-### Connection contexts (precise and concise)
-
-<div class="warning">
-    <img style="float: right; margin-right: -60px; margin-top: -38px" src="/img/warning.png" />
-    <strong>Note:</strong>
-    <p>
-        The use of <code>Connection</code> context manager is deprecated.
-        Please don't use <code>Connection</code> in your scripts.
-        Instead, use explicit connection management.
-    </p>
-</div>
-
-There is a better approach if you want to use multiple connections, though.
-Each RQ object instance, upon creation, will use the topmost Redis connection
-on the RQ connection stack, which is a mechanism to temporarily replace the
-default connection to be used.
-
-An example will help to understand it:
-
-```python
-from rq import Queue, Connection
-from redis import Redis
-
-with Connection(Redis('localhost', 6379)):
-    q1 = Queue('foo')
-    with Connection(Redis('remote.host.org', 9836)):
-        q2 = Queue('bar')
-    q3 = Queue('qux')
-
-assert q1.connection != q2.connection
-assert q2.connection != q3.connection
-assert q1.connection == q3.connection
-```
-
-You can think of this as if, within the `Connection` context, every newly
-created RQ object instance will have the `connection` argument set implicitly.
-Enqueueing a job with `q2` will enqueue it in the second (remote) Redis
-backend, even when outside of the connection context.
-
-
-### Pushing/popping connections
-
-If your code does not allow you to use a `with` statement, for example, if you
-want to use this to set up a unit test, you can use the `push_connection()` and
-`pop_connection()` methods instead of using the context manager.
-
-```python
-import unittest
-
-from redis import Redis
-from rq import Queue
-from rq import push_connection, pop_connection
-
-class MyTest(unittest.TestCase):
-    def setUp(self):
-        push_connection(Redis())
-
-    def tearDown(self):
-        pop_connection()
-
-    def test_foo(self):
-        """Any queues created here use local Redis."""
-        q = Queue()
-```
-
-### Sentinel support
-
-To use Redis Sentinel, you must specify a dictionary in the configuration file.
-Using this setting in conjunction with systemd or Docker containers with the
-automatic restart option allows workers and RQ to have a fault-tolerant connection to Redis.
-
-```python
-SENTINEL: {
-    'INSTANCES':[('remote.host1.org', 26379), ('remote.host2.org', 26379), ('remote.host3.org', 26379)],
-    'MASTER_NAME': 'master',
-    'DB': 2,
-    'USERNAME': 'redis-user',
-    'PASSWORD': 'redis-secret',
-    'SOCKET_TIMEOUT': None,
-    'CONNECTION_KWARGS': {  # Additional Redis connection arguments
-        'ssl_ca_path': None,
-    },
-    'SENTINEL_KWARGS': {    # Additional Sentinel connection arguments
-        'username': 'sentinel-user',
-        'password': 'sentinel-secret',
-    },
-}
-```
-
-
-### Timeout
-
-To avoid potential issues with hanging Redis commands, specifically the blocking `BLPOP` command,
-RQ automatically sets a `socket_timeout` value that is 10 seconds higher than the `default_worker_ttl`.
-
-If you prefer to manually set the `socket_timeout` value,
-make sure that the value being set is higher than the `default_worker_ttl` (which is 420 by default).
-
-```python
-from redis import Redis
-from rq import Queue
-
-conn = Redis('localhost', 6379, socket_timeout=500)
-q = Queue(connection=conn)
-```
-
-Setting a `socket_timeout` with a lower value than the `default_worker_ttl` will cause a `TimeoutError`
-since it will interrupt the worker while it gets new jobs from the queue.
-
-
-### Encoding / Decoding
-
-The encoding and decoding of Redis objects occur in multiple locations within the codebase,
-which means that the `decode_responses=True` argument of the Redis connection is not currently supported.
-
-```python
-from redis import Redis
-from rq import Queue
-
-conn = Redis(..., decode_responses=True) # This is not supported
-q = Queue(connection=conn)
-```
diff --git a/docs/docs/exceptions.md b/docs/docs/exceptions.md
deleted file mode 100644
index 2ab8473..0000000
--- a/docs/docs/exceptions.md
+++ /dev/null
@@ -1,154 +0,0 @@
----
-title: "RQ: Exceptions & Retries"
-layout: docs
----
-
-Jobs can fail due to exceptions occurring. When your RQ workers run in the
-background, how do you get notified of these exceptions?
-
-## Default: FailedJobRegistry
-
-The default safety net for RQ is the `FailedJobRegistry`. Every job that doesn't
-execute successfully is stored here, along with its exception information (type,
-value, traceback).
-
-```python
-from redis import Redis
-from rq import Queue
-from rq.job import Job
-from rq.registry import FailedJobRegistry
-
-redis = Redis()
-queue = Queue(connection=redis)
-registry = FailedJobRegistry(queue=queue)
-
-# Show all failed job IDs and the exceptions they raised during runtime
-for job_id in registry.get_job_ids():
-    job = Job.fetch(job_id, connection=redis)
-    print(job_id, job.exc_info)
-```
-
-## Retrying Failed Jobs
-
-_New in version 1.5.0_
-
-RQ lets you easily retry failed jobs. To configure retries, use RQ's
-`Retry` object that accepts `max` and `interval` arguments. For example:
-
-```python
-from redis import Redis
-from rq import Retry, Queue
-
-from somewhere import my_func
-
-queue = Queue(connection=Redis())
-# Retry up to 3 times, failed job will be requeued immediately
-queue.enqueue(my_func, retry=Retry(max=3))
-
-# Retry up to 3 times, with 60 seconds interval in between executions
-queue.enqueue(my_func, retry=Retry(max=3, interval=60))
-
-# Retry up to 3 times, with longer interval in between retries
-queue.enqueue(my_func, retry=Retry(max=3, interval=[10, 30, 60]))
-```
-
-<div class="warning">
-    <img style="float: right; margin-right: -60px; margin-top: -38px" src="/img/warning.png" />
-    <strong>Note:</strong>
-    <p>
-        If you use the `interval` argument with `Retry`, don't forget to run your workers using
-        the `--with-scheduler` argument.
-    </p>
-</div>
-
-
-## Custom Exception Handlers
-
-RQ supports registering custom exception handlers. This makes it possible to
-inject your own error handling logic into your workers.
-
-This is how you register custom exception handler(s) to an RQ worker:
-
-```python
-from exception_handlers import foo_handler, bar_handler
-
-w = Worker([q], exception_handlers=[foo_handler, bar_handler])
-```
-
-The handler itself is a function that takes the following parameters: `job`,
-`exc_type`, `exc_value` and `traceback`:
-
-```python
-def my_handler(job, exc_type, exc_value, traceback):
-    # do custom things here
-    # for example, write the exception info to a DB
-
-```
-
-You might also see the three exception arguments encoded as:
-
-```python
-def my_handler(job, *exc_info):
-    # do custom things here
-```
-
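-If you want to rely solely on your own handlers and skip RQ's default one
-(which moves failed jobs to the `FailedJobRegistry`), pass
-`disable_default_exception_handler=True`:
-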
-```python
-from exception_handlers import foo_handler
-
-w = Worker([q], exception_handlers=[foo_handler],
-           disable_default_exception_handler=True)
-```
-
-
-## Chaining Exception Handlers
-
-The handler itself is responsible for deciding whether or not the exception
-handling is done, or should fall through to the next handler on the stack.
-The handler can indicate this by returning a boolean. `False` means stop
-processing exceptions, `True` means continue and fall through to the next
-exception handler on the stack.
-
-It's important for implementers to know that, by default, when a handler
-doesn't have an explicit return value (thus `None`), this will be interpreted
-as `True` (i.e. continue with the next handler).
-
-To prevent the next exception handler in the handler chain from executing,
-use a custom exception handler that doesn't fall through, for example:
-
-```python
-def black_hole(job, *exc_info):
-    return False
-```
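-
-For contrast, here is a hedged sketch of a handler that does fall through: it
-reports the failure and returns `True`, so any remaining handlers (including
-the default one) still run. As in the snippets above, `q` stands for a queue
-you have already created.
-
-```python
-def log_and_continue(job, exc_type, exc_value, traceback):
-    # Report the failure, then fall through to the remaining handlers
-    print('Job %s failed: %s' % (job.id, exc_value))
-    return True  # returning None would be treated the same way
-
-w = Worker([q], exception_handlers=[log_and_continue])
-```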
-
-## Work Horse Killed Handler
-_New in version 1.13.0._
-
-In addition to job exception handler(s), RQ supports registering a handler for unexpected workhorse termination.
-This handler is called when a workhorse is unexpectedly terminated, for example due to OOM.
-
-This is how you set a workhorse termination handler to an RQ worker:
-
-```python
-from my_handlers import my_work_horse_killed_handler
-
-w = Worker([q], work_horse_killed_handler=my_work_horse_killed_handler)
-```
-
-The handler itself is a function that takes the following parameters: `job`,
-`retpid`, `ret_val` and `rusage`:
-
-```python
-from resource import struct_rusage
-from rq.job import Job
-def my_work_horse_killed_handler(job: Job, retpid: int, ret_val: int, rusage: struct_rusage):
-    # do your thing here, for example set job.retries_left to 0 
-
-```
-
-## Built-in Exceptions
-These are RQ exceptions you may receive in your job failure callbacks.
-
-### AbandonedJobError
-This error means an unfinished job was collected by another worker's maintenance task.  
-This usually happens when a worker is busy with a job and is terminated before it finished that job.  
-Another worker collects this job and moves it to the FailedJobRegistry.
\ No newline at end of file
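-
-As a hedged sketch, a failure callback (see the Job Callbacks documentation)
-could special-case this exception; the import path below assumes
-`AbandonedJobError` lives in `rq.exceptions` in your RQ version:
-
-```python
-from rq.exceptions import AbandonedJobError  # assumed import path
-
-def report_failure(job, connection, type, value, traceback):
-    if type is AbandonedJobError:
-        # The worker running this job died; decide whether to re-enqueue it
-        print('Job %s was abandoned' % job.id)
-    else:
-        print('Job %s failed: %s' % (job.id, value))
-```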
diff --git a/docs/docs/index.md b/docs/docs/index.md
deleted file mode 100644
index aa3b2c9..0000000
--- a/docs/docs/index.md
+++ /dev/null
@@ -1,467 +0,0 @@
----
-title: "RQ: Documentation Overview"
-layout: docs
----
-
-A _job_ is a Python object, representing a function that is invoked
-asynchronously in a worker (background) process.  Any Python function can be
-invoked asynchronously, by simply pushing a reference to the function and its
-arguments onto a queue.  This is called _enqueueing_.
-
-
-## Enqueueing Jobs
-
-To put jobs on queues, first declare a function:
-
-```python
-import requests
-
-def count_words_at_url(url):
-    resp = requests.get(url)
-    return len(resp.text.split())
-```
-
-Noticed anything?  There's nothing special about this function!  Any Python
-function call can be put on an RQ queue.
-
-To put this potentially expensive word count for a given URL in the background,
-simply do this:
-
-```python
-from rq import Queue
-from redis import Redis
-from somewhere import count_words_at_url
-import time
-
-# Tell RQ what Redis connection to use
-redis_conn = Redis()
-q = Queue(connection=redis_conn)  # no args implies the default queue
-
-# Delay execution of count_words_at_url('http://nvie.com')
-job = q.enqueue(count_words_at_url, 'http://nvie.com')
-print(job.result)   # => None  # Changed to job.return_value() in RQ >= 1.12.0
-
-# Now, wait a while, until the worker is finished
-time.sleep(2)
-print(job.result)   # => 889  # Changed to job.return_value() in RQ >= 1.12.0
-```
-
-If you want to put the work on a specific queue, simply specify its name:
-
-```python
-q = Queue('low', connection=redis_conn)
-q.enqueue(count_words_at_url, 'http://nvie.com')
-```
-
-Notice the `Queue('low')` in the example above?  You can use any queue name, so
-you can quite flexibly distribute work to your own desire.  A common naming
-pattern is to name your queues after priorities (e.g.  `high`, `medium`,
-`low`).
-
-In addition, you can add a few options to modify the behaviour of the queued
-job. By default, these are popped out of the kwargs that will be passed to the
-job function.
-
-* `job_timeout` specifies the maximum runtime of the job before it's interrupted
-    and marked as `failed`. Its default unit is seconds, and it can be an integer or a string representing an integer (e.g. `2`, `'2'`). It can also be a string with an explicit unit of hours, minutes, or seconds (e.g. `'1h'`, `'3m'`, `'5s'`).
-* `result_ttl` specifies how long (in seconds) successful jobs and their
-results are kept. Expired jobs will be automatically deleted. Defaults to 500 seconds.
-* `ttl` specifies the maximum queued time (in seconds) of the job before it's discarded.
-  This argument defaults to `None` (infinite TTL).
-* `failure_ttl` specifies how long failed jobs are kept (defaults to 1 year)
-* `depends_on` specifies another job (or list of jobs) that must complete before this
-  job will be queued.
-* `job_id` allows you to manually specify this job's `job_id`
-* `at_front` will place the job at the *front* of the queue, instead of the
-  back
-* `description` to add additional description to enqueued jobs.
-* `on_success` allows you to run a function after a job completes successfully
-* `on_failure` allows you to run a function after a job fails
-* `on_stopped` allows you to run a function after a job is stopped
-* `args` and `kwargs`: use these to explicitly pass arguments and keyword arguments to the
-  underlying job function. This is useful if your function happens to have
-  conflicting argument names with RQ, for example `description` or `ttl`.
-
-In the last case, if you want to pass `description` and `ttl` keyword arguments
-to your job and not to RQ's enqueue function, this is what you do:
-
-```python
-q = Queue('low', connection=redis_conn)
-q.enqueue(count_words_at_url,
-          ttl=30,  # This ttl will be used by RQ
-          args=('http://nvie.com',),
-          kwargs={
-              'description': 'Function description', # This is passed on to count_words_at_url
-              'ttl': 15  # This is passed on to count_words_at_url function
-          })
-```
-
-For cases where the web process doesn't have access to the source code running
-in the worker (i.e. code base X invokes a delayed function from code base Y),
-you can pass the function as a string reference, too.
-
-```python
-q = Queue('low', connection=redis_conn)
-q.enqueue('my_package.my_module.my_func', 3, 4)
-```
-
-### Bulk Job Enqueueing
-_New in version 1.9.0._  
-You can also enqueue multiple jobs in bulk with `queue.enqueue_many()` and `Queue.prepare_data()`:
-
-```python
-jobs = q.enqueue_many(
-  [
-    Queue.prepare_data(count_words_at_url, ('http://nvie.com',), job_id='my_job_id'),
-    Queue.prepare_data(count_words_at_url, ('http://nvie.com',), job_id='my_other_job_id'),
-  ]
-)
-```
-
-which will enqueue all the jobs in a single Redis `pipeline`, which you can optionally pass in yourself:
-
-```python
-with q.connection.pipeline() as pipe:
-  jobs = q.enqueue_many(
-    [
-      Queue.prepare_data(count_words_at_url, ('http://nvie.com',), job_id='my_job_id'),
-      Queue.prepare_data(count_words_at_url, ('http://nvie.com',), job_id='my_other_job_id'),
-    ],
-    pipeline=pipe
-  )
-  pipe.execute()
-```
-
-`Queue.prepare_data` accepts all arguments that `Queue.parse_args` does.
-
-## Job dependencies
-
-RQ allows you to chain the execution of multiple jobs.
-To execute a job that depends on another job, use the `depends_on` argument:
-
-```python
-q = Queue('low', connection=my_redis_conn)
-report_job = q.enqueue(generate_report)
-q.enqueue(send_report, depends_on=report_job)
-```
-
-Specifying multiple dependencies is also supported:
-
-```python
-queue = Queue('low', connection=redis)
-foo_job = queue.enqueue(foo)
-bar_job = queue.enqueue(bar)
-baz_job = queue.enqueue(baz, depends_on=[foo_job, bar_job])
-```
-
-The ability to handle job dependencies allows you to split a big job into
-several smaller ones. By default, a job that is dependent on another is enqueued only when
-its dependency finishes *successfully*.
-
-_New in 1.11.0._
-
-If you want a dependent job to execute even when the job it depends on fails, RQ provides
-the `Dependency` class, which lets you dictate how such failures are handled.
-
-The `Dependency(jobs=...)` parameter accepts:
-- a string representing a single job id
-- a Job object
-- an iterable of job id strings and/or Job objects
-
-The `enqueue_at_front` boolean parameter puts dependents at the front of the queue when they are enqueued.
-
-Example:
-
-```python
-from redis import Redis
-from rq.job import Dependency
-from rq import Queue
-
-queue = Queue(connection=Redis())
-job_1 = queue.enqueue(div_by_zero)
-dependency = Dependency(
-    jobs=[job_1],
-    allow_failure=True,    # allow_failure defaults to False
-    enqueue_at_front=True  # enqueue_at_front defaults to False  
-)
-job_2 = queue.enqueue(say_hello, depends_on=dependency)
-
-"""
-  job_2 will execute even though its dependency (job_1) fails,
-  and it will be enqueued at the front of the queue.
-"""
-```
-
-
-## Job Callbacks
-_New in version 1.9.0._
-
-If you want to execute a function whenever a job completes, fails, or is stopped, RQ provides
-`on_success`, `on_failure`, and `on_stopped` callbacks.
-
-```python
-queue.enqueue(say_hello, on_success=report_success, on_failure=report_failure, on_stopped=report_stopped)
-```
-
-### Callback Class and Callback Timeouts
-
-_New in version 1.14.0_
-
-RQ lets you configure the method and timeout for each callback - success, failure, and stopped.   
-To configure callback timeouts, use RQ's
-`Callback` object that accepts `func` and `timeout` arguments. For example:
-
-```python
-from rq import Callback
-queue.enqueue(say_hello, 
-              on_success=Callback(report_success),  # default callback timeout (60 seconds) 
-              on_failure=Callback(report_failure, timeout=10), # 10 seconds timeout
-              on_stopped=Callback(report_stopped, timeout="2m")) # 2 minute timeout  
-```
-
-You can also pass the function as a string reference: `Callback('my_package.my_module.my_func')`
-
-### Success Callback
-
-Success callbacks must be a function that accepts `job`, `connection` and `result` arguments.
-Your function should also accept `*args` and `**kwargs` so your application doesn't break
-when additional parameters are added.
-
-```python
-def report_success(job, connection, result, *args, **kwargs):
-    pass
-```
-
-Success callbacks are executed after job execution is complete, before dependents are enqueued.
-If an exception happens when your callback is executed, job status will be set to `FAILED`
-and dependents won't be enqueued.
-
-Callbacks are limited to 60 seconds of execution time. If you want to execute a long running job,
-consider using RQ's job dependency feature instead.
-
-
-### Failure Callbacks
-
-Failure callbacks are functions that accept `job`, `connection`, `type`, `value` and `traceback`
-arguments. `type`, `value` and `traceback` are the values returned by [sys.exc_info()](https://docs.python.org/3/library/sys.html#sys.exc_info) for the exception raised while executing your job.
-
-```python
-def report_failure(job, connection, type, value, traceback):
-    pass
-```
-
-Failure callbacks are limited to 60 seconds of execution time.
-
-
-### Stopped Callbacks
-
-Stopped callbacks are functions that accept `job` and `connection` arguments.
-
-```python
-def report_stopped(job, connection):
-  pass
-```
-
-Stopped callbacks are functions that are executed when a worker receives a command to stop
-a job that is currently executing. See [Stopping a Job](https://python-rq.org/docs/workers/#stopping-a-job).
-
-
-### CLI Enqueueing
-
-_New in version 1.10.0._
-
-If you prefer enqueueing jobs via the command line interface, or do not use Python,
-you can use this.
-
-
-#### Usage:
-```bash
-rq enqueue [OPTIONS] FUNCTION [ARGUMENTS]
-```
-
-#### Options:
-* `-q, --queue [value]`      The name of the queue.
-* `--timeout [value]`        Specifies the maximum runtime of the job before it is
-                               interrupted and marked as failed.
-* `--result-ttl [value]`     Specifies how long successful jobs and their results
-                               are kept.
-* `--ttl [value]`            Specifies the maximum queued time of the job before
-                               it is discarded.
-* `--failure-ttl [value]`    Specifies how long failed jobs are kept.
-* `--description [value]`    Additional description of the job
-* `--depends-on [value]`     Specifies another job id that must complete before this
-                               job will be queued.
-* `--job-id [value]`         The id of this job
-* `--at-front`               Will place the job at the front of the queue, instead
-                               of the end
-* `--retry-max [value]`      Maximum number of retries
-* `--retry-interval [value]` Interval between retries in seconds
-* `--schedule-in [value]`    Delay until the function is enqueued (e.g. 10s, 5m, 2d).
-* `--schedule-at [value]`    Schedule job to be enqueued at a certain time formatted
-                               in ISO 8601 without timezone (e.g. 2021-05-27T21:45:00).
-* `--quiet`                  Only logs errors.
-
-#### Function:
-There are two options:
-* Execute a function: dot-separated string of package, module and function (Just like
-    passing a string to `queue.enqueue()`).
-* Execute a Python file: dot-separated pathname of the file. Because it is technically
-    an import, `__name__ == '__main__'` will not work.
-
-#### Arguments:
-
-|            | plain text      | json             | [literal-eval](https://docs.python.org/3/library/ast.html#ast.literal_eval) |
-| ---------- | --------------- | ---------------- | --------------------------------------------------------------------------- |
-| keyword    | `[key]=[value]` | `[key]:=[value]` | `[key]%=[value]`                                                            |
-| no keyword | `[value]`       | `:[value]`       | `%[value]`                                                                  |
-
-Where `[key]` is the keyword and `[value]` is the value which is parsed with the corresponding
-parsing method.
-
-If the first character of `[value]` is `@` the subsequent path will be read.
-
-##### Examples:
-
-* `rq enqueue path.to.func abc` -> `queue.enqueue(path.to.func, 'abc')`
-* `rq enqueue path.to.func abc=def` -> `queue.enqueue(path.to.func, abc='def')`
-* `rq enqueue path.to.func ':{"json": "abc"}'` -> `queue.enqueue(path.to.func, {'json': 'abc'})`
-* `rq enqueue path.to.func 'key:={"json": "abc"}'` -> `queue.enqueue(path.to.func, key={'json': 'abc'})`
-* `rq enqueue path.to.func '%1, 2'` -> `queue.enqueue(path.to.func, (1, 2))`
-* `rq enqueue path.to.func '%None'` -> `queue.enqueue(path.to.func, None)`
-* `rq enqueue path.to.func '%True'` -> `queue.enqueue(path.to.func, True)`
-* `rq enqueue path.to.func 'key%=(1, 2)'` -> `queue.enqueue(path.to.func, key=(1, 2))`
-* `rq enqueue path.to.func 'key%={"foo": True}'` -> `queue.enqueue(path.to.func, key={"foo": True})`
-* `rq enqueue path.to.func @path/to/file` -> `queue.enqueue(path.to.func, open('path/to/file', 'r').read())`
-* `rq enqueue path.to.func key=@path/to/file` -> `queue.enqueue(path.to.func, key=open('path/to/file', 'r').read())`
-* `rq enqueue path.to.func :@path/to/file.json` -> `queue.enqueue(path.to.func, json.loads(open('path/to/file.json', 'r').read()))`
-* `rq enqueue path.to.func key:=@path/to/file.json` -> `queue.enqueue(path.to.func, key=json.loads(open('path/to/file.json', 'r').read()))`
-
-**Warning:** Do not use plain text without keyword if you do not know what the value is.
-If the value starts with `@`, `:` or `%` or includes `=` it would be recognised as something else.
-
-
-## Working with Queues
-
-Besides enqueuing jobs, Queues have a few useful methods:
-
-```python
-from rq import Queue
-from redis import Redis
-
-redis_conn = Redis()
-q = Queue(connection=redis_conn)
-
-# Getting the number of jobs in the queue
-# Note: Only queued jobs are counted, not including deferred ones
-print(len(q))
-
-# Retrieving jobs
-queued_job_ids = q.job_ids # Gets a list of job IDs from the queue
-queued_jobs = q.jobs # Gets a list of enqueued job instances
-job = q.fetch_job('my_id') # Returns job having ID "my_id"
-
-# Emptying a queue, this will delete all jobs in this queue
-q.empty()
-
-# Deleting a queue
-q.delete(delete_jobs=True) # Passing in `True` will remove all jobs in the queue
-# queue is now unusable. It can be recreated by enqueueing jobs to it.
-```
-
-
-### On the Design
-
-With RQ, you don't have to set up any queues upfront, and you don't have to
-specify any channels, exchanges, routing rules, or whatnot.  You can just put
-jobs onto any queue you want.  As soon as you enqueue a job to a queue that
-does not exist yet, it is created on the fly.
-
-RQ does _not_ use an advanced broker to do the message routing for you.  You
-may consider this an awesome advantage or a handicap, depending on the problem
-you're solving.
-
-Lastly, it does not speak a portable protocol, since it depends on [pickle][p]
-to serialize the jobs, so it's a Python-only system.
-
-
-## The delayed result
-
-When jobs get enqueued, the `queue.enqueue()` method returns a `Job` instance.
-This is nothing more than a proxy object that can be used to check the outcome
-of the actual job.
-
-For this purpose, it has a convenience `result` accessor property that
-will return `None` when the job is not yet finished, or a non-`None` value when
-the job has finished (assuming the job _has_ a return value in the first place,
-of course).
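-
-A minimal sketch of polling that property (this simply mirrors the earlier
-`count_words_at_url` example and assumes a worker is running):
-
-```python
-import time
-
-from redis import Redis
-from rq import Queue
-from somewhere import count_words_at_url
-
-q = Queue(connection=Redis())
-job = q.enqueue(count_words_at_url, 'http://nvie.com')
-
-# job.result stays None until a worker has finished the job
-while job.result is None and job.get_status() != 'failed':
-    time.sleep(0.5)
-
-print(job.result)
-```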
-
-
-## The `@job` decorator
-If you're familiar with Celery, you might be used to its `@task` decorator.
-Starting from RQ >= 0.3, there exists a similar decorator:
-
-```python
-from rq.decorators import job
-
-@job('low', connection=my_redis_conn, timeout=5)
-def add(x, y):
-    return x + y
-
-job = add.delay(3, 4)
-time.sleep(1)
-print(job.result)  # Changed to job.return_value() in RQ >= 1.12.0
-```
-
-
-## Bypassing workers
-
-For testing purposes, you can enqueue jobs without delegating the actual
-execution to a worker (available since version 0.3.1). To do this, pass the
-`is_async=False` argument into the Queue constructor:
-
-```python
->>> q = Queue('low', is_async=False, connection=my_redis_conn)
->>> job = q.enqueue(fib, 8)
->>> job.result
-21
-```
-
-The above code runs without an active worker and executes `fib(8)`
-synchronously within the same process. You may know this behaviour from Celery
-as `ALWAYS_EAGER`. Note, however, that you still need a working connection to
-a redis instance for storing states related to job execution and completion.
-
-
-## The worker
-
-To learn about workers, see the [workers][w] documentation.
-
-[w]: {{site.baseurl}}workers/
-
-
-## Considerations for jobs
-
-Technically, you can put any Python function call on a queue, but that does not
-mean it's always wise to do so.  Some things to consider before putting a job
-on a queue:
-
-* Make sure that the function's `__module__` is importable by the worker.  In
-  particular, this means that you cannot enqueue functions that are declared in
-  the `__main__` module.
-* Make sure that the worker and the work generator share _exactly_ the same
-  source code.
-* Make sure that the function call does not depend on its context.  In
-  particular, global variables are evil (as always), but also _any_ state that
-  the function depends on (for example a "current" user or "current" web
-  request) will not be there when the worker processes it.  If you want work
-  done for the "current" user, you should resolve that user to a concrete
-  instance and pass a reference to that user object to the job as an argument
-  (see the sketch after this list).
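-
-As a sketch of that last point (the helper names and `current_user` below are
-hypothetical), resolve the "current" state to plain values before enqueueing
-and look everything up again inside the job:
-
-```python
-from redis import Redis
-from rq import Queue
-
-def send_welcome_email(user_id):
-    user = get_user_by_id(user_id)      # hypothetical lookup by plain id
-    send_email(user.email, 'Welcome!')  # hypothetical mail helper
-
-q = Queue(connection=Redis())
-# Pass the id of the "current" user, not the request-bound user object itself
-q.enqueue(send_welcome_email, current_user.id)
-```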
-
-
-## Limitations
-
-RQ workers will only run on systems that implement `fork()`.  Most notably,
-this means it is not possible to run the workers on Windows without using the [Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/about) and running in a bash shell.
-
-
-[m]: http://pypi.python.org/pypi/mailer
-[p]: http://docs.python.org/library/pickle.html
diff --git a/docs/docs/job_registries.md b/docs/docs/job_registries.md
deleted file mode 100644
index 5740c6d..0000000
--- a/docs/docs/job_registries.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-title: "RQ: Job Registries"
-layout: docs
----
-
-Each queue maintains a set of Job Registries:
-* `StartedJobRegistry` Holds currently executing jobs. Jobs are added right before they are 
-executed and removed right after completion (success or failure).
-* `FinishedJobRegistry` Holds successfully completed jobs.
-* `FailedJobRegistry` Holds jobs that have been executed, but didn't finish successfully.
-* `DeferredJobRegistry` Holds deferred jobs (jobs that depend on another job and are waiting for that 
-job to finish).
-* `ScheduledJobRegistry` Holds scheduled jobs.
-* `CanceledJobRegistry` Holds canceled jobs.
-
-You can get the number of jobs in a registry, the ids of the jobs in the registry, and more. 
-Below is an example using a `StartedJobRegistry`.
-```python
-import time
-from redis import Redis
-from rq import Queue
-from rq.registry import StartedJobRegistry
-from somewhere import count_words_at_url
-
-redis = Redis()
-queue = Queue(connection=redis)
-job = queue.enqueue(count_words_at_url, 'http://nvie.com')
-
-# get StartedJobRegistry by queue
-registry = StartedJobRegistry(queue=queue)
-
-# or get StartedJobRegistry by queue name and connection
-registry2 = StartedJobRegistry(name='my_queue', connection=redis)
-
-# sleep for a moment while job is taken off the queue
-time.sleep(0.1)
-
-print('Queue associated with the registry: %s' % registry.get_queue())
-print('Number of jobs in registry %s' % registry.count)
-
-# get the list of ids for the jobs in the registry
-print('IDs in registry %s' % registry.get_job_ids())
-
-# test if a job is in the registry using the job instance or job id
-print('Job in registry %s' % (job in registry))
-print('Job in registry %s' % (job.id in registry))
-```
-
-_New in version 1.2.0_
-
-You can quickly access job registries from `Queue` objects.
-
-```python
-from redis import Redis
-from rq import Queue
-
-redis = Redis()
-queue = Queue(connection=redis)
-
-queue.started_job_registry  # Returns StartedJobRegistry
-queue.deferred_job_registry   # Returns DeferredJobRegistry
-queue.finished_job_registry  # Returns FinishedJobRegistry
-queue.failed_job_registry  # Returns FailedJobRegistry
-queue.scheduled_job_registry  # Returns ScheduledJobRegistry
-```
-
-## Removing Jobs
-
-_New in version 1.2.0_
-
-To remove a job from a job registry, use `registry.remove()`. This is useful
-when you want to manually remove jobs from a registry, such as deleting failed
-jobs before they expire from `FailedJobRegistry`.
-
-```python
-from redis import Redis
-from rq import Queue
-from rq.registry import FailedJobRegistry
-
-redis = Redis()
-queue = Queue(connection=redis)
-registry = FailedJobRegistry(queue=queue)
-
-# This is how to remove a job from a registry
-for job_id in registry.get_job_ids():
-    registry.remove(job_id)
-
-# If you want to remove a job from a registry AND delete the job,
-# use `delete_job=True`
-for job_id in registry.get_job_ids():
-    registry.remove(job_id, delete_job=True)
-```
diff --git a/docs/docs/jobs.md b/docs/docs/jobs.md
deleted file mode 100644
index fd5bab3..0000000
--- a/docs/docs/jobs.md
+++ /dev/null
@@ -1,355 +0,0 @@
----
-title: "RQ: Jobs"
-layout: docs
----
-
-For some use cases it might be useful to have access to the current job ID or
-instance from within the job function itself, or to store arbitrary data on
-jobs.
-
-
-## RQ's Job Object
-
-### Job Creation
-
-When you enqueue a function, a job will be returned.  You may then access the
-id property, which can later be used to retrieve the job.
-
-```python
-from rq import Queue
-from redis import Redis
-from somewhere import count_words_at_url
-
-redis_conn = Redis()
-q = Queue(connection=redis_conn)  # no args implies the default queue
-
-# Delay execution of count_words_at_url('http://nvie.com')
-job = q.enqueue(count_words_at_url, 'http://nvie.com')
-print('Job id: %s' % job.id)
-```
-
-Or if you want a predetermined job id, you may specify it when creating the job.
-
-```python
-job = q.enqueue(count_words_at_url, 'http://nvie.com', job_id='my_job_id')
-```
-
-A job can also be created directly with `Job.create()`.
-
-```python
-from rq.job import Job
-
-job = Job.create(count_words_at_url, 'http://nvie.com')
-print('Job id: %s' % job.id)
-q.enqueue_job(job)
-
-# create a job with a predetermined id
-job = Job.create(count_words_at_url, 'http://nvie.com', id='my_job_id')
-```
-
-The keyword arguments accepted by `create()` are:
-
-* `timeout` specifies the maximum runtime of the job before it's interrupted
-  and marked as `failed`. Its default unit is seconds and it can be an integer
-  or a string representing an integer(e.g.  `2`, `'2'`). Furthermore, it can
-  be a string with specify unit including hour, minute, second
-  (e.g. `'1h'`, `'3m'`, `'5s'`).
-* `result_ttl` specifies how long (in seconds) successful jobs and their
-  results are kept. Expired jobs will be automatically deleted. Defaults to 500 seconds.
-* `ttl` specifies the maximum queued time (in seconds) of the job before it's discarded.
-  This argument defaults to `None` (infinite TTL).
-* `failure_ttl` specifies how long (in seconds) failed jobs are kept (defaults to 1 year)
-* `depends_on` specifies another job (or job id) that must complete before this
-  job will be queued.
-* `id` allows you to manually specify this job's id
-* `description` to add additional description to the job
-* `connection`
-* `status`
-* `origin` where this job was originally enqueued
-* `meta` a dictionary holding custom status information on this job
-* `args` and `kwargs`: use these to explicitly pass arguments and keyword arguments to the
-  underlying job function. This is useful if your function happens to have
-  conflicting argument names with RQ, for example `description` or `ttl`.
-
-In the last case, if you want to pass `description` and `ttl` keyword arguments
-to your job and not to RQ's enqueue function, this is what you do:
-
-```python
-job = Job.create(count_words_at_url,
-          ttl=30,  # This ttl will be used by RQ
-          args=('http://nvie.com',),
-          kwargs={
-              'description': 'Function description', # This is passed on to count_words_at_url
-              'ttl': 15  # This is passed on to count_words_at_url function
-          })
-```
-
-### Retrieving Jobs
-
-All job information is stored in Redis. You can inspect a job and its attributes
-by using `Job.fetch()`.
-
-```python
-from redis import Redis
-from rq.job import Job
-
-redis = Redis()
-job = Job.fetch('my_job_id', connection=redis)
-print('Status: %s' % job.get_status())
-```
-
-Some interesting job attributes include:
-* `job.get_status(refresh=True)` Possible values are `queued`, `started`,
-  `deferred`, `finished`, `stopped`, `scheduled`, `canceled` and `failed`. If `refresh` is
-  `True` fresh values are fetched from Redis.
-* `job.get_meta(refresh=True)` Returns custom `job.meta` dict containing user
-  stored data. If `refresh` is `True` fresh values are fetched from Redis.
-* `job.origin` queue name of this job
-* `job.func_name`
-* `job.args` arguments passed to the underlying job function
-* `job.kwargs` keyword arguments passed to the underlying job function
-* `job.result` stores the return value of the job being executed, will return `None` prior to job execution. Results are kept according to the `result_ttl` parameter (500 seconds by default).
-* `job.enqueued_at`
-* `job.started_at`
-* `job.ended_at`
-* `job.exc_info` stores exception information if job doesn't finish successfully.
-* `job.last_heartbeat` the latest timestamp that's periodically updated when the job is executing. Can be used to determine if the job is still active.
-* `job.worker_name` returns the worker name currently executing this job.
-* `job.refresh()` Update the job instance object with values fetched from Redis.
-
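-A short sketch using a few of these attributes, reusing the `redis` connection
-and `my_job_id` from the snippet above:
-
-```python
-job = Job.fetch('my_job_id', connection=redis)
-print(job.origin, job.func_name, job.args)
-
-job.refresh()  # re-read the latest values from Redis
-print(job.get_status(), job.result)
-```
-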
-If you want to efficiently fetch a large number of jobs, use `Job.fetch_many()`.
-
-```python
-jobs = Job.fetch_many(['foo_id', 'bar_id'], connection=redis)
-for job in jobs:
-    print('Job %s: %s' % (job.id, job.func_name))
-```
-
-## Stopping a Currently Executing Job
-_New in version 1.7.0_
-
-You can use `send_stop_job_command()` to tell a worker to immediately stop a currently executing job. A job that's stopped will be sent to [FailedJobRegistry](https://python-rq.org/docs/results/#dealing-with-exceptions).
-
-```python
-from redis import Redis
-from rq.command import send_stop_job_command
-
-redis = Redis()
-
-# This will raise an exception if job is invalid or not currently executing
-send_stop_job_command(redis, job_id)
-```
-
-Unlike failed jobs, stopped jobs will *not* be automatically retried if retry is configured. Subclasses of `Worker` which override `handle_job_failure()` should likewise take care to handle jobs with a `stopped` status appropriately.
-
-## Canceling a Job
-_New in version 1.10.0_
-
-To prevent a job from running, i.e. to cancel it, use `job.cancel()`.
-
-```python
-from redis import Redis
-from rq.job import Job
-from rq.registry import CanceledJobRegistry
-
-redis = Redis()
-job = Job.fetch('my_job_id', connection=redis)
-job.cancel()
-
-job.get_status()  # Job status is CANCELED
-
-registry = CanceledJobRegistry(job.origin, connection=job.connection)
-print(job in registry)  # Job is in CanceledJobRegistry
-```
-
-Canceling a job will:
-1. Set the job status to `CANCELED`
-2. Remove the job from the queue
-3. Put the job into `CanceledJobRegistry`
-
-Note that `job.cancel()` does **not** delete the job itself from Redis. If you want to
-delete the job from Redis and reclaim memory, use `job.delete()`.
-
-Note: if you want to enqueue the dependents of the job you
-are trying to cancel, use the following:
-
-```python
-from rq import cancel_job
-cancel_job(
-  '2eafc1e6-48c2-464b-a0ff-88fd199d039c',
-  enqueue_dependents=True
-)
-```
-
-## Job / Queue Creation with Custom Serializer
-
-When creating a job or queue, you can pass in a custom serializer that will be used for serializing / de-serializing job arguments.
-Serializers used should have at least `loads` and `dumps` methods.
-The default serializer used is `pickle`.
-
-```python
-from rq import Queue
-from rq.job import Job
-from rq.serializers import JSONSerializer
-
-job = Job(connection=connection, serializer=JSONSerializer)
-queue = Queue(connection=connection, serializer=JSONSerializer)
-```
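-
-For illustration, a minimal custom serializer of your own only needs `dumps` and `loads` (a sketch, not part of RQ itself):
-
-```python
-import json
-
-class MyJSONSerializer:
-    """Hypothetical serializer: any object providing `dumps` and `loads` works."""
-
-    @staticmethod
-    def dumps(obj):
-        return json.dumps(obj).encode('utf-8')
-
-    @staticmethod
-    def loads(data):
-        return json.loads(data)
-```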
-
-## Accessing The "current" Job from within the job function
-
-Since job functions are regular Python functions, you must retrieve the
-job in order to inspect or update the job's attributes.  To do this from within
-the function, you can use:
-
-```python
-from rq import get_current_job
-
-def add(x, y):
-    job = get_current_job()
-    print('Current job: %s' % (job.id,))
-    return x + y
-```
-
-Note that calling `get_current_job()` outside of the context of a job function will return `None`.
-
-
-## Storing arbitrary data on jobs
-
-_Improved in 0.8.0._
-
-To add/update custom status information on this job, you have access to the
-`meta` property, which allows you to store arbitrary pickleable data on the job
-itself:
-
-```python
-import socket
-import time
-
-from rq import get_current_job
-
-def add(x, y):
-    job = get_current_job()
-    job.meta['handled_by'] = socket.gethostname()
-    job.save_meta()
-
-    # do more work
-    time.sleep(1)
-    return x + y
-```
-
-
-## Time to live for job in queue
-
-A job has two TTLs, one for the job result, `result_ttl`, and one for the job itself, `ttl`.
-The latter is used if you have a job that shouldn't be executed after a certain amount of time.
-
-```python
-# When creating the job:
-job = Job.create(func=say_hello,
-                 result_ttl=600,  # how long (in seconds) to keep the job (if successful) and its results
-                 ttl=43,  # maximum queued time (in seconds) of the job before it's discarded.
-                )
-
-# or when queueing a new job:
-job = q.enqueue(count_words_at_url,
-                'http://nvie.com',
-                result_ttl=600,  # how long to keep the job (if successful) and its results
-                ttl=43  # maximum queued time
-               )
-```
-
-## Job Position in Queue
-
-For user feedback or debugging it is possible to get the position of a job
-within the work queue. This allows you to track the job's progress through the
-queue.
-
-Note that this function iterates over all jobs within the queue and therefore
-performs poorly on very large job queues.
-
-```python
-from rq import Queue
-from redis import Redis
-from hello import say_hello
-
-redis_conn = Redis()
-q = Queue(connection=redis_conn)
-
-job = q.enqueue(say_hello)
-job2 = q.enqueue(say_hello)
-
-job2.get_position()
-# returns 1
-
-q.get_job_position(job)
-# returns 0
-```
-
-## Failed Jobs
-
-If a job fails during execution, the worker will put the job in a `FailedJobRegistry`.
-On the `Job` instance, the `is_failed` property will be `True`. The `FailedJobRegistry`
-can be accessed through `queue.failed_job_registry`.
-
-```python
-from redis import Redis
-from rq import Queue, Worker
-from rq.job import Job
-
-
-def div_by_zero(x):
-    return x / 0
-
-
-connection = Redis()
-queue = Queue(connection=connection)
-job = queue.enqueue(div_by_zero, 1)
-registry = queue.failed_job_registry
-
-worker = Worker([queue])
-worker.work(burst=True)
-
-assert len(registry) == 1  # Failed jobs are kept in FailedJobRegistry
-```
-
-By default, failed jobs are kept for 1 year. You can change this by specifying
-`failure_ttl` (in seconds) when enqueueing jobs.
-
-```python
-job = queue.enqueue(foo_job, failure_ttl=300)  # 5 minutes in seconds
-```
-
-
-### Requeuing Failed Jobs
-
-If you need to manually requeue failed jobs, here's how to do it:
-
-```python
-from redis import Redis
-from rq import Queue
-
-connection = Redis()
-queue = Queue(connection=connection)
-registry = queue.failed_job_registry
-
-# This is how to get jobs from FailedJobRegistry
-for job_id in registry.get_job_ids():
-    registry.requeue(job_id)  # Puts job back in its original queue
-
-assert len(registry) == 0  # Registry will be empty when job is requeued
-```
-
-Starting from version 1.5.0, RQ also allows you to [automatically retry
-failed jobs](https://python-rq.org/docs/exceptions/#retrying-failed-jobs).
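-
-For example (a short sketch; `foo_job` stands in for your own function):
-
-```python
-from rq import Retry
-
-# Retry up to 3 times, waiting 60 seconds between attempts
-queue.enqueue(foo_job, retry=Retry(max=3, interval=60))
-```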
-
-
-### Requeuing Failed Jobs via CLI
-
-RQ also provides a CLI tool that makes requeuing failed jobs easy.
-
-```console
-# This will requeue foo_job_id and bar_job_id from myqueue's failed job registry
-rq requeue --queue myqueue -u redis://localhost:6379 foo_job_id bar_job_id
-
-# This command will requeue all jobs in myqueue's failed job registry
-rq requeue --queue myqueue -u redis://localhost:6379 --all
-```
diff --git a/docs/docs/monitoring.md b/docs/docs/monitoring.md
deleted file mode 100644
index 0bb59f6..0000000
--- a/docs/docs/monitoring.md
+++ /dev/null
@@ -1,106 +0,0 @@
----
-title: "RQ: Monitoring"
-layout: docs
----
-
-Monitoring is where RQ shines.
-
-The easiest way is probably to use the [RQ dashboard][dashboard], a separately
-distributed tool, which is a lightweight, web-based monitoring frontend for RQ
-that looks like this:
-
-[![RQ dashboard](/img/dashboard.png)][dashboard]  
-
-
-To install, just do:
-
-```console
-$ pip install rq-dashboard
-$ rq-dashboard
-```
-
-It can also be integrated easily in your Flask app.
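-
-Roughly like this, following the rq-dashboard README (treat the exact configuration API as an assumption and check the rq-dashboard docs for your version):
-
-```python
-from flask import Flask
-import rq_dashboard
-
-app = Flask(__name__)
-app.config.from_object(rq_dashboard.default_settings)
-app.register_blueprint(rq_dashboard.blueprint, url_prefix='/rq')
-```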
-
-
-## Monitoring at the console
-
-To see what queues exist and what workers are active, just type `rq info`:
-
-```console
-$ rq info
-high       |██████████████████████████ 20
-low        |██████████████ 12
-default    |█████████ 8
-3 queues, 45 jobs total
-
-Bricktop.19233 idle: low
-Bricktop.19232 idle: high, default, low
-Bricktop.18349 idle: default
-3 workers, 3 queues
-```
-
-
-## Querying by queue names
-
-You can also query for a subset of queues, if you're looking for specific ones:
-
-```console
-$ rq info high default
-high       |██████████████████████████ 20
-default    |█████████ 8
-2 queues, 28 jobs total
-
-Bricktop.19232 idle: high, default
-Bricktop.18349 idle: default
-2 workers, 2 queues
-```
-
-
-## Organising workers by queue
-
-By default, `rq info` prints the workers that are currently active, and the
-queues that they are listening on, like this:
-
-```console
-$ rq info
-...
-
-Mickey.26421 idle: high, default
-Bricktop.25458 busy: high, default, low
-Turkish.25812 busy: high, default
-3 workers, 3 queues
-```
-
-To see the same data, but organised by queue, use the `-R` (or `--by-queue`)
-flag:
-
-```console
-$ rq info -R
-...
-
-high:    Bricktop.25458 (busy), Mickey.26421 (idle), Turkish.25812 (busy)
-low:     Bricktop.25458 (busy)
-default: Bricktop.25458 (busy), Mickey.26421 (idle), Turkish.25812 (busy)
-failed:  –
-3 workers, 4 queues
-```
-
-
-## Interval polling
-
-By default, `rq info` will print stats and exit.
-You can specify a poll interval by using the `--interval` flag.
-
-```console
-$ rq info --interval 1
-```
-
-`rq info` will now update the screen every second.  You may specify a float
-value to indicate fractions of seconds.  Be aware that low interval values will
-increase the load on Redis, of course.
-
-```console
-$ rq info --interval 0.5
-```
-
-[dashboard]: https://github.com/nvie/rq-dashboard
diff --git a/docs/docs/results.md b/docs/docs/results.md
deleted file mode 100644
index b2a054a..0000000
--- a/docs/docs/results.md
+++ /dev/null
@@ -1,158 +0,0 @@
----
-title: "RQ: Results"
-layout: docs
----
-
-Enqueueing jobs is delayed execution of function calls.  This means we're
-solving a problem, but we're getting a few new ones back in return.
-
-
-## Dealing with Results
-
-Python functions may have return values, so jobs can have them, too.  If a job
-returns a non-`None` return value, the worker will write that return value back
-to the job's Redis hash under the `result` key. The job's Redis hash itself
-will expire after 500 seconds by default after the job is finished.
-
-The party that enqueued the job gets back a `Job` instance as a result of the
-enqueueing itself. Such a `Job` object is a proxy object that is tied to the
-job's ID, to be able to poll for results.
-
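-For example, a small polling sketch (assuming the `count_words_at_url` function used elsewhere in these docs and a running worker):
-
-```python
-from redis import Redis
-from rq import Queue
-
-q = Queue(connection=Redis())
-job = q.enqueue(count_words_at_url, 'http://nvie.com')
-
-print(job.result)  # None until a worker has finished the job
-# ... some time later, after a worker has picked it up ...
-job.refresh()
-print(job.result)  # e.g. 818
-```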
-
-### Return Value TTL
-Return values are written back to Redis with a limited lifetime (via a Redis
-expiring key), which is merely to avoid ever-growing Redis databases.
-
-The TTL value of the job result can be specified using the
-`result_ttl` keyword argument to `enqueue()` and `enqueue_call()` calls.  It
-can also be used to disable the expiry altogether.  You are then responsible
-for cleaning up jobs yourself, though, so be careful when using that.
-
-You can do the following:
-
-    q.enqueue(foo)  # result expires after 500 secs (the default)
-    q.enqueue(foo, result_ttl=86400)  # result expires after 1 day
-    q.enqueue(foo, result_ttl=0)  # result gets deleted immediately
-    q.enqueue(foo, result_ttl=-1)  # result never expires--you should delete jobs manually
-
-Additionally, you can use this for keeping around finished jobs without return
-values, which would be deleted immediately by default.
-
-    q.enqueue(func_without_rv, result_ttl=500)  # job kept explicitly
-
-
-## Dealing with Exceptions
-
-Jobs can fail and throw exceptions. This is a fact of life. RQ deals with
-this in the following way.
-
-Furthermore, it should be possible to retry failed
-jobs. Typically, this is something that needs manual interpretation, since
-there is no automatic or reliable way of letting RQ judge whether it is safe
-for certain tasks to be retried or not.
-
-When an exception is thrown inside a job, it is caught by the worker,
-serialized and stored under the job's Redis hash's `exc_info` key. A reference
-to the job is put in the `FailedJobRegistry`. By default, failed jobs will be
-kept for 1 year.
-
-The job itself has some useful properties that can be used to aid inspection:
-
-* the original creation time of the job
-* the last enqueue date
-* the originating queue
-* a textual description of the desired function invocation
-* the exception information
-
-This makes it possible to inspect and interpret the problem manually and
-possibly resubmit the job.
-
-
-## Dealing with Interruptions
-
-When workers get killed in the polite way (Ctrl+C or `kill`), RQ tries hard not
-to lose any work.  The current work is finished, after which the worker will
-stop further processing of jobs.  This ensures that jobs always get a fair
-chance to finish themselves.
-
-However, workers can be killed forcefully by `kill -9`, which will not give the
-workers a chance to finish the job gracefully or to put the job on the `failed`
-queue.  Therefore, killing a worker forcefully could potentially lead to
-damage. Just sayin'.
-
-If the worker gets killed while a job is running, it will eventually end up in
-`FailedJobRegistry` because a cleanup task will raise an `AbandonedJobError`.
-Before 0.14 the behavior was the same, but the cleanup task produced a
-`Moved to FailedJobRegistry at` error message instead.
-
-## Dealing with Job Timeouts
-
-By default, jobs should execute within 180 seconds.  After that, the worker
-kills the work horse and puts the job onto the `failed` queue, indicating the
-job timed out.
-
-If a job requires more (or less) time to complete, the default timeout period
-can be loosened (or tightened), by specifying it as a keyword argument to the
-`enqueue()` call, like so:
-
-```python
-q = Queue()
-q.enqueue(mytask, args=(foo,), kwargs={'bar': qux}, job_timeout=600)  # 10 mins
-```
-
-You can also change the default timeout for jobs that are enqueued via specific
-queue instances at once, which can be useful for patterns like this:
-
-```python
-# High prio jobs should end in 8 secs, while low prio
-# work may take up to 10 mins
-high = Queue('high', default_timeout=8)  # 8 secs
-low = Queue('low', default_timeout=600)  # 10 mins
-
-# Individual jobs can still override these defaults
-low.enqueue(really_really_slow, job_timeout=3600)  # 1 hr
-```
-
-Individual jobs can still specify an alternative timeout, as workers will
-respect these.
-
-
-## Job Results
-_New in version 1.12.0._
-
-If a job is executed multiple times, you can access its execution history by calling
-`job.results()`. RQ stores up to 10 of the latest execution results.
-
-Calling `job.latest_result()` will return the latest `Result` object, which has the
-following attributes:
-* `type` - an enum of `SUCCESSFUL`, `FAILED` or `STOPPED`
-* `created_at` - the time at which the result was created
-* `return_value` - the job's return value, only present if the result type is `SUCCESSFUL`
-* `exc_string` - the exception raised by the job, only present if the result type is `FAILED`
-* `job_id`
-
-```python
-job = Job.fetch(id='my_id', connection=redis)
-result = job.latest_result()  #  returns Result(id=uid, type=SUCCESSFUL)
-if result.type == result.Type.SUCCESSFUL:
-    print(result.return_value)
-else:
-    print(result.exc_string)
-```
-
-Alternatively, you can also use `job.return_value()` as a shortcut to accessing
-the return value of the latest result. Note that `job.return_value()` will only
-return a non-`None` object if the latest result is a successful execution.
-
-```python
-job = Job.fetch(id='my_id', connection=redis)
-print(job.return_value())  # Shortcut for job.latest_result().return_value
-```
-
-To access multiple results, use `job.results()`.
-
-```python
-job = Job.fetch(id='my_id', connection=redis)
-for result in job.results(): 
-    print(result.created_at, result.type)
-```
diff --git a/docs/docs/scheduling.md b/docs/docs/scheduling.md
deleted file mode 100644
index cb297fa..0000000
--- a/docs/docs/scheduling.md
+++ /dev/null
@@ -1,138 +0,0 @@
----
-title: "RQ: Scheduling Jobs"
-layout: docs
----
-
-_New in version 1.2.0._
-
-If you need a battle tested version of RQ job scheduling, please take a look at
-https://github.com/rq/rq-scheduler instead.
-
-New in RQ 1.2.0 is `RQScheduler`, a built-in component that allows you to schedule jobs
-for future execution.
-
-This component is developed based on prior experience of developing the external
-`rq-scheduler` library. The goal of taking this component in house is to allow
-RQ to have job scheduling capabilities without:
-1. Running a separate `rqscheduler` CLI command.
-2. Worrying about a separate `Scheduler` class.
-
-Running RQ workers with the scheduler component is simple:
-
-```console
-$ rq worker --with-scheduler
-```
-
-## Scheduling Jobs for Execution
-
-There are two main APIs to schedule jobs for execution, `enqueue_at()` and `enqueue_in()`.
-
-`queue.enqueue_at()` works almost like `queue.enqueue()`, except that it expects a datetime
-for its first argument.
-
-```python
-from datetime import datetime
-from rq import Queue
-from redis import Redis
-from somewhere import say_hello
-
-queue = Queue(name='default', connection=Redis())
-
-# Schedules job to be run at 9:15, October 8th in the local timezone
-job = queue.enqueue_at(datetime(2019, 10, 8, 9, 15), say_hello)
-```
-
-Note that if you pass in a naive datetime object, RQ will automatically convert it
-to the local timezone.
-
-`queue.enqueue_in()` accepts a `timedelta` as its first argument.
-
-```python
-from datetime import timedelta
-from rq import Queue
-from redis import Redis
-from somewhere import say_hello
-
-queue = Queue(name='default', connection=Redis())
-
-# Schedules job to be run in 10 seconds
-job = queue.enqueue_in(timedelta(seconds=10), say_hello)
-```
-
-Jobs that are scheduled for execution are not placed in the queue, but they are
-stored in `ScheduledJobRegistry`.
-
-```python
-from datetime import timedelta
-from redis import Redis
-
-from rq import Queue
-from rq.registry import ScheduledJobRegistry
-
-redis = Redis()
-
-queue = Queue(name='default', connection=redis)
-job = queue.enqueue_in(timedelta(seconds=10), say_nothing)
-print(job in queue)  # Outputs False as job is not enqueued
-
-registry = ScheduledJobRegistry(queue=queue)
-print(job in registry)  # Outputs True as job is placed in ScheduledJobRegistry
-```
-
-## Running the Scheduler
-
-If you use RQ's scheduling features, you need to run RQ workers with the
-scheduler component enabled.
-
-```console
-$ rq worker --with-scheduler
-```
-
-You can also run a worker with scheduler enabled in a programmatic way.
-
-```python
-from rq import Worker, Queue
-from redis import Redis
-
-redis = Redis()
-queue = Queue(connection=redis)
-
-worker = Worker(queues=[queue], connection=redis)
-worker.work(with_scheduler=True)
-```
-
-Only a single scheduler can run for a specific queue at any one time. If you run multiple
-workers with scheduler enabled, only one scheduler will be actively working for a given queue.
-
-Active schedulers are responsible for enqueueing scheduled jobs. Active schedulers will check for
-scheduled jobs once every second.
-
-Idle schedulers will periodically (every 15 minutes) check whether the queues they're
-responsible for have active schedulers. If they don't, one of the idle schedulers will start
-working. This way, if a worker with active scheduler dies, the scheduling work will be picked
-up by other workers with the scheduling component enabled.
-
-
-## Safe importing of the worker module
-
-When running the worker programmatically with the scheduler, you must keep in mind that the
-import must be protected with `if __name__ == '__main__'`. The scheduler runs in its own process
-(using `multiprocessing` from the stdlib), so the newly spawned process must be able to safely import the module
-without causing any side effects (such as starting a new process on top of the main ones).
-
-```python
-...
-
-# When running `with_scheduler=True` this is necessary
-if __name__ == '__main__':
-    worker = Worker(queues=[queue], connection=redis)
-    worker.work(with_scheduler=True)
-
-...
-# When running without the scheduler this is fine
-worker = Worker(queues=[queue], connection=redis)
-worker.work()
-```
-
-More information on the Python official docs [here](https://docs.python.org/3.7/library/multiprocessing.html#the-spawn-and-forkserver-start-methods).
-
diff --git a/docs/docs/testing.md b/docs/docs/testing.md
deleted file mode 100644
index d340868..0000000
--- a/docs/docs/testing.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-title: "RQ: Testing"
-layout: docs
----
-
-## Workers inside unit tests
-
-You may wish to include your RQ tasks inside unit tests. However, many frameworks (such as Django) use in-memory databases, which do not play nicely with the default `fork()` behaviour of RQ.
-
-Therefore, you must use the `SimpleWorker` class to avoid `fork()`:
-
-```python
-from redis import Redis
-from rq import SimpleWorker, Queue
-
-queue = Queue(connection=Redis())
-queue.enqueue(my_long_running_job)
-worker = SimpleWorker([queue], connection=queue.connection)
-worker.work(burst=True)  # Runs enqueued job
-# Check for result...
-```
-
-
-## Testing on Windows
-
-If you are testing on a Windows machine you can use the approach above, but with a slight tweak:
-you will need to subclass `SimpleWorker` to override the worker's default timeout mechanism,
-because Windows does not implement some of the underlying signals used by the default `SimpleWorker`.
-
-To subclass SimpleWorker for Windows you can do the following:
-
-```python
-from rq import SimpleWorker
-from rq.timeouts import TimerDeathPenalty
-
-class WindowsSimpleWorker(SimpleWorker):
-    death_penalty_class = TimerDeathPenalty
-```
-
-Now you can use WindowsSimpleWorker for running tasks on Windows.
-
-
-## Running Jobs in unit tests
-
-Another solution for testing purposes is to use the `is_async=False` queue
-parameter, which instructs the queue to perform the job instantly in the same
-thread instead of dispatching it to a worker. Workers are then no longer required.
-Additionally, we can use `fakeredis` to mock a Redis instance, so we don't have to
-run a Redis server separately. The instance of the fake Redis server can
-be passed directly as the connection argument to the queue:
-
-```python
-from fakeredis import FakeStrictRedis
-from rq import Queue
-
-queue = Queue(is_async=False, connection=FakeStrictRedis())
-job = queue.enqueue(my_long_running_job)
-assert job.is_finished
-```
diff --git a/docs/docs/workers.md b/docs/docs/workers.md
deleted file mode 100644
index 8c6d663..0000000
--- a/docs/docs/workers.md
+++ /dev/null
@@ -1,494 +0,0 @@
----
-title: "RQ: Workers"
-layout: docs
----
-
-A worker is a Python process that typically runs in the background and exists
-solely as a work horse to perform lengthy or blocking tasks that you don't want
-to perform inside web processes.
-
-
-## Starting Workers
-
-To start crunching work, simply start a worker from the root of your project
-directory:
-
-```console
-$ rq worker high default low
-*** Listening for work on high, default, low
-Got send_newsletter('me@nvie.com') from default
-Job ended normally without result
-*** Listening for work on high, default, low
-...
-```
-
-Workers will read jobs from the given queues (the order is important) in an
-endless loop, waiting for new work to arrive when all jobs are done.
-
-Each worker will process a single job at a time.  Within a worker, there is no
-concurrent processing going on.  If you want to perform jobs concurrently,
-simply start more workers.
-
-You should use process managers like [Supervisor](/patterns/supervisor/) or
-[systemd](/patterns/systemd/) to run RQ workers in production.
-
-
-### Burst Mode
-
-By default, workers will start working immediately and will block and wait for
-new work when they run out of work. Workers can also be started in _burst
-mode_ to finish all currently available work and quit as soon as all given
-queues are emptied.
-
-```console
-$ rq worker --burst high default low
-*** Listening for work on high, default, low
-Got send_newsletter('me@nvie.com') from default
-Job ended normally without result
-No more work, burst finished.
-Registering death.
-```
-
-This can be useful for batch work that needs to be processed periodically, or
-just to scale up your workers temporarily during peak periods.
-
-
-### Worker Arguments
-
-In addition to `--burst`, `rq worker` also accepts these arguments:
-
-* `--url` or `-u`: URL describing Redis connection details (e.g `rq worker --url redis://:secrets@example.com:1234/9` or `rq worker --url unix:///var/run/redis/redis.sock`)
-* `--burst` or `-b`: run worker in burst mode (stops after all jobs in queue have been processed).
-* `--path` or `-P`: multiple import paths are supported (e.g `rq worker --path foo --path bar`)
-* `--config` or `-c`: path to module containing RQ settings.
-* `--results-ttl`: job results will be kept for this number of seconds (defaults to 500).
-* `--worker-class` or `-w`: RQ Worker class to use (e.g `rq worker --worker-class 'foo.bar.MyWorker'`)
-* `--job-class` or `-j`: RQ Job class to use.
-* `--queue-class`: RQ Queue class to use.
-* `--connection-class`: Redis connection class to use, defaults to `redis.StrictRedis`.
-* `--log-format`: Format for the worker logs, defaults to `'%(asctime)s %(message)s'`
-* `--date-format`: Datetime format for the worker logs, defaults to `'%H:%M:%S'`
-* `--disable-job-desc-logging`: Turn off job description logging.
-* `--max-jobs`: Maximum number of jobs to execute.
-_New in version 1.8.0._
-* `--serializer`: Path to serializer object (e.g "rq.serializers.DefaultSerializer" or "rq.serializers.JSONSerializer")
-
-_New in version 1.14.0._
-* `--dequeue-strategy`: The strategy to dequeue jobs from multiple queues (one of `default`, `random` or `round_robin`, defaults to `default`)
-* `--max-idle-time`: if specified, the worker will wait this many seconds for a job to arrive before shutting down.
-* `--maintenance-interval`: how often (in seconds) maintenance tasks are run; defaults to 600 seconds.
-
-
-## Inside the worker
-
-### The Worker Lifecycle
-
-The life-cycle of a worker consists of a few phases:
-
-1. _Boot_. Loading the Python environment.
-2. _Birth registration_. The worker registers itself with the system so the
-   system knows about this worker.
-3. _Start listening_. A job is popped from any of the given Redis queues.
-   If all queues are empty and the worker is running in burst mode, quit now.
-   Else, wait until jobs arrive.
-4. _Prepare job execution_. The worker tells the system that it will begin work
-   by setting its status to `busy` and registers job in the `StartedJobRegistry`.
-5. _Fork a child process._
-   A child process (the "work horse") is forked off to do the actual work in
-   a fail-safe context.
-6. _Process work_. This performs the actual job work in the work horse.
-7. _Cleanup job execution_. The worker sets its status to `idle` and sets both
-   the job and its result to expire based on `result_ttl`. Job is also removed
-   from `StartedJobRegistry` and added to `FinishedJobRegistry` in the case
-   of successful execution, or `FailedJobRegistry` in the case of failure.
-8. _Loop_.  Repeat from step 3.
-
-
-### Performance Notes
-
-Basically the `rq worker` shell script is a simple fetch-fork-execute loop.
-When a lot of your jobs do lengthy setups, or they all depend on the same set
-of modules, you pay this overhead each time you run a job (since you're doing
-the import _after_ the moment of forking).  This is clean, because RQ won't
-ever leak memory this way, but also slow.
-
-A pattern you can use to improve the throughput performance for this kind of
-job is to import the necessary modules _before_ the fork.  There is no way
-of telling RQ workers to perform this setup for you, but you can do it
-yourself before starting the work loop.
-
-To do this, provide your own worker script (instead of using `rq worker`).
-A simple implementation example:
-
-```python
-#!/usr/bin/env python
-from redis import Redis
-from rq import Worker
-
-# Preload libraries
-import library_that_you_want_preloaded
-
-# Provide the worker with the list of queues (str) to listen to.
-w = Worker(['default'], connection=Redis())
-w.work()
-```
-
-
-### Worker Names
-
-Workers are registered to the system under their names, which are generated
-randomly during instantiation (see [monitoring][m]). To override this default,
-specify the name when starting the worker, or use the `--name` cli option.
-
-```python
-from redis import Redis
-from rq import Queue, Worker
-
-redis = Redis()
-queue = Queue('queue_name')
-
-# Start a worker with a custom name
-worker = Worker([queue], connection=redis, name='foo')
-```
-
-[m]: /docs/monitoring/
-
-
-### Retrieving Worker Information
-
-`Worker` instances store their runtime information in Redis. Here's how to
-retrieve them:
-
-```python
-from redis import Redis
-from rq import Queue, Worker
-
-# Returns all workers registered in this connection
-redis = Redis()
-workers = Worker.all(connection=redis)
-
-# Returns all workers in this queue (new in version 0.10.0)
-queue = Queue('queue_name')
-workers = Worker.all(queue=queue)
-worker = workers[0]
-print(worker.name)
-
-print('Successful jobs: %s' % worker.successful_job_count)
-print('Failed jobs: %s' % worker.failed_job_count)
-print('Total working time: %s' % worker.total_working_time)  # In seconds
-```
-
-Aside from `worker.name`, workers also have the following properties:
-* `hostname` - the host where this worker is run
-* `pid` - worker's process ID
-* `queues` - queues on which this worker is listening for jobs
-* `state` - possible states are `suspended`, `started`, `busy` and `idle`
-* `current_job` - the job it's currently executing (if any)
-* `last_heartbeat` - the last time this worker was seen
-* `birth_date` - time of worker's instantiation
-* `successful_job_count` - number of jobs finished successfully
-* `failed_job_count` - number of failed jobs processed
-* `total_working_time` - amount of time spent executing jobs, in seconds
-
-If you only want to know the number of workers for monitoring purposes,
-`Worker.count()` is much more performant.
-
-```python
-from redis import Redis
-from rq import Queue, Worker
-
-redis = Redis()
-
-# Count the number of workers in this Redis connection
-workers = Worker.count(connection=redis)
-
-# Count the number of workers for a specific queue
-queue = Queue('queue_name', connection=redis)
-workers = Worker.count(queue=queue)
-```
-
-## Worker with Custom Serializer
-
-When creating a worker, you can pass in a custom serializer that will be implicitly passed to the queue.
-Serializers used should have at least `loads` and `dumps` methods. An example of creating a custom serializer
-class can be found in serializers.py (`rq.serializers.JSONSerializer`).
-The default serializer used is `pickle`.
-
-```python
-from rq import Worker
-from rq.serializers import JSONSerializer
-
-worker = Worker('foo', serializer=JSONSerializer)
-```
-
-or when creating from a queue
-
-```python
-from rq import Queue, Worker
-from rq.serializers import JSONSerializer
-
-queue = Queue('foo', serializer=JSONSerializer)
-```
-
-The queue will now use the custom serializer.
-
-
-## Better worker process title
-The worker process will have a better title (as displayed by system tools such as `ps` and `top`)
-if you install the third-party package `setproctitle`:
-```sh
-pip install setproctitle
-```
-
-## Taking Down Workers
-
-If, at any time, the worker receives `SIGINT` (via Ctrl+C) or `SIGTERM` (via
-`kill`), the worker waits until the currently running task is finished, stops
-the work loop and gracefully registers its own death.
-
-If, during this takedown phase, `SIGINT` or `SIGTERM` is received again, the
-worker will forcefully terminate the child process (sending it `SIGKILL`), but
-will still try to register its own death.
-
-
-## Using a Config File
-
-If you'd like to configure `rq worker` via a configuration file instead of
-through command line arguments, you can do this by creating a Python file like
-`settings.py`:
-
-```python
-REDIS_URL = 'redis://localhost:6379/1'
-
-# You can also specify the Redis DB to use
-# REDIS_HOST = 'redis.example.com'
-# REDIS_PORT = 6380
-# REDIS_DB = 3
-# REDIS_PASSWORD = 'very secret'
-
-# Queues to listen on
-QUEUES = ['high', 'default', 'low']
-
-# If you're using Sentry to collect your runtime exceptions, you can use this
-# to configure RQ for it in a single step
-# The 'sync+' prefix is required for raven: https://github.com/nvie/rq/issues/350#issuecomment-43592410
-SENTRY_DSN = 'sync+http://public:secret@example.com/1'
-
-# If you want custom worker name
-# NAME = 'worker-1024'
-
-# If you want to use a dictConfig <https://docs.python.org/3/library/logging.config.html#logging-config-dictschema>
-# for more complex/consistent logging requirements.
-DICT_CONFIG = {
-    'version': 1,
-    'disable_existing_loggers': False,
-    'formatters': {
-        'standard': {
-            'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
-        },
-    },
-    'handlers': {
-        'default': {
-            'level': 'INFO',
-            'formatter': 'standard',
-            'class': 'logging.StreamHandler',
-            'stream': 'ext://sys.stderr',  # Default is stderr
-        },
-    },
-    'loggers': {
-        'root': {  # root logger
-            'handlers': ['default'],
-            'level': 'INFO',
-            'propagate': False
-        },
-    }
-}
-```
-
-The example above shows all the options that are currently supported.
-
-To specify which module to read settings from, use the `-c` option:
-
-```console
-$ rq worker -c settings
-```
-
-Alternatively, you can also pass in these options via environment variables.
-
-## Custom Worker Classes
-
-There are times when you want to customize the worker's behavior. Some of the
-more common requests so far are:
-
-1. Managing database connectivity prior to running a job.
-2. Using a job execution model that does not require `os.fork`.
-3. The ability to use different concurrency models such as
-   `multiprocessing` or `gevent`.
-4. Using a custom strategy for dequeuing jobs from different queues. 
-   See [link](#round-robin-and-random-strategies-for-dequeuing-jobs-from-queues).
-
-You can use the `-w` option to specify a different worker class to use:
-
-```console
-$ rq worker -w 'path.to.GeventWorker'
-```
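-
-As an illustration of the first item above (managing database connectivity), a hedged sketch of a custom worker class; `connect_to_database()` is a stand-in for your own setup code, and the `execute_job()` hook is assumed to be a stable override point in your RQ version:
-
-```python
-from rq import Worker
-
-class DatabaseAwareWorker(Worker):
-    """Re-establishes a database connection before each job is executed."""
-
-    def execute_job(self, job, queue):
-        connect_to_database()  # hypothetical helper from your own code base
-        return super().execute_job(job, queue)
-```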
-
-
-## Strategies for Dequeuing Jobs from Queues
-
-The default worker considers the order of queues as their priority order.
-That's to say if the supplied queues are `rq worker high low`, the worker will
-prioritize dequeueing jobs from `high` before `low`. To choose a different strategy,
-`rq` provides the `--dequeue-strategy / -ds` option.
-
-In certain circumstances, you may want to dequeue jobs in a round robin fashion. For example,
-when you have `q1`,`q2`,`q3`, the 1st dequeued job is taken from `q1`, the 2nd from `q2`,
-the 3rd from `q3`, the 4th from `q1`, the 5th from `q2` and so on.
-To implement this strategy use `-ds round_robin` argument.
-
-To dequeue jobs from the different queues randomly,  use `-ds random` argument.
-
-Deprecation Warning: these strategies were formerly implemented using the custom classes `rq.worker.RoundRobinWorker`
-and `rq.worker.RandomWorker`. As the `--dequeue-strategy` argument allows this behavior to be used with any worker, those worker classes are deprecated and will be removed in future versions.
-
-## Custom Job and Queue Classes
-
-You can tell the worker to use a custom class for jobs and queues using
-`--job-class` and/or `--queue-class`.
-
-```console
-$ rq worker --job-class 'custom.JobClass' --queue-class 'custom.QueueClass'
-```
-
-Don't forget to use those same classes when enqueueing the jobs.
-
-For example:
-
-```python
-from rq import Queue
-from rq.job import Job
-
-class CustomJob(Job):
-    pass
-
-class CustomQueue(Queue):
-    job_class = CustomJob
-
-queue = CustomQueue('default', connection=redis_conn)
-queue.enqueue(some_func)
-```
-
-
-## Custom DeathPenalty Classes
-
-When a Job times-out, the worker will try to kill it using the supplied
-`death_penalty_class` (default: `UnixSignalDeathPenalty`). This can be overridden
-if you wish to attempt to kill jobs in an application specific or 'cleaner' manner.
-
-DeathPenalty classes are constructed with the following arguments
-`BaseDeathPenalty(timeout, JobTimeoutException, job_id=job.id)`
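-
-A hedged sketch of such a subclass (it assumes the `setup_death_penalty()`/`cancel_death_penalty()` hook names and the `_timeout` attribute used by the built-in penalties; verify against your RQ version):
-
-```python
-import logging
-
-from rq.timeouts import BaseDeathPenalty
-
-log = logging.getLogger(__name__)
-
-class LoggingDeathPenalty(BaseDeathPenalty):
-    """Logs the timeout window; a real implementation would also interrupt the job."""
-
-    def setup_death_penalty(self):
-        log.warning('Timeout guard armed for %s seconds', self._timeout)
-
-    def cancel_death_penalty(self):
-        log.warning('Timeout guard disarmed')
-```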
-
-
-## Custom Exception Handlers
-
-If you need to handle errors differently for different types of jobs, or simply want to customize
-RQ's default error handling behavior, run `rq worker` using the `--exception-handler` option:
-
-```console
-$ rq worker --exception-handler 'path.to.my.ErrorHandler'
-
-# Multiple exception handlers is also supported
-$ rq worker --exception-handler 'path.to.my.ErrorHandler' --exception-handler 'another.ErrorHandler'
-```
-
-If you want to disable RQ's default exception handler, use the `--disable-default-exception-handler` option:
-
-```console
-$ rq worker --exception-handler 'path.to.my.ErrorHandler' --disable-default-exception-handler
-```
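-
-Exception handlers can also be registered programmatically. A short sketch (the `notify_admins` helper is hypothetical; the `(job, exc_type, exc_value, traceback)` signature and the `exception_handlers` argument follow RQ's documented exception-handling API):
-
-```python
-from redis import Redis
-from rq import Queue, Worker
-
-def my_error_handler(job, exc_type, exc_value, traceback):
-    notify_admins(job.id, exc_value)  # hypothetical alerting helper
-    return True  # continue down the handler chain
-
-redis = Redis()
-queue = Queue(connection=redis)
-worker = Worker([queue], connection=redis, exception_handlers=[my_error_handler])
-worker.work()
-```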
-
-
-## Sending Commands to Worker
-_New in version 1.6.0._
-
-Starting in version 1.6.0, workers use Redis' pubsub mechanism to listen to external commands while
-they're working. Three commands are currently implemented:
-
-### Shutting Down a Worker
-
-`send_shutdown_command()` instructs a worker to shut down. This is similar to sending a SIGINT
-signal to a worker.
-
-```python
-from redis import Redis
-from rq.command import send_shutdown_command
-from rq.worker import Worker
-
-redis = Redis()
-
-workers = Worker.all(redis)
-for worker in workers:
-   send_shutdown_command(redis, worker.name)  # Tells worker to shutdown
-```
-
-### Killing a Horse
-
-`send_kill_horse_command()` tells a worker to cancel a currently executing job. If the worker is
-not currently working, this command will be ignored.
-
-```python
-from redis import Redis
-from rq.command import send_kill_horse_command
-from rq.worker import Worker, WorkerStatus
-
-redis = Redis()
-
-workers = Worker.all(redis)
-for worker in workers:
-   if worker.state == WorkerStatus.BUSY:
-      send_kill_horse_command(redis, worker.name)
-```
-
-
-### Stopping a Job
-_New in version 1.7.0._
-
-You can use `send_stop_job_command()` to tell a worker to immediately stop a currently executing job. A job that's stopped will be sent to [FailedJobRegistry](https://python-rq.org/docs/results/#dealing-with-exceptions).
-
-```python
-from redis import Redis
-from rq.command import send_stop_job_command
-
-redis = Redis()
-
-# This will raise an exception if job is invalid or not currently executing
-send_stop_job_command(redis, job_id)
-```
-
-## Worker Pool
-
-_New in version 1.14.0._
-
-<div class="warning">
-    <img style="float: right; margin-right: -60px; margin-top: -38px" src="/img/warning.png" />
-    <strong>Note:</strong>
-    <p>`WorkerPool` is still in beta, use at your own risk!</p>
-</div>
-
-WorkerPool allows you to run multiple workers in a single CLI command.
-
-Usage:
-
-```shell
-rq worker-pool high default low -n 3
-```
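-
-A worker pool can also be started programmatically. The sketch below assumes the `WorkerPool(queues, connection=..., num_workers=...)` constructor and `start()` method; since the pool API is still in beta, double-check it against your RQ version:
-
-```python
-from redis import Redis
-from rq import Queue
-from rq.worker_pool import WorkerPool
-
-redis = Redis()
-queue = Queue(connection=redis)
-
-pool = WorkerPool([queue], connection=redis, num_workers=3)
-pool.start()
-```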
-
-Options:
-* `-u` or `--url <Redis connection URL>`: as defined in [redis-py's docs](https://redis.readthedocs.io/en/stable/connections.html#redis.Redis.from_url).
-* `-w` or `--worker-class <path.to.Worker>`: defaults to `rq.worker.Worker`. `rq.worker.SimpleWorker` is also an option.
-* `-n` or `--num-workers <number of workers>`: defaults to 2.
-* `-b` or `--burst`: run workers in burst mode (stops after all jobs in queue have been processed).
-* `-l` or `--logging-level <level>`: defaults to `INFO`. `DEBUG`, `WARNING`, `ERROR` and `CRITICAL` are supported.
-* `-S` or `--serializer <path.to.Serializer>`: defaults to `rq.serializers.DefaultSerializer`. `rq.serializers.JSONSerializer` is also included.
-* `-P` or `--path <path>`: multiple import paths are supported (e.g `rq worker --path foo --path bar`).
-* `-j` or `--job-class <path.to.Job>`: defaults to `rq.job.Job`.
diff --git a/docs/favicon.png b/docs/favicon.png
deleted file mode 100644
index 1ff2af3..0000000
Binary files a/docs/favicon.png and /dev/null differ
diff --git a/docs/img/bg.png b/docs/img/bg.png
deleted file mode 100644
index 82aa205..0000000
Binary files a/docs/img/bg.png and /dev/null differ
diff --git a/docs/img/bq.png b/docs/img/bq.png
deleted file mode 100644
index d8efdd7..0000000
Binary files a/docs/img/bq.png and /dev/null differ
diff --git a/docs/img/dashboard.png b/docs/img/dashboard.png
deleted file mode 100644
index 378a752..0000000
Binary files a/docs/img/dashboard.png and /dev/null differ
diff --git a/docs/img/logo.png b/docs/img/logo.png
deleted file mode 100644
index 34fb765..0000000
Binary files a/docs/img/logo.png and /dev/null differ
diff --git a/docs/img/logo2.png b/docs/img/logo2.png
deleted file mode 100644
index 357c661..0000000
Binary files a/docs/img/logo2.png and /dev/null differ
diff --git a/docs/img/ribbon.png b/docs/img/ribbon.png
deleted file mode 100644
index 5b10b85..0000000
Binary files a/docs/img/ribbon.png and /dev/null differ
diff --git a/docs/img/warning.png b/docs/img/warning.png
deleted file mode 100644
index 7c389bd..0000000
Binary files a/docs/img/warning.png and /dev/null differ
diff --git a/docs/index.md b/docs/index.md
deleted file mode 100644
index 4728bcf..0000000
--- a/docs/index.md
+++ /dev/null
@@ -1,101 +0,0 @@
----
-title: "RQ: Simple job queues for Python"
-layout: default
----
-
-RQ (_Redis Queue_) is a simple Python library for queueing jobs and processing
-them in the background with workers.  It is backed by Redis and it is designed
-to have a low barrier to entry.  It can be integrated in your web stack easily.
-
-RQ requires Redis >= 3.0.0.
-
-## Getting started
-
-First, run a Redis server.  You can use an existing one.  To put jobs on
-queues, you don't have to do anything special, just define your typically
-lengthy or blocking function:
-
-```python
-import requests
-
-def count_words_at_url(url):
-    resp = requests.get(url)
-    return len(resp.text.split())
-```
-
-Then, create an RQ queue:
-
-```python
-from redis import Redis
-from rq import Queue
-
-q = Queue(connection=Redis())
-```
-
-And enqueue the function call:
-
-```python
-from my_module import count_words_at_url
-result = q.enqueue(count_words_at_url, 'http://nvie.com')
-```
-
-Scheduling jobs is similarly easy:
-
-```python
-# Schedule job to run at 9:15, October 8th
-job = q.enqueue_at(datetime(2019, 10, 8, 9, 15), say_hello)
-
-# Schedule job to be run in 10 seconds
-job = q.enqueue_in(timedelta(seconds=10), say_hello)
-```
-
-You can also ask RQ to retry failed jobs:
-
-```python
-from rq import Retry
-
-# Retry up to 3 times, failed job will be requeued immediately
-q.enqueue(say_hello, retry=Retry(max=3))
-
-# Retry up to 3 times, with configurable intervals between retries
-q.enqueue(say_hello, retry=Retry(max=3, interval=[10, 30, 60]))
-```
-
-### The worker
-
-To start executing enqueued function calls in the background, start a worker
-from your project's directory:
-
-```console
-$ rq worker --with-scheduler
-*** Listening for work on default
-Got count_words_at_url('http://nvie.com') from default
-Job result = 818
-*** Listening for work on default
-```
-
-That's about it.
-
-
-## Installation
-
-Simply use the following command to install the latest released version:
-
-    pip install rq
-
-If you want the cutting edge version (that may well be broken), use this:
-
-    pip install git+https://github.com/nvie/rq.git@master#egg=rq
-
-
-## Project history
-
-This project has been inspired by the good parts of [Celery][1], [Resque][2]
-and [this snippet][3], and has been created as a lightweight alternative to
-existing queueing frameworks, with a low barrier to entry.
-
-[m]: http://pypi.python.org/pypi/mailer
-[p]: http://docs.python.org/library/pickle.html
-[1]: http://www.celeryproject.org/
-[2]: https://github.com/defunkt/resque
-[3]: https://github.com/fengsp/flask-snippets/blob/1f65833a4291c5b833b195a09c365aa815baea4e/utilities/rq.py
diff --git a/docs/patterns/django.md b/docs/patterns/django.md
deleted file mode 100644
index bb83559..0000000
--- a/docs/patterns/django.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: "RQ: Using with Django"
-layout: patterns
----
-
-## Using RQ with Django
-
-The simplest way of using RQ with Django is to use
-[django-rq](https://github.com/ui/django-rq).  Follow the instructions in the
-README.
-
-### Manually
-
-In order to use RQ together with Django, you have to start the worker in
-a "Django context".  Possibly, you have to write a custom Django management
-command to do so.  In many cases, however, setting the `DJANGO_SETTINGS_MODULE`
-environment variable will already do the trick.
-
-If `settings.py` is your Django settings file (as it is by default), use this:
-
-```console
-$ DJANGO_SETTINGS_MODULE=settings rq worker high default low
-```
diff --git a/docs/patterns/index.md b/docs/patterns/index.md
deleted file mode 100644
index 818dc14..0000000
--- a/docs/patterns/index.md
+++ /dev/null
@@ -1,97 +0,0 @@
----
-title: "RQ: Using RQ on Heroku"
-layout: patterns
----
-
-## Using RQ on Heroku
-
-To setup RQ on [Heroku][1], first add it to your
-`requirements.txt` file:
-
-    redis>=3
-    rq>=0.13
-
-Create a file called `run-worker.py` with the following content (assuming you
-are using [Heroku Data For Redis][2] with Heroku):
-
-```python
-import os
-import redis
-from redis import Redis
-from rq import Queue, Connection
-from rq.worker import HerokuWorker as Worker
-
-
-listen = ['high', 'default', 'low']
-
-redis_url = os.getenv('REDIS_URL')
-if not redis_url:
-    raise RuntimeError("Set up Heroku Data For Redis first, \
-    make sure its config var is named 'REDIS_URL'.")
-    
-conn = redis.from_url(redis_url)
-
-if __name__ == '__main__':
-    with Connection(conn):
-        worker = Worker(map(Queue, listen))
-        worker.work()
-```
-
-Then, add the command to your `Procfile`:
-
-    worker: python -u run-worker.py
-
-Now, all you have to do is spin up a worker:
-
-```console
-$ heroku scale worker=1
-```
-
-If the `from_url` function fails to parse your credentials, you might need to do so manually:
-
-```python
-conn = redis.Redis(
-    host=host,
-    password=password,
-    port=port,
-    ssl=True,
-    ssl_cert_reqs=None
-)
-```
-The details can be found on the 'Settings' page of your Redis add-on on the Heroku dashboard.
-
-And for using the CLI:
-
-```console
-rq info --config rq_conf
-```
-
-Where the `rq_conf.py` file looks like:
-
-```python
-REDIS_HOST = "host"
-REDIS_PORT = port
-REDIS_PASSWORD = "password"
-REDIS_SSL = True
-REDIS_SSL_CA_CERTS = None
-REDIS_DB = 0
-REDIS_SSL_CERT_REQS = None
-```
-
-## Putting RQ under foreman
-
-[Foreman][3] is probably the process manager you use when you host your app on
-Heroku, or just because it's a pretty friendly tool to use in development.
-
-When using RQ under `foreman`, you may notice that the workers are sometimes a bit quiet. This is because Python buffers the output, so `foreman`
-cannot (yet) echo it. Here's a related [Wiki page][4].
-
-Just change the way you run your worker process, by adding the `-u` option (to
-force stdin, stdout and stderr to be totally unbuffered):
-
-    worker: python -u run-worker.py
-
-[1]: https://heroku.com
-[2]: https://devcenter.heroku.com/articles/heroku-redis
-[3]: https://github.com/ddollar/foreman
-[4]: https://github.com/ddollar/foreman/wiki/Missing-Output
-[5]: https://elements.heroku.com/addons/heroku-redis
diff --git a/docs/patterns/sentry.md b/docs/patterns/sentry.md
deleted file mode 100644
index 7dab4dc..0000000
--- a/docs/patterns/sentry.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-title: "RQ: Sending exceptions to Sentry"
-layout: patterns
----
-
-## Sending Exceptions to Sentry
-
-[Sentry](https://www.getsentry.com/) is a popular exception gathering service.
-RQ allows you to very easily send job exceptions to Sentry. To do this, you'll
-need to have [sentry-sdk](https://pypi.org/project/sentry-sdk/) installed.
-
-There are a few ways to start sending job exceptions to Sentry.
-
-
-### Configuring Sentry Through CLI
-
-Simply invoke the `rq worker` command using the `--sentry-dsn` argument.
-
-```console
-rq worker --sentry-dsn https://my-dsn@sentry.io/123
-```
-
-
-### Configuring Sentry Through a Config File
-
-Declare `SENTRY_DSN` in RQ's config file like this:
-
-```python
-SENTRY_DSN = 'https://my-dsn@sentry.io/123'
-```
-
-And run RQ's worker with your config file:
-
-```console
-rq worker -c my_settings
-```
-
-Visit [this page](https://python-rq.org/docs/workers/#using-a-config-file)
-to read more about running RQ using a config file.
-
-
-### Configuring Sentry Through Environment Variable
-
-Simply set the `RQ_SENTRY_DSN` environment variable and RQ will
-automatically start the Sentry integration for you.
-
-```console
-RQ_SENTRY_DSN="https://my-dsn@sentry.io/123" rq worker
-```
diff --git a/docs/patterns/supervisor.md b/docs/patterns/supervisor.md
deleted file mode 100644
index fc3e6e2..0000000
--- a/docs/patterns/supervisor.md
+++ /dev/null
@@ -1,76 +0,0 @@
----
-title: "Putting RQ under supervisor"
-layout: patterns
----
-
-## Putting RQ under supervisor
-
-[Supervisor][1] is a popular tool for managing long-running processes in
-production environments.  It can automatically restart any crashed processes,
-and you gain a single dashboard for all of the running processes that make up
-your product.
-
-RQ can be used in combination with supervisor easily.  You'd typically want to
-use the following supervisor settings:
-
-```
-[program:myworker]
-; Point the command to the specific rq command you want to run.
-; If you use virtualenv, be sure to point it to
-; /path/to/virtualenv/bin/rq
-; Also, you probably want to include a settings module to configure this
-; worker.  For more info on that, see http://python-rq.org/docs/workers/
-command=/path/to/rq worker -c mysettings high default low
-; process_num is required if you specify >1 numprocs
-process_name=%(program_name)s-%(process_num)s
-
-; If you want to run more than one worker instance, increase this
-numprocs=1
-
-; This is the directory from which RQ is run. Be sure to point this to the
-; directory where your source code is importable from
-directory=/path/to
-
-; RQ requires the TERM signal to perform a warm shutdown. If RQ does not die
-; within 10 seconds, supervisor will forcefully kill it
-stopsignal=TERM
-
-; These are up to you
-autostart=true
-autorestart=true
-```
-
-### Conda environments
-
-[Conda][2] virtualenvs can be used for RQ jobs which require non-Python
-dependencies. You can use a similar approach as with regular virtualenvs.
-
-```
-[program:myworker]
-; Point the command to the specific rq command you want to run.
-; For conda virtual environments, install RQ into your env.
-; Also, you probably want to include a settings module to configure this
-; worker.  For more info on that, see http://python-rq.org/docs/workers/
-environment=PATH='/opt/conda/envs/myenv/bin'
-command=/opt/conda/envs/myenv/bin/rq worker -c mysettings high default low
-; process_num is required if you specify >1 numprocs
-process_name=%(program_name)s-%(process_num)s
-
-; If you want to run more than one worker instance, increase this
-numprocs=1
-
-; This is the directory from which RQ is run. Be sure to point this to the
-; directory where your source code is importable from
-directory=/path/to
-
-; RQ requires the TERM signal to perform a warm shutdown. If RQ does not die
-; within 10 seconds, supervisor will forcefully kill it
-stopsignal=TERM
-
-; These are up to you
-autostart=true
-autorestart=true
-```
-
-[1]: http://supervisord.org/
-[2]: https://conda.io/docs/
diff --git a/docs/patterns/systemd.md b/docs/patterns/systemd.md
deleted file mode 100644
index b313b6d..0000000
--- a/docs/patterns/systemd.md
+++ /dev/null
@@ -1,42 +0,0 @@
----
-title: "Running RQ Workers under systemd"
-layout: patterns
----
-
-## Running RQ Workers Under systemd
-
-Systemd is a process manager that's built into many popular Linux distributions.
-
-To run multiple workers under systemd, you'll first need to create a unit file.
-
-We can name this file `rqworker@.service` and put it in the `/etc/systemd/system`
-directory (the exact location may differ depending on the distribution you run).
-
-```
-[Unit]
-Description=RQ Worker Number %i
-After=network.target
-
-[Service]
-Type=simple
-WorkingDirectory=/path/to/working_directory
-Environment=LANG=en_US.UTF-8
-Environment=LC_ALL=en_US.UTF-8
-Environment=LC_LANG=en_US.UTF-8
-ExecStart=/path/to/rq worker -c config.py
-ExecReload=/bin/kill -s HUP $MAINPID
-ExecStop=/bin/kill -s TERM $MAINPID
-PrivateTmp=true
-Restart=always
-
-[Install]
-WantedBy=multi-user.target
-```
-
-If your unit file is properly installed, you should be able to start workers by
-invoking `systemctl start rqworker@1.service`, `systemctl start rqworker@2.service`
-from the terminal.
-
-You can also reload all the workers by invoking `systemctl reload rqworker@*`.
-
-You can read more about systemd and unit files [here](https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files).
diff --git a/examples/fib.py b/examples/fib.py
deleted file mode 100644
index 4ca4493..0000000
--- a/examples/fib.py
+++ /dev/null
@@ -1,5 +0,0 @@
-def slow_fib(n):
-    if n <= 1:
-        return 1
-    else:
-        return slow_fib(n - 1) + slow_fib(n - 2)
diff --git a/examples/run_example.py b/examples/run_example.py
deleted file mode 100644
index 43fe163..0000000
--- a/examples/run_example.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import os
-import time
-
-from fib import slow_fib
-
-from rq import Connection, Queue
-
-
-def main():
-    # Range of Fibonacci numbers to compute
-    fib_range = range(20, 34)
-
-    # Kick off the tasks asynchronously
-    async_results = {}
-    q = Queue()
-    for x in fib_range:
-        async_results[x] = q.enqueue(slow_fib, x)
-
-    start_time = time.time()
-    done = False
-    while not done:
-        os.system('clear')
-        print('Asynchronously: (now = %.2f)' % (time.time() - start_time,))
-        done = True
-        for x in fib_range:
-            result = async_results[x].return_value
-            if result is None:
-                done = False
-                result = '(calculating)'
-            print('fib(%d) = %s' % (x, result))
-        print('')
-        print('To start the actual computation in the background, run a worker:')
-        print('    python examples/run_worker.py')
-        time.sleep(0.2)
-
-    print('Done')
-
-
-if __name__ == '__main__':
-    # Tell RQ what Redis connection to use
-    with Connection():
-        main()
diff --git a/examples/run_worker.py b/examples/run_worker.py
deleted file mode 100644
index 84587ff..0000000
--- a/examples/run_worker.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from rq import Connection, Queue, Worker
-
-if __name__ == '__main__':
-    # Tell rq what Redis connection to use
-    with Connection():
-        q = Queue()
-        Worker(q).work()
diff --git a/rq.egg-info/PKG-INFO b/rq.egg-info/PKG-INFO
new file mode 100644
index 0000000..e88c319
--- /dev/null
+++ b/rq.egg-info/PKG-INFO
@@ -0,0 +1,39 @@
+Metadata-Version: 2.1
+Name: rq
+Version: 1.15.1
+Summary: RQ is a simple, lightweight, library for creating background jobs, and processing them.
+Home-page: https://github.com/nvie/rq/
+Author: Vincent Driessen
+Author-email: vincent@3rdcloud.com
+License: BSD
+Platform: any
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: Intended Audience :: End Users/Desktop
+Classifier: Intended Audience :: Information Technology
+Classifier: Intended Audience :: Science/Research
+Classifier: Intended Audience :: System Administrators
+Classifier: License :: OSI Approved :: BSD License
+Classifier: Operating System :: POSIX
+Classifier: Operating System :: MacOS
+Classifier: Operating System :: Unix
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
+Classifier: Programming Language :: Python :: 3.11
+Classifier: Topic :: Software Development :: Libraries :: Python Modules
+Classifier: Topic :: Internet
+Classifier: Topic :: Scientific/Engineering
+Classifier: Topic :: System :: Distributed Computing
+Classifier: Topic :: System :: Systems Administration
+Classifier: Topic :: System :: Monitoring
+Requires-Python: >=3.6
+License-File: LICENSE
+
+
+rq is a simple, lightweight, library for creating background jobs, and
+processing them.
diff --git a/rq.egg-info/SOURCES.txt b/rq.egg-info/SOURCES.txt
new file mode 100644
index 0000000..7d1f82c
--- /dev/null
+++ b/rq.egg-info/SOURCES.txt
@@ -0,0 +1,48 @@
+.deepsource.toml
+LICENSE
+MANIFEST.in
+README.md
+pyproject.toml
+requirements.txt
+setup.cfg
+setup.py
+rq/__init__.py
+rq/command.py
+rq/connections.py
+rq/decorators.py
+rq/defaults.py
+rq/dependency.py
+rq/exceptions.py
+rq/executions.py
+rq/job.py
+rq/local.py
+rq/logutils.py
+rq/maintenance.py
+rq/py.typed
+rq/queue.py
+rq/registry.py
+rq/results.py
+rq/scheduler.py
+rq/serializers.py
+rq/suspension.py
+rq/timeouts.py
+rq/types.py
+rq/utils.py
+rq/version.py
+rq/worker.py
+rq/worker_pool.py
+rq/worker_registration.py
+rq.egg-info/PKG-INFO
+rq.egg-info/SOURCES.txt
+rq.egg-info/dependency_links.txt
+rq.egg-info/entry_points.txt
+rq.egg-info/not-zip-safe
+rq.egg-info/requires.txt
+rq.egg-info/top_level.txt
+rq/cli/__init__.py
+rq/cli/__main__.py
+rq/cli/cli.py
+rq/cli/helpers.py
+rq/contrib/__init__.py
+rq/contrib/legacy.py
+rq/contrib/sentry.py
\ No newline at end of file
diff --git a/rq.egg-info/dependency_links.txt b/rq.egg-info/dependency_links.txt
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/rq.egg-info/dependency_links.txt
@@ -0,0 +1 @@
+
diff --git a/rq.egg-info/entry_points.txt b/rq.egg-info/entry_points.txt
new file mode 100644
index 0000000..146a887
--- /dev/null
+++ b/rq.egg-info/entry_points.txt
@@ -0,0 +1,4 @@
+[console_scripts]
+rq = rq.cli:main
+rqinfo = rq.cli:info
+rqworker = rq.cli:worker
diff --git a/rq.egg-info/not-zip-safe b/rq.egg-info/not-zip-safe
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/rq.egg-info/not-zip-safe
@@ -0,0 +1 @@
+
diff --git a/rq.egg-info/requires.txt b/rq.egg-info/requires.txt
new file mode 100644
index 0000000..3a14dff
--- /dev/null
+++ b/rq.egg-info/requires.txt
@@ -0,0 +1,2 @@
+redis>=4.0.0
+click>=5.0.0
diff --git a/rq.egg-info/top_level.txt b/rq.egg-info/top_level.txt
new file mode 100644
index 0000000..75dd785
--- /dev/null
+++ b/rq.egg-info/top_level.txt
@@ -0,0 +1 @@
+rq
diff --git a/rq/__init__.py b/rq/__init__.py
index b385e76..96b4f40 100644
--- a/rq/__init__.py
+++ b/rq/__init__.py
@@ -5,4 +5,19 @@ from .queue import Queue
 from .version import VERSION
 from .worker import SimpleWorker, Worker
 
+__all__ = [
+    "Connection",
+    "get_current_connection",
+    "pop_connection",
+    "push_connection",
+    "Callback",
+    "Retry",
+    "cancel_job",
+    "get_current_job",
+    "requeue_job",
+    "Queue",
+    "SimpleWorker",
+    "Worker",
+]
+
 __version__ = VERSION
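
Declaring __all__ makes the package's star-import surface explicit: a star import now pulls in exactly the names listed above. A minimal sketch, assuming rq 1.15.1 is installed locally:

    # Only names listed in rq.__all__ are bound here
    from rq import *  # noqa: F401,F403

    assert Queue.__name__ == 'Queue'     # exported
    assert Worker.__name__ == 'Worker'   # exported
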
diff --git a/rq/executions.py b/rq/executions.py
new file mode 100644
index 0000000..d1fb59b
--- /dev/null
+++ b/rq/executions.py
@@ -0,0 +1,66 @@
+from datetime import datetime
+from typing import Any, Optional, TYPE_CHECKING
+from uuid import uuid4
+
+from redis import Redis
+
+if TYPE_CHECKING:
+    from redis.client import Pipeline
+
+from .job import Job
+from .registry import BaseRegistry, StartedJobRegistry
+from .utils import current_timestamp, now
+
+
+def get_key(job_id: str) -> str:
+    return 'rq:executions:%s' % job_id
+
+
+class Execution:
+    """Class to represent an execution of a job."""
+
+    def __init__(self, id: str, job_id: str, connection: Redis, created_at: Optional[datetime] = None):
+        self.id = id
+        self.job_id = job_id
+        self.connection = connection
+        self.created_at = created_at if created_at else now()
+    
+    @property
+    def composite_key(self):
+        return f'{self.job_id}:{self.id}'
+
+    @classmethod
+    def from_composite_key(cls, composite_key: str, connection: Redis) -> 'Execution':
+        job_id, id = composite_key.split(':')
+        return cls(id=id, job_id=job_id, connection=connection)
+
+    @classmethod
+    def create(cls, job: Job) -> 'Execution':
+        id = uuid4().hex
+        return cls(id=id, job_id=job.id, connection=job.connection, created_at=now())
+
+
+class ExecutionRegistry(BaseRegistry):
+    """Class to represent a registry of executions."""
+    key_template = 'rq:executions:{0}'
+
+    def __init__(self, job: Job):
+        self.connection = job.connection
+        self.job = job
+
+    def add(self, execution: Execution, pipeline: 'Pipeline', ttl=0, xx: bool = False) -> Any:
+        """Register an execution in the registry with an expiry of now + ttl; a ttl of -1 means no expiry (score +inf).
+
+        Args:
+            execution (Execution): The Execution to add
+            ttl (int, optional): The time to live. Defaults to 0.
+            pipeline (Pipeline): The Redis pipeline on which the ZADD is queued.
+            xx (bool, optional): Only update executions already present in the registry (ZADD XX flag). Defaults to False.
+
+        Returns:
+            result (int): The ZADD command result
+        """
+        score = ttl if ttl < 0 else current_timestamp() + ttl
+        if score == -1:
+            score = '+inf'
+        return pipeline.zadd(self.key, {execution.id: score}, xx=xx)
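
A short usage sketch of the new execution primitives, based only on the code above; the local Redis instance and the enqueued job are assumptions made for illustration.

    from redis import Redis
    from rq import Queue
    from rq.executions import Execution

    connection = Redis()
    queue = Queue(connection=connection)
    job = queue.enqueue(print, 'hello')          # any enqueued job will do

    execution = Execution.create(job)            # one Execution per run of the job
    print(execution.composite_key)               # '<job_id>:<execution_id>'

    restored = Execution.from_composite_key(execution.composite_key, connection)
    assert restored.job_id == job.id
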
diff --git a/rq/job.py b/rq/job.py
index 0acfa40..f2707b5 100644
--- a/rq/job.py
+++ b/rq/job.py
@@ -1586,7 +1586,7 @@ class Job:
             # If parent job is not finished, we should only continue
             # if this job allows parent job to fail
             dependencies_ids.discard(parent_job.id)
-            if parent_job._status == JobStatus.CANCELED:
+            if parent_job.get_status() == JobStatus.CANCELED:
                 return False
             elif parent_job._status == JobStatus.FAILED and not self.allow_dependency_failures:
                 return False
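
The one-line fix above matters because _status is only the value cached when the Job instance was loaded, whereas get_status() re-reads the job hash from Redis. A self-contained illustration of the difference (a local Redis instance is assumed):

    from redis import Redis
    from rq import Queue
    from rq.job import Job, JobStatus

    connection = Redis()
    queue = Queue(connection=connection)
    parent = queue.enqueue(print, 'parent')

    snapshot = Job.fetch(parent.id, connection=connection)  # second in-memory view
    parent.cancel()                                          # canceled "elsewhere"

    # snapshot._status still holds whatever was loaded at fetch time,
    # while get_status() sees the cancellation recorded in Redis.
    assert snapshot.get_status() == JobStatus.CANCELED
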
diff --git a/rq/maintenance.py b/rq/maintenance.py
index b078a23..5c9d7cd 100644
--- a/rq/maintenance.py
+++ b/rq/maintenance.py
@@ -21,5 +21,6 @@ def clean_intermediate_queue(worker: 'BaseWorker', queue: Queue) -> None:
     for job_id in job_ids:
         if job_id not in queue.started_job_registry:
             job = queue.fetch_job(job_id)
-            worker.handle_job_failure(job, queue, exc_string='Job was stuck in the intermediate queue.')
+            if job:
+                worker.handle_job_failure(job, queue, exc_string='Job was stuck in intermediate queue.')
             queue.connection.lrem(queue.intermediate_queue_key, 1, job_id)
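
The new guard reflects that Queue.fetch_job() returns None when the job's data has already expired or been deleted: only jobs that still exist go through the failure handler, while the stale id is removed from the intermediate queue either way. Restated as a standalone sketch (the queue, worker and job_id names are assumed to exist in the caller):

    job = queue.fetch_job(job_id)            # may be None for an expired job
    if job is not None:
        worker.handle_job_failure(job, queue, exc_string='Job was stuck in intermediate queue.')
    queue.connection.lrem(queue.intermediate_queue_key, 1, job_id)
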
diff --git a/rq/queue.py b/rq/queue.py
index e0e48c5..ab7d7d4 100644
--- a/rq/queue.py
+++ b/rq/queue.py
@@ -893,7 +893,7 @@ class Queue:
             kwargs (*kwargs): function kargs
         """
         if not isinstance(f, str) and f.__module__ == '__main__':
-            raise ValueError('Functions from the __main__ module cannot be processed ' 'by workers')
+            raise ValueError('Functions from the __main__ module cannot be processed by workers')
 
         # Detect explicit invocations, i.e. of the form:
         #     q.enqueue(foo, args=(1, 2), kwargs={'a': 1}, job_timeout=30)
@@ -1206,6 +1206,7 @@ class Queue:
                         pipeline=pipe,
                         exclude_job_id=exclude_job_id,
                     )
+                    and dependent_job.get_status(refresh=False) != JobStatus.CANCELED
                 ]
 
                 pipe.multi()
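
With the added filter, a dependent job that was canceled while still deferred is no longer re-enqueued when its parent finishes. A hedged end-to-end sketch of that scenario (local Redis and a burst worker are assumed):

    from redis import Redis
    from rq import Queue, SimpleWorker
    from rq.job import JobStatus

    connection = Redis()
    queue = Queue(connection=connection)

    parent = queue.enqueue(print, 'parent')
    child = queue.enqueue(print, 'child', depends_on=parent)
    child.cancel()                                     # canceled while deferred

    SimpleWorker([queue], connection=connection).work(burst=True)
    assert child.get_status() == JobStatus.CANCELED    # stays canceled, not queued
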
diff --git a/rq/timeouts.py b/rq/timeouts.py
index a8a408c..12eef56 100644
--- a/rq/timeouts.py
+++ b/rq/timeouts.py
@@ -60,7 +60,7 @@ class BaseDeathPenalty:
 
 class UnixSignalDeathPenalty(BaseDeathPenalty):
     def handle_death_penalty(self, signum, frame):
-        raise self._exception('Task exceeded maximum timeout value ' '({0} seconds)'.format(self._timeout))
+        raise self._exception('Task exceeded maximum timeout value ({0} seconds)'.format(self._timeout))
 
     def setup_death_penalty(self):
         """Sets up an alarm signal and a signal handler that raises
diff --git a/rq/version.py b/rq/version.py
index 4fda84c..3e4fa76 100644
--- a/rq/version.py
+++ b/rq/version.py
@@ -1 +1 @@
-VERSION = '1.15.0'
+VERSION = '1.15.1'
diff --git a/rq/worker.py b/rq/worker.py
index 267f9c1..d5b921a 100644
--- a/rq/worker.py
+++ b/rq/worker.py
@@ -456,6 +456,62 @@ class BaseWorker:
             self.teardown()
         return bool(completed_jobs)
 
+    def handle_job_failure(self, job: 'Job', queue: 'Queue', started_job_registry=None, exc_string=''):
+        """
+        Handles the failure of an executing job by:
+            1. Setting the job status to failed
+            2. Removing the job from StartedJobRegistry
+            3. Setting the worker's current job to None
+            4. Adding the job to FailedJobRegistry
+        `save_exc_to_job` should only be used for testing purposes
+        """
+        self.log.debug('Handling failed execution of job %s', job.id)
+        with self.connection.pipeline() as pipeline:
+            if started_job_registry is None:
+                started_job_registry = StartedJobRegistry(
+                    job.origin, self.connection, job_class=self.job_class, serializer=self.serializer
+                )
+
+            # check whether a job was stopped intentionally and set the job
+            # status appropriately if it was this job.
+            job_is_stopped = self._stopped_job_id == job.id
+            retry = job.retries_left and job.retries_left > 0 and not job_is_stopped
+
+            if job_is_stopped:
+                job.set_status(JobStatus.STOPPED, pipeline=pipeline)
+                self._stopped_job_id = None
+            else:
+                # Requeue/reschedule if retry is configured, otherwise mark the job as failed
+                if not retry:
+                    job.set_status(JobStatus.FAILED, pipeline=pipeline)
+
+            started_job_registry.remove(job, pipeline=pipeline)
+
+            if not self.disable_default_exception_handler and not retry:
+                job._handle_failure(exc_string, pipeline=pipeline)
+                with suppress(redis.exceptions.ConnectionError):
+                    pipeline.execute()
+
+            self.set_current_job_id(None, pipeline=pipeline)
+            self.increment_failed_job_count(pipeline)
+            if job.started_at and job.ended_at:
+                self.increment_total_working_time(job.ended_at - job.started_at, pipeline)
+
+            if retry:
+                job.retry(queue, pipeline)
+                enqueue_dependents = False
+            else:
+                enqueue_dependents = True
+
+            try:
+                pipeline.execute()
+                if enqueue_dependents:
+                    queue.enqueue_dependents(job)
+            except Exception:
+                # Ensure that custom exception handlers are called
+                # even if Redis is down
+                pass
+
     def _start_scheduler(
         self,
         burst: bool = False,
@@ -653,7 +709,7 @@ class BaseWorker:
         connection: Union[Redis, 'Pipeline'] = pipeline if pipeline is not None else self.connection
         connection.expire(self.key, timeout)
         connection.hset(self.key, 'last_heartbeat', utcformat(utcnow()))
-        self.log.debug('Sent heartbeat to prevent worker timeout. ' 'Next one should arrive in %s seconds.', timeout)
+        self.log.debug('Sent heartbeat to prevent worker timeout. Next one should arrive in %s seconds.', timeout)
 
 
 class Worker(BaseWorker):
@@ -947,7 +1003,7 @@ class Worker(BaseWorker):
         if self.get_state() == WorkerStatus.BUSY:
             self._stop_requested = True
             self.set_shutdown_requested_date()
-            self.log.debug('Stopping after current horse is finished. ' 'Press Ctrl+C again for a cold shutdown.')
+            self.log.debug('Stopping after current horse is finished. Press Ctrl+C again for a cold shutdown.')
             if self.scheduler:
                 self.stop_scheduler()
         else:
@@ -1294,62 +1350,6 @@ class Worker(BaseWorker):
         msg = 'Processing {0} from {1} since {2}'
         self.procline(msg.format(job.func_name, job.origin, time.time()))
 
-    def handle_job_failure(self, job: 'Job', queue: 'Queue', started_job_registry=None, exc_string=''):
-        """
-        Handles the failure or an executing job by:
-            1. Setting the job status to failed
-            2. Removing the job from StartedJobRegistry
-            3. Setting the workers current job to None
-            4. Add the job to FailedJobRegistry
-        `save_exc_to_job` should only be used for testing purposes
-        """
-        self.log.debug('Handling failed execution of job %s', job.id)
-        with self.connection.pipeline() as pipeline:
-            if started_job_registry is None:
-                started_job_registry = StartedJobRegistry(
-                    job.origin, self.connection, job_class=self.job_class, serializer=self.serializer
-                )
-
-            # check whether a job was stopped intentionally and set the job
-            # status appropriately if it was this job.
-            job_is_stopped = self._stopped_job_id == job.id
-            retry = job.retries_left and job.retries_left > 0 and not job_is_stopped
-
-            if job_is_stopped:
-                job.set_status(JobStatus.STOPPED, pipeline=pipeline)
-                self._stopped_job_id = None
-            else:
-                # Requeue/reschedule if retry is configured, otherwise
-                if not retry:
-                    job.set_status(JobStatus.FAILED, pipeline=pipeline)
-
-            started_job_registry.remove(job, pipeline=pipeline)
-
-            if not self.disable_default_exception_handler and not retry:
-                job._handle_failure(exc_string, pipeline=pipeline)
-                with suppress(redis.exceptions.ConnectionError):
-                    pipeline.execute()
-
-            self.set_current_job_id(None, pipeline=pipeline)
-            self.increment_failed_job_count(pipeline)
-            if job.started_at and job.ended_at:
-                self.increment_total_working_time(job.ended_at - job.started_at, pipeline)
-
-            if retry:
-                job.retry(queue, pipeline)
-                enqueue_dependents = False
-            else:
-                enqueue_dependents = True
-
-            try:
-                pipeline.execute()
-                if enqueue_dependents:
-                    queue.enqueue_dependents(job)
-            except Exception:
-                # Ensure that custom exception handlers are called
-                # even if Redis is down
-                pass
-
     def handle_job_success(self, job: 'Job', queue: 'Queue', started_job_registry: StartedJobRegistry):
         """Handles the successful execution of certain job.
         It will remove the job from the `StartedJobRegistry`, adding it to the `SuccessfulJobRegistry`,
@@ -1498,7 +1498,9 @@ class Worker(BaseWorker):
         extra.update({'queue': job.origin, 'job_id': job.id})
 
         # func_name
-        self.log.error('[Job %s]: exception raised while executing (%s)\n' + exc_string, job.id, func_name, extra=extra)
+        self.log.error(
+            '[Job %s]: exception raised while executing (%s)\n%s', job.id, func_name, exc_string, extra=extra
+        )
 
         for handler in self._exc_handlers:
             self.log.debug('Invoking exception handler %s', handler)
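
Two changes are visible in these worker.py hunks: handle_job_failure() moves unchanged from Worker up to BaseWorker, and the exception log call now passes the traceback as a lazy %s argument instead of concatenating it into the format string. The latter matters whenever the traceback itself contains a percent sign; a small, self-contained illustration (the logger name and values are made up):

    import logging

    logging.basicConfig(level=logging.ERROR)
    log = logging.getLogger('rq.worker')

    exc_string = "ValueError: invalid literal for int() with base 10: '42%'"

    # Old form: '... (%s)\n' + exc_string became the format string, so the '%'
    # inside the traceback was treated as a conversion specifier and the log
    # record failed to format (logging printed its own error instead).
    # New form: exc_string is just another argument to the lazy formatter.
    log.error('[Job %s]: exception raised while executing (%s)\n%s',
              'abc123', 'my_func', exc_string)
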
diff --git a/setup.cfg b/setup.cfg
index f873f9b..a472602 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -1,6 +1,11 @@
 [bdist_rpm]
 requires = redis >= 3.0.0
-           click >= 3.0
+	click >= 3.0
 
 [wheel]
 universal = 1
+
+[egg_info]
+tag_build = 
+tag_date = 0
+
diff --git a/tests/Dockerfile b/tests/Dockerfile
deleted file mode 100644
index d131b7e..0000000
--- a/tests/Dockerfile
+++ /dev/null
@@ -1,51 +0,0 @@
-FROM ubuntu:20.04
-
-ARG DEBIAN_FRONTEND=noninteractive
-ENV LANG C.UTF-8
-ENV LC_ALL C.UTF-8
-
-RUN apt-get update && \
-    apt-get upgrade -y && \
-    apt-get install -y \
-    build-essential \
-    zlib1g-dev \
-    libncurses5-dev \
-    libgdbm-dev \
-    libnss3-dev \
-    libssl-dev \
-    libreadline-dev \
-    libffi-dev wget \
-    software-properties-common && \
-    add-apt-repository ppa:deadsnakes/ppa && \
-    apt-get update
-
-RUN apt-get install -y \
-    redis-server \
-    python3-pip \
-    stunnel \
-    python3.6 \
-    python3.7 \
-    python3.8 \
-    python3.9 \
-    python3.10 \
-    python3.6-distutils \
-    python3.7-distutils
-
-RUN apt-get clean && \
-    rm -rf /var/lib/apt/lists/*
-
-COPY tests/ssl_config/private.pem tests/ssl_config/stunnel.conf /etc/stunnel/
-
-COPY . /tmp/rq
-WORKDIR /tmp/rq
-
-RUN set -e && \
-    python3 -m pip install --upgrade pip && \
-    python3 -m pip install --no-cache-dir tox && \
-    pip3 install -r /tmp/rq/requirements.txt -r /tmp/rq/dev-requirements.txt && \
-    python3 /tmp/rq/setup.py build && \
-    python3 /tmp/rq/setup.py install
-
-CMD stunnel \
-    & redis-server \
-    & RUN_SLOW_TESTS_TOO=1 RUN_SSL_TESTS=1 tox
diff --git a/tests/__init__.py b/tests/__init__.py
deleted file mode 100644
index 3451eb3..0000000
--- a/tests/__init__.py
+++ /dev/null
@@ -1,111 +0,0 @@
-import logging
-import os
-import unittest
-
-import pytest
-from redis import Redis
-
-from rq import pop_connection, push_connection
-
-
-def find_empty_redis_database(ssl=False):
-    """Tries to connect to a random Redis database (starting from 4), and
-    will use/connect it when no keys are in there.
-    """
-    for dbnum in range(4, 17):
-        connection_kwargs = {'db': dbnum}
-        if ssl:
-            connection_kwargs['port'] = 9736
-            connection_kwargs['ssl'] = True
-            connection_kwargs['ssl_cert_reqs'] = None  # disable certificate validation
-        testconn = Redis(**connection_kwargs)
-        empty = testconn.dbsize() == 0
-        if empty:
-            return testconn
-    assert False, 'No empty Redis database found to run tests in.'
-
-
-def slow(f):
-    f = pytest.mark.slow(f)
-    return unittest.skipUnless(os.environ.get('RUN_SLOW_TESTS_TOO'), "Slow tests disabled")(f)
-
-
-def ssl_test(f):
-    f = pytest.mark.ssl_test(f)
-    return unittest.skipUnless(os.environ.get('RUN_SSL_TESTS'), "SSL tests disabled")(f)
-
-
-class TestCase(unittest.TestCase):
-    """Base class to inherit test cases from for RQ.
-
-    It sets up the Redis connection (available via self.connection), turns off
-    logging to the terminal and flushes the Redis database before and after
-    running each test.
-    """
-
-    @classmethod
-    def setUpClass(cls):
-        # Set up connection to Redis
-        cls.connection = find_empty_redis_database()
-        # Shut up logging
-        logging.disable(logging.ERROR)
-
-    def setUp(self):
-        # Flush beforewards (we like our hygiene)
-        self.connection.flushdb()
-
-    def tearDown(self):
-        # Flush afterwards
-        self.connection.flushdb()
-
-    @classmethod
-    def tearDownClass(cls):
-        logging.disable(logging.NOTSET)
-
-
-class RQTestCase(unittest.TestCase):
-    """Base class to inherit test cases from for RQ.
-
-    It sets up the Redis connection (available via self.testconn), turns off
-    logging to the terminal and flushes the Redis database before and after
-    running each test.
-
-    Also offers assertQueueContains(queue, that_func) assertion method.
-    """
-
-    @classmethod
-    def setUpClass(cls):
-        # Set up connection to Redis
-        testconn = find_empty_redis_database()
-        push_connection(testconn)
-
-        # Store the connection (for sanity checking)
-        cls.testconn = testconn
-        cls.connection = testconn
-
-        # Shut up logging
-        logging.disable(logging.ERROR)
-
-    def setUp(self):
-        # Flush beforewards (we like our hygiene)
-        self.testconn.flushdb()
-
-    def tearDown(self):
-        # Flush afterwards
-        self.testconn.flushdb()
-
-    # Implement assertIsNotNone for Python runtimes < 2.7 or < 3.1
-    if not hasattr(unittest.TestCase, 'assertIsNotNone'):
-
-        def assertIsNotNone(self, value, *args):  # noqa
-            self.assertNotEqual(value, None, *args)
-
-    @classmethod
-    def tearDownClass(cls):
-        logging.disable(logging.NOTSET)
-
-        # Pop the connection to Redis
-        testconn = pop_connection()
-        assert (
-            testconn == cls.testconn
-        ), 'Wow, something really nasty happened to the Redis connection stack. Check your setup.'
diff --git a/tests/config_files/__init__.py b/tests/config_files/__init__.py
deleted file mode 100644
index e69de29..0000000
diff --git a/tests/config_files/dummy.py b/tests/config_files/dummy.py
deleted file mode 100644
index 1404250..0000000
--- a/tests/config_files/dummy.py
+++ /dev/null
@@ -1 +0,0 @@
-REDIS_HOST = "testhost.example.com"
diff --git a/tests/config_files/dummy_logging.py b/tests/config_files/dummy_logging.py
deleted file mode 100644
index e84af64..0000000
--- a/tests/config_files/dummy_logging.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# example config taken from <https://stackoverflow.com/a/7507842/784804>
-DICT_CONFIG = {
-    'version': 1,
-    'disable_existing_loggers': False,
-    'formatters': {
-        'standard': {'format': 'MY_LOG_FMT: %(asctime)s [%(levelname)s] %(name)s: %(message)s'},
-    },
-    'handlers': {
-        'default': {
-            'level': 'DEBUG',
-            'formatter': 'standard',
-            'class': 'logging.StreamHandler',
-            'stream': 'ext://sys.stdout',  # Default is stderr
-        },
-    },
-    'loggers': {
-        'root': {'handlers': ['default'], 'level': 'DEBUG', 'propagate': False},  # root logger
-    },
-}
diff --git a/tests/config_files/dummy_override.py b/tests/config_files/dummy_override.py
deleted file mode 100644
index 2b87a0c..0000000
--- a/tests/config_files/dummy_override.py
+++ /dev/null
@@ -1,4 +0,0 @@
-REDIS_HOST = "testhost.example.com"
-REDIS_PORT = 6378
-REDIS_DB = 2
-REDIS_PASSWORD = '123'
diff --git a/tests/config_files/sentry.py b/tests/config_files/sentry.py
deleted file mode 100644
index 163d305..0000000
--- a/tests/config_files/sentry.py
+++ /dev/null
@@ -1,2 +0,0 @@
-REDIS_HOST = "testhost.example.com"
-SENTRY_DSN = 'https://123@sentry.io/123'
diff --git a/tests/fixtures.py b/tests/fixtures.py
deleted file mode 100644
index f0831ee..0000000
--- a/tests/fixtures.py
+++ /dev/null
@@ -1,316 +0,0 @@
-"""
-This file contains all jobs that are used in tests.  Each of these test
-fixtures has a slightly different characteristics.
-"""
-
-import contextlib
-import os
-import signal
-import subprocess
-import sys
-import time
-from multiprocessing import Process
-
-from redis import Redis
-
-from rq import Connection, Queue, get_current_connection, get_current_job
-from rq.command import send_kill_horse_command, send_shutdown_command
-from rq.decorators import job
-from rq.job import Job
-from rq.worker import HerokuWorker, Worker
-
-
-def say_pid():
-    return os.getpid()
-
-
-def say_hello(name=None):
-    """A job with a single argument and a return value."""
-    if name is None:
-        name = 'Stranger'
-    return 'Hi there, %s!' % (name,)
-
-
-async def say_hello_async(name=None):
-    """A async job with a single argument and a return value."""
-    return say_hello(name)
-
-
-def say_hello_unicode(name=None):
-    """A job with a single argument and a return value."""
-    return str(say_hello(name))  # noqa
-
-
-def do_nothing():
-    """The best job in the world."""
-    pass
-
-
-def raise_exc():
-    raise Exception('raise_exc error')
-
-
-def raise_exc_mock():
-    return raise_exc
-
-
-def div_by_zero(x):
-    """Prepare for a division-by-zero exception."""
-    return x / 0
-
-
-def long_process():
-    time.sleep(60)
-    return
-
-
-def some_calculation(x, y, z=1):
-    """Some arbitrary calculation with three numbers.  Choose z smartly if you
-    want a division by zero exception.
-    """
-    return x * y / z
-
-
-def rpush(key, value, append_worker_name=False, sleep=0):
-    """Push a value into a list in Redis. Useful for detecting the order in
-    which jobs were executed."""
-    if sleep:
-        time.sleep(sleep)
-    if append_worker_name:
-        value += ':' + get_current_job().worker_name
-    redis = get_current_connection()
-    redis.rpush(key, value)
-
-
-def check_dependencies_are_met():
-    return get_current_job().dependencies_are_met()
-
-
-def create_file(path):
-    """Creates a file at the given path.  Actually, leaves evidence that the
-    job ran."""
-    with open(path, 'w') as f:
-        f.write('Just a sentinel.')
-
-
-def create_file_after_timeout(path, timeout):
-    time.sleep(timeout)
-    create_file(path)
-
-
-def create_file_after_timeout_and_setsid(path, timeout):
-    os.setsid()
-    create_file_after_timeout(path, timeout)
-
-
-def launch_process_within_worker_and_store_pid(path, timeout):
-    p = subprocess.Popen(['sleep', str(timeout)])
-    with open(path, 'w') as f:
-        f.write('{}'.format(p.pid))
-    p.wait()
-
-
-def access_self():
-    assert get_current_connection() is not None
-    assert get_current_job() is not None
-
-
-def modify_self(meta):
-    j = get_current_job()
-    j.meta.update(meta)
-    j.save()
-
-
-def modify_self_and_error(meta):
-    j = get_current_job()
-    j.meta.update(meta)
-    j.save()
-    return 1 / 0
-
-
-def echo(*args, **kwargs):
-    return args, kwargs
-
-
-class Number:
-    def __init__(self, value):
-        self.value = value
-
-    @classmethod
-    def divide(cls, x, y):
-        return x * y
-
-    def div(self, y):
-        return self.value / y
-
-
-class CallableObject:
-    def __call__(self):
-        return u"I'm callable"
-
-
-class UnicodeStringObject:
-    def __repr__(self):
-        return u'é'
-
-
-class ClassWithAStaticMethod:
-    @staticmethod
-    def static_method():
-        return u"I'm a static method"
-
-
-with Connection():
-
-    @job(queue='default')
-    def decorated_job(x, y):
-        return x + y
-
-
-def black_hole(job, *exc_info):
-    # Don't fall through to default behaviour (moving to failed queue)
-    return False
-
-
-def add_meta(job, *exc_info):
-    job.meta = {'foo': 1}
-    job.save()
-    return True
-
-
-def save_key_ttl(key):
-    # Stores key ttl in meta
-    job = get_current_job()
-    ttl = job.connection.ttl(key)
-    job.meta = {'ttl': ttl}
-    job.save_meta()
-
-
-def long_running_job(timeout=10):
-    time.sleep(timeout)
-    return 'Done sleeping...'
-
-
-def run_dummy_heroku_worker(sandbox, _imminent_shutdown_delay):
-    """
-    Run the work horse for a simplified heroku worker where perform_job just
-    creates two sentinel files 2 seconds apart.
-    :param sandbox: directory to create files in
-    :param _imminent_shutdown_delay: delay to use for HerokuWorker
-    """
-    sys.stderr = open(os.path.join(sandbox, 'stderr.log'), 'w')
-
-    class TestHerokuWorker(HerokuWorker):
-        imminent_shutdown_delay = _imminent_shutdown_delay
-
-        def perform_job(self, job, queue):
-            create_file(os.path.join(sandbox, 'started'))
-            # have to loop here rather than one sleep to avoid holding the GIL
-            # and preventing signals being received
-            for i in range(20):
-                time.sleep(0.1)
-            create_file(os.path.join(sandbox, 'finished'))
-
-    w = TestHerokuWorker(Queue('dummy'))
-    w.main_work_horse(None, None)
-
-
-class DummyQueue:
-    pass
-
-
-def kill_worker(pid: int, double_kill: bool, interval: float = 1.5):
-    # wait for the worker to be started over on the main process
-    time.sleep(interval)
-    os.kill(pid, signal.SIGTERM)
-    if double_kill:
-        # give the worker time to switch signal handler
-        time.sleep(interval)
-        os.kill(pid, signal.SIGTERM)
-
-
-class Serializer:
-    def loads(self):
-        pass
-
-    def dumps(self):
-        pass
-
-
-def start_worker(queue_name, conn_kwargs, worker_name, burst):
-    """
-    Start a worker. We accept only serializable args, so that this can be
-    executed via multiprocessing.
-    """
-    # Silence stdout (thanks to <https://stackoverflow.com/a/28321717/14153673>)
-    with open(os.devnull, 'w') as devnull:
-        with contextlib.redirect_stdout(devnull):
-            w = Worker([queue_name], name=worker_name, connection=Redis(**conn_kwargs))
-            w.work(burst=burst)
-
-
-def start_worker_process(queue_name, connection=None, worker_name=None, burst=False):
-    """
-    Use multiprocessing to start a new worker in a separate process.
-    """
-    connection = connection or get_current_connection()
-    conn_kwargs = connection.connection_pool.connection_kwargs
-    p = Process(target=start_worker, args=(queue_name, conn_kwargs, worker_name, burst))
-    p.start()
-    return p
-
-
-def burst_two_workers(queue, timeout=2, tries=5, pause=0.1):
-    """
-    Get two workers working simultaneously in burst mode, on a given queue.
-    Return after both workers have finished handling jobs, up to a fixed timeout
-    on the worker that runs in another process.
-    """
-    w1 = start_worker_process(queue.name, worker_name='w1', burst=True)
-    w2 = Worker(queue, name='w2')
-    jobs = queue.jobs
-    if jobs:
-        first_job = jobs[0]
-        # Give the first worker process time to get started on the first job.
-        # This is helpful in tests where we want to control which worker takes which job.
-        n = 0
-        while n < tries and not first_job.is_started:
-            time.sleep(pause)
-            n += 1
-    # Now can start the second worker.
-    w2.work(burst=True)
-    w1.join(timeout)
-
-
-def save_result(job, connection, result):
-    """Store job result in a key"""
-    connection.set('success_callback:%s' % job.id, result, ex=60)
-
-
-def save_exception(job, connection, type, value, traceback):
-    """Store job exception in a key"""
-    connection.set('failure_callback:%s' % job.id, str(value), ex=60)
-
-
-def save_result_if_not_stopped(job, connection, result=""):
-    connection.set('stopped_callback:%s' % job.id, result, ex=60)
-
-
-def erroneous_callback(job):
-    """A callback that's not written properly"""
-    pass
-
-
-def _send_shutdown_command(worker_name, connection_kwargs, delay=0.25):
-    time.sleep(delay)
-    send_shutdown_command(Redis(**connection_kwargs), worker_name)
-
-
-def _send_kill_horse_command(worker_name, connection_kwargs, delay=0.25):
-    """Waits delay before sending kill-horse command"""
-    time.sleep(delay)
-    send_kill_horse_command(Redis(**connection_kwargs), worker_name)
-
-
-class CustomJob(Job):
-    """A custom job class just to test it"""
diff --git a/tests/ssl_config/private.pem b/tests/ssl_config/private.pem
deleted file mode 100644
index c136389..0000000
--- a/tests/ssl_config/private.pem
+++ /dev/null
@@ -1,85 +0,0 @@
------BEGIN RSA PRIVATE KEY-----
-MIIJKwIBAAKCAgEAwN/TmlUJWSo8rWLAf94FUqWlFieMnitFbeOkpZsVI5ROdUVl
-NvvCF1h/o6+PTff6kRuRDWMdxQed22Pk40K79mGz8rjgNCRBJehPIUgi27BZZac3
-diae4aTgHsp6I0sw4+vT/4xbwfQoF+S2WdRfeoOV3odbFOKrxz2FKNb/p0I8/IbK
-Dgp/IpcX6z/LmYA0yD77eGxL9TzTW06hoLZByifKp0Q/MmQe6n4h4S1bG2dhAg5G
-2twa+B4+lh5j45/WA+OvWzCMkRjI8NuDidxFKdx+ddqqmJdXR6Aivi15oCDzJsvA
-eRHtFddgHa7+jj2+rx6+D8E9bkwiTQHS23rLWVnB0Fydm2a+G7PyXUGk+Ss+ekyT
-+83HZfoPDN58k4ZPPG7xhOLYC5bDCNmRo0P4L4CkNj91KQYMdhpuX2LjOtYRR2B7
-fmOXAlWIkeo8rJ+i+hCepkXTRTPG0FOzRVnYQfN2IbCFwSizqqRDSO7wlOBs7Q1U
-bDzgQi2JmpxuUf+/7A6WSAJirxXgTVEhj9YaxKZuGXzx/1+AQ2Dzp1u4Dh0dygxD
-BghornbBr5KdXRyAC71jszRnFNdHZriijwvgmKV70Jz5WGNxennHcE45HEUiFbI6
-AZCJ+zqqlJfZGt5lWO1EPCALrBn5dKm8BzcYniIx1+AGC+mG7oy4NVePc9sCAwEA
-AQKCAgEAm6SDx6kTsCaLbIeiPA1YUkdlnykvKnxUvMbVGOa6+kk1vyDO+r3S9K/v
-4JFNnWedhfeu6BSx80ugMWi9Tj+OGtbhNd/G3YzcHdEH+h2SM6JtocB82xVzZTd9
-vJs8ULreqy6llzUW3r8+k3l3RapBmkYRbM/hykrYwCF/EWPeToT/XfEPoKEL00gG
-f0qt7CMvdOCOYbFS4oXBMY+UknJBSPcvbCeAsBNnd2dtw56sRML534TR3M992/fc
-HZxMk2VqeR0FZxsYdAaCMQuTbG6aSZurWUOqIxUN07kAEGP2ICg2z3ngylKS9esl
-nw6WUQa2l+7BBUm1XwqFK4trMr421W5hwdsUCt4iwgYjBdc/uJtOPsnF8wVol7I9
-YWooHfnSvztaIYq4qVNU8iCn6KYZ6s+2CMafto/gugQlTNGksUhP0cu70oh3t8bC
-oeNf8O9ZRfwZzhsSTScVWpNpJxTB19Ofm2o/yU0JUiiH4fGVSSlTzVmP6/9g2tqU
-iuTjcuM55sOtFmTIWDY3aeKvnGz2peQEgtfdxQa5nkRwt719SplsP3iyjJdArgE/
-x2xC162CwDVGCrq1H3JD9/fpZedC3CaYrXDMqI1vAsBcoKBbF3lNAxDnT+8tP2g5
-1pGuvaR3+UOUG6sd/8bHycPZU5ba9XcpqXTNG7JRAlji/bdunaECggEBAOzhi6a+
-Pmf6Ou6cAveEeGwki5d7qY+4Sw9q7eqOGE/4n3gF7ZQjbkdjSvE4D9Tk4soOcXv2
-1o4Hh+Cmgwp4U6BLGgBW94mTUVqXtVkD0HFLpORjSd4HLSu9NCJfCjWH1Gtc/IyM
-vq6zeSwLIFDm7TZe8hvrfN5sxI6FMsi5T87sXQS1GjlBTVSiIAm2m/q27Hmkrs7u
-wI22yYmVgnWy7LbReSfhweYzdBQSMItYL+aXQvRsLhHWm+rLzdu8nslZ1gBgiqrs
-8lly9SasM1d1E4vFvbtt1w4ZLTdetyq5FgWackgrj1dpHis116onxBa9lTRnAumw
-O4Dqr1JroTD6anMCggEBANBxAsl/LkhTIUW5biJ8DI6zavI38iZhcrYbDl8G+9i/
-JUj4tuSgq8YX3wdoMqkaDX8s6L7YFxYY7Dom4wBhDYrxqiih7RLyxu9UxLjx5TeO
-f9m9SBwCxa+i05M8gAEyK9jmd/k76yuAqDDeZGuy/rH/itP+BJpsC3QX+8chKIjh
-/lN3le1OM3TmE9OdGwFG7CxPelKeghd9ES1yvq7yyL7RpCLcwNkKer8X+PQISrUe
-Q77vmc94p+Zgdacmt2Eu3hgCOk+swtouTmp4W1k0oJTcOIeT+2OF2U2/mZA5B1us
-smhFvpxObh3RHaxG3R1ciK5xWHWyx78qooc/n1Id7vkCggEBAI+XfV8bbZr7/aNM
-oSPHgnQThybRiIyda6qx5/zKHATGMmzAMy8cdyoBD5m/oSEtiihvru01SQQZno1Y
-gpDjNdYyEFXqYe1chvFCi2SlQkKbVx427b0QXppn++Vl9TtT1jkqydCtNJ2UH7zK
-FdHU2jCeR2cTTcNK7a9zIMC6TJ2jfBNxcK8KXcUS7hbVQiItppVqdajs435EMlEb
-d1S/nGyJ+EZrvG09/Xx5NkIRuB+wy558wUSA8kzXNDeiVCK8OVRLMWPBdHsyi1bh
-BdJbHvkYahXm1HkwW893s9LLFYVaBTKobSDQkMAiyFPV/TDHxV1ZoFNmR/uyx4pP
-wgt9kO8CggEBAMN2NjbdnHkV+0125WBREzV96fvZmqmDGB7MoF1cHy7RkBUtpdQf
-FvVbzTkU7OzGEYIAiwDrgjqmhF7DuHrSh/CTTg1sSvRJ1WL5CsCjlV7TsfBtHwGl
-V9urxNt9EEwO0C9Fb5u4JH9W1mF9Ko4T++LOz1CcE5T7XIIxO1kwLuKtieCbc2xk
-uLwWROFbocdAypeCsCJpoXSFQ2ZrA4TrBnRqApDukaj1usUXpcyxOd091CloZcO4
-UTonmix0keIAISRCcovkZZRTeBU/Z+nu/+aX3CrHCiX5jhzqXwZvdAbzmxlMzcGl
-in1La5fxm8e8zi9G+rzkOYt6X46UisJmb4ECggEBAM2NtCiX85y0YswAx8GpZXz7
-8yM9qmR1RJwDA8mnsJYRpyohIbHiPvGGd67W/MyOe2j8EPlMraK9PG/Q9PfkChc0
-su5kjH/o2etgSYjykV0e3xKIuGb57gkQjgN6ZXTMBRxo+PqOp8BG/PkiTEbJErod
-K72zYfnvF1/YfrTHF+uGhF7rUl8Z66nNh1uZLURVE/O1+YRbJrFVi9hxdT+3FGv6
-ilq32bGCMopgFOee0CRS4IYJtYJufq+EgmXBt5l6yjr6A1OLUcNQ0tsT88VDgTQe
-rvaAxK/9DXs3J7gjgsu4Qc/I6oLg+KSCEOSEbZsaYuICas143lC1cLfThlxAYoM=
------END RSA PRIVATE KEY-----
------BEGIN CERTIFICATE-----
-MIIF0TCCA7mgAwIBAgIUH0n4JVFqZVeehn7EeRAkjWh0wrowDQYJKoZIhvcNAQEL
-BQAweDEfMB0GCSqGSIb3DQEJARYQdGVzdEBnZXRyZXNxLmNvbTEPMA0GA1UEAwwG
-cnEuY29tMQswCQYDVQQKDAJSUTEMMAoGA1UECwwDRW5nMQswCQYDVQQGEwJDQTEN
-MAsGA1UECAwEVGVzdDENMAsGA1UEBwwEVGVzdDAeFw0yMDExMjUxOTAzMzJaFw0y
-NTExMjUxOTAzMzJaMHgxHzAdBgkqhkiG9w0BCQEWEHRlc3RAZ2V0cmVzcS5jb20x
-DzANBgNVBAMMBnJxLmNvbTELMAkGA1UECgwCUlExDDAKBgNVBAsMA0VuZzELMAkG
-A1UEBhMCQ0ExDTALBgNVBAgMBFRlc3QxDTALBgNVBAcMBFRlc3QwggIiMA0GCSqG
-SIb3DQEBAQUAA4ICDwAwggIKAoICAQDA39OaVQlZKjytYsB/3gVSpaUWJ4yeK0Vt
-46SlmxUjlE51RWU2+8IXWH+jr49N9/qRG5ENYx3FB53bY+TjQrv2YbPyuOA0JEEl
-6E8hSCLbsFllpzd2Jp7hpOAeynojSzDj69P/jFvB9CgX5LZZ1F96g5Xeh1sU4qvH
-PYUo1v+nQjz8hsoOCn8ilxfrP8uZgDTIPvt4bEv1PNNbTqGgtkHKJ8qnRD8yZB7q
-fiHhLVsbZ2ECDkba3Br4Hj6WHmPjn9YD469bMIyRGMjw24OJ3EUp3H512qqYl1dH
-oCK+LXmgIPMmy8B5Ee0V12Adrv6OPb6vHr4PwT1uTCJNAdLbestZWcHQXJ2bZr4b
-s/JdQaT5Kz56TJP7zcdl+g8M3nyThk88bvGE4tgLlsMI2ZGjQ/gvgKQ2P3UpBgx2
-Gm5fYuM61hFHYHt+Y5cCVYiR6jysn6L6EJ6mRdNFM8bQU7NFWdhB83YhsIXBKLOq
-pENI7vCU4GztDVRsPOBCLYmanG5R/7/sDpZIAmKvFeBNUSGP1hrEpm4ZfPH/X4BD
-YPOnW7gOHR3KDEMGCGiudsGvkp1dHIALvWOzNGcU10dmuKKPC+CYpXvQnPlYY3F6
-ecdwTjkcRSIVsjoBkIn7OqqUl9ka3mVY7UQ8IAusGfl0qbwHNxieIjHX4AYL6Ybu
-jLg1V49z2wIDAQABo1MwUTAdBgNVHQ4EFgQUFBBOTl94RoNjXrxR9+idaPA6WMEw
-HwYDVR0jBBgwFoAUFBBOTl94RoNjXrxR9+idaPA6WMEwDwYDVR0TAQH/BAUwAwEB
-/zANBgkqhkiG9w0BAQsFAAOCAgEAltcc8+Vz+sLnoVrappVJ3iRa20T8J9XwrRt8
-zs7WiMORHIh3PIKJVSjd328HwdFBHUJEMc5Vgrwg8rVQYoxRoz2kFj9fMF0fYync
-ipjL+p4bLGdyWDEHIziJSLULkjgypsW3rRi4MdB8kV8r8zHWVz4enFrztnw8e2Qz
-i/7FIIxc5i07kttCY4+u8VVZWrzaNt3KUrDQ3yJiBODp1pIMcmCUgx6AG7vhi9Js
-v1y27GKRW88pIGSHPWDcko2X9JuJuNHdBPYBU2rJXkhA6bh36LUuSJ0ZY2tvHPUw
-NZWi2DoYb3xaevdUDHS25+LUhFullQRvuS/1r9l8sCRp17xZBUh0rtDJa+keoq3O
-EADybpmoRKOfNoZLMeJabo/VbQX9qNYVN3rgzCZ/yOdotEKOrr90tw/JSS4CTtMw
-athKFIHWQwqcL1/xTM3EQ/HpxA6d1qayozMPVj5NnfpYjaBK+PncBTN01u/O45Pw
-+GGvvILPCsRYLIXp1lM5O3kbL9qffNLYHngQ/yW+R85AzMqbBIB9aaY3M0b4zdVo
-eIr8vDfTUh1bnzyKLiVWugOPVwfeU0ePg06Kr2yVPwtia4dW7YXm0dXHxn+7sMjg
-stJ4aqjlOiudLyb3wsRgnFDSzM5YZwtz3hCnbKhgDf5Qayywj/9VJWGpVbuQkmoq
-QQRVNAs=
------END CERTIFICATE-----
diff --git a/tests/ssl_config/stunnel.conf b/tests/ssl_config/stunnel.conf
deleted file mode 100644
index 8b0a769..0000000
--- a/tests/ssl_config/stunnel.conf
+++ /dev/null
@@ -1,13 +0,0 @@
-cert=/etc/stunnel/private.pem
-fips=no
-foreground=yes
-sslVersion=all
-socket=l:TCP_NODELAY=1
-socket=r:TCP_NODELAY=1
-pid=/var/run/stunnel.pid
-debug=0
-output=/etc/stunnel/stunnel.log
-
-[redis]
-accept = 0.0.0.0:9736
-connect = 127.0.0.1:6379
diff --git a/tests/test.json b/tests/test.json
deleted file mode 100644
index 7ae9fec..0000000
--- a/tests/test.json
+++ /dev/null
@@ -1,3 +0,0 @@
-{
-    "test": true
-}
diff --git a/tests/test_callbacks.py b/tests/test_callbacks.py
deleted file mode 100644
index 8aa9ad4..0000000
--- a/tests/test_callbacks.py
+++ /dev/null
@@ -1,331 +0,0 @@
-from datetime import timedelta
-
-from rq import Queue, Worker
-from rq.job import UNEVALUATED, Callback, Job, JobStatus
-from rq.serializers import JSONSerializer
-from rq.worker import SimpleWorker
-from tests import RQTestCase
-from tests.fixtures import (
-    div_by_zero,
-    erroneous_callback,
-    long_process,
-    save_exception,
-    save_result,
-    save_result_if_not_stopped,
-    say_hello,
-)
-
-
-class QueueCallbackTestCase(RQTestCase):
-    def test_enqueue_with_success_callback(self):
-        """Test enqueue* methods with on_success"""
-        queue = Queue(connection=self.testconn)
-
-        # Only functions and builtins are supported as callback
-        with self.assertRaises(ValueError):
-            queue.enqueue(say_hello, on_success=Job.fetch)
-
-        job = queue.enqueue(say_hello, on_success=print)
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.success_callback, print)
-
-        job = queue.enqueue_in(timedelta(seconds=10), say_hello, on_success=print)
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.success_callback, print)
-
-        # test string callbacks
-        job = queue.enqueue(say_hello, on_success=Callback("print"))
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.success_callback, print)
-
-        job = queue.enqueue_in(timedelta(seconds=10), say_hello, on_success=Callback("print"))
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.success_callback, print)
-
-    def test_enqueue_with_failure_callback(self):
-        """queue.enqueue* methods with on_failure is persisted correctly"""
-        queue = Queue(connection=self.testconn)
-
-        # Only functions and builtins are supported as callback
-        with self.assertRaises(ValueError):
-            queue.enqueue(say_hello, on_failure=Job.fetch)
-
-        job = queue.enqueue(say_hello, on_failure=print)
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.failure_callback, print)
-
-        job = queue.enqueue_in(timedelta(seconds=10), say_hello, on_failure=print)
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.failure_callback, print)
-
-        # test string callbacks
-        job = queue.enqueue(say_hello, on_failure=Callback("print"))
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.failure_callback, print)
-
-        job = queue.enqueue_in(timedelta(seconds=10), say_hello, on_failure=Callback("print"))
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.failure_callback, print)
-
-    def test_enqueue_with_stopped_callback(self):
-        """queue.enqueue* methods with on_stopped is persisted correctly"""
-        queue = Queue(connection=self.testconn)
-
-        # Only functions and builtins are supported as callback
-        with self.assertRaises(ValueError):
-            queue.enqueue(say_hello, on_stopped=Job.fetch)
-
-        job = queue.enqueue(long_process, on_stopped=print)
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.stopped_callback, print)
-
-        job = queue.enqueue_in(timedelta(seconds=10), long_process, on_stopped=print)
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.stopped_callback, print)
-
-        # test string callbacks
-        job = queue.enqueue(long_process, on_stopped=Callback("print"))
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.stopped_callback, print)
-
-        job = queue.enqueue_in(timedelta(seconds=10), long_process, on_stopped=Callback("print"))
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.stopped_callback, print)
-
-
-class SyncJobCallback(RQTestCase):
-    def test_success_callback(self):
-        """Test success callback is executed only when job is successful"""
-        queue = Queue(is_async=False)
-
-        job = queue.enqueue(say_hello, on_success=save_result)
-        self.assertEqual(job.get_status(), JobStatus.FINISHED)
-        self.assertEqual(self.testconn.get('success_callback:%s' % job.id).decode(), job.result)
-
-        job = queue.enqueue(div_by_zero, on_success=save_result)
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-        self.assertFalse(self.testconn.exists('success_callback:%s' % job.id))
-
-        # test string callbacks
-        job = queue.enqueue(say_hello, on_success=Callback("tests.fixtures.save_result"))
-        self.assertEqual(job.get_status(), JobStatus.FINISHED)
-        self.assertEqual(self.testconn.get('success_callback:%s' % job.id).decode(), job.result)
-
-        job = queue.enqueue(div_by_zero, on_success=Callback("tests.fixtures.save_result"))
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-        self.assertFalse(self.testconn.exists('success_callback:%s' % job.id))
-
-    def test_failure_callback(self):
-        """queue.enqueue* methods with on_failure is persisted correctly"""
-        queue = Queue(is_async=False)
-
-        job = queue.enqueue(div_by_zero, on_failure=save_exception)
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-        self.assertIn('div_by_zero', self.testconn.get('failure_callback:%s' % job.id).decode())
-
-        job = queue.enqueue(div_by_zero, on_success=save_result)
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-        self.assertFalse(self.testconn.exists('failure_callback:%s' % job.id))
-
-        # test string callbacks
-        job = queue.enqueue(div_by_zero, on_failure=Callback("tests.fixtures.save_exception"))
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-        self.assertIn('div_by_zero', self.testconn.get('failure_callback:%s' % job.id).decode())
-
-        job = queue.enqueue(div_by_zero, on_success=Callback("tests.fixtures.save_result"))
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-        self.assertFalse(self.testconn.exists('failure_callback:%s' % job.id))
-
-    def test_stopped_callback(self):
-        """queue.enqueue* methods with on_stopped is persisted correctly"""
-        connection = self.testconn
-        queue = Queue('foo', connection=connection, serializer=JSONSerializer)
-        worker = SimpleWorker('foo', connection=connection, serializer=JSONSerializer)
-
-        job = queue.enqueue(long_process, on_stopped=save_result_if_not_stopped)
-        job.execute_stopped_callback(
-            worker.death_penalty_class
-        )  # Calling execute_stopped_callback directly for coverage
-        self.assertTrue(self.testconn.exists('stopped_callback:%s' % job.id))
-
-        # test string callbacks
-        job = queue.enqueue(long_process, on_stopped=Callback("tests.fixtures.save_result_if_not_stopped"))
-        job.execute_stopped_callback(
-            worker.death_penalty_class
-        )  # Calling execute_stopped_callback directly for coverage
-        self.assertTrue(self.testconn.exists('stopped_callback:%s' % job.id))
-
-
-class WorkerCallbackTestCase(RQTestCase):
-    def test_success_callback(self):
-        """Test success callback is executed only when job is successful"""
-        queue = Queue(connection=self.testconn)
-        worker = SimpleWorker([queue])
-
-        # Callback is executed when job is successfully executed
-        job = queue.enqueue(say_hello, on_success=save_result)
-        worker.work(burst=True)
-        self.assertEqual(job.get_status(), JobStatus.FINISHED)
-        self.assertEqual(self.testconn.get('success_callback:%s' % job.id).decode(), job.return_value())
-
-        job = queue.enqueue(div_by_zero, on_success=save_result)
-        worker.work(burst=True)
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-        self.assertFalse(self.testconn.exists('success_callback:%s' % job.id))
-
-        # test string callbacks
-        job = queue.enqueue(say_hello, on_success=Callback("tests.fixtures.save_result"))
-        worker.work(burst=True)
-        self.assertEqual(job.get_status(), JobStatus.FINISHED)
-        self.assertEqual(self.testconn.get('success_callback:%s' % job.id).decode(), job.return_value())
-
-        job = queue.enqueue(div_by_zero, on_success=Callback("tests.fixtures.save_result"))
-        worker.work(burst=True)
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-        self.assertFalse(self.testconn.exists('success_callback:%s' % job.id))
-
-    def test_erroneous_success_callback(self):
-        """Test exception handling when executing success callback"""
-        queue = Queue(connection=self.testconn)
-        worker = Worker([queue])
-
-        # If success_callback raises an error, job will is considered as failed
-        job = queue.enqueue(say_hello, on_success=erroneous_callback)
-        worker.work(burst=True)
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-
-        # test string callbacks
-        job = queue.enqueue(say_hello, on_success=Callback("tests.fixtures.erroneous_callback"))
-        worker.work(burst=True)
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-
-    def test_failure_callback(self):
-        """Test failure callback is executed only when job a fails"""
-        queue = Queue(connection=self.testconn)
-        worker = SimpleWorker([queue])
-
-        # Callback is executed when job is successfully executed
-        job = queue.enqueue(div_by_zero, on_failure=save_exception)
-        worker.work(burst=True)
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-        job.refresh()
-        print(job.exc_info)
-        self.assertIn('div_by_zero', self.testconn.get('failure_callback:%s' % job.id).decode())
-
-        job = queue.enqueue(div_by_zero, on_success=save_result)
-        worker.work(burst=True)
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-        self.assertFalse(self.testconn.exists('failure_callback:%s' % job.id))
-
-        # test string callbacks
-        job = queue.enqueue(div_by_zero, on_failure=Callback("tests.fixtures.save_exception"))
-        worker.work(burst=True)
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-        job.refresh()
-        print(job.exc_info)
-        self.assertIn('div_by_zero', self.testconn.get('failure_callback:%s' % job.id).decode())
-
-        job = queue.enqueue(div_by_zero, on_success=Callback("tests.fixtures.save_result"))
-        worker.work(burst=True)
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-        self.assertFalse(self.testconn.exists('failure_callback:%s' % job.id))
-
-        # TODO: add test case for error while executing failure callback
-
-
-class JobCallbackTestCase(RQTestCase):
-    def test_job_creation_with_success_callback(self):
-        """Ensure callbacks are created and persisted properly"""
-        job = Job.create(say_hello)
-        self.assertIsNone(job._success_callback_name)
-        # _success_callback starts with UNEVALUATED
-        self.assertEqual(job._success_callback, UNEVALUATED)
-        self.assertEqual(job.success_callback, None)
-        # _success_callback becomes `None` after `job.success_callback` is called if there's no success callback
-        self.assertEqual(job._success_callback, None)
-
-        # job.success_callback is assigned properly
-        job = Job.create(say_hello, on_success=print)
-        self.assertIsNotNone(job._success_callback_name)
-        self.assertEqual(job.success_callback, print)
-        job.save()
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.success_callback, print)
-
-        # test string callbacks
-        job = Job.create(say_hello, on_success=Callback("print"))
-        self.assertIsNotNone(job._success_callback_name)
-        self.assertEqual(job.success_callback, print)
-        job.save()
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.success_callback, print)
-
-    def test_job_creation_with_failure_callback(self):
-        """Ensure failure callbacks are persisted properly"""
-        job = Job.create(say_hello)
-        self.assertIsNone(job._failure_callback_name)
-        # _failure_callback starts with UNEVALUATED
-        self.assertEqual(job._failure_callback, UNEVALUATED)
-        self.assertEqual(job.failure_callback, None)
-        # _failure_callback becomes `None` after `job.failure_callback` is called if there's no failure callback
-        self.assertEqual(job._failure_callback, None)
-
-        # job.failure_callback is assigned properly
-        job = Job.create(say_hello, on_failure=print)
-        self.assertIsNotNone(job._failure_callback_name)
-        self.assertEqual(job.failure_callback, print)
-        job.save()
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.failure_callback, print)
-
-        # test string callbacks
-        job = Job.create(say_hello, on_failure=Callback("print"))
-        self.assertIsNotNone(job._failure_callback_name)
-        self.assertEqual(job.failure_callback, print)
-        job.save()
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.failure_callback, print)
-
-    def test_job_creation_with_stopped_callback(self):
-        """Ensure stopped callbacks are persisted properly"""
-        job = Job.create(say_hello)
-        self.assertIsNone(job._stopped_callback_name)
-        # _failure_callback starts with UNEVALUATED
-        self.assertEqual(job._stopped_callback, UNEVALUATED)
-        self.assertEqual(job.stopped_callback, None)
-        # _stopped_callback becomes `None` after `job.stopped_callback` is called if there's no stopped callback
-        self.assertEqual(job._stopped_callback, None)
-
-        # job.failure_callback is assigned properly
-        job = Job.create(say_hello, on_stopped=print)
-        self.assertIsNotNone(job._stopped_callback_name)
-        self.assertEqual(job.stopped_callback, print)
-        job.save()
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.stopped_callback, print)
-
-        # test string callbacks
-        job = Job.create(say_hello, on_stopped=Callback("print"))
-        self.assertIsNotNone(job._stopped_callback_name)
-        self.assertEqual(job.stopped_callback, print)
-        job.save()
-
-        job = Job.fetch(id=job.id, connection=self.testconn)
-        self.assertEqual(job.stopped_callback, print)
diff --git a/tests/test_cli.py b/tests/test_cli.py
deleted file mode 100644
index 874eace..0000000
--- a/tests/test_cli.py
+++ /dev/null
@@ -1,846 +0,0 @@
-import json
-import os
-from datetime import datetime, timedelta, timezone
-from time import sleep
-from uuid import uuid4
-
-import pytest
-from click.testing import CliRunner
-from redis import Redis
-
-from rq import Queue
-from rq.cli import main
-from rq.cli.helpers import CliConfig, parse_function_arg, parse_schedule, read_config_file
-from rq.job import Job, JobStatus
-from rq.registry import FailedJobRegistry, ScheduledJobRegistry
-from rq.scheduler import RQScheduler
-from rq.serializers import JSONSerializer
-from rq.timeouts import UnixSignalDeathPenalty
-from rq.worker import Worker, WorkerStatus
-from tests import RQTestCase
-from tests.fixtures import div_by_zero, say_hello
-
-
-class CLITestCase(RQTestCase):
-    def setUp(self):
-        super().setUp()
-        db_num = self.testconn.connection_pool.connection_kwargs['db']
-        self.redis_url = 'redis://127.0.0.1:6379/%d' % db_num
-        self.connection = Redis.from_url(self.redis_url)
-
-    def assert_normal_execution(self, result):
-        if result.exit_code == 0:
-            return True
-        else:
-            print("Non normal execution")
-            print("Exit Code: {}".format(result.exit_code))
-            print("Output: {}".format(result.output))
-            print("Exception: {}".format(result.exception))
-            self.assertEqual(result.exit_code, 0)
-
-
-class TestRQCli(CLITestCase):
-    @pytest.fixture(autouse=True)
-    def set_tmpdir(self, tmpdir):
-        self.tmpdir = tmpdir
-
-    def assert_normal_execution(self, result):
-        if result.exit_code == 0:
-            return True
-        else:
-            print("Non normal execution")
-            print("Exit Code: {}".format(result.exit_code))
-            print("Output: {}".format(result.output))
-            print("Exception: {}".format(result.exception))
-            self.assertEqual(result.exit_code, 0)
-
-    """Test rq_cli script"""
-
-    def setUp(self):
-        super().setUp()
-        job = Job.create(func=div_by_zero, args=(1, 2, 3))
-        job.origin = 'fake'
-        job.save()
-
-    def test_config_file(self):
-        settings = read_config_file('tests.config_files.dummy')
-        self.assertIn('REDIS_HOST', settings)
-        self.assertEqual(settings['REDIS_HOST'], 'testhost.example.com')
-
-    def test_config_file_logging(self):
-        runner = CliRunner()
-        result = runner.invoke(main, ['worker', '-u', self.redis_url, '-b', '-c', 'tests.config_files.dummy_logging'])
-        self.assert_normal_execution(result)
-
-    def test_config_file_option(self):
-        """"""
-        cli_config = CliConfig(config='tests.config_files.dummy')
-        self.assertEqual(
-            cli_config.connection.connection_pool.connection_kwargs['host'],
-            'testhost.example.com',
-        )
-        runner = CliRunner()
-        result = runner.invoke(main, ['info', '--config', cli_config.config])
-        self.assertEqual(result.exit_code, 1)
-
-    def test_config_file_default_options(self):
-        """"""
-        cli_config = CliConfig(config='tests.config_files.dummy')
-
-        self.assertEqual(
-            cli_config.connection.connection_pool.connection_kwargs['host'],
-            'testhost.example.com',
-        )
-        self.assertEqual(cli_config.connection.connection_pool.connection_kwargs['port'], 6379)
-        self.assertEqual(cli_config.connection.connection_pool.connection_kwargs['db'], 0)
-        self.assertEqual(cli_config.connection.connection_pool.connection_kwargs['password'], None)
-
-    def test_config_file_default_options_override(self):
-        """"""
-        cli_config = CliConfig(config='tests.config_files.dummy_override')
-
-        self.assertEqual(
-            cli_config.connection.connection_pool.connection_kwargs['host'],
-            'testhost.example.com',
-        )
-        self.assertEqual(cli_config.connection.connection_pool.connection_kwargs['port'], 6378)
-        self.assertEqual(cli_config.connection.connection_pool.connection_kwargs['db'], 2)
-        self.assertEqual(cli_config.connection.connection_pool.connection_kwargs['password'], '123')
-
-    def test_config_env_vars(self):
-        os.environ['REDIS_HOST'] = "testhost.example.com"
-
-        cli_config = CliConfig()
-
-        self.assertEqual(
-            cli_config.connection.connection_pool.connection_kwargs['host'],
-            'testhost.example.com',
-        )
-
-    def test_death_penalty_class(self):
-        cli_config = CliConfig()
-
-        self.assertEqual(UnixSignalDeathPenalty, cli_config.death_penalty_class)
-
-        cli_config = CliConfig(death_penalty_class='rq.job.Job')
-        self.assertEqual(Job, cli_config.death_penalty_class)
-
-        with self.assertRaises(ValueError):
-            CliConfig(death_penalty_class='rq.abcd')
-
-    def test_empty_nothing(self):
-        """rq empty -u <url>"""
-        runner = CliRunner()
-        result = runner.invoke(main, ['empty', '-u', self.redis_url])
-        self.assert_normal_execution(result)
-        self.assertEqual(result.output.strip(), 'Nothing to do')
-
-    def test_requeue(self):
-        """rq requeue -u <url> --all"""
-        connection = Redis.from_url(self.redis_url)
-        queue = Queue('requeue', connection=connection)
-        registry = queue.failed_job_registry
-
-        runner = CliRunner()
-
-        job = queue.enqueue(div_by_zero)
-        job2 = queue.enqueue(div_by_zero)
-        job3 = queue.enqueue(div_by_zero)
-
-        worker = Worker([queue])
-        worker.work(burst=True)
-
-        self.assertIn(job, registry)
-        self.assertIn(job2, registry)
-        self.assertIn(job3, registry)
-
-        result = runner.invoke(main, ['requeue', '-u', self.redis_url, '--queue', 'requeue', job.id])
-        self.assert_normal_execution(result)
-
-        # Only the first specified job is requeued
-        self.assertNotIn(job, registry)
-        self.assertIn(job2, registry)
-        self.assertIn(job3, registry)
-
-        result = runner.invoke(main, ['requeue', '-u', self.redis_url, '--queue', 'requeue', '--all'])
-        self.assert_normal_execution(result)
-        # With --all flag, all failed jobs are requeued
-        self.assertNotIn(job2, registry)
-        self.assertNotIn(job3, registry)
-
-    def test_requeue_with_serializer(self):
-        """rq requeue -u <url> -S <serializer> --all"""
-        connection = Redis.from_url(self.redis_url)
-        queue = Queue('requeue', connection=connection, serializer=JSONSerializer)
-        registry = queue.failed_job_registry
-
-        runner = CliRunner()
-
-        job = queue.enqueue(div_by_zero)
-        job2 = queue.enqueue(div_by_zero)
-        job3 = queue.enqueue(div_by_zero)
-
-        worker = Worker([queue], serializer=JSONSerializer)
-        worker.work(burst=True)
-
-        self.assertIn(job, registry)
-        self.assertIn(job2, registry)
-        self.assertIn(job3, registry)
-
-        result = runner.invoke(
-            main, ['requeue', '-u', self.redis_url, '--queue', 'requeue', '-S', 'rq.serializers.JSONSerializer', job.id]
-        )
-        self.assert_normal_execution(result)
-
-        # Only the first specified job is requeued
-        self.assertNotIn(job, registry)
-        self.assertIn(job2, registry)
-        self.assertIn(job3, registry)
-
-        result = runner.invoke(
-            main,
-            ['requeue', '-u', self.redis_url, '--queue', 'requeue', '-S', 'rq.serializers.JSONSerializer', '--all'],
-        )
-        self.assert_normal_execution(result)
-        # With --all flag, all failed jobs are requeued
-        self.assertNotIn(job2, registry)
-        self.assertNotIn(job3, registry)
-
-    def test_info(self):
-        """rq info -u <url>"""
-        runner = CliRunner()
-        result = runner.invoke(main, ['info', '-u', self.redis_url])
-        self.assert_normal_execution(result)
-        self.assertIn('0 queues, 0 jobs total', result.output)
-
-        queue = Queue(connection=self.connection)
-        queue.enqueue(say_hello)
-
-        result = runner.invoke(main, ['info', '-u', self.redis_url])
-        self.assert_normal_execution(result)
-        self.assertIn('1 queues, 1 jobs total', result.output)
-
-    def test_info_only_queues(self):
-        """rq info -u <url> --only-queues (-Q)"""
-        runner = CliRunner()
-        result = runner.invoke(main, ['info', '-u', self.redis_url, '--only-queues'])
-        self.assert_normal_execution(result)
-        self.assertIn('0 queues, 0 jobs total', result.output)
-
-        queue = Queue(connection=self.connection)
-        queue.enqueue(say_hello)
-
-        result = runner.invoke(main, ['info', '-u', self.redis_url])
-        self.assert_normal_execution(result)
-        self.assertIn('1 queues, 1 jobs total', result.output)
-
-    def test_info_only_workers(self):
-        """rq info -u <url> --only-workers (-W)"""
-        runner = CliRunner()
-        result = runner.invoke(main, ['info', '-u', self.redis_url, '--only-workers'])
-        self.assert_normal_execution(result)
-        self.assertIn('0 workers, 0 queue', result.output)
-
-        result = runner.invoke(main, ['info', '--by-queue', '-u', self.redis_url, '--only-workers'])
-        self.assert_normal_execution(result)
-        self.assertIn('0 workers, 0 queue', result.output)
-
-        worker = Worker(['default'], connection=self.connection)
-        worker.register_birth()
-        result = runner.invoke(main, ['info', '-u', self.redis_url, '--only-workers'])
-        self.assert_normal_execution(result)
-        self.assertIn('1 workers, 0 queues', result.output)
-        worker.register_death()
-
-        queue = Queue(connection=self.connection)
-        queue.enqueue(say_hello)
-        result = runner.invoke(main, ['info', '-u', self.redis_url, '--only-workers'])
-        self.assert_normal_execution(result)
-        self.assertIn('0 workers, 1 queues', result.output)
-
-        foo_queue = Queue(name='foo', connection=self.connection)
-        foo_queue.enqueue(say_hello)
-
-        bar_queue = Queue(name='bar', connection=self.connection)
-        bar_queue.enqueue(say_hello)
-
-        worker_1 = Worker([foo_queue, bar_queue], connection=self.connection)
-        worker_1.register_birth()
-
-        worker_2 = Worker([foo_queue, bar_queue], connection=self.connection)
-        worker_2.register_birth()
-        worker_2.set_state(WorkerStatus.BUSY)
-
-        result = runner.invoke(main, ['info', 'foo', 'bar', '-u', self.redis_url, '--only-workers'])
-
-        self.assert_normal_execution(result)
-        self.assertIn('2 workers, 2 queues', result.output)
-
-        result = runner.invoke(main, ['info', 'foo', 'bar', '--by-queue', '-u', self.redis_url, '--only-workers'])
-
-        self.assert_normal_execution(result)
-        # Ensure both queues' workers are shown
-        self.assertIn('foo:', result.output)
-        self.assertIn('bar:', result.output)
-        self.assertIn('2 workers, 2 queues', result.output)
-
-    def test_worker(self):
-        """rq worker -u <url> -b"""
-        runner = CliRunner()
-        result = runner.invoke(main, ['worker', '-u', self.redis_url, '-b'])
-        self.assert_normal_execution(result)
-
-    def test_worker_pid(self):
-        """rq worker -u <url> /tmp/.."""
-        pid = self.tmpdir.join('rq.pid')
-        runner = CliRunner()
-        result = runner.invoke(main, ['worker', '-u', self.redis_url, '-b', '--pid', str(pid)])
-        self.assertTrue(len(pid.read()) > 0)
-        self.assert_normal_execution(result)
-
-    def test_worker_with_scheduler(self):
-        """rq worker -u <url> --with-scheduler"""
-        queue = Queue(connection=self.connection)
-        queue.enqueue_at(datetime(2019, 1, 1, tzinfo=timezone.utc), say_hello)
-        registry = ScheduledJobRegistry(queue=queue)
-
-        runner = CliRunner()
-        result = runner.invoke(main, ['worker', '-u', self.redis_url, '-b'])
-        self.assert_normal_execution(result)
-        self.assertEqual(len(registry), 1)  # 1 job still scheduled
-
-        result = runner.invoke(main, ['worker', '-u', self.redis_url, '-b', '--with-scheduler'])
-        self.assert_normal_execution(result)
-        self.assertEqual(len(registry), 0)  # Job has been enqueued
-
-    def test_worker_logging_options(self):
-        """--quiet and --verbose logging options are supported"""
-        runner = CliRunner()
-        args = ['worker', '-u', self.redis_url, '-b']
-        result = runner.invoke(main, args + ['--verbose'])
-        self.assert_normal_execution(result)
-        result = runner.invoke(main, args + ['--quiet'])
-        self.assert_normal_execution(result)
-
-        # --quiet and --verbose are mutually exclusive
-        result = runner.invoke(main, args + ['--quiet', '--verbose'])
-        self.assertNotEqual(result.exit_code, 0)
-
-    def test_worker_dequeue_strategy(self):
-        """--quiet and --verbose logging options are supported"""
-        runner = CliRunner()
-        args = ['worker', '-u', self.redis_url, '-b', '--dequeue-strategy', 'random']
-        result = runner.invoke(main, args)
-        self.assert_normal_execution(result)
-
-        args = ['worker', '-u', self.redis_url, '-b', '--dequeue-strategy', 'round_robin']
-        result = runner.invoke(main, args)
-        self.assert_normal_execution(result)
-
-        args = ['worker', '-u', self.redis_url, '-b', '--dequeue-strategy', 'wrong']
-        result = runner.invoke(main, args)
-        self.assertEqual(result.exit_code, 1)
-
-    def test_exception_handlers(self):
-        """rq worker -u <url> -b --exception-handler <handler>"""
-        connection = Redis.from_url(self.redis_url)
-        q = Queue('default', connection=connection)
-        runner = CliRunner()
-
-        # If exception handler is not given, no custom exception handler is run
-        job = q.enqueue(div_by_zero)
-        runner.invoke(main, ['worker', '-u', self.redis_url, '-b'])
-        registry = FailedJobRegistry(queue=q)
-        self.assertTrue(job in registry)
-
-        # If disable-default-exception-handler is given, job is not moved to FailedJobRegistry
-        job = q.enqueue(div_by_zero)
-        runner.invoke(main, ['worker', '-u', self.redis_url, '-b', '--disable-default-exception-handler'])
-        registry = FailedJobRegistry(queue=q)
-        self.assertFalse(job in registry)
-
-        # Both the default and the custom exception handlers are run
-        job = q.enqueue(div_by_zero)
-        runner.invoke(main, ['worker', '-u', self.redis_url, '-b', '--exception-handler', 'tests.fixtures.add_meta'])
-        registry = FailedJobRegistry(queue=q)
-        self.assertTrue(job in registry)
-        job.refresh()
-        self.assertEqual(job.meta, {'foo': 1})
-
-        # Only custom exception handler is run
-        job = q.enqueue(div_by_zero)
-        runner.invoke(
-            main,
-            [
-                'worker',
-                '-u',
-                self.redis_url,
-                '-b',
-                '--exception-handler',
-                'tests.fixtures.add_meta',
-                '--disable-default-exception-handler',
-            ],
-        )
-        registry = FailedJobRegistry(queue=q)
-        self.assertFalse(job in registry)
-        job.refresh()
-        self.assertEqual(job.meta, {'foo': 1})
-
-    def test_suspend_and_resume(self):
-        """rq suspend -u <url>
-        rq worker -u <url> -b
-        rq resume -u <url>
-        """
-        runner = CliRunner()
-        result = runner.invoke(main, ['suspend', '-u', self.redis_url])
-        self.assert_normal_execution(result)
-
-        result = runner.invoke(main, ['worker', '-u', self.redis_url, '-b'])
-        self.assertEqual(result.exit_code, 1)
-        self.assertEqual(result.output.strip(), 'RQ is currently suspended, to resume job execution run "rq resume"')
-
-        result = runner.invoke(main, ['resume', '-u', self.redis_url])
-        self.assert_normal_execution(result)
-
-    def test_suspend_with_ttl(self):
-        """rq suspend -u <url> --duration=2"""
-        runner = CliRunner()
-        result = runner.invoke(main, ['suspend', '-u', self.redis_url, '--duration', 1])
-        self.assert_normal_execution(result)
-
-    def test_suspend_with_invalid_ttl(self):
-        """rq suspend -u <url> --duration=0"""
-        runner = CliRunner()
-        result = runner.invoke(main, ['suspend', '-u', self.redis_url, '--duration', 0])
-
-        self.assertEqual(result.exit_code, 1)
-        self.assertIn("Duration must be an integer greater than 1", result.output)
-
-    def test_serializer(self):
-        """rq worker -u <url> --serializer <serializer>"""
-        connection = Redis.from_url(self.redis_url)
-        q = Queue('default', connection=connection, serializer=JSONSerializer)
-        runner = CliRunner()
-        job = q.enqueue(say_hello)
-        runner.invoke(main, ['worker', '-u', self.redis_url, '--serializer rq.serializers.JSONSerializer'])
-        self.assertIn(job.id, q.job_ids)
-
-    def test_cli_enqueue(self):
-        """rq enqueue -u <url> tests.fixtures.say_hello"""
-        queue = Queue(connection=self.connection)
-        self.assertTrue(queue.is_empty())
-
-        runner = CliRunner()
-        result = runner.invoke(main, ['enqueue', '-u', self.redis_url, 'tests.fixtures.say_hello'])
-        self.assert_normal_execution(result)
-
-        prefix = 'Enqueued tests.fixtures.say_hello() with job-id \''
-        suffix = '\'.\n'
-
-        self.assertTrue(result.output.startswith(prefix))
-        self.assertTrue(result.output.endswith(suffix))
-
-        job_id = result.output[len(prefix) : -len(suffix)]
-        queue_key = 'rq:queue:default'
-        self.assertEqual(self.connection.llen(queue_key), 1)
-        self.assertEqual(self.connection.lrange(queue_key, 0, -1)[0].decode('ascii'), job_id)
-
-        worker = Worker(queue)
-        worker.work(True)
-        self.assertEqual(Job(job_id).result, 'Hi there, Stranger!')
-
-    def test_cli_enqueue_with_serializer(self):
-        """rq enqueue -u <url> -S rq.serializers.JSONSerializer tests.fixtures.say_hello"""
-        queue = Queue(connection=self.connection, serializer=JSONSerializer)
-        self.assertTrue(queue.is_empty())
-
-        runner = CliRunner()
-        result = runner.invoke(
-            main, ['enqueue', '-u', self.redis_url, '-S', 'rq.serializers.JSONSerializer', 'tests.fixtures.say_hello']
-        )
-        self.assert_normal_execution(result)
-
-        prefix = 'Enqueued tests.fixtures.say_hello() with job-id \''
-        suffix = '\'.\n'
-
-        self.assertTrue(result.output.startswith(prefix))
-        self.assertTrue(result.output.endswith(suffix))
-
-        job_id = result.output[len(prefix) : -len(suffix)]
-        queue_key = 'rq:queue:default'
-        self.assertEqual(self.connection.llen(queue_key), 1)
-        self.assertEqual(self.connection.lrange(queue_key, 0, -1)[0].decode('ascii'), job_id)
-
-        worker = Worker(queue, serializer=JSONSerializer)
-        worker.work(True)
-        self.assertEqual(Job(job_id, serializer=JSONSerializer).result, 'Hi there, Stranger!')
-
-    def test_cli_enqueue_args(self):
-        """rq enqueue -u <url> tests.fixtures.echo hello ':[1, {"key": "value"}]' json:=["abc"] nojson=def"""
-        queue = Queue(connection=self.connection)
-        self.assertTrue(queue.is_empty())
-
-        runner = CliRunner()
-        result = runner.invoke(
-            main,
-            [
-                'enqueue',
-                '-u',
-                self.redis_url,
-                'tests.fixtures.echo',
-                'hello',
-                ':[1, {"key": "value"}]',
-                ':@tests/test.json',
-                '%1, 2',
-                'json:=[3.0, true]',
-                'nojson=abc',
-                'file=@tests/test.json',
-            ],
-        )
-        self.assert_normal_execution(result)
-
-        job_id = self.connection.lrange('rq:queue:default', 0, -1)[0].decode('ascii')
-
-        worker = Worker(queue)
-        worker.work(True)
-
-        args, kwargs = Job(job_id).result
-
-        self.assertEqual(args, ('hello', [1, {'key': 'value'}], {"test": True}, (1, 2)))
-        self.assertEqual(kwargs, {'json': [3.0, True], 'nojson': 'abc', 'file': '{\n    "test": true\n}\n'})
-
-    def test_cli_enqueue_schedule_in(self):
-        """rq enqueue -u <url> tests.fixtures.say_hello --schedule-in 1s"""
-        queue = Queue(connection=self.connection)
-        registry = ScheduledJobRegistry(queue=queue)
-        worker = Worker(queue)
-        scheduler = RQScheduler(queue, self.connection)
-
-        self.assertTrue(len(queue) == 0)
-        self.assertTrue(len(registry) == 0)
-
-        runner = CliRunner()
-        result = runner.invoke(
-            main, ['enqueue', '-u', self.redis_url, 'tests.fixtures.say_hello', '--schedule-in', '10s']
-        )
-        self.assert_normal_execution(result)
-
-        scheduler.acquire_locks()
-        scheduler.enqueue_scheduled_jobs()
-
-        self.assertTrue(len(queue) == 0)
-        self.assertTrue(len(registry) == 1)
-
-        self.assertFalse(worker.work(True))
-
-        sleep(11)
-
-        scheduler.enqueue_scheduled_jobs()
-
-        self.assertTrue(len(queue) == 1)
-        self.assertTrue(len(registry) == 0)
-
-        self.assertTrue(worker.work(True))
-
-    def test_cli_enqueue_schedule_at(self):
-        """
-        rq enqueue -u <url> tests.fixtures.say_hello --schedule-at 2021-01-01T00:00:00
-
-        rq enqueue -u <url> tests.fixtures.say_hello --schedule-at 2100-01-01T00:00:00
-        """
-        queue = Queue(connection=self.connection)
-        registry = ScheduledJobRegistry(queue=queue)
-        worker = Worker(queue)
-        scheduler = RQScheduler(queue, self.connection)
-
-        self.assertTrue(len(queue) == 0)
-        self.assertTrue(len(registry) == 0)
-
-        runner = CliRunner()
-        result = runner.invoke(
-            main, ['enqueue', '-u', self.redis_url, 'tests.fixtures.say_hello', '--schedule-at', '2021-01-01T00:00:00']
-        )
-        self.assert_normal_execution(result)
-
-        scheduler.acquire_locks()
-
-        self.assertTrue(len(queue) == 0)
-        self.assertTrue(len(registry) == 1)
-
-        scheduler.enqueue_scheduled_jobs()
-
-        self.assertTrue(len(queue) == 1)
-        self.assertTrue(len(registry) == 0)
-
-        self.assertTrue(worker.work(True))
-
-        self.assertTrue(len(queue) == 0)
-        self.assertTrue(len(registry) == 0)
-
-        result = runner.invoke(
-            main, ['enqueue', '-u', self.redis_url, 'tests.fixtures.say_hello', '--schedule-at', '2100-01-01T00:00:00']
-        )
-        self.assert_normal_execution(result)
-
-        self.assertTrue(len(queue) == 0)
-        self.assertTrue(len(registry) == 1)
-
-        scheduler.enqueue_scheduled_jobs()
-
-        self.assertTrue(len(queue) == 0)
-        self.assertTrue(len(registry) == 1)
-
-        self.assertFalse(worker.work(True))
-
-    def test_cli_enqueue_retry(self):
-        """rq enqueue -u <url> tests.fixtures.say_hello --retry-max 3 --retry-interval 10 --retry-interval 20
-        --retry-interval 40"""
-        queue = Queue(connection=self.connection)
-        self.assertTrue(queue.is_empty())
-
-        runner = CliRunner()
-        result = runner.invoke(
-            main,
-            [
-                'enqueue',
-                '-u',
-                self.redis_url,
-                'tests.fixtures.say_hello',
-                '--retry-max',
-                '3',
-                '--retry-interval',
-                '10',
-                '--retry-interval',
-                '20',
-                '--retry-interval',
-                '40',
-            ],
-        )
-        self.assert_normal_execution(result)
-
-        job = Job.fetch(
-            self.connection.lrange('rq:queue:default', 0, -1)[0].decode('ascii'), connection=self.connection
-        )
-
-        self.assertEqual(job.retries_left, 3)
-        self.assertEqual(job.retry_intervals, [10, 20, 40])
-
-    def test_cli_enqueue_errors(self):
-        """
-        rq enqueue -u <url> tests.fixtures.echo :invalid_json
-
-        rq enqueue -u <url> tests.fixtures.echo %invalid_eval_statement
-
-        rq enqueue -u <url> tests.fixtures.echo key=value key=value
-
-        rq enqueue -u <url> tests.fixtures.echo --schedule-in 1s --schedule-at 2000-01-01T00:00:00
-
-        rq enqueue -u <url> tests.fixtures.echo @not_existing_file
-        """
-        runner = CliRunner()
-
-        result = runner.invoke(main, ['enqueue', '-u', self.redis_url, 'tests.fixtures.echo', ':invalid_json'])
-        self.assertNotEqual(result.exit_code, 0)
-        self.assertIn('Unable to parse 1. non keyword argument as JSON.', result.output)
-
-        result = runner.invoke(
-            main, ['enqueue', '-u', self.redis_url, 'tests.fixtures.echo', '%invalid_eval_statement']
-        )
-        self.assertNotEqual(result.exit_code, 0)
-        self.assertIn('Unable to eval 1. non keyword argument as Python object.', result.output)
-
-        result = runner.invoke(main, ['enqueue', '-u', self.redis_url, 'tests.fixtures.echo', 'key=value', 'key=value'])
-        self.assertNotEqual(result.exit_code, 0)
-        self.assertIn('You can\'t specify multiple values for the same keyword.', result.output)
-
-        result = runner.invoke(
-            main,
-            [
-                'enqueue',
-                '-u',
-                self.redis_url,
-                'tests.fixtures.echo',
-                '--schedule-in',
-                '1s',
-                '--schedule-at',
-                '2000-01-01T00:00:00',
-            ],
-        )
-        self.assertNotEqual(result.exit_code, 0)
-        self.assertIn('You can\'t specify both --schedule-in and --schedule-at', result.output)
-
-        result = runner.invoke(main, ['enqueue', '-u', self.redis_url, 'tests.fixtures.echo', '@not_existing_file'])
-        self.assertNotEqual(result.exit_code, 0)
-        self.assertIn('Not found', result.output)
-
-    def test_parse_schedule(self):
-        """executes the rq.cli.helpers.parse_schedule function"""
-        self.assertEqual(parse_schedule(None, '2000-01-23T23:45:01'), datetime(2000, 1, 23, 23, 45, 1))
-
-        start = datetime.now(timezone.utc) + timedelta(minutes=5)
-        middle = parse_schedule('5m', None)
-        end = datetime.now(timezone.utc) + timedelta(minutes=5)
-
-        self.assertGreater(middle, start)
-        self.assertLess(middle, end)
-
-    def test_parse_function_arg(self):
-        """executes the rq.cli.helpers.parse_function_arg function"""
-        self.assertEqual(parse_function_arg('abc', 0), (None, 'abc'))
-        self.assertEqual(parse_function_arg(':{"json": true}', 1), (None, {'json': True}))
-        self.assertEqual(parse_function_arg('%1, 2', 2), (None, (1, 2)))
-        self.assertEqual(parse_function_arg('key=value', 3), ('key', 'value'))
-        self.assertEqual(parse_function_arg('jsonkey:=["json", "value"]', 4), ('jsonkey', ['json', 'value']))
-        self.assertEqual(parse_function_arg('evalkey%=1.2', 5), ('evalkey', 1.2))
-        self.assertEqual(parse_function_arg(':@tests/test.json', 6), (None, {'test': True}))
-        self.assertEqual(parse_function_arg('@tests/test.json', 7), (None, '{\n    "test": true\n}\n'))
-
-    def test_cli_enqueue_doc_test(self):
-        """tests the examples of the documentation"""
-        runner = CliRunner()
-
-        id = str(uuid4())
-        result = runner.invoke(main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', 'abc'])
-        self.assert_normal_execution(result)
-        job = Job.fetch(id)
-        self.assertEqual((job.args, job.kwargs), (['abc'], {}))
-
-        id = str(uuid4())
-        result = runner.invoke(
-            main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', 'abc=def']
-        )
-        self.assert_normal_execution(result)
-        job = Job.fetch(id)
-        self.assertEqual((job.args, job.kwargs), ([], {'abc': 'def'}))
-
-        id = str(uuid4())
-        result = runner.invoke(
-            main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', ':{"json": "abc"}']
-        )
-        self.assert_normal_execution(result)
-        job = Job.fetch(id)
-        self.assertEqual((job.args, job.kwargs), ([{'json': 'abc'}], {}))
-
-        id = str(uuid4())
-        result = runner.invoke(
-            main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', 'key:={"json": "abc"}']
-        )
-        self.assert_normal_execution(result)
-        job = Job.fetch(id)
-        self.assertEqual((job.args, job.kwargs), ([], {'key': {'json': 'abc'}}))
-
-        id = str(uuid4())
-        result = runner.invoke(main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', '%1, 2'])
-        self.assert_normal_execution(result)
-        job = Job.fetch(id)
-        self.assertEqual((job.args, job.kwargs), ([(1, 2)], {}))
-
-        id = str(uuid4())
-        result = runner.invoke(main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', '%None'])
-        self.assert_normal_execution(result)
-        job = Job.fetch(id)
-        self.assertEqual((job.args, job.kwargs), ([None], {}))
-
-        id = str(uuid4())
-        result = runner.invoke(main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', '%True'])
-        self.assert_normal_execution(result)
-        job = Job.fetch(id)
-        self.assertEqual((job.args, job.kwargs), ([True], {}))
-
-        id = str(uuid4())
-        result = runner.invoke(
-            main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', 'key%=(1, 2)']
-        )
-        self.assert_normal_execution(result)
-        job = Job.fetch(id)
-        self.assertEqual((job.args, job.kwargs), ([], {'key': (1, 2)}))
-
-        id = str(uuid4())
-        result = runner.invoke(
-            main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', 'key%={"foo": True}']
-        )
-        self.assert_normal_execution(result)
-        job = Job.fetch(id)
-        self.assertEqual((job.args, job.kwargs), ([], {'key': {"foo": True}}))
-
-        id = str(uuid4())
-        result = runner.invoke(
-            main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', '@tests/test.json']
-        )
-        self.assert_normal_execution(result)
-        job = Job.fetch(id)
-        self.assertEqual((job.args, job.kwargs), ([open('tests/test.json', 'r').read()], {}))
-
-        id = str(uuid4())
-        result = runner.invoke(
-            main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', 'key=@tests/test.json']
-        )
-        self.assert_normal_execution(result)
-        job = Job.fetch(id)
-        self.assertEqual((job.args, job.kwargs), ([], {'key': open('tests/test.json', 'r').read()}))
-
-        id = str(uuid4())
-        result = runner.invoke(
-            main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', ':@tests/test.json']
-        )
-        self.assert_normal_execution(result)
-        job = Job.fetch(id)
-        self.assertEqual((job.args, job.kwargs), ([json.loads(open('tests/test.json', 'r').read())], {}))
-
-        id = str(uuid4())
-        result = runner.invoke(
-            main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', 'key:=@tests/test.json']
-        )
-        self.assert_normal_execution(result)
-        job = Job.fetch(id)
-        self.assertEqual((job.args, job.kwargs), ([], {'key': json.loads(open('tests/test.json', 'r').read())}))
-
-
-class WorkerPoolCLITestCase(CLITestCase):
-    def test_worker_pool_burst_and_num_workers(self):
-        """rq worker-pool -u <url> -b -n 3"""
-        runner = CliRunner()
-        result = runner.invoke(main, ['worker-pool', '-u', self.redis_url, '-b', '-n', '3'])
-        self.assert_normal_execution(result)
-
-    def test_serializer_and_queue_argument(self):
-        """rq worker-pool foo bar -u <url> -b"""
-        queue = Queue('foo', connection=self.connection, serializer=JSONSerializer)
-        job = queue.enqueue(say_hello, 'Hello')
-        queue = Queue('bar', connection=self.connection, serializer=JSONSerializer)
-        job_2 = queue.enqueue(say_hello, 'Hello')
-        runner = CliRunner()
-        runner.invoke(
-            main,
-            ['worker-pool', 'foo', 'bar', '-u', self.redis_url, '-b', '--serializer', 'rq.serializers.JSONSerializer'],
-        )
-        self.assertEqual(job.get_status(refresh=True), JobStatus.FINISHED)
-        self.assertEqual(job_2.get_status(refresh=True), JobStatus.FINISHED)
-
-    def test_worker_class_argument(self):
-        """rq worker-pool -u <url> -b --worker-class rq.Worker"""
-        runner = CliRunner()
-        result = runner.invoke(main, ['worker-pool', '-u', self.redis_url, '-b', '--worker-class', 'rq.Worker'])
-        self.assert_normal_execution(result)
-        result = runner.invoke(
-            main, ['worker-pool', '-u', self.redis_url, '-b', '--worker-class', 'rq.worker.SimpleWorker']
-        )
-        self.assert_normal_execution(result)
-
-        # This one fails because the worker class doesn't exist
-        result = runner.invoke(
-            main, ['worker-pool', '-u', self.redis_url, '-b', '--worker-class', 'rq.worker.NonExistantWorker']
-        )
-        self.assertNotEqual(result.exit_code, 0)
-
-    def test_job_class_argument(self):
-        """rq worker-pool -u <url> -b --job-class rq.job.Job"""
-        runner = CliRunner()
-        result = runner.invoke(main, ['worker-pool', '-u', self.redis_url, '-b', '--job-class', 'rq.job.Job'])
-        self.assert_normal_execution(result)
-
-        # This one fails because Job class doesn't exist
-        result = runner.invoke(
-            main, ['worker-pool', '-u', self.redis_url, '-b', '--job-class', 'rq.job.NonExistantJob']
-        )
-        self.assertNotEqual(result.exit_code, 0)
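For reference, the enqueue tests removed above exercise the `rq enqueue` argument conventions (`:json`, `%eval`, `key=value`, `key:=json`, `key%=eval`, `@file`, `:@file`). A minimal sketch of the equivalent programmatic call, assuming a local Redis instance and the `echo` fixture from this test suite:

from redis import Redis
from rq import Queue

from tests.fixtures import echo  # returns (args, kwargs); part of the removed test suite

queue = Queue(connection=Redis())
# `rq enqueue tests.fixtures.echo abc key:='{"json": "abc"}' key2%=1.2` parses its
# arguments into plain Python values before enqueueing, roughly equivalent to:
job = queue.enqueue(echo, 'abc', key={'json': 'abc'}, key2=1.2)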
diff --git a/tests/test_commands.py b/tests/test_commands.py
deleted file mode 100644
index 355b72a..0000000
--- a/tests/test_commands.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import time
-from multiprocessing import Process
-
-from redis import Redis
-
-from rq import Queue, Worker
-from rq.command import send_command, send_kill_horse_command, send_shutdown_command, send_stop_job_command
-from rq.exceptions import InvalidJobOperation, NoSuchJobError
-from rq.serializers import JSONSerializer
-from rq.worker import WorkerStatus
-from tests import RQTestCase
-from tests.fixtures import _send_kill_horse_command, _send_shutdown_command, long_running_job
-
-
-def start_work(queue_name, worker_name, connection_kwargs):
-    worker = Worker(queue_name, name=worker_name, connection=Redis(**connection_kwargs))
-    worker.work()
-
-
-def start_work_burst(queue_name, worker_name, connection_kwargs):
-    worker = Worker(queue_name, name=worker_name, connection=Redis(**connection_kwargs), serializer=JSONSerializer)
-    worker.work(burst=True)
-
-
-class TestCommands(RQTestCase):
-    def test_shutdown_command(self):
-        """Ensure that shutdown command works properly."""
-        connection = self.testconn
-        worker = Worker('foo', connection=connection)
-
-        p = Process(
-            target=_send_shutdown_command, args=(worker.name, connection.connection_pool.connection_kwargs.copy())
-        )
-        p.start()
-        worker.work()
-        p.join(1)
-
-    def test_kill_horse_command(self):
-        """Ensure that shutdown command works properly."""
-        connection = self.testconn
-        queue = Queue('foo', connection=connection)
-        job = queue.enqueue(long_running_job, 4)
-        worker = Worker('foo', connection=connection)
-
-        p = Process(
-            target=_send_kill_horse_command, args=(worker.name, connection.connection_pool.connection_kwargs.copy())
-        )
-        p.start()
-        worker.work(burst=True)
-        p.join(1)
-        job.refresh()
-        self.assertTrue(job.id in queue.failed_job_registry)
-
-        p = Process(target=start_work, args=('foo', worker.name, connection.connection_pool.connection_kwargs.copy()))
-        p.start()
-        p.join(2)
-
-        send_kill_horse_command(connection, worker.name)
-        worker.refresh()
-        # Since worker is not busy, command will be ignored
-        self.assertEqual(worker.get_state(), WorkerStatus.IDLE)
-        send_shutdown_command(connection, worker.name)
-
-    def test_stop_job_command(self):
-        """Ensure that stop_job command works properly."""
-
-        connection = self.testconn
-        queue = Queue('foo', connection=connection, serializer=JSONSerializer)
-        job = queue.enqueue(long_running_job, 3)
-        worker = Worker('foo', connection=connection, serializer=JSONSerializer)
-
-        # If job is not executing, an error is raised
-        with self.assertRaises(InvalidJobOperation):
-            send_stop_job_command(connection, job_id=job.id, serializer=JSONSerializer)
-
-        # An exception is raised if job ID is invalid
-        with self.assertRaises(NoSuchJobError):
-            send_stop_job_command(connection, job_id='1', serializer=JSONSerializer)
-
-        p = Process(
-            target=start_work_burst, args=('foo', worker.name, connection.connection_pool.connection_kwargs.copy())
-        )
-        p.start()
-        p.join(1)
-
-        time.sleep(0.1)
-
-        send_command(connection, worker.name, 'stop-job', job_id=1)
-        time.sleep(0.25)
-        # Worker still working due to job_id mismatch
-        worker.refresh()
-        self.assertEqual(worker.get_state(), WorkerStatus.BUSY)
-
-        send_stop_job_command(connection, job_id=job.id, serializer=JSONSerializer)
-        time.sleep(0.25)
-
-        # Job status is set appropriately
-        self.assertTrue(job.is_stopped)
-
-        # Worker has stopped working
-        worker.refresh()
-        self.assertEqual(worker.get_state(), WorkerStatus.IDLE)
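The removed command tests above revolve around the rq.command helpers. A minimal sketch of stopping a running job, assuming a reachable local Redis and a known job id:

from redis import Redis
from rq.command import send_stop_job_command

connection = Redis()
# Kills the work horse executing the job; raises InvalidJobOperation if the job
# is not currently executing and NoSuchJobError if the id is unknown.
send_stop_job_command(connection, job_id='some-job-id')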
diff --git a/tests/test_connection.py b/tests/test_connection.py
deleted file mode 100644
index 5ac76d6..0000000
--- a/tests/test_connection.py
+++ /dev/null
@@ -1,51 +0,0 @@
-from redis import ConnectionPool, Redis, SSLConnection, UnixDomainSocketConnection
-
-from rq import Connection, Queue
-from rq.connections import parse_connection
-from tests import RQTestCase, find_empty_redis_database
-from tests.fixtures import do_nothing
-
-
-def new_connection():
-    return find_empty_redis_database()
-
-
-class TestConnectionInheritance(RQTestCase):
-    def test_connection_detection(self):
-        """Automatic detection of the connection."""
-        q = Queue()
-        self.assertEqual(q.connection, self.testconn)
-
-    def test_connection_stacking(self):
-        """Connection stacking."""
-        conn1 = Redis(db=4)
-        conn2 = Redis(db=5)
-
-        with Connection(conn1):
-            q1 = Queue()
-            with Connection(conn2):
-                q2 = Queue()
-        self.assertNotEqual(q1.connection, q2.connection)
-
-    def test_connection_pass_thru(self):
-        """Connection passed through from queues to jobs."""
-        q1 = Queue(connection=self.testconn)
-        with Connection(new_connection()):
-            q2 = Queue()
-        job1 = q1.enqueue(do_nothing)
-        job2 = q2.enqueue(do_nothing)
-        self.assertEqual(q1.connection, job1.connection)
-        self.assertEqual(q2.connection, job2.connection)
-
-    def test_parse_connection(self):
-        """Test parsing the connection"""
-        conn_class, pool_class, pool_kwargs = parse_connection(Redis(ssl=True))
-        self.assertEqual(conn_class, Redis)
-        self.assertEqual(pool_class, SSLConnection)
-
-        path = '/tmp/redis.sock'
-        pool = ConnectionPool(connection_class=UnixDomainSocketConnection, path=path)
-        conn_class, pool_class, pool_kwargs = parse_connection(Redis(connection_pool=pool))
-        self.assertEqual(conn_class, Redis)
-        self.assertEqual(pool_class, UnixDomainSocketConnection)
-        self.assertEqual(pool_kwargs, {"path": path})
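As the removed connection tests show, parse_connection() splits an existing Redis client into the pieces RQ needs to recreate it. A minimal sketch, assuming a client backed by a Unix-socket connection pool:

from redis import ConnectionPool, Redis, UnixDomainSocketConnection
from rq.connections import parse_connection

pool = ConnectionPool(connection_class=UnixDomainSocketConnection, path='/tmp/redis.sock')
conn_class, pool_class, pool_kwargs = parse_connection(Redis(connection_pool=pool))
# conn_class -> Redis, pool_class -> UnixDomainSocketConnection, pool_kwargs -> {'path': '/tmp/redis.sock'}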
diff --git a/tests/test_decorator.py b/tests/test_decorator.py
deleted file mode 100644
index 69ddde1..0000000
--- a/tests/test_decorator.py
+++ /dev/null
@@ -1,279 +0,0 @@
-from unittest import mock
-
-from redis import Redis
-
-from rq.decorators import job
-from rq.job import Job, Retry
-from rq.queue import Queue
-from rq.worker import DEFAULT_RESULT_TTL
-from tests import RQTestCase
-from tests.fixtures import decorated_job
-
-
-class TestDecorator(RQTestCase):
-    def setUp(self):
-        super().setUp()
-
-    def test_decorator_preserves_functionality(self):
-        """Ensure that a decorated function's functionality is still preserved."""
-        self.assertEqual(decorated_job(1, 2), 3)
-
-    def test_decorator_adds_delay_attr(self):
-        """Ensure that decorator adds a delay attribute to function that returns
-        a Job instance when called.
-        """
-        self.assertTrue(hasattr(decorated_job, 'delay'))
-        result = decorated_job.delay(1, 2)
-        self.assertTrue(isinstance(result, Job))
-        # Ensure that job returns the right result when performed
-        self.assertEqual(result.perform(), 3)
-
-    def test_decorator_accepts_queue_name_as_argument(self):
-        """Ensure that passing in queue name to the decorator puts the job in
-        the right queue.
-        """
-
-        @job(queue='queue_name')
-        def hello():
-            return 'Hi'
-
-        result = hello.delay()
-        self.assertEqual(result.origin, 'queue_name')
-
-    def test_decorator_accepts_result_ttl_as_argument(self):
-        """Ensure that passing in result_ttl to the decorator sets the
-        result_ttl on the job
-        """
-        # Ensure default
-        result = decorated_job.delay(1, 2)
-        self.assertEqual(result.result_ttl, DEFAULT_RESULT_TTL)
-
-        @job('default', result_ttl=10)
-        def hello():
-            return 'Why hello'
-
-        result = hello.delay()
-        self.assertEqual(result.result_ttl, 10)
-
-    def test_decorator_accepts_ttl_as_argument(self):
-        """Ensure that passing in ttl to the decorator sets the ttl on the job"""
-        # Ensure default
-        result = decorated_job.delay(1, 2)
-        self.assertEqual(result.ttl, None)
-
-        @job('default', ttl=30)
-        def hello():
-            return 'Hello'
-
-        result = hello.delay()
-        self.assertEqual(result.ttl, 30)
-
-    def test_decorator_accepts_meta_as_argument(self):
-        """Ensure that passing in meta to the decorator sets the meta on the job"""
-        # Ensure default
-        result = decorated_job.delay(1, 2)
-        self.assertEqual(result.meta, {})
-
-        test_meta = {
-            'metaKey1': 1,
-            'metaKey2': 2,
-        }
-
-        @job('default', meta=test_meta)
-        def hello():
-            return 'Hello'
-
-        result = hello.delay()
-        self.assertEqual(result.meta, test_meta)
-
-    def test_decorator_accepts_result_depends_on_as_argument(self):
-        """Ensure that passing in depends_on to the decorator sets the
-        correct dependency on the job
-        """
-        # Ensure default
-        result = decorated_job.delay(1, 2)
-        self.assertEqual(result.dependency, None)
-        self.assertEqual(result._dependency_id, None)
-
-        @job(queue='queue_name')
-        def foo():
-            return 'Firstly'
-
-        foo_job = foo.delay()
-
-        @job(queue='queue_name', depends_on=foo_job)
-        def bar():
-            return 'Secondly'
-
-        bar_job = bar.delay()
-
-        self.assertEqual(foo_job._dependency_ids, [])
-        self.assertIsNone(foo_job._dependency_id)
-
-        self.assertEqual(foo_job.dependency, None)
-        self.assertEqual(bar_job.dependency, foo_job)
-        self.assertEqual(bar_job.dependency.id, foo_job.id)
-
-    def test_decorator_delay_accepts_depends_on_as_argument(self):
-        """Ensure that passing in depends_on to the delay method of
-        a decorated function overrides the depends_on set in the
-        constructor.
-        """
-        # Ensure default
-        result = decorated_job.delay(1, 2)
-        self.assertEqual(result.dependency, None)
-        self.assertEqual(result._dependency_id, None)
-
-        @job(queue='queue_name')
-        def foo():
-            return 'Firstly'
-
-        @job(queue='queue_name')
-        def bar():
-            return 'Firstly'
-
-        foo_job = foo.delay()
-        bar_job = bar.delay()
-
-        @job(queue='queue_name', depends_on=foo_job)
-        def baz():
-            return 'Secondly'
-
-        baz_job = bar.delay(depends_on=bar_job)
-
-        self.assertIsNone(foo_job._dependency_id)
-        self.assertIsNone(bar_job._dependency_id)
-
-        self.assertEqual(foo_job._dependency_ids, [])
-        self.assertEqual(bar_job._dependency_ids, [])
-        self.assertEqual(baz_job._dependency_id, bar_job.id)
-        self.assertEqual(baz_job.dependency, bar_job)
-        self.assertEqual(baz_job.dependency.id, bar_job.id)
-
-    def test_decorator_accepts_on_failure_function_as_argument(self):
-        """Ensure that passing in on_failure function to the decorator sets the
-        correct on_failure function on the job.
-        """
-
-        # Only functions and builtins are supported as callback
-        @job('default', on_failure=Job.fetch)
-        def foo():
-            return 'Foo'
-
-        with self.assertRaises(ValueError):
-            result = foo.delay()
-
-        @job('default', on_failure=print)
-        def hello():
-            return 'Hello'
-
-        result = hello.delay()
-        result_job = Job.fetch(id=result.id, connection=self.testconn)
-        self.assertEqual(result_job.failure_callback, print)
-
-    def test_decorator_accepts_on_success_function_as_argument(self):
-        """Ensure that passing in on_failure function to the decorator sets the
-        correct on_success function on the job.
-        """
-
-        # Only functions and builtins are supported as callback
-        @job('default', on_success=Job.fetch)
-        def foo():
-            return 'Foo'
-
-        with self.assertRaises(ValueError):
-            result = foo.delay()
-
-        @job('default', on_success=print)
-        def hello():
-            return 'Hello'
-
-        result = hello.delay()
-        result_job = Job.fetch(id=result.id, connection=self.testconn)
-        self.assertEqual(result_job.success_callback, print)
-
-    @mock.patch('rq.queue.resolve_connection')
-    def test_decorator_connection_laziness(self, resolve_connection):
-        """Ensure that job decorator resolve connection in `lazy` way"""
-
-        resolve_connection.return_value = Redis()
-
-        @job(queue='queue_name')
-        def foo():
-            return 'do something'
-
-        self.assertEqual(resolve_connection.call_count, 0)
-
-        foo()
-
-        self.assertEqual(resolve_connection.call_count, 0)
-
-        foo.delay()
-
-        self.assertEqual(resolve_connection.call_count, 1)
-
-    def test_decorator_custom_queue_class(self):
-        """Ensure that a custom queue class can be passed to the job decorator"""
-
-        class CustomQueue(Queue):
-            pass
-
-        CustomQueue.enqueue_call = mock.MagicMock(spec=lambda *args, **kwargs: None, name='enqueue_call')
-
-        custom_decorator = job(queue='default', queue_class=CustomQueue)
-        self.assertIs(custom_decorator.queue_class, CustomQueue)
-
-        @custom_decorator
-        def custom_queue_class_job(x, y):
-            return x + y
-
-        custom_queue_class_job.delay(1, 2)
-        self.assertEqual(CustomQueue.enqueue_call.call_count, 1)
-
-    def test_decorate_custom_queue(self):
-        """Ensure that a custom queue instance can be passed to the job decorator"""
-
-        class CustomQueue(Queue):
-            pass
-
-        CustomQueue.enqueue_call = mock.MagicMock(spec=lambda *args, **kwargs: None, name='enqueue_call')
-        queue = CustomQueue()
-
-        @job(queue=queue)
-        def custom_queue_job(x, y):
-            return x + y
-
-        custom_queue_job.delay(1, 2)
-        self.assertEqual(queue.enqueue_call.call_count, 1)
-
-    def test_decorator_custom_failure_ttl(self):
-        """Ensure that passing in failure_ttl to the decorator sets the
-        failure_ttl on the job
-        """
-        # Ensure default
-        result = decorated_job.delay(1, 2)
-        self.assertEqual(result.failure_ttl, None)
-
-        @job('default', failure_ttl=10)
-        def hello():
-            return 'Why hello'
-
-        result = hello.delay()
-        self.assertEqual(result.failure_ttl, 10)
-
-    def test_decorator_custom_retry(self):
-        """Ensure that passing in retry to the decorator sets the
-        retry on the job
-        """
-        # Ensure default
-        result = decorated_job.delay(1, 2)
-        self.assertEqual(result.retries_left, None)
-        self.assertEqual(result.retry_intervals, None)
-
-        @job('default', retry=Retry(3, [2]))
-        def hello():
-            return 'Why hello'
-
-        result = hello.delay()
-        self.assertEqual(result.retries_left, 3)
-        self.assertEqual(result.retry_intervals, [2])
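The decorator tests removed above cover the keyword arguments accepted by the @job decorator. A minimal usage sketch, assuming a local Redis connection passed via the decorator's connection argument:

from redis import Redis
from rq.decorators import job
from rq.job import Retry

@job('default', connection=Redis(), result_ttl=10, retry=Retry(3, [2]))
def add(x, y):
    return x + y

add(1, 2)        # plain call still returns 3
add.delay(1, 2)  # returns a Job enqueued on the 'default' queue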
diff --git a/tests/test_dependencies.py b/tests/test_dependencies.py
deleted file mode 100644
index b4e2842..0000000
--- a/tests/test_dependencies.py
+++ /dev/null
@@ -1,198 +0,0 @@
-from rq import Queue, SimpleWorker, Worker
-from rq.job import Dependency, Job, JobStatus
-from tests import RQTestCase
-from tests.fixtures import check_dependencies_are_met, div_by_zero, say_hello
-
-
-class TestDependencies(RQTestCase):
-    def test_allow_failure_is_persisted(self):
-        """Ensure that job.allow_dependency_failures is properly set
-        when providing Dependency object to depends_on."""
-        dep_job = Job.create(func=say_hello)
-
-        # default to False, maintaining current behavior
-        job = Job.create(func=say_hello, depends_on=Dependency([dep_job]))
-        job.save()
-        Job.fetch(job.id, connection=self.testconn)
-        self.assertFalse(job.allow_dependency_failures)
-
-        job = Job.create(func=say_hello, depends_on=Dependency([dep_job], allow_failure=True))
-        job.save()
-        job = Job.fetch(job.id, connection=self.testconn)
-        self.assertTrue(job.allow_dependency_failures)
-
-        jobs = Job.fetch_many([job.id], connection=self.testconn)
-        self.assertTrue(jobs[0].allow_dependency_failures)
-
-    def test_job_dependency(self):
-        """Enqueue dependent jobs only when appropriate"""
-        q = Queue(connection=self.testconn)
-        w = SimpleWorker([q], connection=q.connection)
-
-        # enqueue dependent job when parent successfully finishes
-        parent_job = q.enqueue(say_hello)
-        job = q.enqueue_call(say_hello, depends_on=parent_job)
-        w.work(burst=True)
-        job = Job.fetch(job.id, connection=self.testconn)
-        self.assertEqual(job.get_status(), JobStatus.FINISHED)
-        q.empty()
-
-        # don't enqueue dependent job when parent fails
-        parent_job = q.enqueue(div_by_zero)
-        job = q.enqueue_call(say_hello, depends_on=parent_job)
-        w.work(burst=True)
-        job = Job.fetch(job.id, connection=self.testconn)
-        self.assertNotEqual(job.get_status(), JobStatus.FINISHED)
-        q.empty()
-
-        # don't enqueue dependent job when Dependency.allow_failure=False (the default)
-        parent_job = q.enqueue(div_by_zero)
-        dependency = Dependency(jobs=parent_job)
-        job = q.enqueue_call(say_hello, depends_on=dependency)
-        w.work(burst=True)
-        job = Job.fetch(job.id, connection=self.testconn)
-        self.assertNotEqual(job.get_status(), JobStatus.FINISHED)
-
-        # enqueue dependent job when Dependency.allow_failure=True
-        parent_job = q.enqueue(div_by_zero)
-        dependency = Dependency(jobs=parent_job, allow_failure=True)
-        job = q.enqueue_call(say_hello, depends_on=dependency)
-
-        job = Job.fetch(job.id, connection=self.testconn)
-        self.assertTrue(job.allow_dependency_failures)
-
-        w.work(burst=True)
-        job = Job.fetch(job.id, connection=self.testconn)
-        self.assertEqual(job.get_status(), JobStatus.FINISHED)
-
-        # When a failing job has multiple dependents, only enqueue those
-        # with allow_failure=True
-        parent_job = q.enqueue(div_by_zero)
-        job_allow_failure = q.enqueue(say_hello, depends_on=Dependency(jobs=parent_job, allow_failure=True))
-        job = q.enqueue(say_hello, depends_on=Dependency(jobs=parent_job, allow_failure=False))
-        w.work(burst=True, max_jobs=1)
-        self.assertEqual(parent_job.get_status(), JobStatus.FAILED)
-        self.assertEqual(job_allow_failure.get_status(), JobStatus.QUEUED)
-        self.assertEqual(job.get_status(), JobStatus.DEFERRED)
-        q.empty()
-
-        # only enqueue dependent job when all dependencies have finished/failed
-        first_parent_job = q.enqueue(div_by_zero)
-        second_parent_job = q.enqueue(say_hello)
-        dependencies = Dependency(jobs=[first_parent_job, second_parent_job], allow_failure=True)
-        job = q.enqueue_call(say_hello, depends_on=dependencies)
-        w.work(burst=True, max_jobs=1)
-        self.assertEqual(first_parent_job.get_status(), JobStatus.FAILED)
-        self.assertEqual(second_parent_job.get_status(), JobStatus.QUEUED)
-        self.assertEqual(job.get_status(), JobStatus.DEFERRED)
-
-        # When second job finishes, dependent job should be queued
-        w.work(burst=True, max_jobs=1)
-        self.assertEqual(second_parent_job.get_status(), JobStatus.FINISHED)
-        self.assertEqual(job.get_status(), JobStatus.QUEUED)
-        w.work(burst=True)
-        job = Job.fetch(job.id, connection=self.testconn)
-        self.assertEqual(job.get_status(), JobStatus.FINISHED)
-
-        # Test dependent is enqueued at front
-        q.empty()
-        parent_job = q.enqueue(say_hello)
-        q.enqueue(say_hello, job_id='fake_job_id_1', depends_on=Dependency(jobs=[parent_job]))
-        q.enqueue(say_hello, job_id='fake_job_id_2', depends_on=Dependency(jobs=[parent_job], enqueue_at_front=True))
-        w.work(burst=True, max_jobs=1)
-
-        self.assertEqual(q.job_ids, ["fake_job_id_2", "fake_job_id_1"])
-
-    def test_multiple_jobs_with_dependencies(self):
-        """Enqueue dependent jobs only when appropriate"""
-        q = Queue(connection=self.testconn)
-        w = SimpleWorker([q], connection=q.connection)
-
-        # Multiple jobs are enqueued with correct status
-        parent_job = q.enqueue(say_hello)
-        job_no_deps = Queue.prepare_data(say_hello)
-        job_with_deps = Queue.prepare_data(say_hello, depends_on=parent_job)
-        jobs = q.enqueue_many([job_no_deps, job_with_deps])
-        self.assertEqual(jobs[0].get_status(), JobStatus.QUEUED)
-        self.assertEqual(jobs[1].get_status(), JobStatus.DEFERRED)
-        w.work(burst=True, max_jobs=1)
-        self.assertEqual(jobs[1].get_status(), JobStatus.QUEUED)
-
-        job_with_met_deps = Queue.prepare_data(say_hello, depends_on=parent_job)
-        jobs = q.enqueue_many([job_with_met_deps])
-        self.assertEqual(jobs[0].get_status(), JobStatus.QUEUED)
-        q.empty()
-
-    def test_dependency_list_in_depends_on(self):
-        """Enqueue with Dependency list in depends_on"""
-        q = Queue(connection=self.testconn)
-        w = SimpleWorker([q], connection=q.connection)
-
-        # enqueue dependent job when parent successfully finishes
-        parent_job1 = q.enqueue(say_hello)
-        parent_job2 = q.enqueue(say_hello)
-        job = q.enqueue_call(say_hello, depends_on=[Dependency([parent_job1]), Dependency([parent_job2])])
-        w.work(burst=True)
-        self.assertEqual(job.get_status(), JobStatus.FINISHED)
-
-    def test_enqueue_job_dependency(self):
-        """Enqueue via Queue.enqueue_job() with depencency"""
-        q = Queue(connection=self.testconn)
-        w = SimpleWorker([q], connection=q.connection)
-
-        # enqueue dependent job when parent successfully finishes
-        parent_job = Job.create(say_hello)
-        parent_job.save()
-        job = Job.create(say_hello, depends_on=parent_job)
-        q.enqueue_job(job)
-        w.work(burst=True)
-        self.assertEqual(job.get_status(), JobStatus.DEFERRED)
-        q.enqueue_job(parent_job)
-        w.work(burst=True)
-        self.assertEqual(parent_job.get_status(), JobStatus.FINISHED)
-        self.assertEqual(job.get_status(), JobStatus.FINISHED)
-
-    def test_dependencies_are_met_if_parent_is_canceled(self):
-        """When parent job is canceled, it should be treated as failed"""
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(say_hello)
-        job.set_status(JobStatus.CANCELED)
-        dependent_job = queue.enqueue(say_hello, depends_on=job)
-        # dependencies_are_met() should return False, whether or not
-        # parent_job is provided
-        self.assertFalse(dependent_job.dependencies_are_met(job))
-        self.assertFalse(dependent_job.dependencies_are_met())
-
-    def test_can_enqueue_job_if_dependency_is_deleted(self):
-        queue = Queue(connection=self.testconn)
-
-        dependency_job = queue.enqueue(say_hello, result_ttl=0)
-
-        w = Worker([queue])
-        w.work(burst=True)
-
-        assert queue.enqueue(say_hello, depends_on=dependency_job)
-
-    def test_dependencies_are_met_if_dependency_is_deleted(self):
-        queue = Queue(connection=self.testconn)
-
-        dependency_job = queue.enqueue(say_hello, result_ttl=0)
-        dependent_job = queue.enqueue(say_hello, depends_on=dependency_job)
-
-        w = Worker([queue])
-        w.work(burst=True, max_jobs=1)
-
-        assert dependent_job.dependencies_are_met()
-        assert dependent_job.get_status() == JobStatus.QUEUED
-
-    def test_dependencies_are_met_at_execution_time(self):
-        queue = Queue(connection=self.testconn)
-        queue.empty()
-        queue.enqueue(say_hello, job_id="A")
-        queue.enqueue(say_hello, job_id="B")
-        job_c = queue.enqueue(check_dependencies_are_met, job_id="C", depends_on=["A", "B"])
-
-        job_c.dependencies_are_met()
-        w = Worker([queue])
-        w.work(burst=True)
-        assert job_c.result
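The dependency tests removed above hinge on Dependency(allow_failure=...). A minimal sketch, assuming a local Redis and the fixtures used throughout the suite:

from redis import Redis
from rq import Queue
from rq.job import Dependency

from tests.fixtures import div_by_zero, say_hello

queue = Queue(connection=Redis())
parent = queue.enqueue(div_by_zero)
# With allow_failure=False (the default) the child stays DEFERRED once the parent
# fails; with allow_failure=True it is enqueued regardless of the parent's outcome.
child = queue.enqueue(say_hello, depends_on=Dependency(jobs=[parent], allow_failure=True))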
diff --git a/tests/test_fixtures.py b/tests/test_fixtures.py
deleted file mode 100644
index 1517b80..0000000
--- a/tests/test_fixtures.py
+++ /dev/null
@@ -1,15 +0,0 @@
-from rq import Queue
-from tests import RQTestCase, fixtures
-
-
-class TestFixtures(RQTestCase):
-    def test_rpush_fixture(self):
-        fixtures.rpush('foo', 'bar')
-        assert self.testconn.lrange('foo', 0, 0)[0].decode() == 'bar'
-
-    def test_start_worker_fixture(self):
-        queue = Queue(name='testing', connection=self.testconn)
-        queue.enqueue(fixtures.say_hello)
-        conn_kwargs = self.testconn.connection_pool.connection_kwargs
-        fixtures.start_worker(queue.name, conn_kwargs, 'w1', True)
-        assert not queue.jobs
diff --git a/tests/test_helpers.py b/tests/test_helpers.py
deleted file mode 100644
index c351b77..0000000
--- a/tests/test_helpers.py
+++ /dev/null
@@ -1,89 +0,0 @@
-from unittest import mock
-
-from rq.cli.helpers import get_redis_from_config
-from tests import RQTestCase
-
-
-class TestHelpers(RQTestCase):
-    @mock.patch('rq.cli.helpers.Sentinel')
-    def test_get_redis_from_config(self, sentinel_class_mock):
-        """Ensure Redis connection params are properly parsed"""
-        settings = {'REDIS_URL': 'redis://localhost:1/1'}
-
-        # Ensure REDIS_URL is read
-        redis = get_redis_from_config(settings)
-        connection_kwargs = redis.connection_pool.connection_kwargs
-        self.assertEqual(connection_kwargs['db'], 1)
-        self.assertEqual(connection_kwargs['port'], 1)
-
-        settings = {
-            'REDIS_URL': 'redis://localhost:1/1',
-            'REDIS_HOST': 'foo',
-            'REDIS_DB': 2,
-            'REDIS_PORT': 2,
-            'REDIS_PASSWORD': 'bar',
-        }
-
-        # Ensure REDIS_URL is preferred
-        redis = get_redis_from_config(settings)
-        connection_kwargs = redis.connection_pool.connection_kwargs
-        self.assertEqual(connection_kwargs['db'], 1)
-        self.assertEqual(connection_kwargs['port'], 1)
-
-        # Ensure fallback to regular connection parameters
-        settings['REDIS_URL'] = None
-        redis = get_redis_from_config(settings)
-        connection_kwargs = redis.connection_pool.connection_kwargs
-        self.assertEqual(connection_kwargs['host'], 'foo')
-        self.assertEqual(connection_kwargs['db'], 2)
-        self.assertEqual(connection_kwargs['port'], 2)
-        self.assertEqual(connection_kwargs['password'], 'bar')
-
-        # Add Sentinel to the settings
-        settings.update(
-            {
-                'SENTINEL': {
-                    'INSTANCES': [
-                        ('remote.host1.org', 26379),
-                        ('remote.host2.org', 26379),
-                        ('remote.host3.org', 26379),
-                    ],
-                    'MASTER_NAME': 'master',
-                    'DB': 2,
-                    'USERNAME': 'redis-user',
-                    'PASSWORD': 'redis-secret',
-                    'SOCKET_TIMEOUT': None,
-                    'CONNECTION_KWARGS': {
-                        'ssl_ca_path': None,
-                    },
-                    'SENTINEL_KWARGS': {
-                        'username': 'sentinel-user',
-                        'password': 'sentinel-secret',
-                    },
-                },
-            }
-        )
-
-        # Ensure SENTINEL is preferred over REDIS_* parameters
-        redis = get_redis_from_config(settings)
-        sentinel_init_sentinels_args = sentinel_class_mock.call_args[0]
-        sentinel_init_sentinel_kwargs = sentinel_class_mock.call_args[1]
-        self.assertEqual(
-            sentinel_init_sentinels_args,
-            ([('remote.host1.org', 26379), ('remote.host2.org', 26379), ('remote.host3.org', 26379)],),
-        )
-        self.assertDictEqual(
-            sentinel_init_sentinel_kwargs,
-            {
-                'db': 2,
-                'ssl': False,
-                'username': 'redis-user',
-                'password': 'redis-secret',
-                'socket_timeout': None,
-                'ssl_ca_path': None,
-                'sentinel_kwargs': {
-                    'username': 'sentinel-user',
-                    'password': 'sentinel-secret',
-                },
-            },
-        )
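The helper tests removed above document the shape of the settings consumed by get_redis_from_config(). A hypothetical config module for `rq worker --config` might look like this (REDIS_URL takes precedence over the individual REDIS_* keys, as the tests assert):

REDIS_URL = 'redis://localhost:6379/0'
# Only consulted when REDIS_URL is unset:
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
REDIS_DB = 0
REDIS_PASSWORD = None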
diff --git a/tests/test_job.py b/tests/test_job.py
deleted file mode 100644
index 29c309f..0000000
--- a/tests/test_job.py
+++ /dev/null
@@ -1,1221 +0,0 @@
-import json
-import queue
-import time
-import zlib
-from datetime import datetime, timedelta
-from pickle import dumps, loads
-
-from redis import WatchError
-
-from rq.defaults import CALLBACK_TIMEOUT
-from rq.exceptions import DeserializationError, InvalidJobOperation, NoSuchJobError
-from rq.job import Callback, Dependency, Job, JobStatus, cancel_job, get_current_job
-from rq.queue import Queue
-from rq.registry import (
-    CanceledJobRegistry,
-    DeferredJobRegistry,
-    FailedJobRegistry,
-    FinishedJobRegistry,
-    ScheduledJobRegistry,
-    StartedJobRegistry,
-)
-from rq.serializers import JSONSerializer
-from rq.utils import as_text, utcformat, utcnow
-from rq.worker import Worker
-from tests import RQTestCase, fixtures
-
-
-class TestJob(RQTestCase):
-    def test_unicode(self):
-        """Unicode in job description [issue405]"""
-        job = Job.create(
-            'myfunc',
-            args=[12, "☃"],
-            kwargs=dict(snowman="☃", null=None),
-        )
-        self.assertEqual(
-            job.description,
-            "myfunc(12, '☃', null=None, snowman='☃')",
-        )
-
-    def test_create_empty_job(self):
-        """Creation of new empty jobs."""
-        job = Job()
-        job.description = 'test job'
-
-        # Jobs have a random UUID and a creation date
-        self.assertIsNotNone(job.id)
-        self.assertIsNotNone(job.created_at)
-        self.assertEqual(str(job), "<Job %s: test job>" % job.id)
-
-        # ...and nothing else
-        self.assertEqual(job.origin, '')
-        self.assertIsNone(job.enqueued_at)
-        self.assertIsNone(job.started_at)
-        self.assertIsNone(job.ended_at)
-        self.assertIsNone(job.result)
-        self.assertIsNone(job.exc_info)
-
-        with self.assertRaises(DeserializationError):
-            job.func
-        with self.assertRaises(DeserializationError):
-            job.instance
-        with self.assertRaises(DeserializationError):
-            job.args
-        with self.assertRaises(DeserializationError):
-            job.kwargs
-
-    def test_create_param_errors(self):
-        """Creation of jobs may result in errors"""
-        self.assertRaises(TypeError, Job.create, fixtures.say_hello, args="string")
-        self.assertRaises(TypeError, Job.create, fixtures.say_hello, kwargs="string")
-        self.assertRaises(TypeError, Job.create, func=42)
-
-    def test_create_typical_job(self):
-        """Creation of jobs for function calls."""
-        job = Job.create(func=fixtures.some_calculation, args=(3, 4), kwargs=dict(z=2))
-
-        # Jobs have a random UUID
-        self.assertIsNotNone(job.id)
-        self.assertIsNotNone(job.created_at)
-        self.assertIsNotNone(job.description)
-        self.assertIsNone(job.instance)
-
-        # Job data is set...
-        self.assertEqual(job.func, fixtures.some_calculation)
-        self.assertEqual(job.args, (3, 4))
-        self.assertEqual(job.kwargs, {'z': 2})
-
-        # ...but metadata is not
-        self.assertEqual(job.origin, '')
-        self.assertIsNone(job.enqueued_at)
-        self.assertIsNone(job.result)
-
-    def test_create_instance_method_job(self):
-        """Creation of jobs for instance methods."""
-        n = fixtures.Number(2)
-        job = Job.create(func=n.div, args=(4,))
-
-        # Job data is set
-        self.assertEqual(job.func, n.div)
-        self.assertEqual(job.instance, n)
-        self.assertEqual(job.args, (4,))
-
-    def test_create_job_with_serializer(self):
-        """Creation of jobs with serializer for instance methods."""
-        # Test using json serializer
-        n = fixtures.Number(2)
-        job = Job.create(func=n.div, args=(4,), serializer=json)
-
-        self.assertIsNotNone(job.serializer)
-        self.assertEqual(job.func, n.div)
-        self.assertEqual(job.instance, n)
-        self.assertEqual(job.args, (4,))
-
-    def test_create_job_from_string_function(self):
-        """Creation of jobs using string specifier."""
-        job = Job.create(func='tests.fixtures.say_hello', args=('World',))
-
-        # Job data is set
-        self.assertEqual(job.func, fixtures.say_hello)
-        self.assertIsNone(job.instance)
-        self.assertEqual(job.args, ('World',))
-
-    def test_create_job_from_callable_class(self):
-        """Creation of jobs using a callable class specifier."""
-        kallable = fixtures.CallableObject()
-        job = Job.create(func=kallable)
-
-        self.assertEqual(job.func, kallable.__call__)
-        self.assertEqual(job.instance, kallable)
-
-    def test_job_properties_set_data_property(self):
-        """Data property gets derived from the job tuple."""
-        job = Job()
-        job.func_name = 'foo'
-        fname, instance, args, kwargs = loads(job.data)
-
-        self.assertEqual(fname, job.func_name)
-        self.assertEqual(instance, None)
-        self.assertEqual(args, ())
-        self.assertEqual(kwargs, {})
-
-    def test_data_property_sets_job_properties(self):
-        """Job tuple gets derived lazily from data property."""
-        job = Job()
-        job.data = dumps(('foo', None, (1, 2, 3), {'bar': 'qux'}))
-
-        self.assertEqual(job.func_name, 'foo')
-        self.assertEqual(job.instance, None)
-        self.assertEqual(job.args, (1, 2, 3))
-        self.assertEqual(job.kwargs, {'bar': 'qux'})
-
-    def test_save(self):  # noqa
-        """Storing jobs."""
-        job = Job.create(func=fixtures.some_calculation, args=(3, 4), kwargs=dict(z=2))
-
-        # Saving creates a Redis hash
-        self.assertEqual(self.testconn.exists(job.key), False)
-        job.save()
-        self.assertEqual(self.testconn.type(job.key), b'hash')
-
-        # Saving writes pickled job data
-        unpickled_data = loads(zlib.decompress(self.testconn.hget(job.key, 'data')))
-        self.assertEqual(unpickled_data[0], 'tests.fixtures.some_calculation')
-
-    def test_fetch(self):
-        """Fetching jobs."""
-        # Prepare test
-        self.testconn.hset(
-            'rq:job:some_id', 'data', "(S'tests.fixtures.some_calculation'\nN(I3\nI4\nt(dp1\nS'z'\nI2\nstp2\n."
-        )
-        self.testconn.hset('rq:job:some_id', 'created_at', '2012-02-07T22:13:24.123456Z')
-
-        # Fetch returns a job
-        job = Job.fetch('some_id')
-        self.assertEqual(job.id, 'some_id')
-        self.assertEqual(job.func_name, 'tests.fixtures.some_calculation')
-        self.assertIsNone(job.instance)
-        self.assertEqual(job.args, (3, 4))
-        self.assertEqual(job.kwargs, dict(z=2))
-        self.assertEqual(job.created_at, datetime(2012, 2, 7, 22, 13, 24, 123456))
-
-    def test_fetch_many(self):
-        """Fetching many jobs at once."""
-        data = {
-            'func': fixtures.some_calculation,
-            'args': (3, 4),
-            'kwargs': dict(z=2),
-            'connection': self.testconn,
-        }
-        job = Job.create(**data)
-        job.save()
-
-        job2 = Job.create(**data)
-        job2.save()
-
-        jobs = Job.fetch_many([job.id, job2.id, 'invalid_id'], self.testconn)
-        self.assertEqual(jobs, [job, job2, None])
-
-    def test_persistence_of_empty_jobs(self):  # noqa
-        """Storing empty jobs."""
-        job = Job()
-        with self.assertRaises(ValueError):
-            job.save()
-
-    def test_persistence_of_typical_jobs(self):
-        """Storing typical jobs."""
-        job = Job.create(func=fixtures.some_calculation, args=(3, 4), kwargs=dict(z=2))
-        job.save()
-
-        stored_date = self.testconn.hget(job.key, 'created_at').decode('utf-8')
-        self.assertEqual(stored_date, utcformat(job.created_at))
-
-        # ... and no other keys are stored
-        self.assertEqual(
-            {
-                b'created_at',
-                b'data',
-                b'description',
-                b'ended_at',
-                b'last_heartbeat',
-                b'started_at',
-                b'worker_name',
-                b'success_callback_name',
-                b'failure_callback_name',
-                b'stopped_callback_name',
-            },
-            set(self.testconn.hkeys(job.key)),
-        )
-
-        self.assertEqual(job.last_heartbeat, None)
-        self.assertEqual(job.last_heartbeat, None)
-
-        ts = utcnow()
-        job.heartbeat(ts, 0)
-        self.assertEqual(job.last_heartbeat, ts)
-
-    def test_persistence_of_parent_job(self):
-        """Storing jobs with parent job, either instance or key."""
-        parent_job = Job.create(func=fixtures.some_calculation)
-        parent_job.save()
-        job = Job.create(func=fixtures.some_calculation, depends_on=parent_job)
-        job.save()
-        stored_job = Job.fetch(job.id)
-        self.assertEqual(stored_job._dependency_id, parent_job.id)
-        self.assertEqual(stored_job._dependency_ids, [parent_job.id])
-        self.assertEqual(stored_job.dependency.id, parent_job.id)
-        self.assertEqual(stored_job.dependency, parent_job)
-
-        job = Job.create(func=fixtures.some_calculation, depends_on=parent_job.id)
-        job.save()
-        stored_job = Job.fetch(job.id)
-        self.assertEqual(stored_job._dependency_id, parent_job.id)
-        self.assertEqual(stored_job._dependency_ids, [parent_job.id])
-        self.assertEqual(stored_job.dependency.id, parent_job.id)
-        self.assertEqual(stored_job.dependency, parent_job)
-
-    def test_persistence_of_callbacks(self):
-        """Storing jobs with success and/or failure callbacks."""
-        job = Job.create(
-            func=fixtures.some_calculation,
-            on_success=Callback(fixtures.say_hello, timeout=10),
-            on_failure=fixtures.say_pid,
-            on_stopped=fixtures.say_hello,
-        )  # deprecated callable
-        job.save()
-        stored_job = Job.fetch(job.id)
-
-        self.assertEqual(fixtures.say_hello, stored_job.success_callback)
-        self.assertEqual(10, stored_job.success_callback_timeout)
-        self.assertEqual(fixtures.say_pid, stored_job.failure_callback)
-        self.assertEqual(fixtures.say_hello, stored_job.stopped_callback)
-        self.assertEqual(CALLBACK_TIMEOUT, stored_job.failure_callback_timeout)
-        self.assertEqual(CALLBACK_TIMEOUT, stored_job.stopped_callback_timeout)
-
-        # None(s)
-        job = Job.create(func=fixtures.some_calculation, on_failure=None)
-        job.save()
-        stored_job = Job.fetch(job.id)
-        self.assertIsNone(stored_job.success_callback)
-        self.assertEqual(CALLBACK_TIMEOUT, job.success_callback_timeout)  # timeout should never be None
-        self.assertEqual(CALLBACK_TIMEOUT, stored_job.success_callback_timeout)
-        self.assertIsNone(stored_job.failure_callback)
-        self.assertEqual(CALLBACK_TIMEOUT, job.failure_callback_timeout)  # timeout should never be None
-        self.assertEqual(CALLBACK_TIMEOUT, stored_job.failure_callback_timeout)
-        self.assertEqual(CALLBACK_TIMEOUT, job.stopped_callback_timeout)  # timeout should never be None
-        self.assertIsNone(stored_job.stopped_callback)
-
-    def test_store_then_fetch(self):
-        """Store, then fetch."""
-        job = Job.create(func=fixtures.some_calculation, timeout='1h', args=(3, 4), kwargs=dict(z=2))
-        job.save()
-
-        job2 = Job.fetch(job.id)
-        self.assertEqual(job.func, job2.func)
-        self.assertEqual(job.args, job2.args)
-        self.assertEqual(job.kwargs, job2.kwargs)
-        self.assertEqual(job.timeout, job2.timeout)
-
-        # Mathematical equality
-        self.assertEqual(job, job2)
-
-    def test_fetching_can_fail(self):
-        """Fetching fails for non-existing jobs."""
-        with self.assertRaises(NoSuchJobError):
-            Job.fetch('b4a44d44-da16-4620-90a6-798e8cd72ca0')
-
-    def test_fetching_unreadable_data(self):
-        """Fetching succeeds on unreadable data, but lazy props fail."""
-        # Set up
-        job = Job.create(func=fixtures.some_calculation, args=(3, 4), kwargs=dict(z=2))
-        job.save()
-
-        # Just replace the data hkey with some random noise
-        self.testconn.hset(job.key, 'data', 'this is no pickle string')
-        job.refresh()
-
-        for attr in ('func_name', 'instance', 'args', 'kwargs'):
-            with self.assertRaises(Exception):
-                getattr(job, attr)
-
-    def test_job_is_unimportable(self):
-        """Jobs that cannot be imported throw exception on access."""
-        job = Job.create(func=fixtures.say_hello, args=('Lionel',))
-        job.save()
-
-        # Now slightly modify the job to make it unimportable (this is
-        # equivalent to a worker not having the most up-to-date source code
-        # and being unable to import the function)
-        job_data = job.data
-        unimportable_data = job_data.replace(b'say_hello', b'nay_hello')
-
-        self.testconn.hset(job.key, 'data', zlib.compress(unimportable_data))
-
-        job.refresh()
-        with self.assertRaises(ValueError):
-            job.func  # accessing the func property should fail
-
-    def test_compressed_exc_info_handling(self):
-        """Jobs handle both compressed and uncompressed exc_info"""
-        exception_string = 'Some exception'
-
-        job = Job.create(func=fixtures.say_hello, args=('Lionel',))
-        job._exc_info = exception_string
-        job.save()
-
-        # exc_info is stored in compressed format
-        exc_info = self.testconn.hget(job.key, 'exc_info')
-        self.assertEqual(as_text(zlib.decompress(exc_info)), exception_string)
-
-        job.refresh()
-        self.assertEqual(job.exc_info, exception_string)
-
-        # Uncompressed exc_info is also handled
-        self.testconn.hset(job.key, 'exc_info', exception_string)
-
-        job.refresh()
-        self.assertEqual(job.exc_info, exception_string)
-
-    def test_compressed_job_data_handling(self):
-        """Jobs handle both compressed and uncompressed data"""
-
-        job = Job.create(func=fixtures.say_hello, args=('Lionel',))
-        job.save()
-
-        # Job data is stored in compressed format
-        job_data = job.data
-        self.assertEqual(zlib.compress(job_data), self.testconn.hget(job.key, 'data'))
-
-        self.testconn.hset(job.key, 'data', job_data)
-        job.refresh()
-        self.assertEqual(job.data, job_data)
-
-    def test_custom_meta_is_persisted(self):
-        """Additional meta data on jobs are stored persisted correctly."""
-        job = Job.create(func=fixtures.say_hello, args=('Lionel',))
-        job.meta['foo'] = 'bar'
-        job.save()
-
-        raw_data = self.testconn.hget(job.key, 'meta')
-        self.assertEqual(loads(raw_data)['foo'], 'bar')
-
-        job2 = Job.fetch(job.id)
-        self.assertEqual(job2.meta['foo'], 'bar')
-
-    def test_get_meta(self):
-        """Test get_meta() function"""
-        job = Job.create(func=fixtures.say_hello, args=('Lionel',))
-        job.meta['foo'] = 'bar'
-        job.save()
-        self.assertEqual(job.get_meta()['foo'], 'bar')
-
-        # manually write different data in meta
-        self.testconn.hset(job.key, 'meta', dumps({'fee': 'boo'}))
-
-        # check if refresh=False keeps old data
-        self.assertEqual(job.get_meta(False)['foo'], 'bar')
-
-        # check if meta is updated
-        self.assertEqual(job.get_meta()['fee'], 'boo')
-
-    def test_custom_meta_is_rewriten_by_save_meta(self):
-        """New meta data can be stored by save_meta."""
-        job = Job.create(func=fixtures.say_hello, args=('Lionel',))
-        job.save()
-        serialized = job.to_dict()
-
-        job.meta['foo'] = 'bar'
-        job.save_meta()
-
-        raw_meta = self.testconn.hget(job.key, 'meta')
-        self.assertEqual(loads(raw_meta)['foo'], 'bar')
-
-        job2 = Job.fetch(job.id)
-        self.assertEqual(job2.meta['foo'], 'bar')
-
-        # nothing else was changed
-        serialized2 = job2.to_dict()
-        serialized2.pop('meta')
-        self.assertDictEqual(serialized, serialized2)
-
-    def test_unpickleable_result(self):
-        """Unpickleable job result doesn't crash job.save() and job.refresh()"""
-        job = Job.create(func=fixtures.say_hello, args=('Lionel',))
-        job._result = queue.Queue()
-        job.save()
-
-        self.assertEqual(self.testconn.hget(job.key, 'result').decode('utf-8'), 'Unserializable return value')
-
-        job = Job.fetch(job.id)
-        self.assertEqual(job.result, 'Unserializable return value')
-
-    def test_result_ttl_is_persisted(self):
-        """Ensure that job's result_ttl is set properly"""
-        job = Job.create(func=fixtures.say_hello, args=('Lionel',), result_ttl=10)
-        job.save()
-        Job.fetch(job.id, connection=self.testconn)
-        self.assertEqual(job.result_ttl, 10)
-
-        job = Job.create(func=fixtures.say_hello, args=('Lionel',))
-        job.save()
-        Job.fetch(job.id, connection=self.testconn)
-        self.assertEqual(job.result_ttl, None)
-
-    def test_failure_ttl_is_persisted(self):
-        """Ensure job.failure_ttl is set and restored properly"""
-        job = Job.create(func=fixtures.say_hello, args=('Lionel',), failure_ttl=15)
-        job.save()
-        Job.fetch(job.id, connection=self.testconn)
-        self.assertEqual(job.failure_ttl, 15)
-
-        job = Job.create(func=fixtures.say_hello, args=('Lionel',))
-        job.save()
-        Job.fetch(job.id, connection=self.testconn)
-        self.assertEqual(job.failure_ttl, None)
-
-    def test_description_is_persisted(self):
-        """Ensure that job's custom description is set properly"""
-        job = Job.create(func=fixtures.say_hello, args=('Lionel',), description='Say hello!')
-        job.save()
-        Job.fetch(job.id, connection=self.testconn)
-        self.assertEqual(job.description, 'Say hello!')
-
-        # Ensure job description is constructed from function call string
-        job = Job.create(func=fixtures.say_hello, args=('Lionel',))
-        job.save()
-        Job.fetch(job.id, connection=self.testconn)
-        self.assertEqual(job.description, "tests.fixtures.say_hello('Lionel')")
-
-    def test_dependency_parameter_constraints(self):
-        """Ensures the proper constraints are in place for values passed in as job references."""
-        dep_job = Job.create(func=fixtures.say_hello)
-        # raise error on empty jobs
-        self.assertRaises(ValueError, Dependency, jobs=[])
-        # raise error on non-str/Job value in jobs iterable
-        self.assertRaises(ValueError, Dependency, jobs=[dep_job, 1])
-
-    def test_multiple_dependencies_are_accepted_and_persisted(self):
-        """Ensure job._dependency_ids accepts different input formats, and
-        is set and restored properly"""
-        job_A = Job.create(func=fixtures.some_calculation, args=(3, 1, 4), id="A")
-        job_B = Job.create(func=fixtures.some_calculation, args=(2, 7, 2), id="B")
-
-        # No dependencies
-        job = Job.create(func=fixtures.say_hello)
-        job.save()
-        Job.fetch(job.id, connection=self.testconn)
-        self.assertEqual(job._dependency_ids, [])
-
-        # Various ways of specifying dependencies
-        cases = [
-            ["A", ["A"]],
-            [job_A, ["A"]],
-            [["A", "B"], ["A", "B"]],
-            [[job_A, job_B], ["A", "B"]],
-            [["A", job_B], ["A", "B"]],
-            [("A", "B"), ["A", "B"]],
-            [(job_A, job_B), ["A", "B"]],
-            [(job_A, "B"), ["A", "B"]],
-            [Dependency("A"), ["A"]],
-            [Dependency(job_A), ["A"]],
-            [Dependency(["A", "B"]), ["A", "B"]],
-            [Dependency([job_A, job_B]), ["A", "B"]],
-            [Dependency(["A", job_B]), ["A", "B"]],
-            [Dependency(("A", "B")), ["A", "B"]],
-            [Dependency((job_A, job_B)), ["A", "B"]],
-            [Dependency((job_A, "B")), ["A", "B"]],
-        ]
-        for given, expected in cases:
-            job = Job.create(func=fixtures.say_hello, depends_on=given)
-            job.save()
-            Job.fetch(job.id, connection=self.testconn)
-            self.assertEqual(job._dependency_ids, expected)
-
-    def test_prepare_for_execution(self):
-        """job.prepare_for_execution works properly"""
-        job = Job.create(func=fixtures.say_hello)
-        job.save()
-        with self.testconn.pipeline() as pipeline:
-            job.prepare_for_execution("worker_name", pipeline)
-            pipeline.execute()
-        job.refresh()
-        self.assertEqual(job.worker_name, "worker_name")
-        self.assertEqual(job.get_status(), JobStatus.STARTED)
-        self.assertIsNotNone(job.last_heartbeat)
-        self.assertIsNotNone(job.started_at)
-
-    def test_job_access_outside_job_fails(self):
-        """The current job is accessible only within a job context."""
-        self.assertIsNone(get_current_job())
-
-    def test_job_access_within_job_function(self):
-        """The current job is accessible within the job function."""
-        q = Queue()
-        job = q.enqueue(fixtures.access_self)
-        w = Worker([q])
-        w.work(burst=True)
-        # access_self calls get_current_job() and executes successfully
-        self.assertEqual(job.get_status(), JobStatus.FINISHED)
-
-    def test_job_access_within_synchronous_job_function(self):
-        queue = Queue(is_async=False)
-        queue.enqueue(fixtures.access_self)
-
-    def test_job_async_status_finished(self):
-        queue = Queue(is_async=False)
-        job = queue.enqueue(fixtures.say_hello)
-        self.assertEqual(job.result, 'Hi there, Stranger!')
-        self.assertEqual(job.get_status(), JobStatus.FINISHED)
-
-    def test_enqueue_job_async_status_finished(self):
-        queue = Queue(is_async=False)
-        job = Job.create(func=fixtures.say_hello)
-        job = queue.enqueue_job(job)
-        self.assertEqual(job.result, 'Hi there, Stranger!')
-        self.assertEqual(job.get_status(), JobStatus.FINISHED)
-
-    def test_get_result_ttl(self):
-        """Getting job result TTL."""
-        job_result_ttl = 1
-        default_ttl = 2
-        job = Job.create(func=fixtures.say_hello, result_ttl=job_result_ttl)
-        job.save()
-        self.assertEqual(job.get_result_ttl(default_ttl=default_ttl), job_result_ttl)
-        job = Job.create(func=fixtures.say_hello)
-        job.save()
-        self.assertEqual(job.get_result_ttl(default_ttl=default_ttl), default_ttl)
-
-    def test_get_job_ttl(self):
-        """Getting job TTL."""
-        ttl = 1
-        job = Job.create(func=fixtures.say_hello, ttl=ttl)
-        job.save()
-        self.assertEqual(job.get_ttl(), ttl)
-        job = Job.create(func=fixtures.say_hello)
-        job.save()
-        self.assertEqual(job.get_ttl(), None)
-
-    def test_ttl_via_enqueue(self):
-        ttl = 1
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(fixtures.say_hello, ttl=ttl)
-        self.assertEqual(job.get_ttl(), ttl)
-
-    def test_never_expire_during_execution(self):
-        """Test what happens when job expires during execution"""
-        ttl = 1
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(fixtures.long_running_job, args=(2,), ttl=ttl)
-        self.assertEqual(job.get_ttl(), ttl)
-        job.save()
-        job.perform()
-        self.assertEqual(job.get_ttl(), ttl)
-        self.assertTrue(job.exists(job.id))
-        self.assertEqual(job.result, 'Done sleeping...')
-
-    def test_cleanup(self):
-        """Test that jobs and results are expired properly."""
-        job = Job.create(func=fixtures.say_hello)
-        job.save()
-
-        # Jobs with negative TTLs don't expire
-        job.cleanup(ttl=-1)
-        self.assertEqual(self.testconn.ttl(job.key), -1)
-
-        # Jobs with positive TTLs are eventually deleted
-        job.cleanup(ttl=100)
-        self.assertEqual(self.testconn.ttl(job.key), 100)
-
-        # Jobs with 0 TTL are immediately deleted
-        job.cleanup(ttl=0)
-        self.assertRaises(NoSuchJobError, Job.fetch, job.id, self.testconn)
-
-    def test_cleanup_expires_dependency_keys(self):
-        dependency_job = Job.create(func=fixtures.say_hello)
-        dependency_job.save()
-
-        dependent_job = Job.create(func=fixtures.say_hello, depends_on=dependency_job)
-
-        dependent_job.register_dependency()
-        dependent_job.save()
-
-        dependent_job.cleanup(ttl=100)
-        dependency_job.cleanup(ttl=100)
-
-        self.assertEqual(self.testconn.ttl(dependent_job.dependencies_key), 100)
-        self.assertEqual(self.testconn.ttl(dependency_job.dependents_key), 100)
-
-    def test_job_get_position(self):
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(fixtures.say_hello)
-        job2 = queue.enqueue(fixtures.say_hello)
-        job3 = Job(fixtures.say_hello)
-
-        self.assertEqual(0, job.get_position())
-        self.assertEqual(1, job2.get_position())
-        self.assertEqual(None, job3.get_position())
-
-    def test_job_with_dependents_delete_parent(self):
-        """job.delete() deletes itself from Redis but not dependents.
-        Without a save, the dependent job is never saved into Redis. The delete
-        method will catch and ignore the resulting NoSuchJobError.
-        """
-        queue = Queue(connection=self.testconn, serializer=JSONSerializer)
-        job = queue.enqueue(fixtures.say_hello)
-        job2 = Job.create(func=fixtures.say_hello, depends_on=job, serializer=JSONSerializer)
-        job2.register_dependency()
-
-        job.delete()
-        self.assertFalse(self.testconn.exists(job.key))
-        self.assertFalse(self.testconn.exists(job.dependents_key))
-
-        # By default, dependents are not deleted, but the job is in Redis only
-        # if it was saved!
-        self.assertFalse(self.testconn.exists(job2.key))
-
-        self.assertNotIn(job.id, queue.get_job_ids())
-
-    def test_job_delete_removes_itself_from_registries(self):
-        """job.delete() should remove itself from job registries"""
-        job = Job.create(
-            func=fixtures.say_hello,
-            status=JobStatus.FAILED,
-            connection=self.testconn,
-            origin='default',
-            serializer=JSONSerializer,
-        )
-        job.save()
-        registry = FailedJobRegistry(connection=self.testconn, serializer=JSONSerializer)
-        registry.add(job, 500)
-
-        job.delete()
-        self.assertFalse(job in registry)
-
-        job = Job.create(
-            func=fixtures.say_hello,
-            status=JobStatus.STOPPED,
-            connection=self.testconn,
-            origin='default',
-            serializer=JSONSerializer,
-        )
-        job.save()
-        registry = FailedJobRegistry(connection=self.testconn, serializer=JSONSerializer)
-        registry.add(job, 500)
-
-        job.delete()
-        self.assertFalse(job in registry)
-
-        job = Job.create(
-            func=fixtures.say_hello,
-            status=JobStatus.FINISHED,
-            connection=self.testconn,
-            origin='default',
-            serializer=JSONSerializer,
-        )
-        job.save()
-
-        registry = FinishedJobRegistry(connection=self.testconn, serializer=JSONSerializer)
-        registry.add(job, 500)
-
-        job.delete()
-        self.assertFalse(job in registry)
-
-        job = Job.create(
-            func=fixtures.say_hello,
-            status=JobStatus.STARTED,
-            connection=self.testconn,
-            origin='default',
-            serializer=JSONSerializer,
-        )
-        job.save()
-
-        registry = StartedJobRegistry(connection=self.testconn, serializer=JSONSerializer)
-        registry.add(job, 500)
-
-        job.delete()
-        self.assertFalse(job in registry)
-
-        job = Job.create(
-            func=fixtures.say_hello,
-            status=JobStatus.DEFERRED,
-            connection=self.testconn,
-            origin='default',
-            serializer=JSONSerializer,
-        )
-        job.save()
-
-        registry = DeferredJobRegistry(connection=self.testconn, serializer=JSONSerializer)
-        registry.add(job, 500)
-
-        job.delete()
-        self.assertFalse(job in registry)
-
-        job = Job.create(
-            func=fixtures.say_hello,
-            status=JobStatus.SCHEDULED,
-            connection=self.testconn,
-            origin='default',
-            serializer=JSONSerializer,
-        )
-        job.save()
-
-        registry = ScheduledJobRegistry(connection=self.testconn, serializer=JSONSerializer)
-        registry.add(job, 500)
-
-        job.delete()
-        self.assertFalse(job in registry)
-
-    def test_job_with_dependents_delete_parent_with_saved(self):
-        """job.delete() deletes itself from Redis but not dependents. If the
-        dependent job was saved, it will remain in redis."""
-        queue = Queue(connection=self.testconn, serializer=JSONSerializer)
-        job = queue.enqueue(fixtures.say_hello)
-        job2 = Job.create(func=fixtures.say_hello, depends_on=job, serializer=JSONSerializer)
-        job2.register_dependency()
-        job2.save()
-
-        job.delete()
-        self.assertFalse(self.testconn.exists(job.key))
-        self.assertFalse(self.testconn.exists(job.dependents_key))
-
-        # By default, dependents are not deleted, but the job is in Redis only
-        # if it was saved!
-        self.assertTrue(self.testconn.exists(job2.key))
-
-        self.assertNotIn(job.id, queue.get_job_ids())
-
-    def test_job_with_dependents_deleteall(self):
-        """job.delete() deletes itself from Redis. Dependents need to be
-        deleted explicitly."""
-        queue = Queue(connection=self.testconn, serializer=JSONSerializer)
-        job = queue.enqueue(fixtures.say_hello)
-        job2 = Job.create(func=fixtures.say_hello, depends_on=job, serializer=JSONSerializer)
-        job2.register_dependency()
-
-        job.delete(delete_dependents=True)
-        self.assertFalse(self.testconn.exists(job.key))
-        self.assertFalse(self.testconn.exists(job.dependents_key))
-        self.assertFalse(self.testconn.exists(job2.key))
-
-        self.assertNotIn(job.id, queue.get_job_ids())
-
-    def test_job_with_dependents_delete_all_with_saved(self):
-        """job.delete() deletes itself from Redis. Dependents need to be
-        deleted explicitly. Without a save, the dependent job is never saved
-        into Redis. The delete method will catch and ignore the resulting NoSuchJobError.
-        """
-        queue = Queue(connection=self.testconn, serializer=JSONSerializer)
-        job = queue.enqueue(fixtures.say_hello)
-        job2 = Job.create(func=fixtures.say_hello, depends_on=job, serializer=JSONSerializer)
-        job2.register_dependency()
-        job2.save()
-
-        job.delete(delete_dependents=True)
-        self.assertFalse(self.testconn.exists(job.key))
-        self.assertFalse(self.testconn.exists(job.dependents_key))
-        self.assertFalse(self.testconn.exists(job2.key))
-
-        self.assertNotIn(job.id, queue.get_job_ids())
-
-    def test_dependent_job_creates_dependencies_key(self):
-        queue = Queue(connection=self.testconn)
-        dependency_job = queue.enqueue(fixtures.say_hello)
-        dependent_job = Job.create(func=fixtures.say_hello, depends_on=dependency_job)
-
-        dependent_job.register_dependency()
-        dependent_job.save()
-
-        self.assertTrue(self.testconn.exists(dependent_job.dependencies_key))
-
-    def test_dependent_job_deletes_dependencies_key(self):
-        """
-        job.delete() deletes itself from Redis.
-        """
-        queue = Queue(connection=self.testconn, serializer=JSONSerializer)
-        dependency_job = queue.enqueue(fixtures.say_hello)
-        dependent_job = Job.create(func=fixtures.say_hello, depends_on=dependency_job, serializer=JSONSerializer)
-
-        dependent_job.register_dependency()
-        dependent_job.save()
-        dependent_job.delete()
-
-        self.assertTrue(self.testconn.exists(dependency_job.key))
-        self.assertFalse(self.testconn.exists(dependent_job.dependencies_key))
-        self.assertFalse(self.testconn.exists(dependent_job.key))
-
-    def test_create_job_with_id(self):
-        """test creating jobs with a custom ID"""
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(fixtures.say_hello, job_id="1234")
-        self.assertEqual(job.id, "1234")
-        job.perform()
-
-        self.assertRaises(TypeError, queue.enqueue, fixtures.say_hello, job_id=1234)
-
-    def test_create_job_with_async(self):
-        """test creating jobs with async function"""
-        queue = Queue(connection=self.testconn)
-
-        async_job = queue.enqueue(fixtures.say_hello_async, job_id="async_job")
-        sync_job = queue.enqueue(fixtures.say_hello, job_id="sync_job")
-
-        self.assertEqual(async_job.id, "async_job")
-        self.assertEqual(sync_job.id, "sync_job")
-
-        async_task_result = async_job.perform()
-        sync_task_result = sync_job.perform()
-
-        self.assertEqual(sync_task_result, async_task_result)
-
-    def test_get_call_string_unicode(self):
-        """test call string with unicode keyword arguments"""
-        queue = Queue(connection=self.testconn)
-
-        job = queue.enqueue(fixtures.echo, arg_with_unicode=fixtures.UnicodeStringObject())
-        self.assertIsNotNone(job.get_call_string())
-        job.perform()
-
-    def test_create_job_from_static_method(self):
-        """test creating jobs with static method"""
-        queue = Queue(connection=self.testconn)
-
-        job = queue.enqueue(fixtures.ClassWithAStaticMethod.static_method)
-        self.assertIsNotNone(job.get_call_string())
-        job.perform()
-
-    def test_create_job_with_ttl_should_have_ttl_after_enqueued(self):
-        """test creating jobs with ttl and checks if get_jobs returns it properly [issue502]"""
-        queue = Queue(connection=self.testconn)
-        queue.enqueue(fixtures.say_hello, job_id="1234", ttl=10)
-        job = queue.get_jobs()[0]
-        self.assertEqual(job.ttl, 10)
-
-    def test_create_job_with_ttl_should_expire(self):
-        """test if a job created with ttl expires [issue502]"""
-        queue = Queue(connection=self.testconn)
-        queue.enqueue(fixtures.say_hello, job_id="1234", ttl=1)
-        time.sleep(1.1)
-        self.assertEqual(0, len(queue.get_jobs()))
-
-    def test_create_and_cancel_job(self):
-        """Ensure job.cancel() works properly"""
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(fixtures.say_hello)
-        self.assertEqual(1, len(queue.get_jobs()))
-        cancel_job(job.id)
-        self.assertEqual(0, len(queue.get_jobs()))
-        registry = CanceledJobRegistry(connection=self.testconn, queue=queue)
-        self.assertIn(job, registry)
-        self.assertEqual(job.get_status(), JobStatus.CANCELED)
-
-        # If job is deleted, it's also removed from CanceledJobRegistry
-        job.delete()
-        self.assertNotIn(job, registry)
-
-    def test_create_and_cancel_job_fails_already_canceled(self):
-        """Ensure job.cancel() fails on already canceled job"""
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(fixtures.say_hello, job_id='fake_job_id')
-        self.assertEqual(1, len(queue.get_jobs()))
-
-        # First cancel should be fine
-        cancel_job(job.id)
-        self.assertEqual(0, len(queue.get_jobs()))
-        registry = CanceledJobRegistry(connection=self.testconn, queue=queue)
-        self.assertIn(job, registry)
-        self.assertEqual(job.get_status(), JobStatus.CANCELED)
-
-        # Second cancel should fail
-        self.assertRaisesRegex(
-            InvalidJobOperation, r'Cannot cancel already canceled job: fake_job_id', cancel_job, job.id
-        )
-
-    def test_create_and_cancel_job_enqueue_dependents(self):
-        """Ensure job.cancel() works properly with enqueue_dependents=True"""
-        queue = Queue(connection=self.testconn)
-        dependency = queue.enqueue(fixtures.say_hello)
-        dependent = queue.enqueue(fixtures.say_hello, depends_on=dependency)
-
-        self.assertEqual(1, len(queue.get_jobs()))
-        self.assertEqual(1, len(queue.deferred_job_registry))
-        cancel_job(dependency.id, enqueue_dependents=True)
-        self.assertEqual(1, len(queue.get_jobs()))
-        self.assertEqual(0, len(queue.deferred_job_registry))
-        registry = CanceledJobRegistry(connection=self.testconn, queue=queue)
-        self.assertIn(dependency, registry)
-        self.assertEqual(dependency.get_status(), JobStatus.CANCELED)
-        self.assertIn(dependent, queue.get_jobs())
-        self.assertEqual(dependent.get_status(), JobStatus.QUEUED)
-        # If job is deleted, it's also removed from CanceledJobRegistry
-        dependency.delete()
-        self.assertNotIn(dependency, registry)
-
-    def test_create_and_cancel_job_enqueue_dependents_in_registry(self):
-        """Ensure job.cancel() works properly with enqueue_dependents=True and when the job is in a registry"""
-        queue = Queue(connection=self.testconn)
-        dependency = queue.enqueue(fixtures.raise_exc)
-        dependent = queue.enqueue(fixtures.say_hello, depends_on=dependency)
-        print('# Post enqueue', self.testconn.smembers(dependency.dependents_key))
-        self.assertTrue(dependency.dependent_ids)
-
-        self.assertEqual(1, len(queue.get_jobs()))
-        self.assertEqual(1, len(queue.deferred_job_registry))
-        w = Worker([queue])
-        w.work(burst=True, max_jobs=1)
-        self.assertTrue(dependency.dependent_ids)
-        print('# Post work', self.testconn.smembers(dependency.dependents_key))
-        dependency.refresh()
-        dependent.refresh()
-        self.assertEqual(0, len(queue.get_jobs()))
-        self.assertEqual(1, len(queue.deferred_job_registry))
-        self.assertEqual(1, len(queue.failed_job_registry))
-
-        print('# Pre cancel', self.testconn.smembers(dependency.dependents_key))
-        cancel_job(dependency.id, enqueue_dependents=True)
-        dependency.refresh()
-        dependent.refresh()
-        print('#Post cancel', self.testconn.smembers(dependency.dependents_key))
-
-        self.assertEqual(1, len(queue.get_jobs()))
-        self.assertEqual(0, len(queue.deferred_job_registry))
-        self.assertEqual(0, len(queue.failed_job_registry))
-        self.assertEqual(1, len(queue.canceled_job_registry))
-        registry = CanceledJobRegistry(connection=self.testconn, queue=queue)
-        self.assertIn(dependency, registry)
-        self.assertEqual(dependency.get_status(), JobStatus.CANCELED)
-        self.assertNotIn(dependency, queue.failed_job_registry)
-        self.assertIn(dependent, queue.get_jobs())
-        self.assertEqual(dependent.get_status(), JobStatus.QUEUED)
-        # If job is deleted, it's also removed from CanceledJobRegistry
-        dependency.delete()
-        self.assertNotIn(dependency, registry)
-
-    def test_create_and_cancel_job_enqueue_dependents_with_pipeline(self):
-        """Ensure job.cancel() works properly with enqueue_dependents=True"""
-        queue = Queue(connection=self.testconn)
-        dependency = queue.enqueue(fixtures.say_hello)
-        dependent = queue.enqueue(fixtures.say_hello, depends_on=dependency)
-
-        self.assertEqual(1, len(queue.get_jobs()))
-        self.assertEqual(1, len(queue.deferred_job_registry))
-        self.testconn.set('some:key', b'some:value')
-
-        with self.testconn.pipeline() as pipe:
-            pipe.watch('some:key')
-            self.assertEqual(self.testconn.get('some:key'), b'some:value')
-            dependency.cancel(pipeline=pipe, enqueue_dependents=True)
-            pipe.set('some:key', b'some:other:value')
-            pipe.execute()
-        self.assertEqual(self.testconn.get('some:key'), b'some:other:value')
-        self.assertEqual(1, len(queue.get_jobs()))
-        self.assertEqual(0, len(queue.deferred_job_registry))
-        registry = CanceledJobRegistry(connection=self.testconn, queue=queue)
-        self.assertIn(dependency, registry)
-        self.assertEqual(dependency.get_status(), JobStatus.CANCELED)
-        self.assertIn(dependent, queue.get_jobs())
-        self.assertEqual(dependent.get_status(), JobStatus.QUEUED)
-        # If job is deleted, it's also removed from CanceledJobRegistry
-        dependency.delete()
-        self.assertNotIn(dependency, registry)
-
-    def test_create_and_cancel_job_with_serializer(self):
-        """test creating and using cancel_job (with serializer) deletes job properly"""
-        queue = Queue(connection=self.testconn, serializer=JSONSerializer)
-        job = queue.enqueue(fixtures.say_hello)
-        self.assertEqual(1, len(queue.get_jobs()))
-        cancel_job(job.id, serializer=JSONSerializer)
-        self.assertEqual(0, len(queue.get_jobs()))
-
-    def test_dependents_key_for_should_return_prefixed_job_id(self):
-        """test redis key to store job dependents hash under"""
-        job_id = 'random'
-        key = Job.dependents_key_for(job_id=job_id)
-
-        assert key == Job.redis_job_namespace_prefix + job_id + ':dependents'
-
-    def test_key_for_should_return_prefixed_job_id(self):
-        """test redis key to store job hash under"""
-        job_id = 'random'
-        key = Job.key_for(job_id=job_id)
-
-        assert key == (Job.redis_job_namespace_prefix + job_id).encode('utf-8')
-
-    def test_dependencies_key_should_have_prefixed_job_id(self):
-        job_id = 'random'
-        job = Job(id=job_id)
-        expected_key = Job.redis_job_namespace_prefix + ":" + job_id + ':dependencies'
-
-        assert job.dependencies_key == expected_key
-
-    def test_fetch_dependencies_returns_dependency_jobs(self):
-        queue = Queue(connection=self.testconn)
-        dependency_job = queue.enqueue(fixtures.say_hello)
-        dependent_job = Job.create(func=fixtures.say_hello, depends_on=dependency_job)
-
-        dependent_job.register_dependency()
-        dependent_job.save()
-
-        dependencies = dependent_job.fetch_dependencies(pipeline=self.testconn)
-
-        self.assertListEqual(dependencies, [dependency_job])
-
-    def test_fetch_dependencies_returns_empty_if_not_dependent_job(self):
-        dependent_job = Job.create(func=fixtures.say_hello)
-
-        dependent_job.register_dependency()
-        dependent_job.save()
-
-        dependencies = dependent_job.fetch_dependencies(pipeline=self.testconn)
-
-        self.assertListEqual(dependencies, [])
-
-    def test_fetch_dependencies_raises_if_dependency_deleted(self):
-        queue = Queue(connection=self.testconn)
-        dependency_job = queue.enqueue(fixtures.say_hello)
-        dependent_job = Job.create(func=fixtures.say_hello, depends_on=dependency_job)
-
-        dependent_job.register_dependency()
-        dependent_job.save()
-
-        dependency_job.delete()
-
-        self.assertNotIn(dependent_job.id, [job.id for job in dependent_job.fetch_dependencies(pipeline=self.testconn)])
-
-    def test_fetch_dependencies_watches(self):
-        queue = Queue(connection=self.testconn)
-        dependency_job = queue.enqueue(fixtures.say_hello)
-        dependent_job = Job.create(func=fixtures.say_hello, depends_on=dependency_job)
-
-        dependent_job.register_dependency()
-        dependent_job.save()
-
-        with self.testconn.pipeline() as pipeline:
-            dependent_job.fetch_dependencies(watch=True, pipeline=pipeline)
-
-            pipeline.multi()
-
-            with self.assertRaises(WatchError):
-                self.testconn.set(Job.key_for(dependency_job.id), 'somethingelsehappened')
-                pipeline.touch(dependency_job.id)
-                pipeline.execute()
-
-    def test_dependencies_finished_returns_false_if_dependencies_queued(self):
-        queue = Queue(connection=self.testconn)
-
-        dependency_job_ids = [queue.enqueue(fixtures.say_hello).id for _ in range(5)]
-
-        dependent_job = Job.create(func=fixtures.say_hello)
-        dependent_job._dependency_ids = dependency_job_ids
-        dependent_job.register_dependency()
-
-        dependencies_finished = dependent_job.dependencies_are_met()
-
-        self.assertFalse(dependencies_finished)
-
-    def test_dependencies_finished_returns_true_if_no_dependencies(self):
-        dependent_job = Job.create(func=fixtures.say_hello)
-        dependent_job.register_dependency()
-
-        dependencies_finished = dependent_job.dependencies_are_met()
-
-        self.assertTrue(dependencies_finished)
-
-    def test_dependencies_finished_returns_true_if_all_dependencies_finished(self):
-        dependency_jobs = [Job.create(fixtures.say_hello) for _ in range(5)]
-
-        dependent_job = Job.create(func=fixtures.say_hello)
-        dependent_job._dependency_ids = [job.id for job in dependency_jobs]
-        dependent_job.register_dependency()
-
-        now = utcnow()
-
-        # Set ended_at timestamps
-        for i, job in enumerate(dependency_jobs):
-            job._status = JobStatus.FINISHED
-            job.ended_at = now - timedelta(seconds=i)
-            job.save()
-
-        dependencies_finished = dependent_job.dependencies_are_met()
-
-        self.assertTrue(dependencies_finished)
-
-    def test_dependencies_finished_returns_false_if_unfinished_job(self):
-        dependency_jobs = [Job.create(fixtures.say_hello) for _ in range(2)]
-
-        dependency_jobs[0]._status = JobStatus.FINISHED
-        dependency_jobs[0].ended_at = utcnow()
-        dependency_jobs[0].save()
-
-        dependency_jobs[1]._status = JobStatus.STARTED
-        dependency_jobs[1].ended_at = None
-        dependency_jobs[1].save()
-
-        dependent_job = Job.create(func=fixtures.say_hello)
-        dependent_job._dependency_ids = [job.id for job in dependency_jobs]
-        dependent_job.register_dependency()
-
-        dependencies_finished = dependent_job.dependencies_are_met()
-
-        self.assertFalse(dependencies_finished)
-
-    def test_dependencies_finished_watches_job(self):
-        queue = Queue(connection=self.testconn)
-
-        dependency_job = queue.enqueue(fixtures.say_hello)
-
-        dependent_job = Job.create(func=fixtures.say_hello)
-        dependent_job._dependency_ids = [dependency_job.id]
-        dependent_job.register_dependency()
-
-        with self.testconn.pipeline() as pipeline:
-            dependent_job.dependencies_are_met(
-                pipeline=pipeline,
-            )
-
-            dependency_job.set_status(JobStatus.FAILED, pipeline=self.testconn)
-            pipeline.multi()
-
-            with self.assertRaises(WatchError):
-                pipeline.touch(Job.key_for(dependent_job.id))
-                pipeline.execute()
-
-    def test_execution_order_with_sole_dependency(self):
-        queue = Queue(connection=self.testconn)
-        key = 'test_job:job_order'
-
-        # When there are no dependencies, the two fast jobs ("A" and "B") run in the order enqueued.
-        # Worker 1 will be busy with the slow job, so worker 2 will complete both fast jobs.
-        job_slow = queue.enqueue(fixtures.rpush, args=[key, "slow", True, 0.5], job_id='slow_job')
-        job_A = queue.enqueue(fixtures.rpush, args=[key, "A", True])
-        job_B = queue.enqueue(fixtures.rpush, args=[key, "B", True])
-        fixtures.burst_two_workers(queue)
-        time.sleep(0.75)
-        jobs_completed = [v.decode() for v in self.testconn.lrange(key, 0, 2)]
-        self.assertEqual(queue.count, 0)
-        self.assertTrue(all(job.is_finished for job in [job_slow, job_A, job_B]))
-        self.assertEqual(jobs_completed, ["A:w2", "B:w2", "slow:w1"])
-        self.testconn.delete(key)
-
-        # When job "A" depends on the slow job, then job "B" finishes before "A".
-        # There is no clear requirement on which worker should take job "A", so we stay silent on that.
-        job_slow = queue.enqueue(fixtures.rpush, args=[key, "slow", True, 0.5], job_id='slow_job')
-        job_A = queue.enqueue(fixtures.rpush, args=[key, "A", False], depends_on='slow_job')
-        job_B = queue.enqueue(fixtures.rpush, args=[key, "B", True])
-        fixtures.burst_two_workers(queue)
-        time.sleep(0.75)
-        jobs_completed = [v.decode() for v in self.testconn.lrange(key, 0, 2)]
-        self.assertEqual(queue.count, 0)
-        self.assertTrue(all(job.is_finished for job in [job_slow, job_A, job_B]))
-        self.assertEqual(jobs_completed, ["B:w2", "slow:w1", "A"])
-
-    def test_execution_order_with_dual_dependency(self):
-        queue = Queue(connection=self.testconn)
-        key = 'test_job:job_order'
-
-        # When there are no dependencies, the two fast jobs ("A" and "B") run in the order enqueued.
-        job_slow_1 = queue.enqueue(fixtures.rpush, args=[key, "slow_1", True, 0.5], job_id='slow_1')
-        job_slow_2 = queue.enqueue(fixtures.rpush, args=[key, "slow_2", True, 0.75], job_id='slow_2')
-        job_A = queue.enqueue(fixtures.rpush, args=[key, "A", True])
-        job_B = queue.enqueue(fixtures.rpush, args=[key, "B", True])
-        fixtures.burst_two_workers(queue)
-        time.sleep(1)
-        jobs_completed = [v.decode() for v in self.testconn.lrange(key, 0, 3)]
-        self.assertEqual(queue.count, 0)
-        self.assertTrue(all(job.is_finished for job in [job_slow_1, job_slow_2, job_A, job_B]))
-        self.assertEqual(jobs_completed, ["slow_1:w1", "A:w1", "B:w1", "slow_2:w2"])
-        self.testconn.delete(key)
-
-        # This time job "A" depends on two slow jobs, while job "B" depends only on the faster of
-        # the two. Job "B" should be completed before job "A".
-        # There is no clear requirement on which worker should take job "A", so we stay silent on that.
-        job_slow_1 = queue.enqueue(fixtures.rpush, args=[key, "slow_1", True, 0.5], job_id='slow_1')
-        job_slow_2 = queue.enqueue(fixtures.rpush, args=[key, "slow_2", True, 0.75], job_id='slow_2')
-        job_A = queue.enqueue(fixtures.rpush, args=[key, "A", False], depends_on=['slow_1', 'slow_2'])
-        job_B = queue.enqueue(fixtures.rpush, args=[key, "B", True], depends_on=['slow_1'])
-        fixtures.burst_two_workers(queue)
-        time.sleep(1)
-        jobs_completed = [v.decode() for v in self.testconn.lrange(key, 0, 3)]
-        self.assertEqual(queue.count, 0)
-        self.assertTrue(all(job.is_finished for job in [job_slow_1, job_slow_2, job_A, job_B]))
-        self.assertEqual(jobs_completed, ["slow_1:w1", "B:w1", "slow_2:w2", "A"])
diff --git a/tests/test_maintenance.py b/tests/test_maintenance.py
deleted file mode 100644
index 8cef010..0000000
--- a/tests/test_maintenance.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import unittest
-from unittest.mock import patch
-
-from redis import Redis
-
-from rq.job import JobStatus
-from rq.maintenance import clean_intermediate_queue
-from rq.queue import Queue
-from rq.utils import get_version
-from rq.worker import Worker
-from tests import RQTestCase
-from tests.fixtures import say_hello
-
-
-class MaintenanceTestCase(RQTestCase):
-    @unittest.skipIf(get_version(Redis()) < (6, 2, 0), 'Skip if Redis server < 6.2.0')
-    def test_cleanup_intermediate_queue(self):
-        """Ensure jobs stuck in the intermediate queue are cleaned up."""
-        queue = Queue('foo', connection=self.testconn)
-        job = queue.enqueue(say_hello)
-
-        # If job execution fails after it's dequeued, job should be in the intermediate queue
-        # and its status is still QUEUED
-        with patch.object(Worker, 'execute_job'):
-            # mocked.execute_job.side_effect = Exception()
-            worker = Worker(queue, connection=self.testconn)
-            worker.work(burst=True)
-
-            self.assertEqual(job.get_status(), JobStatus.QUEUED)
-            self.assertFalse(job.id in queue.get_job_ids())
-            self.assertIsNotNone(self.testconn.lpos(queue.intermediate_queue_key, job.id))
-            # After cleaning up the intermediate queue, job status should be `FAILED`
-            # and job is also removed from the intermediate queue
-            clean_intermediate_queue(worker, queue)
-            self.assertEqual(job.get_status(), JobStatus.FAILED)
-            self.assertIsNone(self.testconn.lpos(queue.intermediate_queue_key, job.id))
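The removed maintenance test depends on the intermediate queue RQ uses on Redis >= 6.2. A minimal sketch of the cleanup helper it covers, assuming the clean_intermediate_queue(worker, queue) signature seen in the test:

    from redis import Redis
    from rq import Queue, Worker
    from rq.maintenance import clean_intermediate_queue

    connection = Redis()
    queue = Queue('foo', connection=connection)
    worker = Worker([queue], connection=connection)

    # Jobs that were dequeued but never executed are removed from the
    # intermediate queue and marked as failed.
    clean_intermediate_queue(worker, queue)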
diff --git a/tests/test_queue.py b/tests/test_queue.py
deleted file mode 100644
index 4ae4e26..0000000
--- a/tests/test_queue.py
+++ /dev/null
@@ -1,802 +0,0 @@
-import json
-import unittest
-from datetime import datetime, timedelta, timezone
-from unittest.mock import patch
-
-from redis import Redis
-
-from rq import Queue, Retry
-from rq.job import Job, JobStatus
-from rq.registry import (
-    CanceledJobRegistry,
-    DeferredJobRegistry,
-    FailedJobRegistry,
-    FinishedJobRegistry,
-    ScheduledJobRegistry,
-    StartedJobRegistry,
-)
-from rq.serializers import JSONSerializer
-from rq.utils import get_version
-from rq.worker import Worker
-from tests import RQTestCase
-from tests.fixtures import echo, say_hello
-
-
-class MultipleDependencyJob(Job):
-    """
-    Allows for the patching of `_dependency_ids` to simulate multi-dependency
-    support without modifying the public interface of `Job`
-    """
-
-    create_job = Job.create
-
-    @classmethod
-    def create(cls, *args, **kwargs):
-        dependency_ids = kwargs.pop('kwargs').pop('_dependency_ids')
-        _job = cls.create_job(*args, **kwargs)
-        _job._dependency_ids = dependency_ids
-        return _job
-
-
-class TestQueue(RQTestCase):
-    def test_create_queue(self):
-        """Creating queues."""
-        q = Queue('my-queue')
-        self.assertEqual(q.name, 'my-queue')
-        self.assertEqual(str(q), '<Queue my-queue>')
-
-    def test_create_queue_with_serializer(self):
-        """Creating queues with serializer."""
-        # Test using json serializer
-        q = Queue('queue-with-serializer', serializer=json)
-        self.assertEqual(q.name, 'queue-with-serializer')
-        self.assertEqual(str(q), '<Queue queue-with-serializer>')
-        self.assertIsNotNone(q.serializer)
-
-    def test_create_default_queue(self):
-        """Instantiating the default queue."""
-        q = Queue()
-        self.assertEqual(q.name, 'default')
-
-    def test_equality(self):
-        """Mathematical equality of queues."""
-        q1 = Queue('foo')
-        q2 = Queue('foo')
-        q3 = Queue('bar')
-
-        self.assertEqual(q1, q2)
-        self.assertEqual(q2, q1)
-        self.assertNotEqual(q1, q3)
-        self.assertNotEqual(q2, q3)
-        self.assertGreater(q1, q3)
-        self.assertRaises(TypeError, lambda: q1 == 'some string')
-        self.assertRaises(TypeError, lambda: q1 < 'some string')
-
-    def test_empty_queue(self):
-        """Emptying queues."""
-        q = Queue('example')
-
-        self.testconn.rpush('rq:queue:example', 'foo')
-        self.testconn.rpush('rq:queue:example', 'bar')
-        self.assertEqual(q.is_empty(), False)
-
-        q.empty()
-
-        self.assertEqual(q.is_empty(), True)
-        self.assertIsNone(self.testconn.lpop('rq:queue:example'))
-
-    def test_empty_removes_jobs(self):
-        """Emptying a queue deletes the associated job objects"""
-        q = Queue('example')
-        job = q.enqueue(say_hello)
-        self.assertTrue(Job.exists(job.id))
-        q.empty()
-        self.assertFalse(Job.exists(job.id))
-
-    def test_queue_is_empty(self):
-        """Detecting empty queues."""
-        q = Queue('example')
-        self.assertEqual(q.is_empty(), True)
-
-        self.testconn.rpush('rq:queue:example', 'sentinel message')
-        self.assertEqual(q.is_empty(), False)
-
-    def test_queue_delete(self):
-        """Test queue.delete properly removes queue"""
-        q = Queue('example')
-        job = q.enqueue(say_hello)
-        job2 = q.enqueue(say_hello)
-
-        self.assertEqual(2, len(q.get_job_ids()))
-
-        q.delete()
-
-        self.assertEqual(0, len(q.get_job_ids()))
-        self.assertEqual(False, self.testconn.exists(job.key))
-        self.assertEqual(False, self.testconn.exists(job2.key))
-        self.assertEqual(0, len(self.testconn.smembers(Queue.redis_queues_keys)))
-        self.assertEqual(False, self.testconn.exists(q.key))
-
-    def test_queue_delete_but_keep_jobs(self):
-        """Test queue.delete properly removes queue but keeps the job keys in the redis store"""
-        q = Queue('example')
-        job = q.enqueue(say_hello)
-        job2 = q.enqueue(say_hello)
-
-        self.assertEqual(2, len(q.get_job_ids()))
-
-        q.delete(delete_jobs=False)
-
-        self.assertEqual(0, len(q.get_job_ids()))
-        self.assertEqual(True, self.testconn.exists(job.key))
-        self.assertEqual(True, self.testconn.exists(job2.key))
-        self.assertEqual(0, len(self.testconn.smembers(Queue.redis_queues_keys)))
-        self.assertEqual(False, self.testconn.exists(q.key))
-
-    def test_position(self):
-        """Test queue.delete properly removes queue but keeps the job keys in the redis store"""
-        q = Queue('example')
-        job = q.enqueue(say_hello)
-        job2 = q.enqueue(say_hello)
-        job3 = q.enqueue(say_hello)
-
-        self.assertEqual(0, q.get_job_position(job.id))
-        self.assertEqual(1, q.get_job_position(job2.id))
-        self.assertEqual(2, q.get_job_position(job3))
-        self.assertEqual(None, q.get_job_position("no_real_job"))
-
-    def test_remove(self):
-        """Ensure queue.remove properly removes Job from queue."""
-        q = Queue('example', serializer=JSONSerializer)
-        job = q.enqueue(say_hello)
-        self.assertIn(job.id, q.job_ids)
-        q.remove(job)
-        self.assertNotIn(job.id, q.job_ids)
-
-        job = q.enqueue(say_hello)
-        self.assertIn(job.id, q.job_ids)
-        q.remove(job.id)
-        self.assertNotIn(job.id, q.job_ids)
-
-    def test_jobs(self):
-        """Getting jobs out of a queue."""
-        q = Queue('example')
-        self.assertEqual(q.jobs, [])
-        job = q.enqueue(say_hello)
-        self.assertEqual(q.jobs, [job])
-
-        # Deleting job removes it from queue
-        job.delete()
-        self.assertEqual(q.job_ids, [])
-
-    def test_compact(self):
-        """Queue.compact() removes non-existing jobs."""
-        q = Queue()
-
-        q.enqueue(say_hello, 'Alice')
-        q.enqueue(say_hello, 'Charlie')
-        self.testconn.lpush(q.key, '1', '2')
-
-        self.assertEqual(q.count, 4)
-        self.assertEqual(len(q), 4)
-
-        q.compact()
-
-        self.assertEqual(q.count, 2)
-        self.assertEqual(len(q), 2)
-
-    def test_enqueue(self):
-        """Enqueueing job onto queues."""
-        q = Queue()
-        self.assertEqual(q.is_empty(), True)
-
-        # say_hello spec holds which queue this is sent to
-        job = q.enqueue(say_hello, 'Nick', foo='bar')
-        job_id = job.id
-        self.assertEqual(job.origin, q.name)
-
-        # Inspect data inside Redis
-        q_key = 'rq:queue:default'
-        self.assertEqual(self.testconn.llen(q_key), 1)
-        self.assertEqual(self.testconn.lrange(q_key, 0, -1)[0].decode('ascii'), job_id)
-
-    def test_enqueue_sets_metadata(self):
-        """Enqueueing job onto queues modifies meta data."""
-        q = Queue()
-        job = Job.create(func=say_hello, args=('Nick',), kwargs=dict(foo='bar'))
-
-        # Preconditions
-        self.assertIsNone(job.enqueued_at)
-
-        # Action
-        q.enqueue_job(job)
-
-        # Postconditions
-        self.assertIsNotNone(job.enqueued_at)
-
-    def test_pop_job_id(self):
-        """Popping job IDs from queues."""
-        # Set up
-        q = Queue()
-        uuid = '112188ae-4e9d-4a5b-a5b3-f26f2cb054da'
-        q.push_job_id(uuid)
-
-        # Pop it off the queue...
-        self.assertEqual(q.count, 1)
-        self.assertEqual(q.pop_job_id(), uuid)
-
-        # ...and assert the queue count went down
-        self.assertEqual(q.count, 0)
-
-    def test_dequeue_any(self):
-        """Fetching work from any given queue."""
-        fooq = Queue('foo', connection=self.testconn)
-        barq = Queue('bar', connection=self.testconn)
-
-        self.assertRaises(ValueError, Queue.dequeue_any, [fooq, barq], timeout=0, connection=self.testconn)
-
-        self.assertEqual(Queue.dequeue_any([fooq, barq], None), None)
-
-        # Enqueue a single item
-        barq.enqueue(say_hello)
-        job, queue = Queue.dequeue_any([fooq, barq], None)
-        self.assertEqual(job.func, say_hello)
-        self.assertEqual(queue, barq)
-
-        # Enqueue items on both queues
-        barq.enqueue(say_hello, 'for Bar')
-        fooq.enqueue(say_hello, 'for Foo')
-
-        job, queue = Queue.dequeue_any([fooq, barq], None)
-        self.assertEqual(queue, fooq)
-        self.assertEqual(job.func, say_hello)
-        self.assertEqual(job.origin, fooq.name)
-        self.assertEqual(job.args[0], 'for Foo', 'Foo should be dequeued first.')
-
-        job, queue = Queue.dequeue_any([fooq, barq], None)
-        self.assertEqual(queue, barq)
-        self.assertEqual(job.func, say_hello)
-        self.assertEqual(job.origin, barq.name)
-        self.assertEqual(job.args[0], 'for Bar', 'Bar should be dequeued second.')
-
-    @unittest.skipIf(get_version(Redis()) < (6, 2, 0), 'Skip if Redis server < 6.2.0')
-    def test_dequeue_any_reliable(self):
-        """Dequeueing job from a single queue moves job to intermediate queue."""
-        foo_queue = Queue('foo', connection=self.testconn)
-        job_1 = foo_queue.enqueue(say_hello)
-        self.assertRaises(ValueError, Queue.dequeue_any, [foo_queue], timeout=0, connection=self.testconn)
-
-        # Job ID is not in intermediate queue
-        self.assertIsNone(self.testconn.lpos(foo_queue.intermediate_queue_key, job_1.id))
-        job, queue = Queue.dequeue_any([foo_queue], timeout=None, connection=self.testconn)
-        self.assertEqual(queue, foo_queue)
-        self.assertEqual(job.func, say_hello)
-        # After job is dequeued, the job ID is in the intermediate queue
-        self.assertEqual(self.testconn.lpos(foo_queue.intermediate_queue_key, job.id), 0)
-
-        # Test the blocking version
-        foo_queue.enqueue(say_hello)
-        job, queue = Queue.dequeue_any([foo_queue], timeout=1, connection=self.testconn)
-        self.assertEqual(queue, foo_queue)
-        self.assertEqual(job.func, say_hello)
-        # After job is dequeued, the job ID is in the intermediate queue
-        self.assertEqual(self.testconn.lpos(foo_queue.intermediate_queue_key, job.id), 1)
-
-    @unittest.skipIf(get_version(Redis()) < (6, 2, 0), 'Skip if Redis server < 6.2.0')
-    def test_intermediate_queue(self):
-        """Job should be stuck in intermediate queue if execution fails after dequeued."""
-        queue = Queue('foo', connection=self.testconn)
-        job = queue.enqueue(say_hello)
-
-        # If job execution fails after it's dequeued, the job should be in the intermediate queue
-        # and its status should still be QUEUED
-        with patch.object(Worker, 'execute_job'):
-            # mocked.execute_job.side_effect = Exception()
-            worker = Worker(queue, connection=self.testconn)
-            worker.work(burst=True)
-
-            # Job status is still QUEUED even though it's already dequeued
-            self.assertEqual(job.get_status(refresh=True), JobStatus.QUEUED)
-            self.assertFalse(job.id in queue.get_job_ids())
-            self.assertIsNotNone(self.testconn.lpos(queue.intermediate_queue_key, job.id))
-
-    def test_dequeue_any_ignores_nonexisting_jobs(self):
-        """Dequeuing (from any queue) silently ignores non-existing jobs."""
-
-        q = Queue('low')
-        uuid = '49f205ab-8ea3-47dd-a1b5-bfa186870fc8'
-        q.push_job_id(uuid)
-
-        # Dequeue simply ignores the missing job and returns None
-        self.assertEqual(q.count, 1)
-        self.assertEqual(Queue.dequeue_any([Queue(), Queue('low')], None), None)  # noqa
-        self.assertEqual(q.count, 0)
-
-    def test_enqueue_with_ttl(self):
-        """Negative TTL value is not allowed"""
-        queue = Queue()
-        self.assertRaises(ValueError, queue.enqueue, echo, 1, ttl=0)
-        self.assertRaises(ValueError, queue.enqueue, echo, 1, ttl=-1)
-
-    def test_enqueue_sets_status(self):
-        """Enqueueing a job sets its status to "queued"."""
-        q = Queue()
-        job = q.enqueue(say_hello)
-        self.assertEqual(job.get_status(), JobStatus.QUEUED)
-
-    def test_enqueue_meta_arg(self):
-        """enqueue() can set the job.meta contents."""
-        q = Queue()
-        job = q.enqueue(say_hello, meta={'foo': 'bar', 'baz': 42})
-        self.assertEqual(job.meta['foo'], 'bar')
-        self.assertEqual(job.meta['baz'], 42)
-
-    def test_enqueue_with_failure_ttl(self):
-        """enqueue() properly sets job.failure_ttl"""
-        q = Queue()
-        job = q.enqueue(say_hello, failure_ttl=10)
-        job.refresh()
-        self.assertEqual(job.failure_ttl, 10)
-
-    def test_job_timeout(self):
-        """Timeout can be passed via job_timeout argument"""
-        queue = Queue()
-        job = queue.enqueue(echo, 1, job_timeout=15)
-        self.assertEqual(job.timeout, 15)
-
-        # Not passing job_timeout will use queue._default_timeout
-        job = queue.enqueue(echo, 1)
-        self.assertEqual(job.timeout, queue._default_timeout)
-
-        # job_timeout = 0 is not allowed
-        self.assertRaises(ValueError, queue.enqueue, echo, 1, job_timeout=0)
-
-    def test_default_timeout(self):
-        """Timeout can be passed via job_timeout argument"""
-        queue = Queue()
-        job = queue.enqueue(echo, 1)
-        self.assertEqual(job.timeout, queue.DEFAULT_TIMEOUT)
-
-        job = Job.create(func=echo)
-        job = queue.enqueue_job(job)
-        self.assertEqual(job.timeout, queue.DEFAULT_TIMEOUT)
-
-        queue = Queue(default_timeout=15)
-        job = queue.enqueue(echo, 1)
-        self.assertEqual(job.timeout, 15)
-
-        job = Job.create(func=echo)
-        job = queue.enqueue_job(job)
-        self.assertEqual(job.timeout, 15)
-
-    def test_synchronous_timeout(self):
-        queue = Queue(is_async=False)
-        self.assertFalse(queue.is_async)
-
-        no_expire_job = queue.enqueue(echo, result_ttl=-1)
-        self.assertEqual(queue.connection.ttl(no_expire_job.key), -1)
-
-        delete_job = queue.enqueue(echo, result_ttl=0)
-        self.assertEqual(queue.connection.ttl(delete_job.key), -2)
-
-        keep_job = queue.enqueue(echo, result_ttl=100)
-        self.assertLessEqual(queue.connection.ttl(keep_job.key), 100)
-
-    def test_enqueue_explicit_args(self):
-        """enqueue() works for both implicit/explicit args."""
-        q = Queue()
-
-        # Implicit args/kwargs mode
-        job = q.enqueue(echo, 1, job_timeout=1, result_ttl=1, bar='baz')
-        self.assertEqual(job.timeout, 1)
-        self.assertEqual(job.result_ttl, 1)
-        self.assertEqual(job.perform(), ((1,), {'bar': 'baz'}))
-
-        # Explicit kwargs mode
-        kwargs = {
-            'timeout': 1,
-            'result_ttl': 1,
-        }
-        job = q.enqueue(echo, job_timeout=2, result_ttl=2, args=[1], kwargs=kwargs)
-        self.assertEqual(job.timeout, 2)
-        self.assertEqual(job.result_ttl, 2)
-        self.assertEqual(job.perform(), ((1,), {'timeout': 1, 'result_ttl': 1}))
-
-        # Explicit args and kwargs should also work with enqueue_at
-        time = datetime.now(timezone.utc) + timedelta(seconds=10)
-        job = q.enqueue_at(time, echo, job_timeout=2, result_ttl=2, args=[1], kwargs=kwargs)
-        self.assertEqual(job.timeout, 2)
-        self.assertEqual(job.result_ttl, 2)
-        self.assertEqual(job.perform(), ((1,), {'timeout': 1, 'result_ttl': 1}))
-
-        # Positional arguments are not allowed if explicit args and kwargs are used
-        self.assertRaises(Exception, q.enqueue, echo, 1, kwargs=kwargs)
-
-    def test_all_queues(self):
-        """All queues"""
-        q1 = Queue('first-queue')
-        q2 = Queue('second-queue')
-        q3 = Queue('third-queue')
-
-        # Ensure a queue is added only once a job is enqueued
-        self.assertEqual(len(Queue.all()), 0)
-        q1.enqueue(say_hello)
-        self.assertEqual(len(Queue.all()), 1)
-
-        # Ensure this holds true for multiple queues
-        q2.enqueue(say_hello)
-        q3.enqueue(say_hello)
-        names = [q.name for q in Queue.all()]
-        self.assertEqual(len(Queue.all()), 3)
-
-        # Verify names
-        self.assertTrue('first-queue' in names)
-        self.assertTrue('second-queue' in names)
-        self.assertTrue('third-queue' in names)
-
-        # Now empty two queues
-        w = Worker([q2, q3])
-        w.work(burst=True)
-
-        # Queue.all() should still report the empty queues
-        self.assertEqual(len(Queue.all()), 3)
-
-    def test_all_custom_job(self):
-        class CustomJob(Job):
-            pass
-
-        q = Queue('all-queue')
-        q.enqueue(say_hello)
-        queues = Queue.all(job_class=CustomJob)
-        self.assertEqual(len(queues), 1)
-        self.assertIs(queues[0].job_class, CustomJob)
-
-    def test_from_queue_key(self):
-        """Ensure being able to get a Queue instance manually from Redis"""
-        q = Queue()
-        key = Queue.redis_queue_namespace_prefix + 'default'
-        reverse_q = Queue.from_queue_key(key)
-        self.assertEqual(q, reverse_q)
-
-    def test_from_queue_key_error(self):
-        """Ensure that an exception is raised if the queue prefix is wrong"""
-        key = 'some:weird:prefix:' + 'default'
-        self.assertRaises(ValueError, Queue.from_queue_key, key)
-
-    def test_enqueue_dependents(self):
-        """Enqueueing dependent jobs pushes all jobs in the depends set to the queue
-        and removes them from DeferredJobRegistry."""
-        q = Queue()
-        parent_job = Job.create(func=say_hello)
-        parent_job.save()
-        job_1 = q.enqueue(say_hello, depends_on=parent_job)
-        job_2 = q.enqueue(say_hello, depends_on=parent_job)
-
-        registry = DeferredJobRegistry(q.name, connection=self.testconn)
-
-        parent_job.set_status(JobStatus.FINISHED)
-
-        self.assertEqual(set(registry.get_job_ids()), set([job_1.id, job_2.id]))
-        # After dependents are enqueued, job_1 and job_2 should be in the queue
-        self.assertEqual(q.job_ids, [])
-        q.enqueue_dependents(parent_job)
-        self.assertEqual(set(q.job_ids), set([job_2.id, job_1.id]))
-        self.assertFalse(self.testconn.exists(parent_job.dependents_key))
-
-        # DeferredJobRegistry should also be empty
-        self.assertEqual(registry.get_job_ids(), [])
-
-    def test_enqueue_dependents_on_multiple_queues(self):
-        """Enqueueing dependent jobs on multiple queues pushes jobs in the queues
-        and removes them from DeferredJobRegistry for each different queue."""
-        q_1 = Queue("queue_1")
-        q_2 = Queue("queue_2")
-        parent_job = Job.create(func=say_hello)
-        parent_job.save()
-        job_1 = q_1.enqueue(say_hello, depends_on=parent_job)
-        job_2 = q_2.enqueue(say_hello, depends_on=parent_job)
-
-        # Each queue has its own DeferredJobRegistry
-        registry_1 = DeferredJobRegistry(q_1.name, connection=self.testconn)
-        self.assertEqual(set(registry_1.get_job_ids()), set([job_1.id]))
-        registry_2 = DeferredJobRegistry(q_2.name, connection=self.testconn)
-
-        parent_job.set_status(JobStatus.FINISHED)
-
-        self.assertEqual(set(registry_2.get_job_ids()), set([job_2.id]))
-
-        # After dependents are enqueued, job_1 should be on queue_1
-        # and job_2 on queue_2
-        self.assertEqual(q_1.job_ids, [])
-        self.assertEqual(q_2.job_ids, [])
-        q_1.enqueue_dependents(parent_job)
-        q_2.enqueue_dependents(parent_job)
-        self.assertEqual(set(q_1.job_ids), set([job_1.id]))
-        self.assertEqual(set(q_2.job_ids), set([job_2.id]))
-        self.assertFalse(self.testconn.exists(parent_job.dependents_key))
-
-        # DeferredJobRegistry should also be empty
-        self.assertEqual(registry_1.get_job_ids(), [])
-        self.assertEqual(registry_2.get_job_ids(), [])
-
-    def test_enqueue_job_with_dependency(self):
-        """Jobs are enqueued only when their dependencies are finished."""
-        # Job with unfinished dependency is not immediately enqueued
-        parent_job = Job.create(func=say_hello)
-        parent_job.save()
-        q = Queue()
-        job = q.enqueue_call(say_hello, depends_on=parent_job)
-        self.assertEqual(q.job_ids, [])
-        self.assertEqual(job.get_status(), JobStatus.DEFERRED)
-
-        # Jobs dependent on finished jobs are immediately enqueued
-        parent_job.set_status(JobStatus.FINISHED)
-        parent_job.save()
-        job = q.enqueue_call(say_hello, depends_on=parent_job)
-        self.assertEqual(q.job_ids, [job.id])
-        self.assertEqual(job.timeout, Queue.DEFAULT_TIMEOUT)
-        self.assertEqual(job.get_status(), JobStatus.QUEUED)
-
-    def test_enqueue_job_with_dependency_and_pipeline(self):
-        """Jobs are enqueued only when their dependencies are finished, and by the caller when passing a pipeline."""
-        # Job with unfinished dependency is not immediately enqueued
-        parent_job = Job.create(func=say_hello)
-        parent_job.save()
-        q = Queue()
-        with q.connection.pipeline() as pipe:
-            job = q.enqueue_call(say_hello, depends_on=parent_job, pipeline=pipe)
-            self.assertEqual(q.job_ids, [])
-            self.assertEqual(job.get_status(refresh=False), JobStatus.DEFERRED)
-            # Not in registry before execute, since passed in pipeline
-            self.assertEqual(len(q.deferred_job_registry), 0)
-            pipe.execute()
-            # Only in registry after execute, since passed in pipeline
-        self.assertEqual(len(q.deferred_job_registry), 1)
-
-        # Jobs dependent on finished jobs are immediately enqueued
-        parent_job.set_status(JobStatus.FINISHED)
-        parent_job.save()
-        with q.connection.pipeline() as pipe:
-            job = q.enqueue_call(say_hello, depends_on=parent_job, pipeline=pipe)
-            # Pre execute conditions
-            self.assertEqual(q.job_ids, [])
-            self.assertEqual(job.timeout, Queue.DEFAULT_TIMEOUT)
-            self.assertEqual(job.get_status(refresh=False), JobStatus.QUEUED)
-            pipe.execute()
-        # Post execute conditions
-        self.assertEqual(q.job_ids, [job.id])
-        self.assertEqual(job.timeout, Queue.DEFAULT_TIMEOUT)
-        self.assertEqual(job.get_status(refresh=False), JobStatus.QUEUED)
-
-    def test_enqueue_job_with_no_dependency_prior_watch_and_pipeline(self):
-        """Jobs are enqueued only when their dependencies are finished, and by the caller when passing a pipeline."""
-        q = Queue()
-        with q.connection.pipeline() as pipe:
-            pipe.watch(b'fake_key')  # Test watch then enqueue
-            job = q.enqueue_call(say_hello, pipeline=pipe)
-            self.assertEqual(q.job_ids, [])
-            self.assertEqual(job.get_status(refresh=False), JobStatus.QUEUED)
-            # Not in queue before execute, since passed in pipeline
-            self.assertEqual(len(q), 0)
-            # Make sure modifying key doesn't cause issues, if in multi mode won't fail
-            pipe.set(b'fake_key', b'fake_value')
-            pipe.execute()
-            # Only in the queue after execute, since a pipeline was passed
-        self.assertEqual(len(q), 1)
-
-    def test_enqueue_many_internal_pipeline(self):
-        """Jobs should be enqueued in bulk with an internal pipeline, enqueued in order provided
-        (but at_front still applies)"""
-        q = Queue()
-        job_1_data = Queue.prepare_data(say_hello, job_id='fake_job_id_1', at_front=False)
-        job_2_data = Queue.prepare_data(say_hello, job_id='fake_job_id_2', at_front=False)
-        job_3_data = Queue.prepare_data(say_hello, job_id='fake_job_id_3', at_front=True)
-        jobs = q.enqueue_many(
-            [job_1_data, job_2_data, job_3_data],
-        )
-        for job in jobs:
-            self.assertEqual(job.get_status(refresh=False), JobStatus.QUEUED)
-        # Jobs are in the queue immediately since enqueue_many uses an internal pipeline
-        self.assertEqual(len(q), 3)
-        self.assertEqual(q.job_ids, ['fake_job_id_3', 'fake_job_id_1', 'fake_job_id_2'])
-
-    def test_enqueue_many_with_passed_pipeline(self):
-        """Jobs should be enqueued in bulk with a passed pipeline, enqueued in order provided
-        (but at_front still applies)"""
-        q = Queue()
-        with q.connection.pipeline() as pipe:
-            job_1_data = Queue.prepare_data(say_hello, job_id='fake_job_id_1', at_front=False)
-            job_2_data = Queue.prepare_data(say_hello, job_id='fake_job_id_2', at_front=False)
-            job_3_data = Queue.prepare_data(say_hello, job_id='fake_job_id_3', at_front=True)
-            jobs = q.enqueue_many([job_1_data, job_2_data, job_3_data], pipeline=pipe)
-            self.assertEqual(q.job_ids, [])
-            for job in jobs:
-                self.assertEqual(job.get_status(refresh=False), JobStatus.QUEUED)
-            pipe.execute()
-            # Only in the queue after execute, since a pipeline was passed
-            self.assertEqual(len(q), 3)
-            self.assertEqual(q.job_ids, ['fake_job_id_3', 'fake_job_id_1', 'fake_job_id_2'])
-
-    def test_enqueue_job_with_dependency_by_id(self):
-        """Can specify job dependency with job object or job id."""
-        parent_job = Job.create(func=say_hello)
-        parent_job.save()
-
-        q = Queue()
-        q.enqueue_call(say_hello, depends_on=parent_job.id)
-        self.assertEqual(q.job_ids, [])
-
-        # Jobs dependent on finished jobs are immediately enqueued
-        parent_job.set_status(JobStatus.FINISHED)
-        parent_job.save()
-        job = q.enqueue_call(say_hello, depends_on=parent_job.id)
-        self.assertEqual(q.job_ids, [job.id])
-        self.assertEqual(job.timeout, Queue.DEFAULT_TIMEOUT)
-
-    def test_enqueue_job_with_dependency_and_timeout(self):
-        """Jobs remember their timeout when enqueued as a dependency."""
-        # Job with unfinished dependency is not immediately enqueued
-        parent_job = Job.create(func=say_hello)
-        parent_job.save()
-        q = Queue()
-        job = q.enqueue_call(say_hello, depends_on=parent_job, timeout=123)
-        self.assertEqual(q.job_ids, [])
-        self.assertEqual(job.timeout, 123)
-
-        # Jobs dependent on finished jobs are immediately enqueued
-        parent_job.set_status(JobStatus.FINISHED)
-        parent_job.save()
-        job = q.enqueue_call(say_hello, depends_on=parent_job, timeout=123)
-        self.assertEqual(q.job_ids, [job.id])
-        self.assertEqual(job.timeout, 123)
-
-    def test_enqueue_job_with_multiple_queued_dependencies(self):
-        parent_jobs = [Job.create(func=say_hello) for _ in range(2)]
-
-        for job in parent_jobs:
-            job._status = JobStatus.QUEUED
-            job.save()
-
-        q = Queue()
-        with patch('rq.queue.Job.create', new=MultipleDependencyJob.create):
-            job = q.enqueue(say_hello, depends_on=parent_jobs[0], _dependency_ids=[job.id for job in parent_jobs])
-            self.assertEqual(job.get_status(), JobStatus.DEFERRED)
-            self.assertEqual(q.job_ids, [])
-            self.assertEqual(job.fetch_dependencies(), parent_jobs)
-
-    def test_enqueue_job_with_multiple_finished_dependencies(self):
-        parent_jobs = [Job.create(func=say_hello) for _ in range(2)]
-
-        for job in parent_jobs:
-            job._status = JobStatus.FINISHED
-            job.save()
-
-        q = Queue()
-        with patch('rq.queue.Job.create', new=MultipleDependencyJob.create):
-            job = q.enqueue(say_hello, depends_on=parent_jobs[0], _dependency_ids=[job.id for job in parent_jobs])
-            self.assertEqual(job.get_status(), JobStatus.QUEUED)
-            self.assertEqual(q.job_ids, [job.id])
-            self.assertEqual(job.fetch_dependencies(), parent_jobs)
-
-    def test_enqueues_dependent_if_other_dependencies_finished(self):
-        parent_jobs = [Job.create(func=say_hello) for _ in range(3)]
-
-        parent_jobs[0]._status = JobStatus.STARTED
-        parent_jobs[0].save()
-
-        parent_jobs[1]._status = JobStatus.FINISHED
-        parent_jobs[1].save()
-
-        parent_jobs[2]._status = JobStatus.FINISHED
-        parent_jobs[2].save()
-
-        q = Queue()
-        with patch('rq.queue.Job.create', new=MultipleDependencyJob.create):
-            # dependent job deferred, b/c parent_job 0 is still 'started'
-            dependent_job = q.enqueue(
-                say_hello, depends_on=parent_jobs[0], _dependency_ids=[job.id for job in parent_jobs]
-            )
-            self.assertEqual(dependent_job.get_status(), JobStatus.DEFERRED)
-
-        # now set parent job 0 to 'finished'
-        parent_jobs[0].set_status(JobStatus.FINISHED)
-
-        q.enqueue_dependents(parent_jobs[0])
-        self.assertEqual(dependent_job.get_status(), JobStatus.QUEUED)
-        self.assertEqual(q.job_ids, [dependent_job.id])
-
-    def test_does_not_enqueue_dependent_if_other_dependencies_not_finished(self):
-        started_dependency = Job.create(func=say_hello, status=JobStatus.STARTED)
-        started_dependency.save()
-
-        queued_dependency = Job.create(func=say_hello, status=JobStatus.QUEUED)
-        queued_dependency.save()
-
-        q = Queue()
-        with patch('rq.queue.Job.create', new=MultipleDependencyJob.create):
-            dependent_job = q.enqueue(
-                say_hello,
-                depends_on=[started_dependency],
-                _dependency_ids=[started_dependency.id, queued_dependency.id],
-            )
-            self.assertEqual(dependent_job.get_status(), JobStatus.DEFERRED)
-
-        q.enqueue_dependents(started_dependency)
-        self.assertEqual(dependent_job.get_status(), JobStatus.DEFERRED)
-        self.assertEqual(q.job_ids, [])
-
-    def test_fetch_job_successful(self):
-        """Fetch a job from a queue."""
-        q = Queue('example')
-        job_orig = q.enqueue(say_hello)
-        job_fetch: Job = q.fetch_job(job_orig.id)  # type: ignore
-        self.assertIsNotNone(job_fetch)
-        self.assertEqual(job_orig.id, job_fetch.id)
-        self.assertEqual(job_orig.description, job_fetch.description)
-
-    def test_fetch_job_missing(self):
-        """Fetch a job from a queue which doesn't exist."""
-        q = Queue('example')
-        job = q.fetch_job('123')
-        self.assertIsNone(job)
-
-    def test_fetch_job_different_queue(self):
-        """Fetch a job from a queue which is in a different queue."""
-        q1 = Queue('example1')
-        q2 = Queue('example2')
-        job_orig = q1.enqueue(say_hello)
-        job_fetch = q2.fetch_job(job_orig.id)
-        self.assertIsNone(job_fetch)
-
-        job_fetch = q1.fetch_job(job_orig.id)
-        self.assertIsNotNone(job_fetch)
-
-    def test_getting_registries(self):
-        """Getting job registries from queue object"""
-        queue = Queue('example')
-        self.assertEqual(queue.scheduled_job_registry, ScheduledJobRegistry(queue=queue))
-        self.assertEqual(queue.started_job_registry, StartedJobRegistry(queue=queue))
-        self.assertEqual(queue.failed_job_registry, FailedJobRegistry(queue=queue))
-        self.assertEqual(queue.deferred_job_registry, DeferredJobRegistry(queue=queue))
-        self.assertEqual(queue.finished_job_registry, FinishedJobRegistry(queue=queue))
-        self.assertEqual(queue.canceled_job_registry, CanceledJobRegistry(queue=queue))
-
-    def test_getting_registries_with_serializer(self):
-        """Getting job registries from queue object (with custom serializer)"""
-        queue = Queue('example', serializer=JSONSerializer)
-        self.assertEqual(queue.scheduled_job_registry, ScheduledJobRegistry(queue=queue))
-        self.assertEqual(queue.started_job_registry, StartedJobRegistry(queue=queue))
-        self.assertEqual(queue.failed_job_registry, FailedJobRegistry(queue=queue))
-        self.assertEqual(queue.deferred_job_registry, DeferredJobRegistry(queue=queue))
-        self.assertEqual(queue.finished_job_registry, FinishedJobRegistry(queue=queue))
-        self.assertEqual(queue.canceled_job_registry, CanceledJobRegistry(queue=queue))
-
-        # Make sure we don't use default when queue has custom
-        self.assertEqual(queue.scheduled_job_registry.serializer, JSONSerializer)
-        self.assertEqual(queue.started_job_registry.serializer, JSONSerializer)
-        self.assertEqual(queue.failed_job_registry.serializer, JSONSerializer)
-        self.assertEqual(queue.deferred_job_registry.serializer, JSONSerializer)
-        self.assertEqual(queue.finished_job_registry.serializer, JSONSerializer)
-        self.assertEqual(queue.canceled_job_registry.serializer, JSONSerializer)
-
-    def test_enqueue_with_retry(self):
-        """Enqueueing with retry_strategy works"""
-        queue = Queue('example', connection=self.testconn)
-        job = queue.enqueue(say_hello, retry=Retry(max=3, interval=5))
-
-        job = Job.fetch(job.id, connection=self.testconn)
-        self.assertEqual(job.retries_left, 3)
-        self.assertEqual(job.retry_intervals, [5])
-
-
-class TestJobScheduling(RQTestCase):
-    def test_enqueue_at(self):
-        """enqueue_at() creates a job in ScheduledJobRegistry"""
-        queue = Queue(connection=self.testconn)
-        scheduled_time = datetime.now(timezone.utc) + timedelta(seconds=10)
-        job = queue.enqueue_at(scheduled_time, say_hello)
-        registry = ScheduledJobRegistry(queue=queue)
-        self.assertIn(job, registry)
-        self.assertTrue(registry.get_expiration_time(job), scheduled_time)
diff --git a/tests/test_registry.py b/tests/test_registry.py
deleted file mode 100644
index 5dd0be6..0000000
--- a/tests/test_registry.py
+++ /dev/null
@@ -1,513 +0,0 @@
-from datetime import datetime, timedelta
-from unittest import mock
-from unittest.mock import ANY
-
-from rq.defaults import DEFAULT_FAILURE_TTL
-from rq.exceptions import AbandonedJobError, InvalidJobOperation
-from rq.job import Job, JobStatus, requeue_job
-from rq.queue import Queue
-from rq.registry import (
-    CanceledJobRegistry,
-    DeferredJobRegistry,
-    FailedJobRegistry,
-    FinishedJobRegistry,
-    StartedJobRegistry,
-    clean_registries,
-)
-from rq.serializers import JSONSerializer
-from rq.utils import as_text, current_timestamp
-from rq.worker import Worker
-from tests import RQTestCase
-from tests.fixtures import div_by_zero, say_hello
-
-
-class CustomJob(Job):
-    """A custom job class just to test it"""
-
-
-class TestRegistry(RQTestCase):
-    def setUp(self):
-        super().setUp()
-        self.registry = StartedJobRegistry(connection=self.testconn)
-
-    def test_init(self):
-        """Registry can be instantiated with queue or name/Redis connection"""
-        queue = Queue('foo', connection=self.testconn)
-        registry = StartedJobRegistry(queue=queue)
-        self.assertEqual(registry.name, queue.name)
-        self.assertEqual(registry.connection, queue.connection)
-        self.assertEqual(registry.serializer, queue.serializer)
-
-        registry = StartedJobRegistry('bar', self.testconn, serializer=JSONSerializer)
-        self.assertEqual(registry.name, 'bar')
-        self.assertEqual(registry.connection, self.testconn)
-        self.assertEqual(registry.serializer, JSONSerializer)
-
-    def test_key(self):
-        self.assertEqual(self.registry.key, 'rq:wip:default')
-
-    def test_custom_job_class(self):
-        registry = StartedJobRegistry(job_class=CustomJob)
-        self.assertFalse(registry.job_class == self.registry.job_class)
-
-    def test_contains(self):
-        registry = StartedJobRegistry(connection=self.testconn)
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(say_hello)
-
-        self.assertFalse(job in registry)
-        self.assertFalse(job.id in registry)
-
-        registry.add(job, 5)
-
-        self.assertTrue(job in registry)
-        self.assertTrue(job.id in registry)
-
-    def test_get_expiration_time(self):
-        """registry.get_expiration_time() returns correct datetime objects"""
-        registry = StartedJobRegistry(connection=self.testconn)
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(say_hello)
-
-        registry.add(job, 5)
-        time = registry.get_expiration_time(job)
-        expected_time = (datetime.utcnow() + timedelta(seconds=5)).replace(microsecond=0)
-        self.assertGreaterEqual(time, expected_time - timedelta(seconds=2))
-        self.assertLessEqual(time, expected_time + timedelta(seconds=2))
-
-    def test_add_and_remove(self):
-        """Adding and removing job to StartedJobRegistry."""
-        timestamp = current_timestamp()
-
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(say_hello)
-
-        # Test that job is added with the right score
-        self.registry.add(job, 1000)
-        self.assertLess(self.testconn.zscore(self.registry.key, job.id), timestamp + 1002)
-
-        # Ensure that a timeout of -1 results in a score of inf
-        self.registry.add(job, -1)
-        self.assertEqual(self.testconn.zscore(self.registry.key, job.id), float('inf'))
-
-        # Ensure that job is removed from sorted set, but job key is not deleted
-        self.registry.remove(job)
-        self.assertIsNone(self.testconn.zscore(self.registry.key, job.id))
-        self.assertTrue(self.testconn.exists(job.key))
-
-        self.registry.add(job, -1)
-
-        # registry.remove() also accepts job.id
-        self.registry.remove(job.id)
-        self.assertIsNone(self.testconn.zscore(self.registry.key, job.id))
-
-        self.registry.add(job, -1)
-
-        # delete_job = True deletes job key
-        self.registry.remove(job, delete_job=True)
-        self.assertIsNone(self.testconn.zscore(self.registry.key, job.id))
-        self.assertFalse(self.testconn.exists(job.key))
-
-        job = queue.enqueue(say_hello)
-
-        self.registry.add(job, -1)
-
-        # delete_job = True also works with job.id
-        self.registry.remove(job.id, delete_job=True)
-        self.assertIsNone(self.testconn.zscore(self.registry.key, job.id))
-        self.assertFalse(self.testconn.exists(job.key))
-
-    def test_add_and_remove_with_serializer(self):
-        """Adding and removing job to StartedJobRegistry (with serializer)."""
-        # delete_job = True also works with job.id and custom serializer
-        queue = Queue(connection=self.testconn, serializer=JSONSerializer)
-        registry = StartedJobRegistry(connection=self.testconn, serializer=JSONSerializer)
-        job = queue.enqueue(say_hello)
-        registry.add(job, -1)
-        registry.remove(job.id, delete_job=True)
-        self.assertIsNone(self.testconn.zscore(registry.key, job.id))
-        self.assertFalse(self.testconn.exists(job.key))
-
-    def test_get_job_ids(self):
-        """Getting job ids from StartedJobRegistry."""
-        timestamp = current_timestamp()
-        self.testconn.zadd(self.registry.key, {'foo': timestamp + 10})
-        self.testconn.zadd(self.registry.key, {'bar': timestamp + 20})
-        self.assertEqual(self.registry.get_job_ids(), ['foo', 'bar'])
-
-    def test_get_expired_job_ids(self):
-        """Getting expired job ids form StartedJobRegistry."""
-        timestamp = current_timestamp()
-
-        self.testconn.zadd(self.registry.key, {'foo': 1})
-        self.testconn.zadd(self.registry.key, {'bar': timestamp + 10})
-        self.testconn.zadd(self.registry.key, {'baz': timestamp + 30})
-
-        self.assertEqual(self.registry.get_expired_job_ids(), ['foo'])
-        self.assertEqual(self.registry.get_expired_job_ids(timestamp + 20), ['foo', 'bar'])
-
-        # CanceledJobRegistry does not implement get_expired_job_ids()
-        registry = CanceledJobRegistry(connection=self.testconn)
-        self.assertRaises(NotImplementedError, registry.get_expired_job_ids)
-
-    def test_cleanup_moves_jobs_to_failed_job_registry(self):
-        """Moving expired jobs to FailedJobRegistry."""
-        queue = Queue(connection=self.testconn)
-        failed_job_registry = FailedJobRegistry(connection=self.testconn)
-        job = queue.enqueue(say_hello)
-
-        self.testconn.zadd(self.registry.key, {job.id: 2})
-
-        # Job has not been moved to FailedJobRegistry
-        self.registry.cleanup(1)
-        self.assertNotIn(job, failed_job_registry)
-        self.assertIn(job, self.registry)
-
-        with mock.patch.object(Job, 'execute_failure_callback') as mocked:
-            self.registry.cleanup()
-            mocked.assert_called_once_with(queue.death_penalty_class, AbandonedJobError, ANY, ANY)
-        self.assertIn(job.id, failed_job_registry)
-        self.assertNotIn(job, self.registry)
-        job.refresh()
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-        self.assertTrue(job.exc_info)  # explanation is written to exc_info
-
-    def test_job_execution(self):
-        """Job is removed from StartedJobRegistry after execution."""
-        registry = StartedJobRegistry(connection=self.testconn)
-        queue = Queue(connection=self.testconn)
-        worker = Worker([queue])
-
-        job = queue.enqueue(say_hello)
-        self.assertTrue(job.is_queued)
-
-        worker.prepare_job_execution(job)
-        self.assertIn(job.id, registry.get_job_ids())
-        self.assertTrue(job.is_started)
-
-        worker.perform_job(job, queue)
-        self.assertNotIn(job.id, registry.get_job_ids())
-        self.assertTrue(job.is_finished)
-
-        # Job that fails
-        job = queue.enqueue(div_by_zero)
-
-        worker.prepare_job_execution(job)
-        self.assertIn(job.id, registry.get_job_ids())
-
-        worker.perform_job(job, queue)
-        self.assertNotIn(job.id, registry.get_job_ids())
-
-    def test_job_deletion(self):
-        """Ensure job is removed from StartedJobRegistry when deleted."""
-        registry = StartedJobRegistry(connection=self.testconn)
-        queue = Queue(connection=self.testconn)
-        worker = Worker([queue])
-
-        job = queue.enqueue(say_hello)
-        self.assertTrue(job.is_queued)
-
-        worker.prepare_job_execution(job)
-        self.assertIn(job.id, registry.get_job_ids())
-
-        job.delete()
-        self.assertNotIn(job.id, registry.get_job_ids())
-
-    def test_get_job_count(self):
-        """StartedJobRegistry returns the right number of job count."""
-        timestamp = current_timestamp() + 10
-        self.testconn.zadd(self.registry.key, {'foo': timestamp})
-        self.testconn.zadd(self.registry.key, {'bar': timestamp})
-        self.assertEqual(self.registry.count, 2)
-        self.assertEqual(len(self.registry), 2)
-
-    def test_clean_registries(self):
-        """clean_registries() cleans Started and Finished job registries."""
-
-        queue = Queue(connection=self.testconn)
-
-        finished_job_registry = FinishedJobRegistry(connection=self.testconn)
-        self.testconn.zadd(finished_job_registry.key, {'foo': 1})
-
-        started_job_registry = StartedJobRegistry(connection=self.testconn)
-        self.testconn.zadd(started_job_registry.key, {'foo': 1})
-
-        failed_job_registry = FailedJobRegistry(connection=self.testconn)
-        self.testconn.zadd(failed_job_registry.key, {'foo': 1})
-
-        clean_registries(queue)
-        self.assertEqual(self.testconn.zcard(finished_job_registry.key), 0)
-        self.assertEqual(self.testconn.zcard(started_job_registry.key), 0)
-        self.assertEqual(self.testconn.zcard(failed_job_registry.key), 0)
-
-    def test_clean_registries_with_serializer(self):
-        """clean_registries() cleans Started and Finished job registries (with serializer)."""
-
-        queue = Queue(connection=self.testconn, serializer=JSONSerializer)
-
-        finished_job_registry = FinishedJobRegistry(connection=self.testconn, serializer=JSONSerializer)
-        self.testconn.zadd(finished_job_registry.key, {'foo': 1})
-
-        started_job_registry = StartedJobRegistry(connection=self.testconn, serializer=JSONSerializer)
-        self.testconn.zadd(started_job_registry.key, {'foo': 1})
-
-        failed_job_registry = FailedJobRegistry(connection=self.testconn, serializer=JSONSerializer)
-        self.testconn.zadd(failed_job_registry.key, {'foo': 1})
-
-        clean_registries(queue)
-        self.assertEqual(self.testconn.zcard(finished_job_registry.key), 0)
-        self.assertEqual(self.testconn.zcard(started_job_registry.key), 0)
-        self.assertEqual(self.testconn.zcard(failed_job_registry.key), 0)
-
-    def test_get_queue(self):
-        """registry.get_queue() returns the right Queue object."""
-        registry = StartedJobRegistry(connection=self.testconn)
-        self.assertEqual(registry.get_queue(), Queue(connection=self.testconn))
-
-        registry = StartedJobRegistry('foo', connection=self.testconn, serializer=JSONSerializer)
-        self.assertEqual(registry.get_queue(), Queue('foo', connection=self.testconn, serializer=JSONSerializer))
-
-
-class TestFinishedJobRegistry(RQTestCase):
-    def setUp(self):
-        super().setUp()
-        self.registry = FinishedJobRegistry(connection=self.testconn)
-
-    def test_key(self):
-        self.assertEqual(self.registry.key, 'rq:finished:default')
-
-    def test_cleanup(self):
-        """Finished job registry removes expired jobs."""
-        timestamp = current_timestamp()
-        self.testconn.zadd(self.registry.key, {'foo': 1})
-        self.testconn.zadd(self.registry.key, {'bar': timestamp + 10})
-        self.testconn.zadd(self.registry.key, {'baz': timestamp + 30})
-
-        self.registry.cleanup()
-        self.assertEqual(self.registry.get_job_ids(), ['bar', 'baz'])
-
-        self.registry.cleanup(timestamp + 20)
-        self.assertEqual(self.registry.get_job_ids(), ['baz'])
-
-        # CanceledJobRegistry now implements noop cleanup, should not raise exception
-        registry = CanceledJobRegistry(connection=self.testconn)
-        registry.cleanup()
-
-    def test_jobs_are_put_in_registry(self):
-        """Completed jobs are added to FinishedJobRegistry."""
-        self.assertEqual(self.registry.get_job_ids(), [])
-        queue = Queue(connection=self.testconn)
-        worker = Worker([queue])
-
-        # Completed jobs are put in FinishedJobRegistry
-        job = queue.enqueue(say_hello)
-        worker.perform_job(job, queue)
-        self.assertEqual(self.registry.get_job_ids(), [job.id])
-
-        # When job is deleted, it should be removed from FinishedJobRegistry
-        self.assertEqual(job.get_status(), JobStatus.FINISHED)
-        job.delete()
-        self.assertEqual(self.registry.get_job_ids(), [])
-
-        # Failed jobs are not put in FinishedJobRegistry
-        failed_job = queue.enqueue(div_by_zero)
-        worker.perform_job(failed_job, queue)
-        self.assertEqual(self.registry.get_job_ids(), [])
-
-
-class TestDeferredRegistry(RQTestCase):
-    def setUp(self):
-        super().setUp()
-        self.registry = DeferredJobRegistry(connection=self.testconn)
-
-    def test_key(self):
-        self.assertEqual(self.registry.key, 'rq:deferred:default')
-
-    def test_add(self):
-        """Adding a job to DeferredJobsRegistry."""
-        job = Job()
-        self.registry.add(job)
-        job_ids = [as_text(job_id) for job_id in self.testconn.zrange(self.registry.key, 0, -1)]
-        self.assertEqual(job_ids, [job.id])
-
-    def test_register_dependency(self):
-        """Ensure job creation and deletion works with DeferredJobRegistry."""
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(say_hello)
-        job2 = queue.enqueue(say_hello, depends_on=job)
-
-        registry = DeferredJobRegistry(connection=self.testconn)
-        self.assertEqual(registry.get_job_ids(), [job2.id])
-
-        # When deleted, job removes itself from DeferredJobRegistry
-        job2.delete()
-        self.assertEqual(registry.get_job_ids(), [])
-
-
-class TestFailedJobRegistry(RQTestCase):
-    def test_default_failure_ttl(self):
-        """Job TTL defaults to DEFAULT_FAILURE_TTL"""
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(say_hello)
-
-        registry = FailedJobRegistry(connection=self.testconn)
-        key = registry.key
-
-        timestamp = current_timestamp()
-        registry.add(job)
-        score = self.testconn.zscore(key, job.id)
-        self.assertLess(score, timestamp + DEFAULT_FAILURE_TTL + 2)
-        self.assertGreater(score, timestamp + DEFAULT_FAILURE_TTL - 2)
-
-        # Job key will also expire
-        job_ttl = self.testconn.ttl(job.key)
-        self.assertLess(job_ttl, DEFAULT_FAILURE_TTL + 2)
-        self.assertGreater(job_ttl, DEFAULT_FAILURE_TTL - 2)
-
-        timestamp = current_timestamp()
-        ttl = 5
-        registry.add(job, ttl=ttl)
-        score = self.testconn.zscore(key, job.id)
-        self.assertLess(score, timestamp + ttl + 2)
-        self.assertGreater(score, timestamp + ttl - 2)
-
-        job_ttl = self.testconn.ttl(job.key)
-        self.assertLess(job_ttl, ttl + 2)
-        self.assertGreater(job_ttl, ttl - 2)
-
-    def test_requeue(self):
-        """FailedJobRegistry.requeue works properly"""
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(div_by_zero, failure_ttl=5)
-
-        worker = Worker([queue])
-        worker.work(burst=True)
-
-        registry = FailedJobRegistry(connection=worker.connection)
-        self.assertTrue(job in registry)
-
-        registry.requeue(job.id)
-        self.assertFalse(job in registry)
-        self.assertIn(job.id, queue.get_job_ids())
-
-        job.refresh()
-        self.assertEqual(job.get_status(), JobStatus.QUEUED)
-        self.assertEqual(job.started_at, None)
-        self.assertEqual(job.ended_at, None)
-
-        worker.work(burst=True)
-        self.assertTrue(job in registry)
-
-        # Should also work with job instance
-        registry.requeue(job)
-        self.assertFalse(job in registry)
-        self.assertIn(job.id, queue.get_job_ids())
-
-        job.refresh()
-        self.assertEqual(job.get_status(), JobStatus.QUEUED)
-
-        worker.work(burst=True)
-        self.assertTrue(job in registry)
-
-        # requeue_job should work the same way
-        requeue_job(job.id, connection=self.testconn)
-        self.assertFalse(job in registry)
-        self.assertIn(job.id, queue.get_job_ids())
-
-        job.refresh()
-        self.assertEqual(job.get_status(), JobStatus.QUEUED)
-
-        worker.work(burst=True)
-        self.assertTrue(job in registry)
-
-        # And so does job.requeue()
-        job.requeue()
-        self.assertFalse(job in registry)
-        self.assertIn(job.id, queue.get_job_ids())
-
-        job.refresh()
-        self.assertEqual(job.get_status(), JobStatus.QUEUED)
-
-    def test_requeue_with_serializer(self):
-        """FailedJobRegistry.requeue works properly (with serializer)"""
-        queue = Queue(connection=self.testconn, serializer=JSONSerializer)
-        job = queue.enqueue(div_by_zero, failure_ttl=5)
-
-        worker = Worker([queue], serializer=JSONSerializer)
-        worker.work(burst=True)
-
-        registry = FailedJobRegistry(connection=worker.connection, serializer=JSONSerializer)
-        self.assertTrue(job in registry)
-
-        registry.requeue(job.id)
-        self.assertFalse(job in registry)
-        self.assertIn(job.id, queue.get_job_ids())
-
-        job.refresh()
-        self.assertEqual(job.get_status(), JobStatus.QUEUED)
-        self.assertEqual(job.started_at, None)
-        self.assertEqual(job.ended_at, None)
-
-        worker.work(burst=True)
-        self.assertTrue(job in registry)
-
-        # Should also work with job instance
-        registry.requeue(job)
-        self.assertFalse(job in registry)
-        self.assertIn(job.id, queue.get_job_ids())
-
-        job.refresh()
-        self.assertEqual(job.get_status(), JobStatus.QUEUED)
-
-        worker.work(burst=True)
-        self.assertTrue(job in registry)
-
-        # requeue_job should work the same way
-        requeue_job(job.id, connection=self.testconn, serializer=JSONSerializer)
-        self.assertFalse(job in registry)
-        self.assertIn(job.id, queue.get_job_ids())
-
-        job.refresh()
-        self.assertEqual(job.get_status(), JobStatus.QUEUED)
-
-        worker.work(burst=True)
-        self.assertTrue(job in registry)
-
-        # And so does job.requeue()
-        job.requeue()
-        self.assertFalse(job in registry)
-        self.assertIn(job.id, queue.get_job_ids())
-
-        job.refresh()
-        self.assertEqual(job.get_status(), JobStatus.QUEUED)
-
-    def test_invalid_job(self):
-        """Requeuing a job that's not in FailedJobRegistry raises an error."""
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(say_hello)
-
-        registry = FailedJobRegistry(connection=self.testconn)
-        with self.assertRaises(InvalidJobOperation):
-            registry.requeue(job)
-
-    def test_worker_handle_job_failure(self):
-        """Failed jobs are added to FailedJobRegistry"""
-        q = Queue(connection=self.testconn)
-
-        w = Worker([q])
-        registry = FailedJobRegistry(connection=w.connection)
-
-        timestamp = current_timestamp()
-
-        job = q.enqueue(div_by_zero, failure_ttl=5)
-        w.handle_job_failure(job, q)
-        # job is added to FailedJobRegistry with default failure ttl
-        self.assertIn(job.id, registry.get_job_ids())
-        self.assertLess(self.testconn.zscore(registry.key, job.id), timestamp + DEFAULT_FAILURE_TTL + 5)
-
-        # job is added to FailedJobRegistry with specified ttl
-        job = q.enqueue(div_by_zero, failure_ttl=5)
-        w.handle_job_failure(job, q)
-        self.assertLess(self.testconn.zscore(registry.key, job.id), timestamp + 7)
diff --git a/tests/test_results.py b/tests/test_results.py
deleted file mode 100644
index e27e872..0000000
--- a/tests/test_results.py
+++ /dev/null
@@ -1,251 +0,0 @@
-import tempfile
-import unittest
-from datetime import timedelta
-from unittest.mock import PropertyMock, patch
-
-from redis import Redis
-
-from rq.defaults import UNSERIALIZABLE_RETURN_VALUE_PAYLOAD
-from rq.job import Job
-from rq.queue import Queue
-from rq.registry import StartedJobRegistry
-from rq.results import Result, get_key
-from rq.utils import get_version, utcnow
-from rq.worker import Worker
-from tests import RQTestCase
-
-from .fixtures import div_by_zero, say_hello
-
-
-@unittest.skipIf(get_version(Redis()) < (5, 0, 0), 'Skip if Redis server < 5.0')
-class TestScheduledJobRegistry(RQTestCase):
-    def test_save_and_get_result(self):
-        """Ensure data is saved properly"""
-        queue = Queue(connection=self.connection)
-        job = queue.enqueue(say_hello)
-
-        result = Result.fetch_latest(job)
-        self.assertIsNone(result)
-
-        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=1)
-        result = Result.fetch_latest(job)
-        self.assertEqual(result.return_value, 1)
-        self.assertEqual(job.latest_result().return_value, 1)
-
-        # Check that ttl is properly set
-        key = get_key(job.id)
-        ttl = self.connection.pttl(key)
-        self.assertTrue(5000 < ttl <= 10000)
-
-        # Check job with None return value
-        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=None)
-        result = Result.fetch_latest(job)
-        self.assertIsNone(result.return_value)
-        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=2)
-        result = Result.fetch_latest(job)
-        self.assertEqual(result.return_value, 2)
-
-    def test_create_failure(self):
-        """Ensure data is saved properly"""
-        queue = Queue(connection=self.connection)
-        job = queue.enqueue(say_hello)
-        Result.create_failure(job, ttl=10, exc_string='exception')
-        result = Result.fetch_latest(job)
-        self.assertEqual(result.exc_string, 'exception')
-
-        # Check that ttl is properly set
-        key = get_key(job.id)
-        ttl = self.connection.pttl(key)
-        self.assertTrue(5000 < ttl <= 10000)
-
-    def test_getting_results(self):
-        """Check getting all execution results"""
-        queue = Queue(connection=self.connection)
-        job = queue.enqueue(say_hello)
-
-        # latest_result() returns None when there's no result
-        self.assertIsNone(job.latest_result())
-
-        result_1 = Result.create_failure(job, ttl=10, exc_string='exception')
-        result_2 = Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=1)
-        result_3 = Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=1)
-
-        # Result.fetch_latest() returns the latest result
-        result = Result.fetch_latest(job)
-        self.assertEqual(result, result_3)
-        self.assertEqual(job.latest_result(), result_3)
-
-        # Result.all() and job.results() returns all results, newest first
-        results = Result.all(job)
-        self.assertEqual(results, [result_3, result_2, result_1])
-        self.assertEqual(job.results(), [result_3, result_2, result_1])
-
-    def test_count(self):
-        """Result.count(job) returns number of results"""
-        queue = Queue(connection=self.connection)
-        job = queue.enqueue(say_hello)
-        self.assertEqual(Result.count(job), 0)
-        Result.create_failure(job, ttl=10, exc_string='exception')
-        self.assertEqual(Result.count(job), 1)
-        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=1)
-        self.assertEqual(Result.count(job), 2)
-
-    def test_delete_all(self):
-        """Result.delete_all(job) deletes all results from Redis"""
-        queue = Queue(connection=self.connection)
-        job = queue.enqueue(say_hello)
-        Result.create_failure(job, ttl=10, exc_string='exception')
-        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=1)
-        Result.delete_all(job)
-        self.assertEqual(Result.count(job), 0)
-
-    def test_job_successful_result_fallback(self):
-        """Changes to job.result handling should be backwards compatible."""
-        queue = Queue(connection=self.connection)
-        job = queue.enqueue(say_hello)
-        worker = Worker([queue])
-        worker.register_birth()
-
-        self.assertEqual(worker.failed_job_count, 0)
-        self.assertEqual(worker.successful_job_count, 0)
-        self.assertEqual(worker.total_working_time, 0)
-
-        # These should only run on workers that support Redis streams
-        registry = StartedJobRegistry(connection=self.connection)
-        job.started_at = utcnow()
-        job.ended_at = job.started_at + timedelta(seconds=0.75)
-        job._result = 'Success'
-        worker.handle_job_success(job, queue, registry)
-
-        payload = self.connection.hgetall(job.key)
-        self.assertFalse(b'result' in payload.keys())
-        self.assertEqual(job.result, 'Success')
-
-        with patch('rq.worker.Worker.supports_redis_streams', new_callable=PropertyMock) as mock:
-            with patch('rq.job.Job.supports_redis_streams', new_callable=PropertyMock) as job_mock:
-                job_mock.return_value = False
-                mock.return_value = False
-                worker = Worker([queue])
-                worker.register_birth()
-                job = queue.enqueue(say_hello)
-                job._result = 'Success'
-                job.started_at = utcnow()
-                job.ended_at = job.started_at + timedelta(seconds=0.75)
-
-                # If `save_result_to_job` = True, result will be saved to job
-                # hash, simulating older versions of RQ
-
-                worker.handle_job_success(job, queue, registry)
-                payload = self.connection.hgetall(job.key)
-                self.assertTrue(b'result' in payload.keys())
-                # Delete all new result objects so we only have result stored in job hash,
-                # this should simulate a job that was executed in an earlier RQ version
-                self.assertEqual(job.result, 'Success')
-
-    def test_job_failed_result_fallback(self):
-        """Changes to job.result failure handling should be backwards compatible."""
-        queue = Queue(connection=self.connection)
-        job = queue.enqueue(say_hello)
-        worker = Worker([queue])
-        worker.register_birth()
-
-        self.assertEqual(worker.failed_job_count, 0)
-        self.assertEqual(worker.successful_job_count, 0)
-        self.assertEqual(worker.total_working_time, 0)
-
-        registry = StartedJobRegistry(connection=self.connection)
-        job.started_at = utcnow()
-        job.ended_at = job.started_at + timedelta(seconds=0.75)
-        worker.handle_job_failure(job, exc_string='Error', queue=queue, started_job_registry=registry)
-
-        job = Job.fetch(job.id, connection=self.connection)
-        payload = self.connection.hgetall(job.key)
-        self.assertFalse(b'exc_info' in payload.keys())
-        self.assertEqual(job.exc_info, 'Error')
-
-        with patch('rq.worker.Worker.supports_redis_streams', new_callable=PropertyMock) as mock:
-            with patch('rq.job.Job.supports_redis_streams', new_callable=PropertyMock) as job_mock:
-                job_mock.return_value = False
-                mock.return_value = False
-                worker = Worker([queue])
-                worker.register_birth()
-
-                job = queue.enqueue(say_hello)
-                job.started_at = utcnow()
-                job.ended_at = job.started_at + timedelta(seconds=0.75)
-
-                # If `save_result_to_job` is True, the result will be saved to the job
-                # hash, simulating older versions of RQ
-
-                worker.handle_job_failure(job, exc_string='Error', queue=queue, started_job_registry=registry)
-                payload = self.connection.hgetall(job.key)
-                self.assertTrue(b'exc_info' in payload.keys())
-                # Delete all new result objects so we only have result stored in job hash,
-                # this should simulate a job that was executed in an earlier RQ version
-                Result.delete_all(job)
-                job = Job.fetch(job.id, connection=self.connection)
-                self.assertEqual(job.exc_info, 'Error')
-
-    def test_job_return_value(self):
-        """Test job.return_value"""
-        queue = Queue(connection=self.connection)
-        job = queue.enqueue(say_hello)
-
-        # Returns None when there's no result
-        self.assertIsNone(job.return_value())
-
-        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=1)
-        self.assertEqual(job.return_value(), 1)
-
-        # Returns None if latest result is a failure
-        Result.create_failure(job, ttl=10, exc_string='exception')
-        self.assertIsNone(job.return_value(refresh=True))
-
-    def test_job_return_value_sync(self):
-        """Test job.return_value when queue.is_async=False"""
-        queue = Queue(connection=self.connection, is_async=False)
-        job = queue.enqueue(say_hello)
-
-        # Returns None when there's no result
-        self.assertIsNotNone(job.return_value())
-
-        job = queue.enqueue(div_by_zero)
-        self.assertEqual(job.latest_result().type, Result.Type.FAILED)
-
-    def test_job_return_value_result_ttl_infinity(self):
-        """Test job.return_value when queue.result_ttl=-1"""
-        queue = Queue(connection=self.connection, result_ttl=-1)
-        job = queue.enqueue(say_hello)
-
-        # Returns None when there's no result
-        self.assertIsNone(job.return_value())
-
-        Result.create(job, Result.Type.SUCCESSFUL, ttl=-1, return_value=1)
-        self.assertEqual(job.return_value(), 1)
-
-    def test_job_return_value_result_ttl_zero(self):
-        """Test job.return_value when queue.result_ttl=0"""
-        queue = Queue(connection=self.connection, result_ttl=0)
-        job = queue.enqueue(say_hello)
-
-        # Returns None when there's no result
-        self.assertIsNone(job.return_value())
-
-        Result.create(job, Result.Type.SUCCESSFUL, ttl=0, return_value=1)
-        self.assertIsNone(job.return_value())
-
-    def test_job_return_value_unserializable(self):
-        """Test job.return_value when it is not serializable"""
-        queue = Queue(connection=self.connection, result_ttl=0)
-        job = queue.enqueue(say_hello)
-
-        # Returns None when there's no result
-        self.assertIsNone(job.return_value())
-
-        # tempfile.NamedTemporaryFile() is not picklable
-        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=tempfile.NamedTemporaryFile())
-        self.assertEqual(job.return_value(), UNSERIALIZABLE_RETURN_VALUE_PAYLOAD)
-        self.assertEqual(Result.count(job), 1)
-
-        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=1)
-        self.assertEqual(Result.count(job), 2)
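
The Result API exercised by the tests above is also usable directly. A minimal sketch, assuming a local Redis on the default port, a running worker, and a placeholder importable function myapp.count_words (not part of the upstream code):

    from redis import Redis
    from rq import Queue
    from rq.results import Result

    queue = Queue(connection=Redis())
    job = queue.enqueue('myapp.count_words', 'some text')  # placeholder task

    # Once a worker has processed the job, its stored results can be read back:
    print(job.latest_result())   # newest Result object, or None
    print(job.return_value())    # return value of the latest successful run
    print(Result.all(job))       # every stored Result, newest first
    print(Result.count(job))     # number of stored results
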
diff --git a/tests/test_retry.py b/tests/test_retry.py
deleted file mode 100644
index ed2d477..0000000
--- a/tests/test_retry.py
+++ /dev/null
@@ -1,138 +0,0 @@
-from datetime import datetime, timedelta, timezone
-
-from rq.job import Job, JobStatus, Retry
-from rq.queue import Queue
-from rq.registry import FailedJobRegistry, StartedJobRegistry
-from rq.worker import Worker
-from tests import RQTestCase, fixtures
-from tests.fixtures import div_by_zero, say_hello
-
-
-class TestRetry(RQTestCase):
-    def test_persistence_of_retry_data(self):
-        """Retry related data is stored and restored properly"""
-        job = Job.create(func=fixtures.some_calculation)
-        job.retries_left = 3
-        job.retry_intervals = [1, 2, 3]
-        job.save()
-
-        job.retries_left = None
-        job.retry_intervals = None
-        job.refresh()
-        self.assertEqual(job.retries_left, 3)
-        self.assertEqual(job.retry_intervals, [1, 2, 3])
-
-    def test_retry_class(self):
-        """Retry parses `max` and `interval` correctly"""
-        retry = Retry(max=1)
-        self.assertEqual(retry.max, 1)
-        self.assertEqual(retry.intervals, [0])
-        self.assertRaises(ValueError, Retry, max=0)
-
-        retry = Retry(max=2, interval=5)
-        self.assertEqual(retry.max, 2)
-        self.assertEqual(retry.intervals, [5])
-
-        retry = Retry(max=3, interval=[5, 10])
-        self.assertEqual(retry.max, 3)
-        self.assertEqual(retry.intervals, [5, 10])
-
-        # interval can't be negative
-        self.assertRaises(ValueError, Retry, max=1, interval=-5)
-        self.assertRaises(ValueError, Retry, max=1, interval=[1, -5])
-
-    def test_get_retry_interval(self):
-        """get_retry_interval() returns the right retry interval"""
-        job = Job.create(func=fixtures.say_hello)
-
-        # Handle case where self.retry_intervals is None
-        job.retries_left = 2
-        self.assertEqual(job.get_retry_interval(), 0)
-
-        # Handle the most common case
-        job.retry_intervals = [1, 2]
-        self.assertEqual(job.get_retry_interval(), 1)
-        job.retries_left = 1
-        self.assertEqual(job.get_retry_interval(), 2)
-
-        # Handle cases where number of retries > length of interval
-        job.retries_left = 4
-        job.retry_intervals = [1, 2, 3]
-        self.assertEqual(job.get_retry_interval(), 1)
-        job.retries_left = 3
-        self.assertEqual(job.get_retry_interval(), 1)
-        job.retries_left = 2
-        self.assertEqual(job.get_retry_interval(), 2)
-        job.retries_left = 1
-        self.assertEqual(job.get_retry_interval(), 3)
-
-    def test_job_retry(self):
-        """Test job.retry() works properly"""
-        queue = Queue(connection=self.testconn)
-        retry = Retry(max=3, interval=5)
-        job = queue.enqueue(div_by_zero, retry=retry)
-
-        with self.testconn.pipeline() as pipeline:
-            job.retry(queue, pipeline)
-            pipeline.execute()
-
-        self.assertEqual(job.retries_left, 2)
-        # status should be scheduled since it's retried with a 5 second interval
-        self.assertEqual(job.get_status(), JobStatus.SCHEDULED)
-
-        retry = Retry(max=3)
-        job = queue.enqueue(div_by_zero, retry=retry)
-
-        with self.testconn.pipeline() as pipeline:
-            job.retry(queue, pipeline)
-
-            pipeline.execute()
-
-        self.assertEqual(job.retries_left, 2)
-        # status should be queued
-        self.assertEqual(job.get_status(), JobStatus.QUEUED)
-
-    def test_retry_interval(self):
-        """Retries with intervals are scheduled"""
-        connection = self.testconn
-        queue = Queue(connection=connection)
-        retry = Retry(max=1, interval=5)
-        job = queue.enqueue(div_by_zero, retry=retry)
-
-        worker = Worker([queue])
-        registry = queue.scheduled_job_registry
-        # If the job is configured to retry with an interval, it will be scheduled,
-        # not directly put back in the queue
-        queue.empty()
-        worker.handle_job_failure(job, queue)
-        job.refresh()
-        self.assertEqual(job.get_status(), JobStatus.SCHEDULED)
-        self.assertEqual(job.retries_left, 0)
-        self.assertEqual(len(registry), 1)
-        self.assertEqual(queue.job_ids, [])
-        # Scheduled time is roughly 5 seconds from now
-        scheduled_time = registry.get_scheduled_time(job)
-        now = datetime.now(timezone.utc)
-        self.assertTrue(now + timedelta(seconds=4) < scheduled_time < now + timedelta(seconds=10))
-
-    def test_cleanup_handles_retries(self):
-        """Expired jobs should also be retried"""
-        queue = Queue(connection=self.testconn)
-        registry = StartedJobRegistry(connection=self.testconn)
-        failed_job_registry = FailedJobRegistry(connection=self.testconn)
-        job = queue.enqueue(say_hello, retry=Retry(max=1))
-
-        # Add job to StartedJobRegistry with past expiration time
-        self.testconn.zadd(registry.key, {job.id: 2})
-
-        registry.cleanup()
-        self.assertEqual(len(queue), 2)
-        self.assertEqual(job.get_status(), JobStatus.QUEUED)
-        self.assertNotIn(job, failed_job_registry)
-
-        self.testconn.zadd(registry.key, {job.id: 2})
-        # Job goes to FailedJobRegistry because it's only retried once
-        registry.cleanup()
-        self.assertEqual(len(queue), 2)
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-        self.assertIn(job, failed_job_registry)
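
Outside the test suite, the same retry behaviour is configured at enqueue time. A minimal sketch, assuming a local Redis and a placeholder importable function myapp.flaky_call:

    from redis import Redis
    from rq import Queue
    from rq.job import Retry

    queue = Queue(connection=Redis())

    # Retry up to 3 times, waiting 10, 30 and then 60 seconds before the successive
    # retries; with an interval the job is scheduled, not put straight back on the queue.
    job = queue.enqueue('myapp.flaky_call', retry=Retry(max=3, interval=[10, 30, 60]))

    print(job.retries_left)      # 3
    print(job.retry_intervals)   # [10, 30, 60]
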
diff --git a/tests/test_scheduler.py b/tests/test_scheduler.py
deleted file mode 100644
index e6edf72..0000000
--- a/tests/test_scheduler.py
+++ /dev/null
@@ -1,507 +0,0 @@
-import os
-from datetime import datetime, timedelta, timezone
-from multiprocessing import Process
-from unittest import mock
-
-import redis
-
-from rq import Queue
-from rq.defaults import DEFAULT_MAINTENANCE_TASK_INTERVAL
-from rq.exceptions import NoSuchJobError
-from rq.job import Job, Retry
-from rq.registry import FinishedJobRegistry, ScheduledJobRegistry
-from rq.scheduler import RQScheduler
-from rq.serializers import JSONSerializer
-from rq.utils import current_timestamp
-from rq.worker import Worker
-from tests import RQTestCase, find_empty_redis_database, ssl_test
-
-from .fixtures import kill_worker, say_hello
-
-
-class CustomRedisConnection(redis.Connection):
-    """Custom redis connection with a custom arg, used in test_custom_connection_pool"""
-
-    def __init__(self, *args, custom_arg=None, **kwargs):
-        self.custom_arg = custom_arg
-        super().__init__(*args, **kwargs)
-
-    def get_custom_arg(self):
-        return self.custom_arg
-
-
-class TestScheduledJobRegistry(RQTestCase):
-    def test_get_jobs_to_enqueue(self):
-        """Getting job ids to enqueue from ScheduledJobRegistry."""
-        queue = Queue(connection=self.testconn)
-        registry = ScheduledJobRegistry(queue=queue)
-        timestamp = current_timestamp()
-
-        self.testconn.zadd(registry.key, {'foo': 1})
-        self.testconn.zadd(registry.key, {'bar': timestamp + 10})
-        self.testconn.zadd(registry.key, {'baz': timestamp + 30})
-
-        self.assertEqual(registry.get_jobs_to_enqueue(), ['foo'])
-        self.assertEqual(registry.get_jobs_to_enqueue(timestamp + 20), ['foo', 'bar'])
-
-    def test_get_jobs_to_schedule_with_chunk_size(self):
-        """Max amount of jobs returns by get_jobs_to_schedule() equal to chunk_size"""
-        queue = Queue(connection=self.testconn)
-        registry = ScheduledJobRegistry(queue=queue)
-        timestamp = current_timestamp()
-        chunk_size = 5
-
-        for index in range(0, chunk_size * 2):
-            self.testconn.zadd(registry.key, {'foo_{}'.format(index): 1})
-
-        self.assertEqual(len(registry.get_jobs_to_schedule(timestamp, chunk_size)), chunk_size)
-        self.assertEqual(len(registry.get_jobs_to_schedule(timestamp, chunk_size * 2)), chunk_size * 2)
-
-    def test_get_scheduled_time(self):
-        """get_scheduled_time() returns job's scheduled datetime"""
-        queue = Queue(connection=self.testconn)
-        registry = ScheduledJobRegistry(queue=queue)
-
-        job = Job.create('myfunc', connection=self.testconn)
-        job.save()
-        dt = datetime(2019, 1, 1, tzinfo=timezone.utc)
-        registry.schedule(job, datetime(2019, 1, 1, tzinfo=timezone.utc))
-        self.assertEqual(registry.get_scheduled_time(job), dt)
-        # get_scheduled_time() should also work with job ID
-        self.assertEqual(registry.get_scheduled_time(job.id), dt)
-
-        # registry.get_scheduled_time() raises NoSuchJobError if
-        # job.id is not found
-        self.assertRaises(NoSuchJobError, registry.get_scheduled_time, '123')
-
-    def test_schedule(self):
-        """Adding job with the correct score to ScheduledJobRegistry"""
-        queue = Queue(connection=self.testconn)
-        job = Job.create('myfunc', connection=self.testconn)
-        job.save()
-        registry = ScheduledJobRegistry(queue=queue)
-
-        from datetime import timezone
-
-        # If we pass in a datetime with no timezone, `schedule()`
-        # assumes local timezone so depending on your local timezone,
-        # the timestamp may be different
-        #
-        # we need to account for the difference between a timezone
-        # with DST active and without DST active.  The time.timezone
-        # property isn't accurate when time.daylight is non-zero,
-        # so we'll test both.
-        #
-        # first, time.daylight == 0 (not in DST).
-        # mock the situation for America/New_York not in DST (UTC - 5)
-        # time.timezone = 18000
-        # time.daylight = 0
-        # time.altzone = 14400
-
-        mock_day = mock.patch('time.daylight', 0)
-        mock_tz = mock.patch('time.timezone', 18000)
-        mock_atz = mock.patch('time.altzone', 14400)
-        with mock_tz, mock_day, mock_atz:
-            registry.schedule(job, datetime(2019, 1, 1))
-            self.assertEqual(
-                self.testconn.zscore(registry.key, job.id), 1546300800 + 18000
-            )  # 2019-01-01 UTC in Unix timestamp
-
-            # second, time.daylight != 0 (in DST)
-            # mock the situation for America/New_York in DST (UTC - 4)
-            # time.timezone = 18000
-            # time.daylight = 1
-            # time.altzone = 14400
-            mock_day = mock.patch('time.daylight', 1)
-            mock_tz = mock.patch('time.timezone', 18000)
-            mock_atz = mock.patch('time.altzone', 14400)
-            with mock_tz, mock_day, mock_atz:
-                registry.schedule(job, datetime(2019, 1, 1))
-                self.assertEqual(
-                    self.testconn.zscore(registry.key, job.id), 1546300800 + 14400
-                )  # 2019-01-01 UTC in Unix timestamp
-
-            # Score is always stored in UTC even if datetime is in a different tz
-            tz = timezone(timedelta(hours=7))
-            job = Job.create('myfunc', connection=self.testconn)
-            job.save()
-            registry.schedule(job, datetime(2019, 1, 1, 7, tzinfo=tz))
-            self.assertEqual(self.testconn.zscore(registry.key, job.id), 1546300800)  # 2019-01-01 UTC in Unix timestamp
-
-
-class TestScheduler(RQTestCase):
-    def test_init(self):
-        """Scheduler can be instantiated with queues or queue names"""
-        foo_queue = Queue('foo', connection=self.testconn)
-        scheduler = RQScheduler([foo_queue, 'bar'], connection=self.testconn)
-        self.assertEqual(scheduler._queue_names, {'foo', 'bar'})
-        self.assertEqual(scheduler.status, RQScheduler.Status.STOPPED)
-
-    def test_should_reacquire_locks(self):
-        """scheduler.should_reacquire_locks works properly"""
-        queue = Queue(connection=self.testconn)
-        scheduler = RQScheduler([queue], connection=self.testconn)
-        self.assertTrue(scheduler.should_reacquire_locks)
-        scheduler.acquire_locks()
-        self.assertIsNotNone(scheduler.lock_acquisition_time)
-
-        # scheduler.should_reacquire_locks always returns False if
-        # scheduler.acquired_locks and scheduler._queue_names are the same
-        self.assertFalse(scheduler.should_reacquire_locks)
-        scheduler.lock_acquisition_time = datetime.now() - timedelta(seconds=DEFAULT_MAINTENANCE_TASK_INTERVAL + 6)
-        self.assertFalse(scheduler.should_reacquire_locks)
-
-        scheduler._queue_names = set(['default', 'foo'])
-        self.assertTrue(scheduler.should_reacquire_locks)
-        scheduler.acquire_locks()
-        self.assertFalse(scheduler.should_reacquire_locks)
-
-    def test_lock_acquisition(self):
-        """Test lock acquisition"""
-        name_1 = 'lock-test-1'
-        name_2 = 'lock-test-2'
-        name_3 = 'lock-test-3'
-        scheduler = RQScheduler([name_1], self.testconn)
-
-        self.assertEqual(scheduler.acquire_locks(), {name_1})
-        self.assertEqual(scheduler._acquired_locks, {name_1})
-        self.assertEqual(scheduler.acquire_locks(), set([]))
-
-        # Only name_2 is returned since name_1 is already locked
-        scheduler = RQScheduler([name_1, name_2], self.testconn)
-        self.assertEqual(scheduler.acquire_locks(), {name_2})
-        self.assertEqual(scheduler._acquired_locks, {name_2})
-
-        # When a new lock is successfully acquired, it is added to _acquired_locks
-        scheduler._queue_names.add(name_3)
-        self.assertEqual(scheduler.acquire_locks(), {name_3})
-        self.assertEqual(scheduler._acquired_locks, {name_2, name_3})
-
-    def test_lock_acquisition_with_auto_start(self):
-        """Test lock acquisition with auto_start=True"""
-        scheduler = RQScheduler(['auto-start'], self.testconn)
-        with mock.patch.object(scheduler, 'start') as mocked:
-            scheduler.acquire_locks(auto_start=True)
-            self.assertEqual(mocked.call_count, 1)
-
-        # If process has started, scheduler.start() won't be called
-        running_process = mock.MagicMock()
-        running_process.is_alive.return_value = True
-        scheduler = RQScheduler(['auto-start2'], self.testconn)
-        scheduler._process = running_process
-        with mock.patch.object(scheduler, 'start') as mocked:
-            scheduler.acquire_locks(auto_start=True)
-            self.assertEqual(mocked.call_count, 0)
-            self.assertEqual(running_process.is_alive.call_count, 1)
-
-        # If the process has stopped for some reason, the scheduler should restart
-        scheduler = RQScheduler(['auto-start3'], self.testconn)
-        stopped_process = mock.MagicMock()
-        stopped_process.is_alive.return_value = False
-        scheduler._process = stopped_process
-        with mock.patch.object(scheduler, 'start') as mocked:
-            scheduler.acquire_locks(auto_start=True)
-            self.assertEqual(mocked.call_count, 1)
-            self.assertEqual(stopped_process.is_alive.call_count, 1)
-
-    def test_lock_release(self):
-        """Test that scheduler.release_locks() only releases acquired locks"""
-        name_1 = 'lock-test-1'
-        name_2 = 'lock-test-2'
-        scheduler_1 = RQScheduler([name_1], self.testconn)
-
-        self.assertEqual(scheduler_1.acquire_locks(), {name_1})
-        self.assertEqual(scheduler_1._acquired_locks, {name_1})
-
-        # Only name_2 is returned since name_1 is already locked
-        scheduler_1_2 = RQScheduler([name_1, name_2], self.testconn)
-        self.assertEqual(scheduler_1_2.acquire_locks(), {name_2})
-        self.assertEqual(scheduler_1_2._acquired_locks, {name_2})
-
-        self.assertTrue(self.testconn.exists(scheduler_1.get_locking_key(name_1)))
-        self.assertTrue(self.testconn.exists(scheduler_1_2.get_locking_key(name_1)))
-        self.assertTrue(self.testconn.exists(scheduler_1_2.get_locking_key(name_2)))
-
-        scheduler_1_2.release_locks()
-
-        self.assertEqual(scheduler_1_2._acquired_locks, set())
-        self.assertEqual(scheduler_1._acquired_locks, {name_1})
-
-        self.assertTrue(self.testconn.exists(scheduler_1.get_locking_key(name_1)))
-        self.assertTrue(self.testconn.exists(scheduler_1_2.get_locking_key(name_1)))
-        self.assertFalse(self.testconn.exists(scheduler_1_2.get_locking_key(name_2)))
-
-    def test_queue_scheduler_pid(self):
-        queue = Queue(connection=self.testconn)
-        scheduler = RQScheduler(
-            [
-                queue,
-            ],
-            connection=self.testconn,
-        )
-        scheduler.acquire_locks()
-        assert queue.scheduler_pid == os.getpid()
-
-    def test_heartbeat(self):
-        """Test that heartbeat updates locking keys TTL"""
-        name_1 = 'lock-test-1'
-        name_2 = 'lock-test-2'
-        name_3 = 'lock-test-3'
-        scheduler = RQScheduler([name_3], self.testconn)
-        scheduler.acquire_locks()
-        scheduler = RQScheduler([name_1, name_2, name_3], self.testconn)
-        scheduler.acquire_locks()
-
-        locking_key_1 = RQScheduler.get_locking_key(name_1)
-        locking_key_2 = RQScheduler.get_locking_key(name_2)
-        locking_key_3 = RQScheduler.get_locking_key(name_3)
-
-        with self.testconn.pipeline() as pipeline:
-            pipeline.expire(locking_key_1, 1000)
-            pipeline.expire(locking_key_2, 1000)
-            pipeline.expire(locking_key_3, 1000)
-            pipeline.execute()
-
-        scheduler.heartbeat()
-        self.assertEqual(self.testconn.ttl(locking_key_1), 61)
-        self.assertEqual(self.testconn.ttl(locking_key_2), 61)
-        self.assertEqual(self.testconn.ttl(locking_key_3), 1000)
-
-        # scheduler.stop() releases locks and sets status to STOPPED
-        scheduler._status = scheduler.Status.WORKING
-        scheduler.stop()
-        self.assertFalse(self.testconn.exists(locking_key_1))
-        self.assertFalse(self.testconn.exists(locking_key_2))
-        self.assertTrue(self.testconn.exists(locking_key_3))
-        self.assertEqual(scheduler.status, scheduler.Status.STOPPED)
-
-        # Heartbeat also works properly for schedulers with a single queue
-        scheduler = RQScheduler([name_1], self.testconn)
-        scheduler.acquire_locks()
-        self.testconn.expire(locking_key_1, 1000)
-        scheduler.heartbeat()
-        self.assertEqual(self.testconn.ttl(locking_key_1), 61)
-
-    def test_enqueue_scheduled_jobs(self):
-        """Scheduler can enqueue scheduled jobs"""
-        queue = Queue(connection=self.testconn)
-        registry = ScheduledJobRegistry(queue=queue)
-        job = Job.create('myfunc', connection=self.testconn)
-        job.save()
-        registry.schedule(job, datetime(2019, 1, 1, tzinfo=timezone.utc))
-        scheduler = RQScheduler([queue], connection=self.testconn)
-        scheduler.acquire_locks()
-        scheduler.enqueue_scheduled_jobs()
-        self.assertEqual(len(queue), 1)
-
-        # After the scheduled job is enqueued, the registry should be empty
-        self.assertEqual(len(registry), 0)
-
-        # Jobs scheduled in the far future should not be affected
-        registry.schedule(job, datetime(2100, 1, 1, tzinfo=timezone.utc))
-        scheduler.enqueue_scheduled_jobs()
-        self.assertEqual(len(queue), 1)
-
-    def test_prepare_registries(self):
-        """prepare_registries() creates self._scheduled_job_registries"""
-        foo_queue = Queue('foo', connection=self.testconn)
-        bar_queue = Queue('bar', connection=self.testconn)
-        scheduler = RQScheduler([foo_queue, bar_queue], connection=self.testconn)
-        self.assertEqual(scheduler._scheduled_job_registries, [])
-        scheduler.prepare_registries([foo_queue.name])
-        self.assertEqual(scheduler._scheduled_job_registries, [ScheduledJobRegistry(queue=foo_queue)])
-        scheduler.prepare_registries([foo_queue.name, bar_queue.name])
-        self.assertEqual(
-            scheduler._scheduled_job_registries,
-            [ScheduledJobRegistry(queue=foo_queue), ScheduledJobRegistry(queue=bar_queue)],
-        )
-
-
-class TestWorker(RQTestCase):
-    def test_work_burst(self):
-        """worker.work() with scheduler enabled works properly"""
-        queue = Queue(connection=self.testconn)
-        worker = Worker(queues=[queue], connection=self.testconn)
-        worker.work(burst=True, with_scheduler=False)
-        self.assertIsNone(worker.scheduler)
-
-        worker = Worker(queues=[queue], connection=self.testconn)
-        worker.work(burst=True, with_scheduler=True)
-        self.assertIsNotNone(worker.scheduler)
-        self.assertIsNone(self.testconn.get(worker.scheduler.get_locking_key('default')))
-
-    @mock.patch.object(RQScheduler, 'acquire_locks')
-    def test_run_maintenance_tasks(self, mocked):
-        """scheduler.acquire_locks() is called only when scheduled is enabled"""
-        queue = Queue(connection=self.testconn)
-        worker = Worker(queues=[queue], connection=self.testconn)
-
-        worker.run_maintenance_tasks()
-        self.assertEqual(mocked.call_count, 0)
-
-        # If the scheduler object exists and it's a first start, acquire_locks should not run
-        worker.last_cleaned_at = None
-        worker.scheduler = RQScheduler([queue], connection=self.testconn)
-        worker.run_maintenance_tasks()
-        self.assertEqual(mocked.call_count, 0)
-
-        # The scheduler exists and it's NOT a first start; since the process doesn't exist,
-        # acquire_locks should be called to start the process
-        worker.last_cleaned_at = datetime.now()
-        worker.run_maintenance_tasks()
-        self.assertEqual(mocked.call_count, 1)
-
-        # the scheduler exists, the process exists, but the process is not alive
-        running_process = mock.MagicMock()
-        running_process.is_alive.return_value = False
-        worker.scheduler._process = running_process
-        worker.run_maintenance_tasks()
-        self.assertEqual(mocked.call_count, 2)
-        self.assertEqual(running_process.is_alive.call_count, 1)
-
-        # the scheduler exists, the process exists, and it is alive; acquire_locks shouldn't run
-        running_process.is_alive.return_value = True
-        worker.run_maintenance_tasks()
-        self.assertEqual(mocked.call_count, 2)
-        self.assertEqual(running_process.is_alive.call_count, 2)
-
-    def test_work(self):
-        queue = Queue(connection=self.testconn)
-        worker = Worker(queues=[queue], connection=self.testconn)
-        p = Process(target=kill_worker, args=(os.getpid(), False, 5))
-
-        p.start()
-        queue.enqueue_at(datetime(2019, 1, 1, tzinfo=timezone.utc), say_hello)
-        worker.work(burst=False, with_scheduler=True)
-        p.join(1)
-        self.assertIsNotNone(worker.scheduler)
-        registry = FinishedJobRegistry(queue=queue)
-        self.assertEqual(len(registry), 1)
-
-    @ssl_test
-    def test_work_with_ssl(self):
-        connection = find_empty_redis_database(ssl=True)
-        queue = Queue(connection=connection)
-        worker = Worker(queues=[queue], connection=connection)
-        p = Process(target=kill_worker, args=(os.getpid(), False, 5))
-
-        p.start()
-        queue.enqueue_at(datetime(2019, 1, 1, tzinfo=timezone.utc), say_hello)
-        worker.work(burst=False, with_scheduler=True)
-        p.join(1)
-        self.assertIsNotNone(worker.scheduler)
-        registry = FinishedJobRegistry(queue=queue)
-        self.assertEqual(len(registry), 1)
-
-    def test_work_with_serializer(self):
-        queue = Queue(connection=self.testconn, serializer=JSONSerializer)
-        worker = Worker(queues=[queue], connection=self.testconn, serializer=JSONSerializer)
-        p = Process(target=kill_worker, args=(os.getpid(), False, 5))
-
-        p.start()
-        queue.enqueue_at(datetime(2019, 1, 1, tzinfo=timezone.utc), say_hello, meta={'foo': 'bar'})
-        worker.work(burst=False, with_scheduler=True)
-        p.join(1)
-        self.assertIsNotNone(worker.scheduler)
-        registry = FinishedJobRegistry(queue=queue)
-        self.assertEqual(len(registry), 1)
-
-
-class TestQueue(RQTestCase):
-    def test_enqueue_at(self):
-        """queue.enqueue_at() puts job in the scheduled"""
-        queue = Queue(connection=self.testconn)
-        registry = ScheduledJobRegistry(queue=queue)
-        scheduler = RQScheduler([queue], connection=self.testconn)
-        scheduler.acquire_locks()
-        # Jobs created using enqueue_at are put in the ScheduledJobRegistry
-        job = queue.enqueue_at(datetime(2019, 1, 1, tzinfo=timezone.utc), say_hello)
-        self.assertEqual(len(queue), 0)
-        self.assertEqual(len(registry), 1)
-
-        # enqueue_at sets job status to "scheduled"
-        self.assertTrue(job.get_status() == 'scheduled')
-
-        # After enqueue_scheduled_jobs() is called, the registry is empty
-        # and the job is enqueued
-        scheduler.enqueue_scheduled_jobs()
-        self.assertEqual(len(queue), 1)
-        self.assertEqual(len(registry), 0)
-
-    def test_enqueue_at_at_front(self):
-        """queue.enqueue_at() accepts at_front argument. When true, job will be put at position 0
-        of the queue when the time comes for the job to be scheduled"""
-        queue = Queue(connection=self.testconn)
-        registry = ScheduledJobRegistry(queue=queue)
-        scheduler = RQScheduler([queue], connection=self.testconn)
-        scheduler.acquire_locks()
-        # Jobs created using enqueue_at are put in the ScheduledJobRegistry
-        # job_first should be enqueued first
-        job_first = queue.enqueue_at(datetime(2019, 1, 1, tzinfo=timezone.utc), say_hello)
-        # job_second will be enqueued second, but "at_front"
-        job_second = queue.enqueue_at(datetime(2019, 1, 2, tzinfo=timezone.utc), say_hello, at_front=True)
-        self.assertEqual(len(queue), 0)
-        self.assertEqual(len(registry), 2)
-
-        # enqueue_at sets job status to "scheduled"
-        self.assertTrue(job_first.get_status() == 'scheduled')
-        self.assertTrue(job_second.get_status() == 'scheduled')
-
-        # After enqueue_scheduled_jobs() is called, the registry is empty
-        # and the jobs are enqueued
-        scheduler.enqueue_scheduled_jobs()
-        self.assertEqual(len(queue), 2)
-        self.assertEqual(len(registry), 0)
-        self.assertEqual(0, queue.get_job_position(job_second.id))
-        self.assertEqual(1, queue.get_job_position(job_first.id))
-
-    def test_enqueue_in(self):
-        """queue.enqueue_in() schedules job correctly"""
-        queue = Queue(connection=self.testconn)
-        registry = ScheduledJobRegistry(queue=queue)
-
-        job = queue.enqueue_in(timedelta(seconds=30), say_hello)
-        now = datetime.now(timezone.utc)
-        scheduled_time = registry.get_scheduled_time(job)
-        # Ensure that job is scheduled roughly 30 seconds from now
-        self.assertTrue(now + timedelta(seconds=28) < scheduled_time < now + timedelta(seconds=32))
-
-    def test_enqueue_in_with_retry(self):
-        """Ensure that the retry parameter is passed
-        to the enqueue_at function from enqueue_in.
-        """
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue_in(timedelta(seconds=30), say_hello, retry=Retry(3, [2]))
-        self.assertEqual(job.retries_left, 3)
-        self.assertEqual(job.retry_intervals, [2])
-
-    def test_custom_connection_pool(self):
-        """Connection pool customizing. Ensure that we can properly set a
-        custom connection pool class and pass extra arguments"""
-        custom_conn = redis.Redis(
-            connection_pool=redis.ConnectionPool(
-                connection_class=CustomRedisConnection,
-                db=4,
-                custom_arg="foo",
-            )
-        )
-
-        queue = Queue(connection=custom_conn)
-        scheduler = RQScheduler([queue], connection=custom_conn)
-
-        scheduler_connection = scheduler.connection.connection_pool.get_connection('info')
-
-        self.assertEqual(scheduler_connection.__class__, CustomRedisConnection)
-        self.assertEqual(scheduler_connection.get_custom_arg(), "foo")
-
-    def test_no_custom_connection_pool(self):
-        """Connection pool customizing must not interfere if we're using a standard
-        connection (non-pooled)"""
-        standard_conn = redis.Redis(db=5)
-
-        queue = Queue(connection=standard_conn)
-        scheduler = RQScheduler([queue], connection=standard_conn)
-
-        scheduler_connection = scheduler.connection.connection_pool.get_connection('info')
-
-        self.assertEqual(scheduler_connection.__class__, redis.Connection)
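
The scheduling paths covered above are driven by ordinary queue calls plus a worker started with the scheduler enabled. A minimal sketch, assuming a local Redis and a placeholder myapp.send_report function:

    from datetime import datetime, timedelta, timezone

    from redis import Redis
    from rq import Queue, Worker

    redis = Redis()
    queue = Queue(connection=redis)

    # Both calls put the job in the ScheduledJobRegistry until it is due.
    queue.enqueue_at(datetime(2030, 1, 1, tzinfo=timezone.utc), 'myapp.send_report')
    queue.enqueue_in(timedelta(minutes=10), 'myapp.send_report')

    # A worker started with the scheduler moves due jobs onto the queue.
    Worker([queue], connection=redis).work(with_scheduler=True)
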
diff --git a/tests/test_sentry.py b/tests/test_sentry.py
deleted file mode 100644
index 4ae9722..0000000
--- a/tests/test_sentry.py
+++ /dev/null
@@ -1,54 +0,0 @@
-from unittest import mock
-
-from click.testing import CliRunner
-
-from rq import Queue
-from rq.cli import main
-from rq.cli.helpers import read_config_file
-from rq.contrib.sentry import register_sentry
-from rq.worker import SimpleWorker
-from tests import RQTestCase
-from tests.fixtures import div_by_zero
-
-
-class FakeSentry:
-    servers = []
-
-    def captureException(self, *args, **kwds):  # noqa
-        pass  # we cannot check this, because worker forks
-
-
-class TestSentry(RQTestCase):
-    def setUp(self):
-        super().setUp()
-        db_num = self.testconn.connection_pool.connection_kwargs['db']
-        self.redis_url = 'redis://127.0.0.1:6379/%d' % db_num
-
-    def test_reading_dsn_from_file(self):
-        settings = read_config_file('tests.config_files.sentry')
-        self.assertIn('SENTRY_DSN', settings)
-        self.assertEqual(settings['SENTRY_DSN'], 'https://123@sentry.io/123')
-
-    @mock.patch('rq.contrib.sentry.register_sentry')
-    def test_cli_flag(self, mocked):
-        """rq worker -u <url> -b --exception-handler <handler>"""
-        # connection = Redis.from_url(self.redis_url)
-        runner = CliRunner()
-        runner.invoke(main, ['worker', '-u', self.redis_url, '-b', '--sentry-dsn', 'https://1@sentry.io/1'])
-        self.assertEqual(mocked.call_count, 1)
-
-        runner.invoke(main, ['worker-pool', '-u', self.redis_url, '-b', '--sentry-dsn', 'https://1@sentry.io/1'])
-        self.assertEqual(mocked.call_count, 2)
-
-    def test_failure_capture(self):
-        """Test failure is captured by Sentry SDK"""
-        from sentry_sdk import Hub
-
-        hub = Hub.current
-        self.assertIsNone(hub.last_event_id())
-        queue = Queue(connection=self.testconn)
-        queue.enqueue(div_by_zero)
-        worker = SimpleWorker(queues=[queue], connection=self.testconn)
-        register_sentry('https://123@sentry.io/123')
-        worker.work(burst=True)
-        self.assertIsNotNone(hub.last_event_id())
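
The Sentry integration tested here can be enabled from code or from the CLI. A minimal sketch, assuming sentry-sdk is installed and the DSN below is a placeholder:

    from redis import Redis
    from rq import Queue, SimpleWorker
    from rq.contrib.sentry import register_sentry

    register_sentry('https://<key>@sentry.io/<project>')  # placeholder DSN

    redis = Redis()
    queue = Queue(connection=redis)
    SimpleWorker(queues=[queue], connection=redis).work(burst=True)

The CLI form exercised by the test above is: rq worker -u redis://localhost:6379 -b --sentry-dsn <dsn>
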
diff --git a/tests/test_serializers.py b/tests/test_serializers.py
deleted file mode 100644
index 6ef7ed8..0000000
--- a/tests/test_serializers.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import json
-import pickle
-import pickletools
-import queue
-import unittest
-
-from rq.serializers import DefaultSerializer, resolve_serializer
-
-
-class TestSerializers(unittest.TestCase):
-    def test_resolve_serializer(self):
-        """Ensure function resolve_serializer works correctly"""
-        serializer = resolve_serializer(None)
-        self.assertIsNotNone(serializer)
-        self.assertEqual(serializer, DefaultSerializer)
-
-        # Test round trip with default serializer
-        test_data = {'test': 'data'}
-        serialized_data = serializer.dumps(test_data)
-        self.assertEqual(serializer.loads(serialized_data), test_data)
-        self.assertEqual(next(pickletools.genops(serialized_data))[1], pickle.HIGHEST_PROTOCOL)
-
-        # Test using json serializer
-        serializer = resolve_serializer(json)
-        self.assertIsNotNone(serializer)
-
-        self.assertTrue(hasattr(serializer, 'dumps'))
-        self.assertTrue(hasattr(serializer, 'loads'))
-
-        # Test that NotImplementedError is raised
-        with self.assertRaises(NotImplementedError):
-            resolve_serializer(object)
-
-        # Test that an Exception is raised
-        with self.assertRaises(Exception):
-            resolve_serializer(queue.Queue())
-
-        # Test using path.to.serializer string
-        serializer = resolve_serializer('tests.fixtures.Serializer')
-        self.assertIsNotNone(serializer)
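
resolve_serializer accepts None (falling back to the pickle-based default), any object exposing dumps()/loads(), or a dotted path string. A minimal sketch of the behaviour checked above, assuming a local Redis:

    import json

    from redis import Redis
    from rq import Queue
    from rq.serializers import DefaultSerializer, JSONSerializer, resolve_serializer

    assert resolve_serializer(None) == DefaultSerializer   # pickle-based default

    serializer = resolve_serializer(json)                  # anything with dumps()/loads()
    assert serializer.loads(serializer.dumps({'test': 'data'})) == {'test': 'data'}

    # Queues and workers take the same argument; both sides must use the same serializer.
    queue = Queue(connection=Redis(), serializer=JSONSerializer)
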
diff --git a/tests/test_timeouts.py b/tests/test_timeouts.py
deleted file mode 100644
index 42cd207..0000000
--- a/tests/test_timeouts.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import time
-
-from rq import Queue, SimpleWorker
-from rq.registry import FailedJobRegistry, FinishedJobRegistry
-from rq.timeouts import TimerDeathPenalty
-from tests import RQTestCase
-
-
-class TimerBasedWorker(SimpleWorker):
-    death_penalty_class = TimerDeathPenalty
-
-
-def thread_friendly_sleep_func(seconds):
-    end_at = time.time() + seconds
-    while True:
-        if time.time() > end_at:
-            break
-
-
-class TestTimeouts(RQTestCase):
-    def test_timer_death_penalty(self):
-        """Ensure TimerDeathPenalty works correctly."""
-        q = Queue(connection=self.testconn)
-        q.empty()
-        finished_job_registry = FinishedJobRegistry(connection=self.testconn)
-        failed_job_registry = FailedJobRegistry(connection=self.testconn)
-
-        # make sure death_penalty_class persists
-        w = TimerBasedWorker([q], connection=self.testconn)
-        self.assertIsNotNone(w)
-        self.assertEqual(w.death_penalty_class, TimerDeathPenalty)
-
-        # Test short-running job doesn't raise JobTimeoutException
-        job = q.enqueue(thread_friendly_sleep_func, args=(1,), job_timeout=3)
-        w.work(burst=True)
-        job.refresh()
-        self.assertIn(job, finished_job_registry)
-
-        # Test long-running job raises JobTimeoutException
-        job = q.enqueue(thread_friendly_sleep_func, args=(5,), job_timeout=3)
-        w.work(burst=True)
-        self.assertIn(job, failed_job_registry)
-        job.refresh()
-        self.assertIn("rq.timeouts.JobTimeoutException", job.exc_info)
-
-        # Test negative timeout doesn't raise JobTimeoutException,
-        # which would imply an unintended immediate timeout.
-        job = q.enqueue(thread_friendly_sleep_func, args=(1,), job_timeout=-1)
-        w.work(burst=True)
-        job.refresh()
-        self.assertIn(job, finished_job_registry)
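
TimerDeathPenalty enforces job timeouts with a background timer instead of SIGALRM, which is what makes it usable from SimpleWorker where signal-based timeouts are unavailable. A minimal sketch, assuming a local Redis and a placeholder myapp.crunch task:

    from redis import Redis
    from rq import Queue, SimpleWorker
    from rq.timeouts import TimerDeathPenalty


    class TimerBasedWorker(SimpleWorker):
        death_penalty_class = TimerDeathPenalty  # swap out the signal-based default


    redis = Redis()
    queue = Queue(connection=redis)
    queue.enqueue('myapp.crunch', job_timeout=3)  # times out after ~3 seconds if still running
    TimerBasedWorker([queue], connection=redis).work(burst=True)
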
diff --git a/tests/test_utils.py b/tests/test_utils.py
deleted file mode 100644
index 2dbb613..0000000
--- a/tests/test_utils.py
+++ /dev/null
@@ -1,165 +0,0 @@
-import datetime
-import re
-from unittest.mock import Mock
-
-from redis import Redis
-
-from rq.exceptions import TimeoutFormatError
-from rq.utils import (
-    backend_class,
-    ceildiv,
-    ensure_list,
-    first,
-    get_call_string,
-    get_version,
-    import_attribute,
-    is_nonstring_iterable,
-    parse_timeout,
-    split_list,
-    truncate_long_string,
-    utcparse,
-)
-from rq.worker import SimpleWorker
-from tests import RQTestCase, fixtures
-
-
-class TestUtils(RQTestCase):
-    def test_parse_timeout(self):
-        """Ensure function parse_timeout works correctly"""
-        self.assertEqual(12, parse_timeout(12))
-        self.assertEqual(12, parse_timeout('12'))
-        self.assertEqual(12, parse_timeout('12s'))
-        self.assertEqual(720, parse_timeout('12m'))
-        self.assertEqual(3600, parse_timeout('1h'))
-        self.assertEqual(3600, parse_timeout('1H'))
-
-    def test_parse_timeout_coverage_scenarios(self):
-        """Test parse_timeout edge cases for coverage"""
-        timeouts = ['h12', 'h', 'm', 's', '10k']
-
-        self.assertEqual(None, parse_timeout(None))
-        for timeout in timeouts:
-            with self.assertRaises(TimeoutFormatError):
-                parse_timeout(timeout)
-
-    def test_first(self):
-        """Ensure function first works correctly"""
-        self.assertEqual(42, first([0, False, None, [], (), 42]))
-        self.assertEqual(None, first([0, False, None, [], ()]))
-        self.assertEqual('ohai', first([0, False, None, [], ()], default='ohai'))
-        self.assertEqual('bc', first(re.match(regex, 'abc') for regex in ['b.*', 'a(.*)']).group(1))
-        self.assertEqual(4, first([1, 1, 3, 4, 5], key=lambda x: x % 2 == 0))
-
-    def test_is_nonstring_iterable(self):
-        """Ensure function is_nonstring_iterable works correctly"""
-        self.assertEqual(True, is_nonstring_iterable([]))
-        self.assertEqual(False, is_nonstring_iterable('test'))
-        self.assertEqual(True, is_nonstring_iterable({}))
-        self.assertEqual(True, is_nonstring_iterable(()))
-
-    def test_ensure_list(self):
-        """Ensure function ensure_list works correctly"""
-        self.assertEqual([], ensure_list([]))
-        self.assertEqual(['test'], ensure_list('test'))
-        self.assertEqual({}, ensure_list({}))
-        self.assertEqual((), ensure_list(()))
-
-    def test_utcparse(self):
-        """Ensure function utcparse works correctly"""
-        utc_formatted_time = '2017-08-31T10:14:02.123456Z'
-        self.assertEqual(datetime.datetime(2017, 8, 31, 10, 14, 2, 123456), utcparse(utc_formatted_time))
-
-    def test_utcparse_legacy(self):
-        """Ensure function utcparse works correctly"""
-        utc_formated_time = '2017-08-31T10:14:02Z'
-        self.assertEqual(datetime.datetime(2017, 8, 31, 10, 14, 2), utcparse(utc_formated_time))
-
-    def test_backend_class(self):
-        """Ensure function backend_class works correctly"""
-        self.assertEqual(fixtures.DummyQueue, backend_class(fixtures, 'DummyQueue'))
-        self.assertNotEqual(fixtures.say_pid, backend_class(fixtures, 'DummyQueue'))
-        self.assertEqual(fixtures.DummyQueue, backend_class(fixtures, 'DummyQueue', override=fixtures.DummyQueue))
-        self.assertEqual(
-            fixtures.DummyQueue, backend_class(fixtures, 'DummyQueue', override='tests.fixtures.DummyQueue')
-        )
-
-    def test_get_redis_version(self):
-        """Ensure get_version works properly"""
-        redis = Redis()
-        self.assertTrue(isinstance(get_version(redis), tuple))
-
-        # Parses 3 digit version numbers correctly
-        class DummyRedis(Redis):
-            def info(*args):
-                return {'redis_version': '4.0.8'}
-
-        self.assertEqual(get_version(DummyRedis()), (4, 0, 8))
-
-        # Truncates version numbers with more than 3 components
-        class DummyRedis(Redis):
-            def info(*args):
-                return {'redis_version': '3.0.7.9'}
-
-        self.assertEqual(get_version(DummyRedis()), (3, 0, 7))
-
-    def test_get_redis_version_gets_cached(self):
-        """Ensure get_version works properly"""
-        # Parses 3 digit version numbers correctly
-        redis = Mock(spec=['info'])
-        redis.info = Mock(return_value={'redis_version': '4.0.8'})
-        self.assertEqual(get_version(redis), (4, 0, 8))
-        self.assertEqual(get_version(redis), (4, 0, 8))
-        redis.info.assert_called_once()
-
-    def test_import_attribute(self):
-        """Ensure get_version works properly"""
-        self.assertEqual(import_attribute('rq.utils.get_version'), get_version)
-        self.assertEqual(import_attribute('rq.worker.SimpleWorker'), SimpleWorker)
-        self.assertRaises(ValueError, import_attribute, 'non.existent.module')
-        self.assertRaises(ValueError, import_attribute, 'rq.worker.WrongWorker')
-
-    def test_ceildiv_even(self):
-        """When a number is evenly divisible by another ceildiv returns the quotient"""
-        dividend = 12
-        divisor = 4
-        self.assertEqual(ceildiv(dividend, divisor), dividend // divisor)
-
-    def test_ceildiv_uneven(self):
-        """When a number is not evenly divisible by another ceildiv returns the quotient plus one"""
-        dividend = 13
-        divisor = 4
-        self.assertEqual(ceildiv(dividend, divisor), dividend // divisor + 1)
-
-    def test_split_list(self):
-        """Ensure split_list works properly"""
-        BIG_LIST_SIZE = 42
-        SEGMENT_SIZE = 5
-
-        big_list = ['1'] * BIG_LIST_SIZE
-        small_lists = list(split_list(big_list, SEGMENT_SIZE))
-
-        expected_small_list_count = ceildiv(BIG_LIST_SIZE, SEGMENT_SIZE)
-        self.assertEqual(len(small_lists), expected_small_list_count)
-
-    def test_truncate_long_string(self):
-        """Ensure truncate_long_string works properly"""
-        assert truncate_long_string("12", max_length=3) == "12"
-        assert truncate_long_string("123", max_length=3) == "123"
-        assert truncate_long_string("1234", max_length=3) == "123..."
-        assert truncate_long_string("12345", max_length=3) == "123..."
-
-        s = "long string but no max_length provided so no truncating should occur" * 10
-        assert truncate_long_string(s) == s
-
-    def test_get_call_string(self):
-        """Ensure a case, when func_name, args and kwargs are not None, works properly"""
-        cs = get_call_string("f", ('some', 'args', 42), {"key1": "value1", "key2": True})
-        assert cs == "f('some', 'args', 42, key1='value1', key2=True)"
-
-    def test_get_call_string_with_max_length(self):
-        """Ensure get_call_string works properly when max_length is provided"""
-        func_name = "f"
-        args = (1234, 12345, 123456)
-        kwargs = {"len4": 1234, "len5": 12345, "len6": 123456}
-        cs = get_call_string(func_name, args, kwargs, max_length=5)
-        assert cs == "f(1234, 12345, 12345..., len4=1234, len5=12345, len6=12345...)"
diff --git a/tests/test_worker.py b/tests/test_worker.py
deleted file mode 100644
index 6b6d3d5..0000000
--- a/tests/test_worker.py
+++ /dev/null
@@ -1,1565 +0,0 @@
-import json
-import os
-import shutil
-import signal
-import subprocess
-import sys
-import time
-import zlib
-from datetime import datetime, timedelta
-from multiprocessing import Process
-from time import sleep
-from unittest import mock, skipIf
-from unittest.mock import Mock
-
-import psutil
-import pytest
-import redis.exceptions
-from redis import Redis
-
-from rq import Queue, SimpleWorker, Worker, get_current_connection
-from rq.defaults import DEFAULT_MAINTENANCE_TASK_INTERVAL
-from rq.job import Job, JobStatus, Retry
-from rq.registry import FailedJobRegistry, FinishedJobRegistry, StartedJobRegistry
-from rq.results import Result
-from rq.serializers import JSONSerializer
-from rq.suspension import resume, suspend
-from rq.utils import as_text, get_version, utcnow
-from rq.version import VERSION
-from rq.worker import HerokuWorker, RandomWorker, RoundRobinWorker, WorkerStatus
-from tests import RQTestCase, slow
-from tests.fixtures import (
-    CustomJob,
-    access_self,
-    create_file,
-    create_file_after_timeout,
-    create_file_after_timeout_and_setsid,
-    div_by_zero,
-    do_nothing,
-    kill_worker,
-    launch_process_within_worker_and_store_pid,
-    long_running_job,
-    modify_self,
-    modify_self_and_error,
-    raise_exc_mock,
-    run_dummy_heroku_worker,
-    save_key_ttl,
-    say_hello,
-    say_pid,
-)
-
-
-class CustomQueue(Queue):
-    pass
-
-
-class TestWorker(RQTestCase):
-    def test_create_worker(self):
-        """Worker creation using various inputs."""
-
-        # With single string argument
-        w = Worker('foo')
-        self.assertEqual(w.queues[0].name, 'foo')
-
-        # With list of strings
-        w = Worker(['foo', 'bar'])
-        self.assertEqual(w.queues[0].name, 'foo')
-        self.assertEqual(w.queues[1].name, 'bar')
-
-        self.assertEqual(w.queue_keys(), [w.queues[0].key, w.queues[1].key])
-        self.assertEqual(w.queue_names(), ['foo', 'bar'])
-
-        # With iterable of strings
-        w = Worker(iter(['foo', 'bar']))
-        self.assertEqual(w.queues[0].name, 'foo')
-        self.assertEqual(w.queues[1].name, 'bar')
-
-        # With single Queue
-        w = Worker(Queue('foo'))
-        self.assertEqual(w.queues[0].name, 'foo')
-
-        # With iterable of Queues
-        w = Worker(iter([Queue('foo'), Queue('bar')]))
-        self.assertEqual(w.queues[0].name, 'foo')
-        self.assertEqual(w.queues[1].name, 'bar')
-
-        # With list of Queues
-        w = Worker([Queue('foo'), Queue('bar')])
-        self.assertEqual(w.queues[0].name, 'foo')
-        self.assertEqual(w.queues[1].name, 'bar')
-
-        # With string and serializer
-        w = Worker('foo', serializer=json)
-        self.assertEqual(w.queues[0].name, 'foo')
-
-        # With queue having serializer
-        w = Worker(Queue('foo'), serializer=json)
-        self.assertEqual(w.queues[0].name, 'foo')
-
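
For context, the worker lifecycle that the rest of this file exercises boils down to constructing a Worker over one or more queues and calling work(). A minimal sketch, assuming a local Redis and a placeholder myapp.say_hello task:

    from redis import Redis
    from rq import Queue, Worker

    redis = Redis()
    high, low = Queue('high', connection=redis), Queue('low', connection=redis)
    high.enqueue('myapp.say_hello', name='Frank')

    # Queues are drained in the order given; burst=True returns once both are empty.
    worker = Worker([high, low], connection=redis)
    worker.work(burst=True)
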
-    def test_work_and_quit(self):
-        """Worker processes work, then quits."""
-        fooq, barq = Queue('foo'), Queue('bar')
-        w = Worker([fooq, barq])
-        self.assertEqual(w.work(burst=True), False, 'Did not expect any work on the queue.')
-
-        fooq.enqueue(say_hello, name='Frank')
-        self.assertEqual(w.work(burst=True), True, 'Expected at least some work done.')
-
-    def test_work_and_quit_custom_serializer(self):
-        """Worker processes work, then quits."""
-        fooq, barq = Queue('foo', serializer=JSONSerializer), Queue('bar', serializer=JSONSerializer)
-        w = Worker([fooq, barq], serializer=JSONSerializer)
-        self.assertEqual(w.work(burst=True), False, 'Did not expect any work on the queue.')
-
-        fooq.enqueue(say_hello, name='Frank')
-        self.assertEqual(w.work(burst=True), True, 'Expected at least some work done.')
-
-    def test_worker_all(self):
-        """Worker.all() works properly"""
-        foo_queue = Queue('foo')
-        bar_queue = Queue('bar')
-
-        w1 = Worker([foo_queue, bar_queue], name='w1')
-        w1.register_birth()
-        w2 = Worker([foo_queue], name='w2')
-        w2.register_birth()
-
-        self.assertEqual(set(Worker.all(connection=foo_queue.connection)), set([w1, w2]))
-        self.assertEqual(set(Worker.all(queue=foo_queue)), set([w1, w2]))
-        self.assertEqual(set(Worker.all(queue=bar_queue)), set([w1]))
-
-        w1.register_death()
-        w2.register_death()
-
-    def test_find_by_key(self):
-        """Worker.find_by_key restores queues, state and job_id."""
-        queues = [Queue('foo'), Queue('bar')]
-        w = Worker(queues)
-        w.register_death()
-        w.register_birth()
-        w.set_state(WorkerStatus.STARTED)
-        worker = Worker.find_by_key(w.key)
-        self.assertEqual(worker.queues, queues)
-        self.assertEqual(worker.get_state(), WorkerStatus.STARTED)
-        self.assertEqual(worker._job_id, None)
-        self.assertTrue(worker.key in Worker.all_keys(worker.connection))
-        self.assertEqual(worker.version, VERSION)
-
-        # If worker is gone, its keys should also be removed
-        worker.connection.delete(worker.key)
-        Worker.find_by_key(worker.key)
-        self.assertFalse(worker.key in Worker.all_keys(worker.connection))
-
-        self.assertRaises(ValueError, Worker.find_by_key, 'foo')
-
-    def test_worker_ttl(self):
-        """Worker ttl."""
-        w = Worker([])
-        w.register_birth()
-        [worker_key] = self.testconn.smembers(Worker.redis_workers_keys)
-        self.assertIsNotNone(self.testconn.ttl(worker_key))
-        w.register_death()
-
-    def test_work_via_string_argument(self):
-        """Worker processes work fed via string arguments."""
-        q = Queue('foo')
-        w = Worker([q])
-        job = q.enqueue('tests.fixtures.say_hello', name='Frank')
-        self.assertEqual(w.work(burst=True), True, 'Expected at least some work done.')
-        expected_result = 'Hi there, Frank!'
-        self.assertEqual(job.result, expected_result)
-        # Only run if Redis server supports streams
-        if job.supports_redis_streams:
-            self.assertEqual(Result.fetch_latest(job).return_value, expected_result)
-        self.assertIsNone(job.worker_name)
-
-    def test_job_times(self):
-        """job times are set correctly."""
-        q = Queue('foo')
-        w = Worker([q])
-        before = utcnow()
-        before = before.replace(microsecond=0)
-        job = q.enqueue(say_hello)
-        self.assertIsNotNone(job.enqueued_at)
-        self.assertIsNone(job.started_at)
-        self.assertIsNone(job.ended_at)
-        self.assertEqual(w.work(burst=True), True, 'Expected at least some work done.')
-        self.assertEqual(job.result, 'Hi there, Stranger!')
-        after = utcnow()
-        job.refresh()
-        self.assertTrue(before <= job.enqueued_at <= after, 'Not %s <= %s <= %s' % (before, job.enqueued_at, after))
-        self.assertTrue(before <= job.started_at <= after, 'Not %s <= %s <= %s' % (before, job.started_at, after))
-        self.assertTrue(before <= job.ended_at <= after, 'Not %s <= %s <= %s' % (before, job.ended_at, after))
-
-    def test_work_is_unreadable(self):
-        """Unreadable jobs are put on the failed job registry."""
-        q = Queue()
-        self.assertEqual(q.count, 0)
-
-        # NOTE: We have to fake this enqueueing for this test case.
-        # What we're simulating here is a call to a function that is not
-        # importable from the worker process.
-        job = Job.create(func=div_by_zero, args=(3,), origin=q.name)
-        job.save()
-
-        job_data = job.data
-        invalid_data = job_data.replace(b'div_by_zero', b'nonexisting')
-        assert job_data != invalid_data
-        self.testconn.hset(job.key, 'data', zlib.compress(invalid_data))
-
-        # We use the low-level internal function to enqueue any data (bypassing
-        # validity checks)
-        q.push_job_id(job.id)
-
-        self.assertEqual(q.count, 1)
-
-        # All set, we're going to process it
-        w = Worker([q])
-        w.work(burst=True)  # should silently pass
-        self.assertEqual(q.count, 0)
-
-        failed_job_registry = FailedJobRegistry(queue=q)
-        self.assertTrue(job in failed_job_registry)
-
-    def test_meta_is_unserializable(self):
-        """Unserializable jobs are put on the failed job registry."""
-        q = Queue()
-        self.assertEqual(q.count, 0)
-
-        # NOTE: We have to fake this enqueueing for this test case.
-        # What we're simulating here is a call to a function that is not
-        # importable from the worker process.
-        job = Job.create(func=do_nothing, origin=q.name, meta={'key': 'value'})
-        job.save()
-
-        invalid_meta = '{{{{{{{{INVALID_JSON'
-        self.testconn.hset(job.key, 'meta', invalid_meta)
-        job.refresh()
-        self.assertIsInstance(job.meta, dict)
-        self.assertTrue('unserialized' in job.meta.keys())
-
-    @mock.patch('rq.worker.logger.error')
-    def test_deserializing_failure_is_handled(self, mock_logger_error):
-        """
-        Test that exceptions are properly handled for a job that fails to
-        deserialize.
-        """
-        q = Queue()
-        self.assertEqual(q.count, 0)
-
-        # as in test_work_is_unreadable(), we create a fake bad job
-        job = Job.create(func=div_by_zero, args=(3,), origin=q.name)
-        job.save()
-
-        # setting data to b'' ensures that pickling will completely fail
-        job_data = job.data
-        invalid_data = job_data.replace(b'div_by_zero', b'')
-        assert job_data != invalid_data
-        self.testconn.hset(job.key, 'data', zlib.compress(invalid_data))
-
-        # We use the low-level internal function to enqueue any data (bypassing
-        # validity checks)
-        q.push_job_id(job.id)
-        self.assertEqual(q.count, 1)
-
-        # Now we try to run the job...
-        w = Worker([q])
-        job, queue = w.dequeue_job_and_maintain_ttl(10)
-        w.perform_job(job, queue)
-
-        # An exception should be logged here at ERROR level
-        self.assertIn("Traceback", mock_logger_error.call_args[0][0])
-
-    def test_heartbeat(self):
-        """Heartbeat saves last_heartbeat"""
-        q = Queue()
-        w = Worker([q])
-        w.register_birth()
-
-        self.assertEqual(str(w.pid), as_text(self.testconn.hget(w.key, 'pid')))
-        self.assertEqual(w.hostname, as_text(self.testconn.hget(w.key, 'hostname')))
-        last_heartbeat = self.testconn.hget(w.key, 'last_heartbeat')
-        self.assertIsNotNone(self.testconn.hget(w.key, 'birth'))
-        self.assertTrue(last_heartbeat is not None)
-        w = Worker.find_by_key(w.key)
-        self.assertIsInstance(w.last_heartbeat, datetime)
-
-        # worker.refresh() shouldn't fail if last_heartbeat is None
-        # for compatibility reasons
-        self.testconn.hdel(w.key, 'last_heartbeat')
-        w.refresh()
-        # worker.refresh() shouldn't fail if birth is None
-        # for compatibility reasons
-        self.testconn.hdel(w.key, 'birth')
-        w.refresh()
-
-    def test_maintain_heartbeats(self):
-        """worker.maintain_heartbeats() shouldn't create new job keys"""
-        queue = Queue(connection=self.testconn)
-        worker = Worker([queue], connection=self.testconn)
-        job = queue.enqueue(say_hello)
-        worker.maintain_heartbeats(job)
-        self.assertTrue(self.testconn.exists(worker.key))
-        self.assertTrue(self.testconn.exists(job.key))
-
-        self.testconn.delete(job.key)
-
-        worker.maintain_heartbeats(job)
-        self.assertFalse(self.testconn.exists(job.key))
-
-    @slow
-    def test_heartbeat_survives_lost_connection(self):
-        with mock.patch.object(Worker, 'heartbeat') as mocked:
-            # None -> Heartbeat is first called before the job loop
-            mocked.side_effect = [None, redis.exceptions.ConnectionError()]
-            q = Queue()
-            w = Worker([q])
-            w.work(burst=True)
-            # First call is prior to job loop, second raises the error,
-            # third is successful, after "recovery"
-            assert mocked.call_count == 3
-
-    def test_job_timeout_moved_to_failed_job_registry(self):
-        """Jobs that run long are moved to FailedJobRegistry"""
-        queue = Queue()
-        worker = Worker([queue])
-        job = queue.enqueue(long_running_job, 5, job_timeout=1)
-        worker.work(burst=True)
-        self.assertIn(job, job.failed_job_registry)
-        job.refresh()
-        self.assertIn('rq.timeouts.JobTimeoutException', job.exc_info)
-
-    @slow
-    def test_heartbeat_busy(self):
-        """Periodic heartbeats while horse is busy with long jobs"""
-        q = Queue()
-        w = Worker([q], job_monitoring_interval=5)
-
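-        # with job_monitoring_interval=5, a 2s job should trigger no monitoring
-        # heartbeat, a 7s job one, and a 12s job two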
-        for timeout, expected_heartbeats in [(2, 0), (7, 1), (12, 2)]:
-            job = q.enqueue(long_running_job, args=(timeout,), job_timeout=30, result_ttl=-1)
-            with mock.patch.object(w, 'heartbeat', wraps=w.heartbeat) as mocked:
-                w.execute_job(job, q)
-                self.assertEqual(mocked.call_count, expected_heartbeats)
-            job = Job.fetch(job.id)
-            self.assertEqual(job.get_status(), JobStatus.FINISHED)
-
-    def test_work_fails(self):
-        """Failing jobs are put on the failed queue."""
-        q = Queue()
-        self.assertEqual(q.count, 0)
-
-        # Action
-        job = q.enqueue(div_by_zero)
-        self.assertEqual(q.count, 1)
-
-        # keep for later
-        enqueued_at_date = str(job.enqueued_at)
-
-        w = Worker([q])
-        w.work(burst=True)
-
-        # Postconditions
-        self.assertEqual(q.count, 0)
-        failed_job_registry = FailedJobRegistry(queue=q)
-        self.assertTrue(job in failed_job_registry)
-        self.assertEqual(w.get_current_job_id(), None)
-
-        # Check the job
-        job = Job.fetch(job.id)
-        self.assertEqual(job.origin, q.name)
-
-        # Should be the original enqueued_at date, not the date of enqueueing
-        # to the failed queue
-        self.assertEqual(str(job.enqueued_at), enqueued_at_date)
-        self.assertTrue(job.exc_info)  # should contain exc_info
-        if job.supports_redis_streams:
-            result = Result.fetch_latest(job)
-            self.assertEqual(result.exc_string, job.exc_info)
-            self.assertEqual(result.type, Result.Type.FAILED)
-
-    def test_horse_fails(self):
-        """Tests that job status is set to FAILED even if horse unexpectedly fails"""
-        q = Queue()
-        self.assertEqual(q.count, 0)
-
-        # Action
-        job = q.enqueue(say_hello)
-        self.assertEqual(q.count, 1)
-
-        # keep for later
-        enqueued_at_date = str(job.enqueued_at)
-
-        w = Worker([q])
-        with mock.patch.object(w, 'perform_job', new_callable=raise_exc_mock):
-            w.work(burst=True)  # should silently pass
-
-        # Postconditions
-        self.assertEqual(q.count, 0)
-        failed_job_registry = FailedJobRegistry(queue=q)
-        self.assertTrue(job in failed_job_registry)
-        self.assertEqual(w.get_current_job_id(), None)
-
-        # Check the job
-        job = Job.fetch(job.id)
-        self.assertEqual(job.origin, q.name)
-
-        # Should be the original enqueued_at date, not the date of enqueueing
-        # to the failed queue
-        self.assertEqual(str(job.enqueued_at), enqueued_at_date)
-        self.assertTrue(job.exc_info)  # should contain exc_info
-
-    def test_statistics(self):
-        """Successful and failed job counts are saved properly"""
-        queue = Queue(connection=self.connection)
-        job = queue.enqueue(div_by_zero)
-        worker = Worker([queue])
-        worker.register_birth()
-
-        self.assertEqual(worker.failed_job_count, 0)
-        self.assertEqual(worker.successful_job_count, 0)
-        self.assertEqual(worker.total_working_time, 0)
-
-        registry = StartedJobRegistry(connection=worker.connection)
-        job.started_at = utcnow()
-        job.ended_at = job.started_at + timedelta(seconds=0.75)
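-        # started_at and ended_at are 0.75 seconds apart, so each handled
-        # job below adds 0.75 seconds to total_working_time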
-        worker.handle_job_failure(job, queue)
-        worker.handle_job_success(job, queue, registry)
-
-        worker.refresh()
-        self.assertEqual(worker.failed_job_count, 1)
-        self.assertEqual(worker.successful_job_count, 1)
-        self.assertEqual(worker.total_working_time, 1.5)  # 1.5 seconds
-
-        worker.handle_job_failure(job, queue)
-        worker.handle_job_success(job, queue, registry)
-
-        worker.refresh()
-        self.assertEqual(worker.failed_job_count, 2)
-        self.assertEqual(worker.successful_job_count, 2)
-        self.assertEqual(worker.total_working_time, 3.0)
-
-    def test_handle_retry(self):
-        """handle_job_failure() handles retry properly"""
-        connection = self.testconn
-        queue = Queue(connection=connection)
-        retry = Retry(max=2)
-        job = queue.enqueue(div_by_zero, retry=retry)
-        registry = FailedJobRegistry(queue=queue)
-
-        worker = Worker([queue])
-
-        # If job is configured to retry, it will be put back in the queue
-        # and not put in the FailedJobRegistry.
-        # This is the original execution
-        queue.empty()
-        worker.handle_job_failure(job, queue)
-        job.refresh()
-        self.assertEqual(job.retries_left, 1)
-        self.assertEqual([job.id], queue.job_ids)
-        self.assertFalse(job in registry)
-
-        # First retry
-        queue.empty()
-        worker.handle_job_failure(job, queue)
-        job.refresh()
-        self.assertEqual(job.retries_left, 0)
-        self.assertEqual([job.id], queue.job_ids)
-
-        # Second retry
-        queue.empty()
-        worker.handle_job_failure(job, queue)
-        job.refresh()
-        self.assertEqual(job.retries_left, 0)
-        self.assertEqual([], queue.job_ids)
-        # Once a job has no retries left, it's put in FailedJobRegistry
-        self.assertTrue(job in registry)
-
-    def test_total_working_time(self):
-        """worker.total_working_time is stored properly"""
-        queue = Queue()
-        job = queue.enqueue(long_running_job, 0.05)
-        worker = Worker([queue])
-        worker.register_birth()
-
-        worker.perform_job(job, queue)
-        worker.refresh()
-        # total_working_time should be a little bit more than 0.05 seconds
-        self.assertGreaterEqual(worker.total_working_time, 0.05)
-        # in multi-user environments delays might be unpredictable,
-        # please adjust this magic limit accordingly if it takes even longer to run
-        self.assertLess(worker.total_working_time, 1)
-
-    def test_max_jobs(self):
-        """Worker exits after number of jobs complete."""
-        queue = Queue()
-        job1 = queue.enqueue(do_nothing)
-        job2 = queue.enqueue(do_nothing)
-        worker = Worker([queue])
-        worker.work(max_jobs=1)
-
-        self.assertEqual(JobStatus.FINISHED, job1.get_status())
-        self.assertEqual(JobStatus.QUEUED, job2.get_status())
-
-    def test_disable_default_exception_handler(self):
-        """
-        Job is not moved to FailedJobRegistry when default custom exception
-        handler is disabled.
-        """
-        queue = Queue(name='default', connection=self.testconn)
-
-        job = queue.enqueue(div_by_zero)
-        worker = Worker([queue], disable_default_exception_handler=False)
-        worker.work(burst=True)
-
-        registry = FailedJobRegistry(queue=queue)
-        self.assertTrue(job in registry)
-
-        # Job is not added to FailedJobRegistry if
-        # disable_default_exception_handler is True
-        job = queue.enqueue(div_by_zero)
-        worker = Worker([queue], disable_default_exception_handler=True)
-        worker.work(burst=True)
-        self.assertFalse(job in registry)
-
-    def test_custom_exc_handling(self):
-        """Custom exception handling."""
-
-        def first_handler(job, *exc_info):
-            job.meta = {'first_handler': True}
-            job.save_meta()
-            return True
-
-        def second_handler(job, *exc_info):
-            job.meta.update({'second_handler': True})
-            job.save_meta()
-
-        def black_hole(job, *exc_info):
-            # Don't fall through to default behaviour (moving to failed queue)
-            return False
-
-        q = Queue()
-        self.assertEqual(q.count, 0)
-        job = q.enqueue(div_by_zero)
-
-        w = Worker([q], exception_handlers=first_handler)
-        w.work(burst=True)
-
-        # Check the job
-        job.refresh()
-        self.assertEqual(job.is_failed, True)
-        self.assertTrue(job.meta['first_handler'])
-
-        job = q.enqueue(div_by_zero)
-        w = Worker([q], exception_handlers=[first_handler, second_handler])
-        w.work(burst=True)
-
-        # Both custom exception handlers are run
-        job.refresh()
-        self.assertEqual(job.is_failed, True)
-        self.assertTrue(job.meta['first_handler'])
-        self.assertTrue(job.meta['second_handler'])
-
-        job = q.enqueue(div_by_zero)
-        w = Worker([q], exception_handlers=[first_handler, black_hole, second_handler])
-        w.work(burst=True)
-
-        # second_handler is not run since it's interrupted by black_hole
-        job.refresh()
-        self.assertEqual(job.is_failed, True)
-        self.assertTrue(job.meta['first_handler'])
-        self.assertEqual(job.meta.get('second_handler'), None)
-
-    def test_deleted_jobs_arent_executed(self):
-        """Cancelling jobs."""
-
-        SENTINEL_FILE = '/tmp/rq-tests.txt'  # noqa
-
-        try:
-            # Remove the sentinel if it is leftover from a previous test run
-            os.remove(SENTINEL_FILE)
-        except OSError as e:
-            if e.errno != 2:
-                raise
-
-        q = Queue()
-        job = q.enqueue(create_file, SENTINEL_FILE)
-
-        # Here, we cancel the job, so the sentinel file should not be created
-        self.testconn.delete(job.key)
-
-        w = Worker([q])
-        w.work(burst=True)
-        assert q.count == 0
-
-        # Should not have created evidence of execution
-        self.assertEqual(os.path.exists(SENTINEL_FILE), False)
-
-    @slow
-    def test_max_idle_time(self):
-        q = Queue()
-        w = Worker([q])
-        q.enqueue(say_hello, args=('Frank',))
-        self.assertIsNotNone(w.dequeue_job_and_maintain_ttl(1))
-
-        # idle for 1 second
-        self.assertIsNone(w.dequeue_job_and_maintain_ttl(1, max_idle_time=1))
-
-        # idle for 3 seconds
-        now = utcnow()
-        self.assertIsNone(w.dequeue_job_and_maintain_ttl(1, max_idle_time=3))
-        self.assertLess((utcnow() - now).total_seconds(), 5)  # 5 for some buffer
-
-        # idle for 2 seconds because idle_time is less than timeout
-        now = utcnow()
-        self.assertIsNone(w.dequeue_job_and_maintain_ttl(3, max_idle_time=2))
-        self.assertLess((utcnow() - now).total_seconds(), 4)  # 4 for some buffer
-
-        # idle for 3 seconds because idle_time is less than two rounds of timeout
-        now = utcnow()
-        self.assertIsNone(w.dequeue_job_and_maintain_ttl(2, max_idle_time=3))
-        self.assertLess((utcnow() - now).total_seconds(), 5)  # 5 for some buffer
-
-    @slow  # noqa
-    def test_timeouts(self):
-        """Worker kills jobs after timeout."""
-        sentinel_file = '/tmp/.rq_sentinel'
-
-        q = Queue()
-        w = Worker([q])
-
-        # Put it on the queue with a timeout value
-        res = q.enqueue(create_file_after_timeout, args=(sentinel_file, 4), job_timeout=1)
-
-        try:
-            os.unlink(sentinel_file)
-        except OSError as e:
-            # only ignore "file not found"; re-raise anything else
-            if e.errno != 2:
-                raise
-
-        self.assertEqual(os.path.exists(sentinel_file), False)
-        w.work(burst=True)
-        self.assertEqual(os.path.exists(sentinel_file), False)
-
-        # TODO: Having to do the manual refresh() here is really ugly!
-        res.refresh()
-        self.assertIn('JobTimeoutException', as_text(res.exc_info))
-
-    def test_dequeue_job_and_maintain_ttl_non_blocking(self):
-        """Not passing a timeout should return immediately with None as a result"""
-        q = Queue()
-        w = Worker([q])
-
-        self.assertIsNone(w.dequeue_job_and_maintain_ttl(None))
-
-    def test_worker_ttl_param_resolves_timeout(self):
-        """
-        Ensures the worker_ttl param is being considered in the dequeue_timeout and
-        connection_timeout params, takes into account 15 seconds gap (hard coded)
-        """
-        q = Queue()
-        w = Worker([q])
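-        # with the default worker_ttl of 420s: dequeue_timeout = ttl - 15,
-        # connection_timeout = dequeue_timeout + 10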
-        self.assertEqual(w.dequeue_timeout, 405)
-        self.assertEqual(w.connection_timeout, 415)
-        w = Worker([q], default_worker_ttl=500)
-        self.assertEqual(w.dequeue_timeout, 485)
-        self.assertEqual(w.connection_timeout, 495)
-
-    def test_worker_sets_result_ttl(self):
-        """Ensure that Worker properly sets result_ttl for individual jobs."""
-        q = Queue()
-        job = q.enqueue(say_hello, args=('Frank',), result_ttl=10)
-        w = Worker([q])
-        self.assertIn(job.get_id().encode(), self.testconn.lrange(q.key, 0, -1))
-        w.work(burst=True)
-        self.assertNotEqual(self.testconn.ttl(job.key), 0)
-        self.assertNotIn(job.get_id().encode(), self.testconn.lrange(q.key, 0, -1))
-
-        # Jobs with result_ttl = -1 don't expire
-        job = q.enqueue(say_hello, args=('Frank',), result_ttl=-1)
-        w = Worker([q])
-        self.assertIn(job.get_id().encode(), self.testconn.lrange(q.key, 0, -1))
-        w.work(burst=True)
-        self.assertEqual(self.testconn.ttl(job.key), -1)
-        self.assertNotIn(job.get_id().encode(), self.testconn.lrange(q.key, 0, -1))
-
-        # Job with result_ttl = 0 gets deleted immediately
-        job = q.enqueue(say_hello, args=('Frank',), result_ttl=0)
-        w = Worker([q])
-        self.assertIn(job.get_id().encode(), self.testconn.lrange(q.key, 0, -1))
-        w.work(burst=True)
-        self.assertEqual(self.testconn.get(job.key), None)
-        self.assertNotIn(job.get_id().encode(), self.testconn.lrange(q.key, 0, -1))
-
-    def test_worker_sets_job_status(self):
-        """Ensure that worker correctly sets job status."""
-        q = Queue()
-        w = Worker([q])
-
-        job = q.enqueue(say_hello)
-        self.assertEqual(job.get_status(), JobStatus.QUEUED)
-        self.assertEqual(job.is_queued, True)
-        self.assertEqual(job.is_finished, False)
-        self.assertEqual(job.is_failed, False)
-
-        w.work(burst=True)
-        job = Job.fetch(job.id)
-        self.assertEqual(job.get_status(), JobStatus.FINISHED)
-        self.assertEqual(job.is_queued, False)
-        self.assertEqual(job.is_finished, True)
-        self.assertEqual(job.is_failed, False)
-
-        # Failed jobs should set status to "failed"
-        job = q.enqueue(div_by_zero, args=(1,))
-        w.work(burst=True)
-        job = Job.fetch(job.id)
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-        self.assertEqual(job.is_queued, False)
-        self.assertEqual(job.is_finished, False)
-        self.assertEqual(job.is_failed, True)
-
-    def test_get_current_job(self):
-        """Ensure worker.get_current_job() works properly"""
-        q = Queue()
-        worker = Worker([q])
-        job = q.enqueue_call(say_hello)
-
-        self.assertEqual(self.testconn.hget(worker.key, 'current_job'), None)
-        worker.set_current_job_id(job.id)
-        self.assertEqual(worker.get_current_job_id(), as_text(self.testconn.hget(worker.key, 'current_job')))
-        self.assertEqual(worker.get_current_job(), job)
-
-    def test_custom_job_class(self):
-        """Ensure Worker accepts custom job class."""
-        q = Queue()
-        worker = Worker([q], job_class=CustomJob)
-        self.assertEqual(worker.job_class, CustomJob)
-
-    def test_custom_queue_class(self):
-        """Ensure Worker accepts custom queue class."""
-        q = CustomQueue()
-        worker = Worker([q], queue_class=CustomQueue)
-        self.assertEqual(worker.queue_class, CustomQueue)
-
-    def test_custom_queue_class_is_not_global(self):
-        """Ensure Worker custom queue class is not global."""
-        q = CustomQueue()
-        worker_custom = Worker([q], queue_class=CustomQueue)
-        q_generic = Queue()
-        worker_generic = Worker([q_generic])
-        self.assertEqual(worker_custom.queue_class, CustomQueue)
-        self.assertEqual(worker_generic.queue_class, Queue)
-        self.assertEqual(Worker.queue_class, Queue)
-
-    def test_custom_job_class_is_not_global(self):
-        """Ensure Worker custom job class is not global."""
-        q = Queue()
-        worker_custom = Worker([q], job_class=CustomJob)
-        q_generic = Queue()
-        worker_generic = Worker([q_generic])
-        self.assertEqual(worker_custom.job_class, CustomJob)
-        self.assertEqual(worker_generic.job_class, Job)
-        self.assertEqual(Worker.job_class, Job)
-
-    def test_work_via_simpleworker(self):
-        """Worker processes work, with forking disabled,
-        then returns."""
-        fooq, barq = Queue('foo'), Queue('bar')
-        w = SimpleWorker([fooq, barq])
-        self.assertEqual(w.work(burst=True), False, 'Did not expect any work on the queue.')
-
-        job = fooq.enqueue(say_pid)
-        self.assertEqual(w.work(burst=True), True, 'Expected at least some work done.')
-        self.assertEqual(job.result, os.getpid(), 'PID mismatch, fork() is not supposed to happen here')
-
-    def test_simpleworker_heartbeat_ttl(self):
-        """SimpleWorker's key must last longer than job.timeout when working"""
-        queue = Queue('foo')
-
-        worker = SimpleWorker([queue])
-        job_timeout = 300
-        job = queue.enqueue(save_key_ttl, worker.key, job_timeout=job_timeout)
-        worker.work(burst=True)
-        job.refresh()
-        self.assertGreater(job.meta['ttl'], job_timeout)
-
-    def test_prepare_job_execution(self):
-        """Prepare job execution does the necessary bookkeeping."""
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(say_hello)
-        worker = Worker([queue])
-        worker.prepare_job_execution(job)
-
-        # Updates working queue
-        registry = StartedJobRegistry(connection=self.testconn)
-        self.assertEqual(registry.get_job_ids(), [job.id])
-
-        # Updates worker's current job
-        self.assertEqual(worker.get_current_job_id(), job.id)
-
-        # job status is also updated
-        self.assertEqual(job._status, JobStatus.STARTED)
-        self.assertEqual(job.worker_name, worker.name)
-
-    @skipIf(get_version(Redis()) < (6, 2, 0), 'Skip if Redis server < 6.2.0')
-    def test_prepare_job_execution_removes_key_from_intermediate_queue(self):
-        """Prepare job execution removes job from intermediate queue."""
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(say_hello)
-
-        Queue.dequeue_any([queue], timeout=None, connection=self.testconn)
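-        # On Redis >= 6.2 dequeueing parks the job id on the intermediate queue;
-        # prepare_job_execution() is responsible for removing it from there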
-        self.assertIsNotNone(self.testconn.lpos(queue.intermediate_queue_key, job.id))
-        worker = Worker([queue])
-        worker.prepare_job_execution(job, remove_from_intermediate_queue=True)
-        self.assertIsNone(self.testconn.lpos(queue.intermediate_queue_key, job.id))
-        self.assertEqual(queue.count, 0)
-
-    @skipIf(get_version(Redis()) < (6, 2, 0), 'Skip if Redis server < 6.2.0')
-    def test_work_removes_key_from_intermediate_queue(self):
-        """Worker removes job from intermediate queue."""
-        queue = Queue(connection=self.testconn)
-        job = queue.enqueue(say_hello)
-        worker = Worker([queue])
-        worker.work(burst=True)
-        self.assertIsNone(self.testconn.lpos(queue.intermediate_queue_key, job.id))
-
-    def test_work_unicode_friendly(self):
-        """Worker processes work with unicode description, then quits."""
-        q = Queue('foo')
-        w = Worker([q])
-        job = q.enqueue('tests.fixtures.say_hello', name='Adam', description='你好 世界!')
-        self.assertEqual(w.work(burst=True), True, 'Expected at least some work done.')
-        self.assertEqual(job.result, 'Hi there, Adam!')
-        self.assertEqual(job.description, '你好 世界!')
-
-    def test_work_log_unicode_friendly(self):
-        """Worker process work with unicode or str other than pure ascii content,
-        logging work properly"""
-        q = Queue("foo")
-        w = Worker([q])
-
-        job = q.enqueue('tests.fixtures.say_hello', name='阿达姆', description='你好 世界!')
-        w.work(burst=True)
-        self.assertEqual(job.get_status(), JobStatus.FINISHED)
-
-        job = q.enqueue('tests.fixtures.say_hello_unicode', name='阿达姆', description='你好 世界!')
-        w.work(burst=True)
-        self.assertEqual(job.get_status(), JobStatus.FINISHED)
-
-    def test_suspend_worker_execution(self):
-        """Test Pause Worker Execution"""
-
-        SENTINEL_FILE = '/tmp/rq-tests.txt'  # noqa
-
-        try:
-            # Remove the sentinel if it is leftover from a previous test run
-            os.remove(SENTINEL_FILE)
-        except OSError as e:
-            if e.errno != 2:
-                raise
-
-        q = Queue()
-        q.enqueue(create_file, SENTINEL_FILE)
-
-        w = Worker([q])
-
-        suspend(self.testconn)
-
-        w.work(burst=True)
-        assert q.count == 1
-
-        # Should not have created evidence of execution
-        self.assertEqual(os.path.exists(SENTINEL_FILE), False)
-
-        resume(self.testconn)
-        w.work(burst=True)
-        assert q.count == 0
-        self.assertEqual(os.path.exists(SENTINEL_FILE), True)
-
-    @slow
-    def test_suspend_with_duration(self):
-        q = Queue()
-        for _ in range(5):
-            q.enqueue(do_nothing)
-
-        w = Worker([q])
-
-        # This suspends the worker from working for 2 seconds
-        suspend(self.testconn, 2)
-
-        # So when this burst of work happens the queue should remain at 5
-        w.work(burst=True)
-        assert q.count == 5
-
-        sleep(3)
-
-        # The suspension should be expired now, and a burst of work should now clear the queue
-        w.work(burst=True)
-        assert q.count == 0
-
-    def test_worker_hash_(self):
-        """Workers are hashed by their .name attribute"""
-        q = Queue('foo')
-        w1 = Worker([q], name="worker1")
-        w2 = Worker([q], name="worker2")
-        w3 = Worker([q], name="worker1")
-        worker_set = set([w1, w2, w3])
-        self.assertEqual(len(worker_set), 2)
-
-    def test_worker_sets_birth(self):
-        """Ensure worker correctly sets worker birth date."""
-        q = Queue()
-        w = Worker([q])
-
-        w.register_birth()
-
-        birth_date = w.birth_date
-        self.assertIsNotNone(birth_date)
-        self.assertEqual(type(birth_date).__name__, 'datetime')
-
-    def test_worker_sets_death(self):
-        """Ensure worker correctly sets worker death date."""
-        q = Queue()
-        w = Worker([q])
-
-        w.register_death()
-
-        death_date = w.death_date
-        self.assertIsNotNone(death_date)
-        self.assertIsInstance(death_date, datetime)
-
-    def test_clean_queue_registries(self):
-        """worker.clean_registries sets last_cleaned_at and cleans registries."""
-        foo_queue = Queue('foo', connection=self.testconn)
-        foo_registry = StartedJobRegistry('foo', connection=self.testconn)
-        self.testconn.zadd(foo_registry.key, {'foo': 1})
-        self.assertEqual(self.testconn.zcard(foo_registry.key), 1)
-
-        bar_queue = Queue('bar', connection=self.testconn)
-        bar_registry = StartedJobRegistry('bar', connection=self.testconn)
-        self.testconn.zadd(bar_registry.key, {'bar': 1})
-        self.assertEqual(self.testconn.zcard(bar_registry.key), 1)
-
-        worker = Worker([foo_queue, bar_queue])
-        self.assertEqual(worker.last_cleaned_at, None)
-        worker.clean_registries()
-        self.assertNotEqual(worker.last_cleaned_at, None)
-        self.assertEqual(self.testconn.zcard(foo_registry.key), 0)
-        self.assertEqual(self.testconn.zcard(bar_registry.key), 0)
-
-        # worker.clean_registries() only runs once every 15 minutes
-        # If we add another key, calling clean_registries() should do nothing
-        self.testconn.zadd(bar_registry.key, {'bar': 1})
-        worker.clean_registries()
-        self.assertEqual(self.testconn.zcard(bar_registry.key), 1)
-
-    def test_should_run_maintenance_tasks(self):
-        """Workers should run maintenance tasks on startup and every hour."""
-        queue = Queue(connection=self.testconn)
-        worker = Worker(queue)
-        self.assertTrue(worker.should_run_maintenance_tasks)
-
-        worker.last_cleaned_at = utcnow()
-        self.assertFalse(worker.should_run_maintenance_tasks)
-        worker.last_cleaned_at = utcnow() - timedelta(seconds=DEFAULT_MAINTENANCE_TASK_INTERVAL + 100)
-        self.assertTrue(worker.should_run_maintenance_tasks)
-
-        # custom maintenance_interval
-        worker = Worker(queue, maintenance_interval=10)
-        self.assertTrue(worker.should_run_maintenance_tasks)
-        worker.last_cleaned_at = utcnow()
-        self.assertFalse(worker.should_run_maintenance_tasks)
-        worker.last_cleaned_at = utcnow() - timedelta(seconds=11)
-        self.assertTrue(worker.should_run_maintenance_tasks)
-
-    def test_worker_calls_clean_registries(self):
-        """Worker calls clean_registries when run."""
-        queue = Queue(connection=self.testconn)
-        registry = StartedJobRegistry(connection=self.testconn)
-        self.testconn.zadd(registry.key, {'foo': 1})
-
-        worker = Worker(queue, connection=self.testconn)
-        worker.work(burst=True)
-        self.assertEqual(self.testconn.zcard(registry.key), 0)
-
-    def test_job_dependency_race_condition(self):
-        """Dependencies added while the job gets finished shouldn't get lost."""
-
-        # This patches the enqueue_dependents to enqueue a new dependency AFTER
-        # the original code was executed.
-        orig_enqueue_dependents = Queue.enqueue_dependents
-
-        def new_enqueue_dependents(self, job, *args, **kwargs):
-            orig_enqueue_dependents(self, job, *args, **kwargs)
-            if hasattr(Queue, '_add_enqueue') and Queue._add_enqueue is not None and Queue._add_enqueue.id == job.id:
-                Queue._add_enqueue = None
-                Queue().enqueue_call(say_hello, depends_on=job)
-
-        Queue.enqueue_dependents = new_enqueue_dependents
-
-        q = Queue()
-        w = Worker([q])
-        with mock.patch.object(Worker, 'execute_job', wraps=w.execute_job) as mocked:
-            parent_job = q.enqueue(say_hello, result_ttl=0)
-            Queue._add_enqueue = parent_job
-            job = q.enqueue_call(say_hello, depends_on=parent_job)
-            w.work(burst=True)
-            job = Job.fetch(job.id)
-            self.assertEqual(job.get_status(), JobStatus.FINISHED)
-
-            # The created spy checks two issues:
-            # * before the fix of #739, 2 of the 3 jobs were executed due
-            #   to the race condition
-            # * during the development another issue was fixed:
-            #   due to a missing pipeline usage in Queue.enqueue_job, the job
-            #   which was enqueued before the "rollback" was executed twice.
-            #   So before that fix the call count was 4 instead of 3
-            self.assertEqual(mocked.call_count, 3)
-
-    def test_self_modification_persistence(self):
-        """Make sure that any meta modification done by
-        the job itself persists completely through the
-        queue/worker/job stack."""
-        q = Queue()
-        # Also make sure that previously existing metadata
-        # persists properly
-        job = q.enqueue(modify_self, meta={'foo': 'bar', 'baz': 42}, args=[{'baz': 10, 'newinfo': 'waka'}])
-
-        w = Worker([q])
-        w.work(burst=True)
-
-        job_check = Job.fetch(job.id)
-        self.assertEqual(job_check.meta['foo'], 'bar')
-        self.assertEqual(job_check.meta['baz'], 10)
-        self.assertEqual(job_check.meta['newinfo'], 'waka')
-
-    def test_self_modification_persistence_with_error(self):
-        """Make sure that any meta modification done by
-        the job itself persists completely through the
-        queue/worker/job stack -- even if the job errored"""
-        q = Queue()
-        # Also make sure that previously existing metadata
-        # persists properly
-        job = q.enqueue(modify_self_and_error, meta={'foo': 'bar', 'baz': 42}, args=[{'baz': 10, 'newinfo': 'waka'}])
-
-        w = Worker([q])
-        w.work(burst=True)
-
-        # Postconditions
-        self.assertEqual(q.count, 0)
-        failed_job_registry = FailedJobRegistry(queue=q)
-        self.assertTrue(job in failed_job_registry)
-        self.assertEqual(w.get_current_job_id(), None)
-
-        job_check = Job.fetch(job.id)
-        self.assertEqual(job_check.meta['foo'], 'bar')
-        self.assertEqual(job_check.meta['baz'], 10)
-        self.assertEqual(job_check.meta['newinfo'], 'waka')
-
-    @mock.patch('rq.worker.logger.info')
-    def test_log_result_lifespan_true(self, mock_logger_info):
-        """Check that log_result_lifespan True causes job lifespan to be logged."""
-        q = Queue()
-
-        w = Worker([q])
-        job = q.enqueue(say_hello, args=('Frank',), result_ttl=10)
-        w.perform_job(job, q)
-        mock_logger_info.assert_called_with('Result is kept for %s seconds', 10)
-        self.assertIn('Result is kept for %s seconds', [c[0][0] for c in mock_logger_info.call_args_list])
-
-    @mock.patch('rq.worker.logger.info')
-    def test_log_result_lifespan_false(self, mock_logger_info):
-        """Check that log_result_lifespan False causes job lifespan to not be logged."""
-        q = Queue()
-
-        class TestWorker(Worker):
-            log_result_lifespan = False
-
-        w = TestWorker([q])
-        job = q.enqueue(say_hello, args=('Frank',), result_ttl=10)
-        w.perform_job(job, q)
-        self.assertNotIn('Result is kept for 10 seconds', [c[0][0] for c in mock_logger_info.call_args_list])
-
-    @mock.patch('rq.worker.logger.info')
-    def test_log_job_description_true(self, mock_logger_info):
-        """Check that log_job_description True causes job lifespan to be logged."""
-        q = Queue()
-        w = Worker([q])
-        q.enqueue(say_hello, args=('Frank',), result_ttl=10)
-        w.dequeue_job_and_maintain_ttl(10)
-        self.assertIn("Frank", mock_logger_info.call_args[0][2])
-
-    @mock.patch('rq.worker.logger.info')
-    def test_log_job_description_false(self, mock_logger_info):
-        """Check that log_job_description False causes job lifespan to not be logged."""
-        q = Queue()
-        w = Worker([q], log_job_description=False)
-        q.enqueue(say_hello, args=('Frank',), result_ttl=10)
-        w.dequeue_job_and_maintain_ttl(10)
-        self.assertNotIn("Frank", mock_logger_info.call_args[0][2])
-
-    def test_worker_configures_socket_timeout(self):
-        """Ensures that the worker correctly updates Redis client connection to have a socket_timeout"""
-        q = Queue()
-        _ = Worker([q])
-        connection_kwargs = q.connection.connection_pool.connection_kwargs
-        self.assertEqual(connection_kwargs["socket_timeout"], 415)
-
-    def test_worker_version(self):
-        q = Queue()
-        w = Worker([q])
-        w.version = '0.0.0'
-        w.register_birth()
-        self.assertEqual(w.version, '0.0.0')
-        w.refresh()
-        self.assertEqual(w.version, '0.0.0')
-        # making sure that version is preserved when worker is retrieved by key
-        worker = Worker.find_by_key(w.key)
-        self.assertEqual(worker.version, '0.0.0')
-
-    def test_python_version(self):
-        python_version = sys.version
-        q = Queue()
-        w = Worker([q])
-        w.register_birth()
-        self.assertEqual(w.python_version, python_version)
-        # now patching version
-        python_version = 'X.Y.Z.final'  # dummy version
-        self.assertNotEqual(python_version, sys.version)  # otherwise tests are pointless
-        w2 = Worker([q])
-        w2.python_version = python_version
-        w2.register_birth()
-        self.assertEqual(w2.python_version, python_version)
-        # making sure that version is preserved when worker is retrieved by key
-        worker = Worker.find_by_key(w2.key)
-        self.assertEqual(worker.python_version, python_version)
-
-    def test_dequeue_random_strategy(self):
-        qs = [Queue('q%d' % i) for i in range(5)]
-
-        for i in range(5):
-            for j in range(3):
-                qs[i].enqueue(say_pid, job_id='q%d_%d' % (i, j))
-
-        w = Worker(qs)
-        w.work(burst=True, dequeue_strategy="random")
-
-        start_times = []
-        for i in range(5):
-            for j in range(3):
-                job = Job.fetch('q%d_%d' % (i, j))
-                start_times.append(('q%d_%d' % (i, j), job.started_at))
-        sorted_by_time = sorted(start_times, key=lambda tup: tup[1])
-        sorted_ids = [tup[0] for tup in sorted_by_time]
-        expected_rr = ['q%d_%d' % (i, j) for j in range(3) for i in range(5)]
-        expected_ser = ['q%d_%d' % (i, j) for i in range(5) for j in range(3)]
-
-        self.assertNotEqual(sorted_ids, expected_rr)
-        self.assertNotEqual(sorted_ids, expected_ser)
-        expected_rr.reverse()
-        expected_ser.reverse()
-        self.assertNotEqual(sorted_ids, expected_rr)
-        self.assertNotEqual(sorted_ids, expected_ser)
-        sorted_ids.sort()
-        expected_ser.sort()
-        self.assertEqual(sorted_ids, expected_ser)
-
-    def test_request_force_stop_ignores_consecutive_signals(self):
-        """Ignore signals sent within 1 second of the last signal"""
-        queue = Queue(connection=self.testconn)
-        worker = Worker([queue])
-        worker._horse_pid = 1
-        worker._shutdown_requested_date = utcnow()
-        with mock.patch.object(worker, 'kill_horse') as mocked:
-            worker.request_force_stop(1, frame=None)
-            self.assertEqual(mocked.call_count, 0)
-        # If signal is sent a few seconds after, kill_horse() is called
-        worker._shutdown_requested_date = utcnow() - timedelta(seconds=2)
-        with mock.patch.object(worker, 'kill_horse') as mocked:
-            self.assertRaises(SystemExit, worker.request_force_stop, 1, frame=None)
-
-    def test_dequeue_round_robin(self):
-        qs = [Queue('q%d' % i) for i in range(5)]
-
-        for i in range(5):
-            for j in range(3):
-                qs[i].enqueue(say_pid, job_id='q%d_%d' % (i, j))
-
-        w = Worker(qs)
-        w.work(burst=True, dequeue_strategy="round_robin")
-
-        start_times = []
-        for i in range(5):
-            for j in range(3):
-                job = Job.fetch('q%d_%d' % (i, j))
-                start_times.append(('q%d_%d' % (i, j), job.started_at))
-        sorted_by_time = sorted(start_times, key=lambda tup: tup[1])
-        sorted_ids = [tup[0] for tup in sorted_by_time]
-        expected = [
-            'q0_0',
-            'q1_0',
-            'q2_0',
-            'q3_0',
-            'q4_0',
-            'q0_1',
-            'q1_1',
-            'q2_1',
-            'q3_1',
-            'q4_1',
-            'q0_2',
-            'q1_2',
-            'q2_2',
-            'q3_2',
-            'q4_2',
-        ]
-
-        self.assertEqual(expected, sorted_ids)
-
-
-def wait_and_kill_work_horse(pid, time_to_wait=0.0):
-    time.sleep(time_to_wait)
-    os.kill(pid, signal.SIGKILL)
-
-
-class TimeoutTestCase:
-    def setUp(self):
-        # we want tests to fail if signals are ignored and the workers remain
-        # running, so set an alarm to kill them after X seconds
-        self.killtimeout = 15
-        signal.signal(signal.SIGALRM, self._timeout)
-        signal.alarm(self.killtimeout)
-
-    def _timeout(self, signal, frame):
-        raise AssertionError(
-            "test still running after %i seconds, likely the worker wasn't shutdown correctly" % self.killtimeout
-        )
-
-
-class WorkerShutdownTestCase(TimeoutTestCase, RQTestCase):
-    @slow
-    def test_idle_worker_warm_shutdown(self):
-        """worker with no ongoing job receiving single SIGTERM signal and shutting down"""
-        w = Worker('foo')
-        self.assertFalse(w._stop_requested)
-        p = Process(target=kill_worker, args=(os.getpid(), False))
-        p.start()
-
-        w.work()
-
-        p.join(1)
-        self.assertFalse(w._stop_requested)
-
-    @slow
-    def test_working_worker_warm_shutdown(self):
-        """worker with an ongoing job receiving single SIGTERM signal, allowing job to finish then shutting down"""
-        fooq = Queue('foo')
-        w = Worker(fooq)
-
-        sentinel_file = '/tmp/.rq_sentinel_warm'
-        fooq.enqueue(create_file_after_timeout, sentinel_file, 2)
-        self.assertFalse(w._stop_requested)
-        p = Process(target=kill_worker, args=(os.getpid(), False))
-        p.start()
-
-        w.work()
-
-        p.join(2)
-        self.assertFalse(p.is_alive())
-        self.assertTrue(w._stop_requested)
-        self.assertTrue(os.path.exists(sentinel_file))
-
-        self.assertIsNotNone(w.shutdown_requested_date)
-        self.assertEqual(type(w.shutdown_requested_date).__name__, 'datetime')
-
-    @slow
-    def test_working_worker_cold_shutdown(self):
-        """Busy worker shuts down immediately on double SIGTERM signal"""
-        fooq = Queue('foo')
-        w = Worker(fooq)
-
-        sentinel_file = '/tmp/.rq_sentinel_cold'
-        self.assertFalse(
-            os.path.exists(sentinel_file), f'{sentinel_file} file should not exist yet, delete that file and try again.'
-        )
-        fooq.enqueue(create_file_after_timeout, sentinel_file, 5)
-        self.assertFalse(w._stop_requested)
-        p = Process(target=kill_worker, args=(os.getpid(), True))
-        p.start()
-
-        self.assertRaises(SystemExit, w.work)
-
-        p.join(1)
-        self.assertTrue(w._stop_requested)
-        self.assertFalse(os.path.exists(sentinel_file))
-
-        shutdown_requested_date = w.shutdown_requested_date
-        self.assertIsNotNone(shutdown_requested_date)
-        self.assertEqual(type(shutdown_requested_date).__name__, 'datetime')
-
-    @slow
-    def test_work_horse_death_sets_job_failed(self):
-        """worker with an ongoing job whose work horse dies unexpectadly (before
-        completing the job) should set the job's status to FAILED
-        """
-        fooq = Queue('foo')
-        self.assertEqual(fooq.count, 0)
-        w = Worker(fooq)
-        sentinel_file = '/tmp/.rq_sentinel_work_horse_death'
-        if os.path.exists(sentinel_file):
-            os.remove(sentinel_file)
-        fooq.enqueue(create_file_after_timeout, sentinel_file, 100)
-        job, queue = w.dequeue_job_and_maintain_ttl(5)
-        w.fork_work_horse(job, queue)
-        p = Process(target=wait_and_kill_work_horse, args=(w._horse_pid, 0.5))
-        p.start()
-        w.monitor_work_horse(job, queue)
-        job_status = job.get_status()
-        p.join(1)
-        self.assertEqual(job_status, JobStatus.FAILED)
-        failed_job_registry = FailedJobRegistry(queue=fooq)
-        self.assertTrue(job in failed_job_registry)
-        self.assertEqual(fooq.count, 0)
-
-    @slow
-    def test_work_horse_force_death(self):
-        """Simulate a frozen worker that doesn't observe the timeout properly.
-        Fake it by artificially setting the timeout of the parent process to
-        something much smaller after the process is already forked.
-        """
-        fooq = Queue('foo')
-        self.assertEqual(fooq.count, 0)
-        w = Worker([fooq], job_monitoring_interval=1)
-
-        sentinel_file = '/tmp/.rq_sentinel_work_horse_death'
-        if os.path.exists(sentinel_file):
-            os.remove(sentinel_file)
-
-        job = fooq.enqueue(launch_process_within_worker_and_store_pid, sentinel_file, 100)
-
-        _, queue = w.dequeue_job_and_maintain_ttl(5)
-        w.prepare_job_execution(job)
-        w.fork_work_horse(job, queue)
-        job.timeout = 5
-        time.sleep(1)
-        with open(sentinel_file) as f:
-            subprocess_pid = int(f.read().strip())
-        self.assertTrue(psutil.pid_exists(subprocess_pid))
-
-        with mock.patch.object(w, 'handle_work_horse_killed', wraps=w.handle_work_horse_killed) as mocked:
-            w.monitor_work_horse(job, queue)
-            self.assertEqual(mocked.call_count, 1)
-        fudge_factor = 1
-        total_time = w.job_monitoring_interval + 65 + fudge_factor
-
-        now = utcnow()
-        self.assertTrue((utcnow() - now).total_seconds() < total_time)
-        self.assertEqual(job.get_status(), JobStatus.FAILED)
-        failed_job_registry = FailedJobRegistry(queue=fooq)
-        self.assertTrue(job in failed_job_registry)
-        self.assertEqual(fooq.count, 0)
-        self.assertFalse(psutil.pid_exists(subprocess_pid))
-
-
-def schedule_access_self():
-    q = Queue('default', connection=get_current_connection())
-    q.enqueue(access_self)
-
-
-@pytest.mark.skipif(sys.platform == 'darwin', reason='Fails on OS X')
-class TestWorkerSubprocess(RQTestCase):
-    def setUp(self):
-        super().setUp()
-        db_num = self.testconn.connection_pool.connection_kwargs['db']
-        self.redis_url = 'redis://127.0.0.1:6379/%d' % db_num
-
-    def test_run_empty_queue(self):
-        """Run the worker in its own process with an empty queue"""
-        subprocess.check_call(['rqworker', '-u', self.redis_url, '-b'])
-
-    def test_run_access_self(self):
-        """Schedule a job, then run the worker as subprocess"""
-        q = Queue()
-        job = q.enqueue(access_self)
-        subprocess.check_call(['rqworker', '-u', self.redis_url, '-b'])
-        registry = FinishedJobRegistry(queue=q)
-        self.assertTrue(job in registry)
-        assert q.count == 0
-
-    @skipIf('pypy' in sys.version.lower(), 'often times out with pypy')
-    def test_run_scheduled_access_self(self):
-        """Schedule a job that schedules a job, then run the worker as subprocess"""
-        q = Queue()
-        job = q.enqueue(schedule_access_self)
-        subprocess.check_call(['rqworker', '-u', self.redis_url, '-b'])
-        registry = FinishedJobRegistry(queue=q)
-        self.assertTrue(job in registry)
-        assert q.count == 0
-
-
-@pytest.mark.skipif(sys.platform == 'darwin', reason='requires Linux signals')
-@skipIf('pypy' in sys.version.lower(), 'these tests often fail on pypy')
-class HerokuWorkerShutdownTestCase(TimeoutTestCase, RQTestCase):
-    def setUp(self):
-        super().setUp()
-        self.sandbox = '/tmp/rq_shutdown/'
-        os.makedirs(self.sandbox)
-
-    def tearDown(self):
-        shutil.rmtree(self.sandbox, ignore_errors=True)
-
-    @slow
-    def test_immediate_shutdown(self):
-        """Heroku work horse shutdown with immediate (0 second) kill"""
-        p = Process(target=run_dummy_heroku_worker, args=(self.sandbox, 0))
-        p.start()
-        time.sleep(0.5)
-
-        os.kill(p.pid, signal.SIGRTMIN)
-
-        p.join(2)
-        self.assertEqual(p.exitcode, 1)
-        self.assertTrue(os.path.exists(os.path.join(self.sandbox, 'started')))
-        self.assertFalse(os.path.exists(os.path.join(self.sandbox, 'finished')))
-
-    @slow
-    def test_1_sec_shutdown(self):
-        """Heroku work horse shutdown with 1 second kill"""
-        p = Process(target=run_dummy_heroku_worker, args=(self.sandbox, 1))
-        p.start()
-        time.sleep(0.5)
-
-        os.kill(p.pid, signal.SIGRTMIN)
-        time.sleep(0.1)
-        self.assertEqual(p.exitcode, None)
-        p.join(2)
-        self.assertEqual(p.exitcode, 1)
-
-        self.assertTrue(os.path.exists(os.path.join(self.sandbox, 'started')))
-        self.assertFalse(os.path.exists(os.path.join(self.sandbox, 'finished')))
-
-    @slow
-    def test_shutdown_double_sigrtmin(self):
-        """Heroku work horse shutdown with long delay but SIGRTMIN sent twice"""
-        p = Process(target=run_dummy_heroku_worker, args=(self.sandbox, 10))
-        p.start()
-        time.sleep(0.5)
-
-        os.kill(p.pid, signal.SIGRTMIN)
-        # we have to wait a short while, otherwise the second signal won't be processed.
-        time.sleep(0.1)
-        os.kill(p.pid, signal.SIGRTMIN)
-        p.join(2)
-        self.assertEqual(p.exitcode, 1)
-
-        self.assertTrue(os.path.exists(os.path.join(self.sandbox, 'started')))
-        self.assertFalse(os.path.exists(os.path.join(self.sandbox, 'finished')))
-
-    @mock.patch('rq.worker.logger.info')
-    def test_handle_shutdown_request(self, mock_logger_info):
-        """Mutate HerokuWorker so _horse_pid refers to an artificial process
-        and test handle_warm_shutdown_request"""
-        w = HerokuWorker('foo')
-
-        path = os.path.join(self.sandbox, 'shouldnt_exist')
-        p = Process(target=create_file_after_timeout_and_setsid, args=(path, 2))
-        p.start()
-        self.assertEqual(p.exitcode, None)
-        time.sleep(0.1)
-
-        w._horse_pid = p.pid
-        w.handle_warm_shutdown_request()
-        p.join(2)
-        # p should have been killed with SIGRTMIN (34), hence exitcode -34
-        self.assertEqual(p.exitcode, -34)
-        self.assertFalse(os.path.exists(path))
-        mock_logger_info.assert_called_with('Killed horse pid %s', p.pid)
-
-    def test_handle_shutdown_request_no_horse(self):
-        """Mutate HerokuWorker so _horse_pid refers to non existent process
-        and test handle_warm_shutdown_request"""
-        w = HerokuWorker('foo')
-
-        w._horse_pid = 19999
-        w.handle_warm_shutdown_request()
-
-
-class TestExceptionHandlerMessageEncoding(RQTestCase):
-    def setUp(self):
-        super().setUp()
-        self.worker = Worker("foo")
-        self.worker._exc_handlers = []
-        # Mimic how exception info is actually passed forwards
-        try:
-            raise Exception(u"💪")
-        except Exception:
-            self.exc_info = sys.exc_info()
-
-    def test_handle_exception_handles_non_ascii_in_exception_message(self):
-        """worker.handle_exception doesn't crash on non-ascii in exception message."""
-        self.worker.handle_exception(Mock(), *self.exc_info)
-
-
-class TestRoundRobinWorker(RQTestCase):
-    def test_round_robin(self):
-        qs = [Queue('q%d' % i) for i in range(5)]
-
-        for i in range(5):
-            for j in range(3):
-                qs[i].enqueue(say_pid, job_id='q%d_%d' % (i, j))
-
-        w = RoundRobinWorker(qs)
-        w.work(burst=True)
-        start_times = []
-        for i in range(5):
-            for j in range(3):
-                job = Job.fetch('q%d_%d' % (i, j))
-                start_times.append(('q%d_%d' % (i, j), job.started_at))
-        sorted_by_time = sorted(start_times, key=lambda tup: tup[1])
-        sorted_ids = [tup[0] for tup in sorted_by_time]
-        expected = [
-            'q0_0',
-            'q1_0',
-            'q2_0',
-            'q3_0',
-            'q4_0',
-            'q0_1',
-            'q1_1',
-            'q2_1',
-            'q3_1',
-            'q4_1',
-            'q0_2',
-            'q1_2',
-            'q2_2',
-            'q3_2',
-            'q4_2',
-        ]
-        self.assertEqual(expected, sorted_ids)
-
-
-class TestRandomWorker(RQTestCase):
-    def test_random_worker(self):
-        qs = [Queue('q%d' % i) for i in range(5)]
-
-        for i in range(5):
-            for j in range(3):
-                qs[i].enqueue(say_pid, job_id='q%d_%d' % (i, j))
-
-        w = RandomWorker(qs)
-        w.work(burst=True)
-        start_times = []
-        for i in range(5):
-            for j in range(3):
-                job = Job.fetch('q%d_%d' % (i, j))
-                start_times.append(('q%d_%d' % (i, j), job.started_at))
-        sorted_by_time = sorted(start_times, key=lambda tup: tup[1])
-        sorted_ids = [tup[0] for tup in sorted_by_time]
-        expected_rr = ['q%d_%d' % (i, j) for j in range(3) for i in range(5)]
-        expected_ser = ['q%d_%d' % (i, j) for i in range(5) for j in range(3)]
-        self.assertNotEqual(sorted_ids, expected_rr)
-        self.assertNotEqual(sorted_ids, expected_ser)
-        expected_rr.reverse()
-        expected_ser.reverse()
-        self.assertNotEqual(sorted_ids, expected_rr)
-        self.assertNotEqual(sorted_ids, expected_ser)
-        sorted_ids.sort()
-        expected_ser.sort()
-        self.assertEqual(sorted_ids, expected_ser)
diff --git a/tests/test_worker_pool.py b/tests/test_worker_pool.py
deleted file mode 100644
index 219b4a8..0000000
--- a/tests/test_worker_pool.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import os
-import signal
-from multiprocessing import Process
-from time import sleep
-
-from rq.connections import parse_connection
-from rq.job import JobStatus
-from rq.queue import Queue
-from rq.serializers import JSONSerializer
-from rq.worker import SimpleWorker
-from rq.worker_pool import WorkerPool, run_worker
-from tests import TestCase
-from tests.fixtures import CustomJob, _send_shutdown_command, long_running_job, say_hello
-
-
-def wait_and_send_shutdown_signal(pid, time_to_wait=0.0):
-    sleep(time_to_wait)
-    os.kill(pid, signal.SIGTERM)
-
-
-class TestWorkerPool(TestCase):
-    def test_queues(self):
-        """Test queue parsing"""
-        pool = WorkerPool(['default', 'foo'], connection=self.connection)
-        self.assertEqual(
-            set(pool.queues), {Queue('default', connection=self.connection), Queue('foo', connection=self.connection)}
-        )
-
-    # def test_spawn_workers(self):
-    #     """Test spawning workers"""
-    #     pool = WorkerPool(['default', 'foo'], connection=self.connection, num_workers=2)
-    #     pool.start_workers(burst=False)
-    #     self.assertEqual(len(pool.worker_dict.keys()), 2)
-    #     pool.stop_workers()
-
-    def test_check_workers(self):
-        """Test check_workers()"""
-        pool = WorkerPool(['default'], connection=self.connection, num_workers=2)
-        pool.start_workers(burst=False)
-
-        # There should be two workers
-        pool.check_workers()
-        self.assertEqual(len(pool.worker_dict.keys()), 2)
-
-        worker_data = list(pool.worker_dict.values())[0]
-        _send_shutdown_command(worker_data.name, self.connection.connection_pool.connection_kwargs.copy(), delay=0)
-        # 1 worker should be dead since we sent a shutdown command
-        sleep(0.2)
-        pool.check_workers(respawn=False)
-        self.assertEqual(len(pool.worker_dict.keys()), 1)
-
-        # If we call `check_workers` with `respawn=True`, the worker should be respawned
-        pool.check_workers(respawn=True)
-        self.assertEqual(len(pool.worker_dict.keys()), 2)
-
-        pool.stop_workers()
-
-    def test_reap_workers(self):
-        """Dead workers are removed from worker_dict"""
-        pool = WorkerPool(['default'], connection=self.connection, num_workers=2)
-        pool.start_workers(burst=False)
-
-        # There should be two workers
-        pool.reap_workers()
-        self.assertEqual(len(pool.worker_dict.keys()), 2)
-
-        worker_data = list(pool.worker_dict.values())[0]
-        _send_shutdown_command(worker_data.name, self.connection.connection_pool.connection_kwargs.copy(), delay=0)
-        # 1 worker should be dead since we sent a shutdown command
-        sleep(0.2)
-        pool.reap_workers()
-        self.assertEqual(len(pool.worker_dict.keys()), 1)
-        pool.stop_workers()
-
-    def test_start(self):
-        """Test start()"""
-        pool = WorkerPool(['default'], connection=self.connection, num_workers=2)
-
-        p = Process(target=wait_and_send_shutdown_signal, args=(os.getpid(), 0.5))
-        p.start()
-        pool.start()
-        self.assertEqual(pool.status, pool.Status.STOPPED)
-        self.assertTrue(pool.all_workers_have_stopped())
-        # We need this line so the test doesn't hang
-        pool.stop_workers()
-
-    def test_pool_ignores_consecutive_shutdown_signals(self):
-        """If two shutdown signals are sent within one second, only the first one is processed"""
-        # Send two shutdown signals within one second while the worker is
-        # working on a long running job. The job should still complete (not killed)
-        pool = WorkerPool(['foo'], connection=self.connection, num_workers=2)
-
-        process_1 = Process(target=wait_and_send_shutdown_signal, args=(os.getpid(), 0.5))
-        process_1.start()
-        process_2 = Process(target=wait_and_send_shutdown_signal, args=(os.getpid(), 0.5))
-        process_2.start()
-
-        queue = Queue('foo', connection=self.connection)
-        job = queue.enqueue(long_running_job, 1)
-        pool.start(burst=True)
-
-        self.assertEqual(job.get_status(refresh=True), JobStatus.FINISHED)
-        # We need this line so the test doesn't hang
-        pool.stop_workers()
-
-    def test_run_worker(self):
-        """Ensure run_worker() properly spawns a Worker"""
-        queue = Queue('foo', connection=self.connection)
-        queue.enqueue(say_hello)
-
-        connection_class, pool_class, pool_kwargs = parse_connection(self.connection)
-        run_worker('test-worker', ['foo'], connection_class, pool_class, pool_kwargs)
-        # Worker should have processed the job
-        self.assertEqual(len(queue), 0)
-
-    def test_worker_pool_arguments(self):
-        """Ensure arguments are properly used to create the right workers"""
-        queue = Queue('foo', connection=self.connection)
-        job = queue.enqueue(say_hello)
-        pool = WorkerPool([queue], connection=self.connection, num_workers=2, worker_class=SimpleWorker)
-        pool.start(burst=True)
-        # Worker should have processed the job
-        self.assertEqual(job.get_status(refresh=True), JobStatus.FINISHED)
-
-        queue = Queue('json', connection=self.connection, serializer=JSONSerializer)
-        job = queue.enqueue(say_hello, 'Hello')
-        pool = WorkerPool(
-            [queue], connection=self.connection, num_workers=2, worker_class=SimpleWorker, serializer=JSONSerializer
-        )
-        pool.start(burst=True)
-        # Worker should have processed the job
-        self.assertEqual(job.get_status(refresh=True), JobStatus.FINISHED)
-
-        pool = WorkerPool([queue], connection=self.connection, num_workers=2, job_class=CustomJob)
-        pool.start(burst=True)
-        # Worker should have processed the job
-        self.assertEqual(job.get_status(refresh=True), JobStatus.FINISHED)
diff --git a/tests/test_worker_registration.py b/tests/test_worker_registration.py
deleted file mode 100644
index 26ee617..0000000
--- a/tests/test_worker_registration.py
+++ /dev/null
@@ -1,109 +0,0 @@
-from unittest.mock import patch
-
-from rq import Queue, Worker
-from rq.utils import ceildiv
-from rq.worker_registration import (
-    REDIS_WORKER_KEYS,
-    WORKERS_BY_QUEUE_KEY,
-    clean_worker_registry,
-    get_keys,
-    register,
-    unregister,
-)
-from tests import RQTestCase
-
-
-class TestWorkerRegistry(RQTestCase):
-    def test_worker_registration(self):
-        """Ensure worker.key is correctly set in Redis."""
-        foo_queue = Queue(name='foo')
-        bar_queue = Queue(name='bar')
-        worker = Worker([foo_queue, bar_queue])
-
-        register(worker)
-        redis = worker.connection
-
-        self.assertTrue(redis.sismember(worker.redis_workers_keys, worker.key))
-        self.assertEqual(Worker.count(connection=redis), 1)
-        self.assertTrue(redis.sismember(WORKERS_BY_QUEUE_KEY % foo_queue.name, worker.key))
-        self.assertEqual(Worker.count(queue=foo_queue), 1)
-        self.assertTrue(redis.sismember(WORKERS_BY_QUEUE_KEY % bar_queue.name, worker.key))
-        self.assertEqual(Worker.count(queue=bar_queue), 1)
-
-        unregister(worker)
-        self.assertFalse(redis.sismember(worker.redis_workers_keys, worker.key))
-        self.assertFalse(redis.sismember(WORKERS_BY_QUEUE_KEY % foo_queue.name, worker.key))
-        self.assertFalse(redis.sismember(WORKERS_BY_QUEUE_KEY % bar_queue.name, worker.key))
-
-    def test_get_keys_by_queue(self):
-        """get_keys_by_queue only returns active workers for that queue"""
-        foo_queue = Queue(name='foo')
-        bar_queue = Queue(name='bar')
-        baz_queue = Queue(name='baz')
-
-        worker1 = Worker([foo_queue, bar_queue])
-        worker2 = Worker([foo_queue])
-        worker3 = Worker([baz_queue])
-
-        self.assertEqual(set(), get_keys(foo_queue))
-
-        register(worker1)
-        register(worker2)
-        register(worker3)
-
-        # get_keys(queue) will return worker keys for that queue
-        self.assertEqual(set([worker1.key, worker2.key]), get_keys(foo_queue))
-        self.assertEqual(set([worker1.key]), get_keys(bar_queue))
-
-        # get_keys(connection=connection) will return all worker keys
-        self.assertEqual(set([worker1.key, worker2.key, worker3.key]), get_keys(connection=worker1.connection))
-
-        # Calling get_keys without arguments raises an exception
-        self.assertRaises(ValueError, get_keys)
-
-        unregister(worker1)
-        unregister(worker2)
-        unregister(worker3)
-
-    def test_clean_registry(self):
-        """clean_registry removes worker keys that don't exist in Redis"""
-        queue = Queue(name='foo')
-        worker = Worker([queue])
-
-        register(worker)
-        redis = worker.connection
-
-        self.assertTrue(redis.sismember(worker.redis_workers_keys, worker.key))
-        self.assertTrue(redis.sismember(REDIS_WORKER_KEYS, worker.key))
-
-        clean_worker_registry(queue)
-        self.assertFalse(redis.sismember(worker.redis_workers_keys, worker.key))
-        self.assertFalse(redis.sismember(REDIS_WORKER_KEYS, worker.key))
-
-    def test_clean_large_registry(self):
-        """
-        clean_registry() splits invalid_keys into multiple lists for set removal to avoid sending more than redis can
-        receive
-        """
-        MAX_WORKERS = 41
-        MAX_KEYS = 37
-        # srem is called twice per invalid key batch: once for WORKERS_BY_QUEUE_KEY; once for REDIS_WORKER_KEYS
-        SREM_CALL_COUNT = 2
-
-        queue = Queue(name='foo')
-        for i in range(MAX_WORKERS):
-            worker = Worker([queue])
-            register(worker)
-
-        with patch('rq.worker_registration.MAX_KEYS', MAX_KEYS), patch.object(
-            queue.connection, 'pipeline', wraps=queue.connection.pipeline
-        ) as pipeline_mock:
-            # clean_worker_registry creates a pipeline with a context manager. Configure the mock using the context
-            # manager entry method __enter__
-            pipeline_mock.return_value.__enter__.return_value.srem.return_value = None
-            pipeline_mock.return_value.__enter__.return_value.execute.return_value = [0] * MAX_WORKERS
-
-            clean_worker_registry(queue)
-
-            expected_call_count = (ceildiv(MAX_WORKERS, MAX_KEYS)) * SREM_CALL_COUNT
-            self.assertEqual(pipeline_mock.return_value.__enter__.return_value.srem.call_count, expected_call_count)
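
test_clean_large_registry above pins down the batching behaviour of clean_worker_registry(): invalid worker keys are removed in chunks of MAX_KEYS so a single SREM never sends more than Redis can receive, and each chunk touches both the per-queue set and the global worker set (hence ceildiv(41, 37) * 2 = 4 expected srem calls). A minimal sketch of that chunked-removal idea using only generic redis-py calls; the helper name, set key, and batch size are illustrative, not RQ's internals:

from redis import Redis

def remove_in_batches(connection: Redis, set_key: str, members, batch_size: int = 1000):
    """Remove members from a Redis set in fixed-size SREM batches."""
    members = list(members)
    with connection.pipeline() as pipeline:
        for start in range(0, len(members), batch_size):
            chunk = members[start:start + batch_size]
            # One SREM per chunk keeps each command below batch_size members.
            pipeline.srem(set_key, *chunk)
        pipeline.execute()
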
diff --git a/tox.ini b/tox.ini
deleted file mode 100644
index a180d77..0000000
--- a/tox.ini
+++ /dev/null
@@ -1,58 +0,0 @@
-[tox]
-envlist=py36,py37,py38,py39,py310
-
-[testenv]
-commands=pytest --cov rq --cov-config=.coveragerc --durations=5 {posargs}
-deps=
-    pytest
-    pytest-cov
-    sentry-sdk
-    codecov
-    psutil
-passenv=
-    RUN_SSL_TESTS
-
-; [testenv:lint]
-; basepython = python3.10
-; deps =
-;     black
-;     ruff
-; commands =
-;     black --check rq tests
-;     ruff check rq tests
-
-[testenv:py36]
-skipdist = True
-basepython = python3.6
-deps = {[testenv]deps}
-
-[testenv:py37]
-skipdist = True
-basepython = python3.7
-deps = {[testenv]deps}
-
-[testenv:py38]
-skipdist = True
-basepython = python3.8
-deps = {[testenv]deps}
-
-[testenv:py39]
-skipdist = True
-basepython = python3.9
-deps = {[testenv]deps}
-
-[testenv:py310]
-skipdist = True
-basepython = python3.10
-deps = {[testenv]deps}
-
-[testenv:ssl]
-skipdist = True
-basepython = python3.10
-deps=
-    pytest
-    sentry-sdk
-    psutil
-passenv=
-    RUN_SSL_TESTS
-commands=pytest -m ssl_test {posargs}

Debdiff

[The following lists of changes regard files as different if they have different names, permissions or owners.]

Files in second set of .debs but not in first

-rw-r--r--  root/root   /usr/lib/python3/dist-packages/rq-1.15.1.egg-info/PKG-INFO
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/rq-1.15.1.egg-info/dependency_links.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/rq-1.15.1.egg-info/entry_points.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/rq-1.15.1.egg-info/not-zip-safe
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/rq-1.15.1.egg-info/requires.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/rq-1.15.1.egg-info/top_level.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/rq/executions.py

Files in first set of .debs but not in second

-rw-r--r--  root/root   /usr/lib/python3/dist-packages/rq-1.15.0.egg-info/PKG-INFO
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/rq-1.15.0.egg-info/dependency_links.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/rq-1.15.0.egg-info/entry_points.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/rq-1.15.0.egg-info/not-zip-safe
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/rq-1.15.0.egg-info/requires.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/rq-1.15.0.egg-info/top_level.txt
-rw-r--r--  root/root   /usr/share/doc/python3-rq/changelog.gz

No differences were encountered in the control files
