New Upstream Release - python-fakeredis

Ready changes

Summary

Merged new upstream version: 2.13.0 (was: 2.4.0).

Diff

diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS
new file mode 100644
index 0000000..f661820
--- /dev/null
+++ b/.github/CODEOWNERS
@@ -0,0 +1 @@
+* @cunla
\ No newline at end of file
diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index a8dcb1f..b790768 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -3,7 +3,8 @@
 
 First off, thanks for taking the time to contribute! ❤️
 
-All types of contributions are encouraged and valued. See the [Table of Contents](#table-of-contents) for different ways to help and details about how this project handles them. Please make sure to read the relevant section before making your contribution. It will make it a lot easier for us maintainers and smooth out the experience for all involved. The community looks forward to your contributions. 🎉
+All types of contributions are encouraged and valued.
+See the [Table of Contents](#table-of-contents) for different ways to help and details about how this project handles them. Please make sure to read the relevant section before making your contribution. It will make it a lot easier for us maintainers and smooth out the experience for all involved. The community looks forward to your contributions. 🎉
 
 > And if you like the project, but just don't have time to contribute, that's fine. There are other easy ways to support the project and show your appreciation, which we would also be very happy about:
 > - Star the project
@@ -73,12 +74,18 @@ Depending on how large the project is, you may want to outsource the questioning
 <!-- omit in toc -->
 #### Before Submitting a Bug Report
 
-A good bug report shouldn't leave others needing to chase you up for more information. Therefore, we ask you to investigate carefully, collect information and describe the issue in detail in your report. Please complete the following steps in advance to help us fix any potential bug as fast as possible.
+A good bug report shouldn't leave others needing to chase you up for more information.
+Therefore, we ask you to investigate carefully, collect information and describe the issue in detail in your report.
+Please complete the following steps in advance to help us fix any potential bug as fast as possible.
 
 - Make sure that you are using the latest version.
-- Determine if your bug is really a bug and not an error on your side e.g. using incompatible environment components/versions (Make sure that you have read the [documentation](https://github.com/cunla/fakeredis-py). If you are looking for support, you might want to check [this section](#i-have-a-question)).
-- To see if other users have experienced (and potentially already solved) the same issue you are having, check if there is not already a bug report existing for your bug or error in the [bug tracker](https://github.com/cunla/fakeredis-py/issues?q=label%3Abug).
-- Also make sure to search the internet (including Stack Overflow) to see if users outside of the GitHub community have discussed the issue.
+- Determine if your bug is really a bug and not an error on your side e.g. using incompatible
+  environment components/versions (Make sure that you have read the [documentation](https://github.com/cunla/fakeredis-py).
+  If you are looking for support, you might want to check [this section](#i-have-a-question)).
+- To see if other users have experienced (and potentially already solved) the same issue you are having,
+  check whether a bug report already exists for your bug or error in the [bug tracker](https://github.com/cunla/fakeredis-py/issues?q=label%3Abug).
+- Also make sure to search the internet (including Stack Overflow) to see if users outside the GitHub
+  community have discussed the issue.
 - Collect information about the bug:
   - Stack trace (Traceback)
   - OS, Platform and Version (Windows, Linux, macOS, x86, ARM)
@@ -89,14 +96,18 @@ A good bug report shouldn't leave others needing to chase you up for more inform
 <!-- omit in toc -->
 #### How Do I Submit a Good Bug Report?
 
-> You must never report security related issues, vulnerabilities or bugs including sensitive information to the issue tracker, or elsewhere in public. Instead sensitive bugs must be sent by email to <daniel.maruani@gmail.com>.
-<!-- You may add a PGP key to allow the messages to be sent encrypted as well. -->
+> You must never report security related issues, vulnerabilities or bugs including sensitive information
+> to the issue tracker, or elsewhere in public.
+> Instead sensitive bugs must be sent by email to <daniel.maruani@gmail.com>.
 
 We use GitHub issues to track bugs and errors. If you run into an issue with the project:
 
-- Open an [Issue](https://github.com/cunla/fakeredis-py/issues/new). (Since we can't be sure at this point whether it is a bug or not, we ask you not to talk about a bug yet and not to label the issue.)
-- Explain the behavior you would expect and the actual behavior.
-- Please provide as much context as possible and describe the *reproduction steps* that someone else can follow to recreate the issue on their own. This usually includes your code. For good bug reports you should isolate the problem and create a reduced test case.
+- Open an [Issue](https://github.com/cunla/fakeredis-py/issues/new).
+  (Since we can't be sure at this point whether it is a bug or not, we ask you not to talk about a bug yet and
+  not to label the issue.)
+- Follow the issue template, provide as much context as possible, and describe the *reproduction steps* that someone else can follow to recreate the issue on their own.
+  This usually includes your code.
+  For good bug reports you should isolate the problem and create a reduced test case.
 - Provide the information you collected in the previous section.
 
 Once it's filed:
@@ -134,25 +145,47 @@ Enhancement suggestions are tracked as [GitHub issues](https://github.com/cunla/
 <!-- You might want to create an issue template for enhancement suggestions that can be used as a guide and that defines the structure of the information to be included. If you do so, reference it here in the description. -->
 
 ### Your First Code Contribution
-<!-- TODO
-include Setup of env, IDE and typical getting started instructions?
-
--->
+Unsure where to begin contributing? You can start by looking through
+[help-wanted issues](https://github.com/cunla/fakeredis-py/labels/help%20wanted).
+
+Never contributed to open source before? Here are a couple of friendly
+tutorials:
+
+-   <http://makeapullrequest.com/>
+-   <http://www.firsttimersonly.com/>
+
+### Getting started
+- Create your own fork of the repository
+- Do the changes in your fork
+- Set up poetry: `pip install poetry`
+- Let poetry install everything required for a local environment: `poetry install`
+- To run all tests, use: `poetry run pytest -v`
+- Note: in order to run the tests, a real Redis server must be running.
+  The tests compare the results of each command between fakeredis and a real Redis server.
+  - You can use `docker-compose up redis6` or `docker-compose up redis7` to run redis.
+- Run test with coverage using `poetry run pytest -v --cov=fakeredis --cov-branch`
+  and then you can run `coverage report`.
 
 ### Improving The Documentation
-<!-- TODO
-Updating, improving and correcting the documentation
-
--->
+- Create your own fork of the repository
+- Do the changes in your fork, probably in `README.md`
+- Create a pull request with the changes.
 
 ## Styleguides
 ### Commit Messages
-<!-- TODO
+Taken from [The seven rules of a great Git commit message](https://cbea.ms/git-commit/):
 
--->
+1. Separate subject from body with a blank line
+2. Limit the subject line to 50 characters
+3. Capitalize the subject line
+4. Do not end the subject line with a period
+5. Use the imperative mood in the subject line
+6. Wrap the body at 72 characters
+7. Use the body to explain what and why vs. how
 
 ## Join The Project Team
-<!-- TODO -->
+If you wish to be added to the project team as a collaborator, please send
+a message to daniel.maruani@gmail.com with an explanation.
 
 <!-- omit in toc -->
 ## Attribution
diff --git a/.github/FUNDING.yml b/.github/FUNDING.yml
index 7b5f1de..17ea950 100644
--- a/.github/FUNDING.yml
+++ b/.github/FUNDING.yml
@@ -1 +1,3 @@
+---
+tidelift: "pypi/fakeredis"
 github: cunla
diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
index a5fd9cb..368b634 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.md
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -9,6 +9,7 @@ assignees: ''
 
 **Describe the bug**
 A clear and concise description of what the bug is.
+Add the code you are trying to run, the stacktrace you are getting and anything else that might help.
 
 **To Reproduce**
 Steps to reproduce the behavior:
diff --git a/.github/actions/test-coverage/action.yml b/.github/actions/test-coverage/action.yml
new file mode 100644
index 0000000..be7d3a8
--- /dev/null
+++ b/.github/actions/test-coverage/action.yml
@@ -0,0 +1,48 @@
+name: 'Run test with coverage'
+description: 'Greet someone'
+inputs:
+  github-secret:
+    description: 'GITHUB_TOKEN'
+    required: true
+  gist-secret:
+    description: 'gist secret'
+    required: true
+runs:
+  using: "composite"
+  steps:
+    - name: Test with coverage
+      shell: bash
+      run: |
+        poetry run flake8 fakeredis/
+        poetry run pytest -v --cov=fakeredis --cov-branch
+        poetry run coverage json
+        echo "COVERAGE=$(jq '.totals.percent_covered_display|tonumber' coverage.json)" >> $GITHUB_ENV
+    - name: Create coverage badge
+      if: ${{ github.event_name == 'push' }}
+      uses: schneegans/dynamic-badges-action@v1.6.0
+      with:
+        auth: ${{ inputs.gist-secret }}
+        gistID: b756396efb895f0e34558c980f1ca0c7
+        filename: fakeredis-py.json
+        label: coverage
+        message: ${{ env.COVERAGE }}%
+        color: green
+    - name: Coverage report
+      if: ${{ github.event_name == 'pull_request' }}
+      id: coverage_report
+      shell: bash
+      run: |
+        echo 'REPORT<<EOF' >> $GITHUB_ENV
+        poetry run coverage report >> $GITHUB_ENV
+        echo 'EOF' >> $GITHUB_ENV
+    - uses: mshick/add-pr-comment@v2
+      if: ${{ github.event_name == 'pull_request' }}
+      with:
+        message: |
+          Coverage report:
+          ```
+          ${{ env.REPORT }}
+          ```
+        repo-token: ${{ inputs.github-secret }}
+        allow-repeats: false
+        message-id: coverage
\ No newline at end of file
diff --git a/.github/release-drafter.yml b/.github/release-drafter.yml
new file mode 100644
index 0000000..d430291
--- /dev/null
+++ b/.github/release-drafter.yml
@@ -0,0 +1,55 @@
+---
+name-template: 'v$RESOLVED_VERSION 🌈'
+tag-template: 'v$RESOLVED_VERSION'
+categories:
+  - title: '🚀 Features'
+    labels:
+      - 'feature'
+      - 'enhancement'
+  - title: '🐛 Bug Fixes'
+    labels:
+      - 'fix'
+      - 'bugfix'
+      - 'bug'
+  - title: '🧰 Maintenance'
+    label: 'chore'
+  - title: '⬆️ Dependency Updates'
+    label: 'dependencies'
+change-template: '- $TITLE (#$NUMBER)'
+change-title-escapes: '\<*_&'
+autolabeler:
+  - label: 'chore'
+    files:
+      - '*.md'
+      - '.github/*'
+  - label: 'bug'
+    title:
+      - '/fix/i'
+  - label: 'dependencies'
+    files:
+      - 'poetry.lock'
+version-resolver:
+  major:
+    labels:
+      - 'breaking'
+  minor:
+    labels:
+      - 'feature'
+      - 'enhancement'
+  patch:
+    labels:
+      - 'chore'
+      - 'dependencies'
+      - 'bug'
+  default: patch
+template: |
+  # Changes
+
+  $CHANGES
+
+  ## Contributors
+  We'd like to thank all the contributors who worked on this release!
+
+  $CONTRIBUTORS
+
+  **Full Changelog**: https://github.com/$OWNER/$REPOSITORY/compare/$PREVIOUS_TAG...v$RESOLVED_VERSION
diff --git a/.github/workflows/publish.yml b/.github/workflows/publish.yml
index e4e5a8d..6974771 100644
--- a/.github/workflows/publish.yml
+++ b/.github/workflows/publish.yml
@@ -1,10 +1,4 @@
-# This workflow will upload a Python Package using Twine when a release is created
-# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
-
-# This workflow uses actions that are not certified by GitHub.
-# They are provided by a third-party and are governed by
-# separate terms of service, privacy policy, and support
-# documentation.
+---
 
 name: Upload Python Package
 
@@ -14,43 +8,32 @@ on:
 
 jobs:
   deploy:
-
     runs-on: ubuntu-latest
-
     steps:
-    - uses: actions/checkout@v3
-    - name: Set up Python
-      uses: actions/setup-python@v3
-      with:
-        python-version: '3.10'
-    - name: Install dependencies
-      run: |
-        python -m pip install --upgrade pip
-        pip install build
-    - name: Build package
-      run: python -m build
-    - name: Publish package
-      uses: pypa/gh-action-pypi-publish@release/v1
-      with:
-        user: __token__
-        password: ${{ secrets.PYPI_API_TOKEN }}
-        print_hash: true
-
-  # https://github.community/t/run-github-actions-job-only-if-previous-job-has-failed/174786/2
-  create-issue-on-failure:
-    name: Create an issue if publish failed
-    runs-on: ubuntu-latest
-    needs: [deploy]
-    if: ${{ github.repository == 'cunla/fakeredis-py' && always() && needs.deploy.result == 'failure' }}
-    permissions:
-      issues: write
-    steps:
-      - uses: actions/github-script@v6
+      - uses: actions/checkout@v3
+      - name: Set up Python
+        uses: actions/setup-python@v4
+        with:
+          python-version: "3.11"
+      - name: Install dependencies
+        env:
+          PYTHON_KEYRING_BACKEND: keyring.backends.null.Keyring
+        run: |
+          python -m pip install --upgrade pip
+          pip install build
+      - name: Build package
+        run: python -m build
+      - name: Publish package
+        uses: pypa/gh-action-pypi-publish@release/v1
         with:
-          script: |
-            await github.rest.issues.create({
-              owner: context.repo.owner,
-              repo: context.repo.repo,
-              title: `Release failure on ${new Date().toDateString()}`,
-              body: `Details: https://github.com/${context.repo.owner}/${context.repo.repo}/actions/workflows/publish.yml`,
-            })
+          user: __token__
+          password: ${{ secrets.PYPI_API_TOKEN }}
+          print_hash: true
+      - name: Deploy documentation
+        env:
+          GH_TOKEN: ${{ secrets.GH_TOKEN }}
+          GOOGLE_ANALYTICS_KEY: ${{ secrets.GOOGLE_ANALYTICS_KEY }}
+        run: |
+          pip install -r docs/requirements.txt
+          mkdocs gh-deploy --force
+          mkdocs --version
diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml
index 5db3616..2226206 100644
--- a/.github/workflows/test.yml
+++ b/.github/workflows/test.yml
@@ -1,10 +1,11 @@
+---
 name: Unit tests
 
 on:
   push:
     branches:
       - master
-  pull_request:
+  pull_request_target:
     branches:
       - master
 
@@ -13,132 +14,126 @@ concurrency:
   cancel-in-progress: true
 
 jobs:
-  pre:
-    name: 'Test <Redis, Python, redis-py, aioredis, lupa, cov>'
+  lint:
+    name: "flake8 on code"
     runs-on: ubuntu-latest
     steps:
-      - run: echo ''
+      - uses: actions/checkout@v3
+      - uses: actions/setup-python@v4
+        with:
+          cache-dependency-path: poetry.lock
+          python-version: "3.11"
+      - name: Install dependencies
+        env:
+          PYTHON_KEYRING_BACKEND: keyring.backends.null.Keyring
+        run: |
+          python -m pip --quiet install poetry
+          echo "$HOME/.poetry/bin" >> $GITHUB_PATH
+          poetry install
+      - name: Run flake8
+        shell: bash
+        run: |
+          poetry run flake8 fakeredis/
+      - name: Test import
+        run: |
+          poetry build
+          pip install dist/fakeredis-*.tar.gz
+          python -c "import fakeredis"
   test:
-    name: 'Test <${{ matrix.redis-version }}, ${{ matrix.python-version }}, ${{ matrix.redis-py }}, ${{ matrix.aioredis }}, ${{ matrix.lupa }}, ${{ matrix.coverage }}>'
+    name: >
+      py:${{ matrix.python-version }},${{ matrix.redis-image }},
+      redis-py:${{ matrix.redis-py }},cov:${{ matrix.coverage }},
+      lupa:${{ matrix.lupa }}, json:${{matrix.json}}
+    needs:
+      - "lint"
     runs-on: ubuntu-latest
-    needs: pre
     strategy:
+      max-parallel: 8
       fail-fast: false
       matrix:
-        redis-version: [ "6.2.6", "7.0.4" ]
-        python-version: [ "3.7", "3.8", "3.9", "3.10" ]
-        redis-py: [ "4.1.2", "4.3.4" ]
+        redis-image: [ "redis:6.2.12", "redis:7.0.11" ]
+        python-version: [ "3.7", "3.8", "3.9", "3.10", "3.11" ]
+        redis-py: [ "4.3.6", "4.5.5" , "5.0.0b3" ]
         include:
-          - python-version: "3.10"
-            redis-version: "6.2.6"
-            redis-py: "2.10.6"
-            aioredis: "1.3.1"
-          - python-version: "3.10"
-            redis-version: "6.2.6"
-            redis-py: "3.5.3"
-            aioredis: "1.3.1"
-          - python-version: "3.10"
-            redis-version: "6.2.6"
-            redis-py: "4.0.1"
-            aioredis: "1.3.1"
-          - python-version: "3.10"
-            redis-version: "6.2.6"
-            redis-py: "4.1.2"
-            aioredis: "2.0.1"
-          - python-version: "3.10" # should work fine with redis.asyncio
-            redis-version: "7.0.4"
-            redis-py: "4.3.4"
-            lupa: "1.13"
-            coverage: yes
+          - python-version: "3.11"
+            redis-image: "redis:6.2.12"
+            redis-py: "4.5.5"
+            lupa: true
+            hypothesis: true
+          - python-version: "3.11"
+            redis-image: "redis/redis-stack:7.0.6-RC3"
+            redis-py: "4.5.5"
+            lupa: true
+            json: true
+            coverage: true
+            hypothesis: true
+
+    permissions:
+      pull-requests: write
     services:
       redis:
-        image: redis:${{ matrix.redis-version }}
+        image: ${{ matrix.redis-image }}
         ports:
           - 6379:6379
+        options: >-
+          --health-cmd "redis-cli ping"
+          --health-interval 10s
+          --health-timeout 5s
+          --health-retries 5
     outputs:
       version: ${{ steps.getVersion.outputs.VERSION }}
     steps:
       - uses: actions/checkout@v3
-      - uses: actions/setup-python@v3
+      - uses: actions/setup-python@v4
         with:
           cache-dependency-path: poetry.lock
           python-version: ${{ matrix.python-version }}
       - name: Install dependencies
+        env:
+          PYTHON_KEYRING_BACKEND: keyring.backends.null.Keyring
         run: |
           python -m pip --quiet install poetry
           echo "$HOME/.poetry/bin" >> $GITHUB_PATH
           poetry install
           poetry run pip install redis==${{ matrix.redis-py }}
-      - name: Install aioredis
-        if: ${{ matrix.aioredis }}
-        run: |
-          poetry run pip install aioredis==${{ matrix.aioredis }}
       - name: Install lupa
         if: ${{ matrix.lupa }}
         run: |
-          poetry run pip install lupa==${{ matrix.lupa }}
+          poetry run pip install fakeredis[lua]
+      - name: Install json
+        if: ${{ matrix.json }}
+        run: |
+          poetry run pip install fakeredis[json]
       - name: Get version
         id: getVersion
         shell: bash
         run: |
           VERSION=$(poetry version -s --no-ansi -n)
-          echo "::set-output name=VERSION::$VERSION"
-      - name: Test import
-        run: |
-          poetry build
-          pip install dist/fakeredis-*.tar.gz
-          python -c 'import fakeredis'
-      - name: Test with coverage
-        if: ${{ matrix.coverage == 'yes' }}
-        run: |
-          poetry run flake8 fakeredis/
-          poetry run pytest -v --cov=fakeredis --cov-branch
-          poetry run coverage json
-          echo "COVERAGE=$(jq '.totals.percent_covered_display|tonumber' coverage.json)" >> $GITHUB_ENV
+          echo "VERSION=$VERSION" >> $GITHUB_OUTPUT
       - name: Test without coverage
-        if: ${{ matrix.coverage != 'yes' }}
+        if: ${{ !matrix.coverage }}
         run: |
-          poetry run pytest -v
-      - name: Create coverage badge
-        if: ${{ matrix.coverage == 'yes' && github.event_name == 'push' }}
-        uses: schneegans/dynamic-badges-action@v1.1.0
+          poetry run pytest -v -m "not slow"
+      - name: Test with coverage
+        if: ${{ matrix.coverage }}
+        uses: ./.github/actions/test-coverage
         with:
-          auth: ${{ secrets.GIST_SECRET }}
-          gistID: b756396efb895f0e34558c980f1ca0c7
-          filename: fakeredis-py.json
-          label: coverage
-          message: ${{ env.COVERAGE }}%
-          color: green
+          github-secret: ${{ secrets.GITHUB_TOKEN }}
+          gist-secret: ${{ secrets.GIST_SECRET }}
   # Prepare a draft release for GitHub Releases page for the manual verification
   # If accepted and published, release workflow would be triggered
-  releaseDraft:
-    name: Release Draft
-    if: github.event_name != 'pull_request'
-    needs: test
+  update_release_draft:
+    name: "Create or Update release draft"
+    permissions:
+      # write permission is required to create a GitHub release
+      contents: write
+      # write permission is required for auto-labeler
+      # otherwise, read permission is required at least
+      pull-requests: write
+    needs:
+      - "test"
     runs-on: ubuntu-latest
     steps:
-      # Check out current repository
-      - name: Fetch Sources
-        uses: actions/checkout@v3
-
-      # Remove old release drafts by using the curl request for the available releases with draft flag
-      - name: Remove Old Release Drafts
+      - uses: release-drafter/release-drafter@v5
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        run: |
-          gh api repos/{owner}/{repo}/releases \
-            --jq '.[] | select(.draft == true) | .id' \
-            | xargs -I '{}' gh api -X DELETE repos/{owner}/{repo}/releases/{}
-      # Create new release draft - which is not publicly visible and requires manual acceptance
-      - name: Create Release Draft
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        run: |
-          gh release create v${{ needs.build.outputs.version }} \
-            --draft \
-            --title "v${{ needs.build.outputs.version }}" \
-            --notes "$(cat << 'EOM'
-          ${{ needs.build.outputs.version }}
-          EOM
-          )"
-          echo "::notice title=New release draft::${{ needs.build.outputs.version }}"          
diff --git a/.gitignore b/.gitignore
index da49ca1..e799c34 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,4 +1,4 @@
-.commands.json
+scripts/*.json
 fakeredis.egg-info
 dump.rdb
 extras/*
@@ -15,3 +15,10 @@ docker-compose.yml
 .DS_Store
 *.iml
 .venv/
+.fleet
+.mypy_cache
+.pytest_cache
+**/__pycache__
+scratch*
+.python-version
+.env
diff --git a/.readthedocs.yaml b/.readthedocs.yaml
new file mode 100644
index 0000000..0b64ab9
--- /dev/null
+++ b/.readthedocs.yaml
@@ -0,0 +1,13 @@
+version: 2
+build:
+  os: "ubuntu-20.04"
+  tools:
+    python: "3.11"
+
+mkdocs:
+  configuration: mkdocs.yml
+  fail_on_warning: false
+
+python:
+  install:
+    - requirements: docs/requirements.txt
diff --git a/LICENSE b/LICENSE
index abcc676..f02dd6b 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,55 +1,29 @@
-Copyright (c) 2011 James Saryerwinnie, 2017-2018 Bruce Merry, 2022- Daniel Moran
+BSD 3-Clause License
+
+Copyright (c) 2022-, Daniel Moran, 2017-2018, Bruce Merry, 2011 James Saryerwinnie,
 All rights reserved.
 
 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions are met:
 
-1. Redistributions of source code must retain the above copyright notice,
-   this list of conditions and the following disclaimer.
+1. Redistributions of source code must retain the above copyright notice, this
+   list of conditions and the following disclaimer.
 
 2. Redistributions in binary form must reproduce the above copyright notice,
    this list of conditions and the following disclaimer in the documentation
    and/or other materials provided with the distribution.
 
-3. Neither the name of the copyright holder nor the names of its contributors
-   may be used to endorse or promote products derived from this software
-   without specific prior written permission.
+3. Neither the name of the copyright holder nor the names of its
+   contributors may be used to endorse or promote products derived from
+   this software without specific prior written permission.
 
 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
-THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
-LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-POSSIBILITY OF SUCH DAMAGE.
-
-
-This software contains portions of code from redis-py, which is distributed
-under the following license:
-
-Copyright (c) 2012 Andy McCurdy
-
- Permission is hereby granted, free of charge, to any person
- obtaining a copy of this software and associated documentation
- files (the "Software"), to deal in the Software without
- restriction, including without limitation the rights to use,
- copy, modify, merge, publish, distribute, sublicense, and/or sell
- copies of the Software, and to permit persons to whom the
- Software is furnished to do so, subject to the following
- conditions:
-
- The above copyright notice and this permission notice shall be
- included in all copies or substantial portions of the Software.
-
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
- OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
- HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
- WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- OTHER DEALINGS IN THE SOFTWARE.
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/README.md b/README.md
index 46a3eb6..e6d23cf 100644
--- a/README.md
+++ b/README.md
@@ -1,260 +1,37 @@
 fakeredis: A fake version of a redis-py
 =======================================
 
-![badge](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/cunla/b756396efb895f0e34558c980f1ca0c7/raw/fakeredis-py.json)
+[![badge](https://img.shields.io/pypi/v/fakeredis)](https://pypi.org/project/fakeredis/)
+[![CI](https://github.com/cunla/fakeredis-py/actions/workflows/test.yml/badge.svg)](https://github.com/cunla/fakeredis-py/actions/workflows/test.yml)
+[![badge](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/cunla/b756396efb895f0e34558c980f1ca0c7/raw/fakeredis-py.json)](https://github.com/cunla/fakeredis-py/actions/workflows/test.yml)
+[![badge](https://img.shields.io/pypi/dm/fakeredis)](https://pypi.org/project/fakeredis/)
+[![badge](https://img.shields.io/pypi/l/fakeredis)](./LICENSE)
+[![Open Source Helpers](https://www.codetriage.com/cunla/fakeredis-py/badges/users.svg)](https://www.codetriage.com/cunla/fakeredis-py)
+--------------------
 
-- [fakeredis: A fake version of a redis-py](#fakeredis--a-fake-version-of-a-redis-py)
-- [How to Use](#how-to-use)
-  - [Use to test django-rq](#use-to-test-django-rq)
-- [Other limitations](#other-limitations)
-- [Support for redis-py <4.2 with aioredis](#support-for-redis-py--42-with-aioredis)
-    + [aioredis 1.x](#aioredis-1x)
-    + [aioredis 2.x](#aioredis-2x)
-- [Running the Tests](#running-the-tests)
-- [Contributing](#contributing)
-- [Alternatives](#alternatives)
+Documentation is now hosted at https://fakeredis.readthedocs.io/
 
-fakeredis is a pure-Python implementation of the redis-py python client
-that simulates talking to a redis server. This was created for a single
-purpose: **to write unittests**. Setting up redis is not hard, but
-many times you want to write unittests that do not talk to an external server
-(such as redis). This module now allows tests to simply use this
-module as a reasonable substitute for redis.
+# Intro
 
-Although fakeredis is pure Python, you will need [lupa](https://pypi.org/project/lupa/) if you want to run Lua
-scripts (this includes features like ``redis.lock.Lock``, which are implemented
-in Lua). If you install fakeredis with ``pip install fakeredis[lua]`` it will
-be automatically installed.
+FakeRedis is a pure-Python implementation of the Redis key-value store.
 
-For a list of supported/unsupported redis commands, see [REDIS_COMMANDS.md](REDIS_COMMANDS.md)
+It enables running tests requiring redis server without an actual server.
 
-# How to Use
-FakeRedis can imitate Redis server version 6.x or 7.x - There are a few minor behavior differences. 
-If you do not specify the version, version 7 is used by default.
+It provides enhanced versions of the redis-py Python bindings for Redis, adding the following functionality:
+a built-in Redis server that is automatically installed, configured and managed when the Redis bindings are used;
+a single server shared by multiple programs, or multiple independent servers. All the servers provided by
+FakeRedis support all Redis functionality, including advanced features such as RedisJson and GeoCommands.
 
-The intent is for fakeredis to act as though you're talking to a real
-redis server. It does this by storing state internally.
-For example:
+See [official documentation](https://fakeredis.readthedocs.io/) for list of supported commands.
 
-```
->>> import fakeredis
->>> r = fakeredis.FakeStrictRedis(version=6)
->>> r.set('foo', 'bar')
-True
->>> r.get('foo')
-'bar'
->>> r.lpush('bar', 1)
-1
->>> r.lpush('bar', 2)
-2
->>> r.lrange('bar', 0, -1)
-[2, 1]
-```
+# Sponsor
 
-The state is stored in an instance of `FakeServer`. If one is not provided at
-construction, a new instance is automatically created for you, but you can
-explicitly create one to share state:
+fakeredis-py is developed for free.
 
-```
->>> import fakeredis
->>> server = fakeredis.FakeServer()
->>> r1 = fakeredis.FakeStrictRedis(server=server)
->>> r1.set('foo', 'bar')
-True
->>> r2 = fakeredis.FakeStrictRedis(server=server)
->>> r2.get('foo')
-'bar'
->>> r2.set('bar', 'baz')
-True
->>> r1.get('bar')
-'baz'
->>> r2.get('bar')
-'baz'
-```
+You can support this project by becoming a sponsor using [this link](https://github.com/sponsors/cunla).
 
-It is also possible to mock connection errors so you can effectively test
-your error handling. Simply set the connected attribute of the server to
-`False` after initialization.
+## Security contact information
 
-```
->>> import fakeredis
->>> server = fakeredis.FakeServer()
->>> server.connected = False
->>> r = fakeredis.FakeStrictRedis(server=server)
->>> r.set('foo', 'bar')
-ConnectionError: FakeRedis is emulating a connection error.
->>> server.connected = True
->>> r.set('foo', 'bar')
-True
-```
-
-Fakeredis implements the same interface as `redis-py`, the
-popular redis client for python, and models the responses
-of redis 6.2 (although most new features are not supported).
-
-## Use to test django-rq
-
-There is a need to override `django_rq.queues.get_redis_connection` with
-a method returning the same connection.
-
-```python
-from fakeredis import FakeRedisConnSingleton
-
-django_rq.queues.get_redis_connection = FakeRedisConnSingleton()
-```
-
-# Other limitations
-
-Apart from unimplemented commands, there are a number of cases where fakeredis
-won't give identical results to real redis. The following are differences that
-are unlikely to ever be fixed; there are also differences that are fixable
-(such as commands that do not support all features) which should be filed as
-bugs in Github.
-
-1. Hyperloglogs are implemented using sets underneath. This means that the
-   `type` command will return the wrong answer, you can't use `get` to retrieve
-   the encoded value, and counts will be slightly different (they will in fact be
-   exact).
-
-2. When a command has multiple error conditions, such as operating on a key of
-   the wrong type and an integer argument is not well-formed, the choice of
-   error to return may not match redis.
-
-3. The `incrbyfloat` and `hincrbyfloat` commands in redis use the C `long
-   double` type, which typically has more precision than Python's `float`
-   type.
-
-4. Redis makes guarantees about the order in which clients blocked on blocking
-   commands are woken up. Fakeredis does not honour these guarantees.
-
-5. Where redis contains bugs, fakeredis generally does not try to provide exact
-   bug-compatibility. It's not practical for fakeredis to try to match the set
-   of bugs in your specific version of redis.
-
-6. There are a number of cases where the behaviour of redis is undefined, such
-   as the order of elements returned by set and hash commands. Fakeredis will
-   generally not produce the same results, and in Python versions before 3.6
-   may produce different results each time the process is re-run.
-
-7. SCAN/ZSCAN/HSCAN/SSCAN will not necessarily iterate all items if items are
-   deleted or renamed during iteration. They also won't necessarily iterate in
-   the same chunk sizes or the same order as redis.
-
-8. DUMP/RESTORE will not return or expect data in the RDB format. Instead the
-   `pickle` module is used to mimic an opaque and non-standard format.
-   **WARNING**: Do not use RESTORE with untrusted data, as a malicious pickle
-   can execute arbitrary code.
-
-# Support for redis-py <4.2 with aioredis
-
-Aioredis is now in redis-py 4.2.0. But support is maintained until fakeredis 2 for older version of redis-py.
-
-You can also use fakeredis to mock out [aioredis](https://aioredis.readthedocs.io/). This is a much newer
-addition to fakeredis (added in 1.4.0) with less testing, so your mileage may
-vary. Both version 1 and version 2 (which have very different APIs) are
-supported. The API provided by fakeredis depends on the version of aioredis that is
-installed.
-
-### aioredis 1.x
-
-Example:
-
-```
->>> import fakeredis.aioredis
->>> r = await fakeredis.aioredis.create_redis_pool()
->>> await r.set('foo', 'bar')
-True
->>> await r.get('foo')
-b'bar'
-```
-
-You can pass a `FakeServer` as the first argument to `create_redis` or
-`create_redis_pool` to share state (you can even share state with a
-`fakeredis.FakeRedis`). It should even be safe to do this state sharing between
-threads (as long as each connection/pool is only used in one thread).
-
-It is highly recommended that you only use the aioredis support with
-Python 3.5.3 or higher. Earlier versions will not work correctly with
-non-default event loops.
-
-### aioredis 2.x
-
-Example:
-
-```
->>> import fakeredis.aioredis
->>> r = fakeredis.aioredis.FakeRedis()
->>> await r.set('foo', 'bar')
-True
->>> await r.get('foo')
-b'bar'
-```
-
-The support is essentially the same as for redis-py e.g., you can pass a
-`server` keyword argument to the `FakeRedis` constructor.
-
-# Running the Tests
-
-To ensure parity with the real redis, there are a set of integration tests
-that mirror the unittests. For every unittest that is written, the same
-test is run against a real redis instance using a real redis-py client
-instance. In order to run these tests you must have a redis server running
-on localhost, port 6379 (the default settings). **WARNING**: the tests will
-completely wipe your database!
-
-First install poetry if you don't have it, and then install all the dependencies:
-
-```   
-pip install poetry
-poetry install
-``` 
-
-To run all the tests:
-
-```
-poetry run pytest -v
-```
-
-If you only want to run tests against fake redis, without a real redis::
-
-```
-poetry run pytest -m fake
-```
-
-Because this module is attempting to provide the same interface as `redis-py`,
-the python bindings to redis, a reasonable way to test this to to take each
-unittest and run it against a real redis server. fakeredis and the real redis
-server should give the same result. To run tests against a real redis instance
-instead::
-
-```
-poetry run pytest -m real
-```
-
-If redis is not running and you try to run tests against a real redis server,
-these tests will have a result of 's' for skipped.
-
-There are some tests that test redis blocking operations that are somewhat
-slow. If you want to skip these tests during day to day development,
-they have all been tagged as 'slow' so you can skip them by running::
-
-```
-poetry run pytest -m "not slow"
-```
-
-# Contributing
-
-Contributions are welcome. Please see the
-[contributing guide](.github/CONTRIBUTING.md) for more details.
-The maintainer generally has very little time to work on fakeredis, so the
-best way to get a bug fixed is to contribute a pull request.
-
-If you'd like to help out, you can start with any of the issues
-labeled with `Help wanted`.
-
-# Alternatives
-
-Consider using [redislite](https://redislite.readthedocs.io/en/latest/) instead of fakeredis.
-It runs a real redis server and connects to it over a UNIX domain socket, so it will behave just like a real
-server. Another alternative is [birdisle](https://birdisle.readthedocs.io/en/latest/), which
-runs the redis code as a Python extension (no separate process), but which is currently unmaintained.
+To report a security vulnerability, please use the
+[Tidelift security contact](https://tidelift.com/security).
+Tidelift will coordinate the fix and disclosure.
diff --git a/REDIS_COMMANDS.md b/REDIS_COMMANDS.md
deleted file mode 100644
index d186c93..0000000
--- a/REDIS_COMMANDS.md
+++ /dev/null
@@ -1,430 +0,0 @@
------
-Here is a list of all redis [implemented commands](#implemented-commands) and a
-list of [unimplemented commands](#unimplemented-commands).
-
-# Implemented Commands
-### string
- * append
- * decr
- * decrby
- * get
- * getrange
- * getset
- * incr
- * incrby
- * incrbyfloat
- * mget
- * mset
- * msetnx
- * psetex
- * set
- * setex
- * setnx
- * setrange
- * strlen
- * substr
-
-### server
- * bgsave
- * dbsize
- * flushall
- * flushdb
- * lastsave
- * save
- * swapdb
- * time
-
-### bitmap
- * bitcount
- * getbit
- * setbit
-
-### list
- * blpop
- * brpop
- * brpoplpush
- * lindex
- * linsert
- * llen
- * lmove
- * lpop
- * lpush
- * lpushx
- * lrange
- * lrem
- * lset
- * ltrim
- * rpop
- * rpoplpush
- * rpush
- * rpushx
-
-### generic
- * del
- * dump
- * exists
- * expire
- * expireat
- * keys
- * move
- * persist
- * pexpire
- * pexpireat
- * pttl
- * randomkey
- * rename
- * renamenx
- * restore
- * scan
- * sort
- * ttl
- * type
- * unlink
-
-### transactions
- * discard
- * exec
- * multi
- * unwatch
- * watch
-
-### connection
- * echo
- * ping
- * select
-
-### scripting
- * eval
- * evalsha
- * script
- * script load
-
-### hash
- * hdel
- * hexists
- * hget
- * hgetall
- * hincrby
- * hincrbyfloat
- * hkeys
- * hlen
- * hmget
- * hmset
- * hscan
- * hset
- * hsetnx
- * hstrlen
- * hvals
-
-### hyperloglog
- * pfadd
- * pfcount
- * pfmerge
-
-### pubsub
- * psubscribe
- * publish
- * punsubscribe
- * subscribe
- * unsubscribe
-
-### set
- * sadd
- * scard
- * sdiff
- * sdiffstore
- * sinter
- * sinterstore
- * sismember
- * smembers
- * smismember
- * smove
- * spop
- * srandmember
- * srem
- * sscan
- * sunion
- * sunionstore
-
-### sorted-set
- * zadd
- * zcard
- * zcount
- * zincrby
- * zinterstore
- * zlexcount
- * zrange
- * zrangebylex
- * zrangebyscore
- * zrank
- * zrem
- * zremrangebylex
- * zremrangebyrank
- * zremrangebyscore
- * zrevrange
- * zrevrangebylex
- * zrevrangebyscore
- * zrevrank
- * zscan
- * zscore
- * zunionstore
-
-# Unimplemented Commands
-All of the redis commands are implemented in fakeredis with these exceptions:
-    
-### server
- * acl
- * acl cat
- * acl deluser
- * acl dryrun
- * acl genpass
- * acl getuser
- * acl help
- * acl list
- * acl load
- * acl log
- * acl save
- * acl setuser
- * acl users
- * acl whoami
- * bgrewriteaof
- * command
- * command count
- * command docs
- * command getkeys
- * command getkeysandflags
- * command help
- * command info
- * command list
- * config
- * config get
- * config help
- * config resetstat
- * config rewrite
- * config set
- * debug
- * failover
- * info
- * latency
- * latency doctor
- * latency graph
- * latency help
- * latency histogram
- * latency history
- * latency latest
- * latency reset
- * lolwut
- * memory
- * memory doctor
- * memory help
- * memory malloc-stats
- * memory purge
- * memory stats
- * memory usage
- * module
- * module help
- * module list
- * module load
- * module loadex
- * module unload
- * monitor
- * psync
- * replconf
- * replicaof
- * restore-asking
- * role
- * shutdown
- * slaveof
- * slowlog
- * slowlog get
- * slowlog help
- * slowlog len
- * slowlog reset
- * sync
-
-### cluster
- * asking
- * cluster
- * cluster addslots
- * cluster addslotsrange
- * cluster bumpepoch
- * cluster count-failure-reports
- * cluster countkeysinslot
- * cluster delslots
- * cluster delslotsrange
- * cluster failover
- * cluster flushslots
- * cluster forget
- * cluster getkeysinslot
- * cluster help
- * cluster info
- * cluster keyslot
- * cluster links
- * cluster meet
- * cluster myid
- * cluster nodes
- * cluster replicas
- * cluster replicate
- * cluster reset
- * cluster saveconfig
- * cluster set-config-epoch
- * cluster setslot
- * cluster shards
- * cluster slaves
- * cluster slots
- * readonly
- * readwrite
-
-### connection
- * auth
- * client
- * client caching
- * client getname
- * client getredir
- * client help
- * client id
- * client info
- * client kill
- * client list
- * client no-evict
- * client pause
- * client reply
- * client setname
- * client tracking
- * client trackinginfo
- * client unblock
- * client unpause
- * hello
- * quit
- * reset
-
-### bitmap
- * bitfield
- * bitfield_ro
- * bitop
- * bitpos
-
-### list
- * blmove
- * blmpop
- * lmpop
- * lpos
-
-### sorted-set
- * bzmpop
- * bzpopmax
- * bzpopmin
- * zdiff
- * zdiffstore
- * zinter
- * zintercard
- * zmpop
- * zmscore
- * zpopmax
- * zpopmin
- * zrandmember
- * zrangestore
- * zunion
-
-### generic
- * copy
- * expiretime
- * migrate
- * object
- * object encoding
- * object freq
- * object help
- * object idletime
- * object refcount
- * pexpiretime
- * sort_ro
- * touch
- * wait
-
-### scripting
- * evalsha_ro
- * eval_ro
- * fcall
- * fcall_ro
- * function
- * function delete
- * function dump
- * function flush
- * function help
- * function kill
- * function list
- * function load
- * function restore
- * function stats
- * script debug
- * script exists
- * script flush
- * script help
- * script kill
-
-### geo
- * geoadd
- * geodist
- * geohash
- * geopos
- * georadius
- * georadiusbymember
- * georadiusbymember_ro
- * georadius_ro
- * geosearch
- * geosearchstore
-
-### string
- * getdel
- * getex
- * lcs
-
-### hash
- * hrandfield
-
-### hyperloglog
- * pfdebug
- * pfselftest
-
-### pubsub
- * pubsub
- * pubsub channels
- * pubsub help
- * pubsub numpat
- * pubsub numsub
- * pubsub shardchannels
- * pubsub shardnumsub
- * spublish
- * ssubscribe
- * sunsubscribe
-
-### set
- * sintercard
-
-### stream
- * xack
- * xadd
- * xautoclaim
- * xclaim
- * xdel
- * xgroup
- * xgroup create
- * xgroup createconsumer
- * xgroup delconsumer
- * xgroup destroy
- * xgroup help
- * xgroup setid
- * xinfo
- * xinfo consumers
- * xinfo groups
- * xinfo help
- * xinfo stream
- * xlen
- * xpending
- * xrange
- * xread
- * xreadgroup
- * xrevrange
- * xsetid
- * xtrim
-
diff --git a/SECURITY.md b/SECURITY.md
new file mode 100644
index 0000000..05a2b6e
--- /dev/null
+++ b/SECURITY.md
@@ -0,0 +1,17 @@
+# Security Policy
+
+## Supported Versions
+
+The following versions of fakeredis are
+currently supported with security updates.
+
+| Version | Supported          |
+| ------- | ------------------ |
+| 2.11.x  | :white_check_mark: |
+| 1.10.x  | :white_check_mark: |
+
+## Reporting a Vulnerability
+
+To report a security vulnerability, please use the Tidelift security contact. 
+Tidelift will coordinate the fix and disclosure.
+
diff --git a/debian/changelog b/debian/changelog
index 96cdfc4..13b1e32 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+python-fakeredis (2.13.0-1) UNRELEASED; urgency=low
+
+  * New upstream release.
+
+ -- Debian Janitor <janitor@jelmer.uk>  Mon, 05 Jun 2023 03:59:02 -0000
+
 python-fakeredis (1.9.0-0.1) unstable; urgency=medium
 
   [ Paul Gevers ]
diff --git a/debian/patches/skip-flaky-test.patch b/debian/patches/skip-flaky-test.patch
index cf81a8b..d356846 100644
--- a/debian/patches/skip-flaky-test.patch
+++ b/debian/patches/skip-flaky-test.patch
@@ -6,11 +6,11 @@ Subject: Skip flaky test
  test/test_hypothesis.py | 1 +
  1 file changed, 1 insertion(+)
 
-Index: python-fakeredis/test/test_hypothesis.py
+Index: python-fakeredis.git/test/test_hypothesis.py
 ===================================================================
---- python-fakeredis.orig/test/test_hypothesis.py
-+++ python-fakeredis/test/test_hypothesis.py
-@@ -628,6 +628,7 @@ def mutated_commands(commands):
+--- python-fakeredis.git.orig/test/test_hypothesis.py
++++ python-fakeredis.git/test/test_hypothesis.py
+@@ -630,6 +630,7 @@ def mutated_commands(commands):
          | swap_args(x))
  
  
diff --git a/docs/about/changelog.md b/docs/about/changelog.md
new file mode 100644
index 0000000..2445a69
--- /dev/null
+++ b/docs/about/changelog.md
@@ -0,0 +1,434 @@
+---
+title: Change log
+description: Change log of all fakeredis releases
+
+---
+
+## Next release
+
+## v2.13.0
+
+### 🧰 Bug Fixes
+
+- Fixed xadd timestamp (fixes #151) (#152)
+- Implement XDEL #153
+
+### 🧰 Maintenance
+
+- Improve test code
+- Fix reported security issue
+
+## v2.12.1
+
+### 🧰 Bug Fixes
+
+- Add support for `Connection.read_response` arguments used in redis-py 4.5.5 and 5.0.0
+- Adding state for scan commands (#99)
+
+### 🧰 Maintenance
+
+- Improved documentation (added async sample, etc.)
+- Add redis-py 5.0.0b3 to GitHub workflow
+
+## v2.12.0
+
+### 🚀 Features
+
+- Implement `XREAD` #147
+
+## v2.11.2
+
+### 🧰 Bug Fixes
+
+- Unique FakeServer when no connection params are provided (#142)
+
+## v2.11.1
+
+### 🧰 Maintenance
+
+- Minor fixes supporting multiple connections
+- Update documentation
+
+## v2.11.0
+
+### 🚀 Features
+
+- connection parameters awareness:
+  Creating multiple clients with the same connection parameters will result in
+  the same server data structure.
+
+### 🧰 Bug Fixes
+
+- Fix creating fakeredis.aioredis using url with user/password (#139)
+
+## v2.10.3
+
+### 🧰 Maintenance
+
+- Support for redis-py 5.0.0b1
+- Include tests in sdist (#133)
+
+### 🐛 Bug Fixes
+
+- Fix import used in GenericCommandsMixin.randomkey (#135)
+
+## v2.10.2
+
+### 🐛 Bug Fixes
+
+- Fix async_timeout usage on py3.11 (#132)
+
+## v2.10.1
+
+### 🐛 Bug Fixes
+
+- Enable testing django-cache using `FakeConnection`.
+
+## v2.10.0
+
+### 🚀 Features
+
+- All geo commands implemented
+
+## v2.9.2
+
+### 🐛 Bug Fixes
+
+- Fix bug for `xrange`
+
+## v2.9.1
+
+### 🐛 Bug Fixes
+
+- Fix bug for `xrevrange`
+
+## v2.9.0
+
+### 🚀 Features
+
+- Implement `XTRIM`
+- Add support for `MAXLEN`, `MAXID`, `LIMIT` arguments for `XADD` command
+- Add support for `ZRANGE` arguments for `ZRANGE` command [#127](https://github.com/cunla/fakeredis-py/issues/127)
+
+### 🧰 Maintenance
+
+- Relax python version requirement #128
+
+## v2.8.0
+
+### 🚀 Features
+
+- Support for redis-py 4.5.0 [#125](https://github.com/cunla/fakeredis-py/issues/125)
+
+### 🐛 Bug Fixes
+
+- Fix import error for redis-py v3+ [#121](https://github.com/cunla/fakeredis-py/issues/121)
+
+## v2.7.1
+
+### 🐛 Bug Fixes
+
+- Fix import error for NoneType #527
+
+## v2.7.0
+
+### 🚀 Features
+
+- Implement `JSON.ARRINDEX`, `JSON.OBJLEN`, `JSON.OBJKEYS` ,
+  `JSON.ARRPOP`, `JSON.ARRTRIM`, `JSON.NUMINCRBY`, `JSON.NUMMULTBY`,
+  `XADD`, `XLEN`, `XRANGE`, `XREVRANGE`
+
+### 🧰 Maintenance
+
+- Improve json commands implementation.
+- Improve commands documentation.
+
+## v2.6.0
+
+### 🚀 Features
+
+- Implement `JSON.TYPE`, `JSON.ARRLEN` and `JSON.ARRAPPEND`
+
+### 🐛 Bug Fixes
+
+- Fix encoding of None (#118)
+
+### 🧰 Maintenance
+
+- Start skeleton for streams commands in `streams_mixin.py` and `test_streams_commands.py`
+- Start migrating documentation to https://fakeredis.readthedocs.io/
+
+**Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v2.5.0...v2.6.0
+
+## v2.5.0
+
+#### 🚀 Features
+
+- Implement support for `BITPOS` (bitmap command) (#112)
+
+#### 🐛 Bug Fixes
+
+- Fix json mget when dict is returned (#114)
+- fix: properly export (#116)
+
+#### 🧰 Maintenance
+
+- Extract param handling (#113)
+
+#### Contributors
+
+We'd like to thank all the contributors who worked on this release!
+
+@Meemaw, @Pichlerdom
+
+**Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v2.4.0...v2.5.0
+
+## v2.4.0
+
+#### 🚀 Features
+
+- Implement `LCS` (#111), `BITOP` (#110)
+
+#### 🐛 Bug Fixes
+
+- Fix bug checking type in scan\_iter (#109)
+
+**Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v2.3.0...v2.4.0
+
+## v2.3.0
+
+#### 🚀 Features
+
+- Implement `GETEX` (#102)
+- Implement support for `JSON.STRAPPEND` (json command) (#98)
+- Implement `JSON.STRLEN`, `JSON.TOGGLE` and fix bugs with `JSON.DEL` (#96)
+- Implement `PUBSUB CHANNELS`, `PUBSUB NUMSUB`
+
+#### 🐛 Bug Fixes
+
+- ZADD with XX \& GT allows updates with lower scores (#105)
+
+**Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v2.2.0...v2.3.0
+
+## v2.2.0
+
+#### 🚀 Features
+
+- Implement `JSON.CLEAR` (#87)
+- Support for [redis-py v4.4.0](https://github.com/redis/redis-py/releases/tag/v4.4.0)
+
+#### 🧰 Maintenance
+
+- Implement script to create issues for missing commands
+- Remove checking for deprecated redis-py versions in tests
+
+**Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v2.1.0...v2.2.0
+
+## v2.1.0
+
+#### 🚀 Features
+
+- Implement json.mget (#85)
+- Initial json module support - `JSON.GET`, `JSON.SET` and `JSON.DEL` (#80)
+
+#### 🐛 Bug Fixes
+
+- fix: add nowait for asyncio disconnect (#76)
+
+#### 🧰 Maintenance
+
+- Refactor how commands are registered (#79)
+- Refactor tests from redispy4\_plus (#77)
+
+#### Contributors
+
+We'd like to thank all the contributors who worked on this release!
+
+@hyeongguen-song, @the-wondersmith
+
+**Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v2.0.0...v2.1.0
+
+## v2.0.0
+
+#### 🚀 Breaking changes
+
+- Remove support for aioredis separate from redis-py (redis-py versions 4.1.2 and below). (#65)
+
+#### 🚀 Features
+
+- Add support for redis-py v4.4rc4 (#73)
+- Add mypy support  (#74)
+
+#### 🧰 Maintenance
+
+- Separate commands to mixins (#71)
+- Use release-drafter
+- Update GitHub workflows
+
+**Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v1.10.1...v2.0.0
+
+## v1.10.1
+
+#### What's Changed
+
+* Implement support for `zmscore` by @the-wondersmith in [#67](https://github.com/cunla/fakeredis-py/pull/67)
+
+#### New Contributors
+
+* @the-wondersmith made their first contribution in [#67](https://github.com/cunla/fakeredis-py/pull/67)
+
+**Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v1.10.0...v1.10.1
+
+## v1.10.0
+
+#### What's Changed
+
+* implement `GETDEL` and `SINTERCARD` support in [#57](https://github.com/cunla/fakeredis-py/pull/57)
+* Test get float-type behavior in [#59](https://github.com/cunla/fakeredis-py/pull/59)
+* Implement `BZPOPMIN`/`BZPOPMAX` support in [#60](https://github.com/cunla/fakeredis-py/pull/60)
+
+**Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v1.9.4...v1.10.0
+
+## v1.9.4
+
+### What's Changed
+
+* Separate LUA support to a different file in [#55](https://github.com/cunla/fakeredis-py/pull/55)
+  **Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v1.9.3...v1.9.4
+
+## v1.9.3
+
+### Changed
+
+* Removed python-six dependency
+
+**Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v1.9.2...v1.9.3
+
+## v1.9.2
+
+#### What's Changed
+
+* zadd support for GT/LT in [#49](https://github.com/cunla/fakeredis-py/pull/49)
+* Remove six dependency in [#51](https://github.com/cunla/fakeredis-py/pull/51)
+* Add host to `conn_pool_args`  in [#51](https://github.com/cunla/fakeredis-py/pull/51)
+
+**Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v1.9.1...v1.9.2
+
+## v1.9.1
+
+#### What's Changed
+
+* Zrange byscore in [#44](https://github.com/cunla/fakeredis-py/pull/44)
+* Expire options in [#46](https://github.com/cunla/fakeredis-py/pull/46)
+
+**Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v1.9.0...v1.9.1
+
+## v1.9.0
+
+#### What's Changed
+
+* Enable redis7 support in [#42](https://github.com/cunla/fakeredis-py/pull/42)
+
+**Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v1.8.2...v1.9.0
+
+## v1.8.2
+
+#### What's Changed
+
+* Update publish GitHub action to create an issue on failure by @terencehonles
+  in [#33](https://github.com/dsoftwareinc/fakeredis-py/pull/33)
+* Add release draft job in [#37](https://github.com/dsoftwareinc/fakeredis-py/pull/37)
+* Fix input and output type of cursors for SCAN commands by @tohin
+  in [#40](https://github.com/dsoftwareinc/fakeredis-py/pull/40)
+* Fix passing params in args - Fix #36 in [#41](https://github.com/dsoftwareinc/fakeredis-py/pull/41)
+
+#### New Contributors
+
+* @tohin made their first contribution in [#40](https://github.com/dsoftwareinc/fakeredis-py/pull/40)
+
+**Full Changelog**: https://github.com/dsoftwareinc/fakeredis-py/compare/v1.8.1...v1.8.2
+
+## v1.8.1
+
+#### What's Changed
+
+* fix: allow redis 4.3.* by @terencehonles in [#30](https://github.com/dsoftwareinc/fakeredis-py/pull/30)
+
+#### New Contributors
+
+* @terencehonles made their first contribution in [#30](https://github.com/dsoftwareinc/fakeredis-py/pull/30)
+
+**Full Changelog**: https://github.com/dsoftwareinc/fakeredis-py/compare/v1.8...v1.8.1
+
+## v1.8
+
+#### What's Changed
+
+* Fix handling url with username and password in [#27](https://github.com/dsoftwareinc/fakeredis-py/pull/27)
+* Refactor tests in [#28](https://github.com/dsoftwareinc/fakeredis-py/pull/28)
+
+**Full Changelog**: https://github.com/dsoftwareinc/fakeredis-py/compare/v1.7.6.1...v1.8
+
+## v1.7.6.1
+
+#### What's Changed
+
+* 23 - Re-add dependencies lost during switch to poetry by @xkortex
+  in [#26](https://github.com/dsoftwareinc/fakeredis-py/pull/26)
+
+#### New Contributors
+
+* @xkortex made their first contribution in https://github.com/dsoftwareinc/fakeredis-py/pull/26
+
+**Full Changelog**: https://github.com/dsoftwareinc/fakeredis-py/compare/v1.7.6...v1.7.6.1
+
+## v1.7.6
+
+#### Added
+
+* add LMOVE operation by @BGroever in [#11](https://github.com/dsoftwareinc/fakeredis-py/pull/11)
+* Add SMISMEMBER command by @OlegZv in [#20](https://github.com/dsoftwareinc/fakeredis-py/pull/20)
+
+#### Removed
+
+* Remove Python 3.7 by @nzw0301 in [#8](https://github.com/dsoftwareinc/fakeredis-py/pull/8)
+
+#### What's Changed
+
+* fix: work with redis.asyncio by @zhongkechen in [#10](https://github.com/dsoftwareinc/fakeredis-py/pull/10)
+* Migrate to poetry in [#12](https://github.com/dsoftwareinc/fakeredis-py/pull/12)
+* Create annotation for redis4+ tests in [#14](https://github.com/dsoftwareinc/fakeredis-py/pull/14)
+* Make aioredis and lupa optional dependencies in [#16](https://github.com/dsoftwareinc/fakeredis-py/pull/16)
+* Remove aioredis requirement if redis-py 4.2+ by @ikornaselur
+  in [#19](https://github.com/dsoftwareinc/fakeredis-py/pull/19)
+
+#### New Contributors
+
+* @nzw0301 made their first contribution in [#8](https://github.com/dsoftwareinc/fakeredis-py/pull/8)
+* @zhongkechen made their first contribution in [#10](https://github.com/dsoftwareinc/fakeredis-py/pull/10)
+* @BGroever made their first contribution in [#11](https://github.com/dsoftwareinc/fakeredis-py/pull/11)
+* @ikornaselur made their first contribution in [#19](https://github.com/dsoftwareinc/fakeredis-py/pull/19)
+* @OlegZv made their first contribution in [#20](https://github.com/dsoftwareinc/fakeredis-py/pull/20)
+
+**Full Changelog**: https://github.com/dsoftwareinc/fakeredis-py/compare/v1.7.5...v1.7.6
+
+#### Thanks to our sponsors this month
+
+- @beatgeek
+
+## v1.7.5
+
+#### What's Changed
+
+* Fix python3.8 redis4.2+ issue in [#6](https://github.com/dsoftwareinc/fakeredis-py/pull/6)
+
+**Full Changelog**: https://github.com/dsoftwareinc/fakeredis-py/compare/v1.7.4...v1.7.5
+
+## v1.7.4
+
+#### What's Changed
+
+* Support for python3.8 in [#1](https://github.com/dsoftwareinc/fakeredis-py/pull/1)
+* Feature/publish action in [#2](https://github.com/dsoftwareinc/fakeredis-py/pull/2)
+
+**Full Changelog**: https://github.com/dsoftwareinc/fakeredis-py/compare/1.7.1...v1.7.4
diff --git a/docs/about/contributing.md b/docs/about/contributing.md
new file mode 100644
index 0000000..1f8806f
--- /dev/null
+++ b/docs/about/contributing.md
@@ -0,0 +1,179 @@
+---
+markdown_extensions.toc.toc_depth: 3
+---
+# Contributing to fakeredis
+
+First off, thanks for taking the time to contribute! ❤️
+
+All types of contributions are encouraged and valued.
+See the [Table of Contents](#table-of-contents) for different ways to help and details about how this project handles them. Please make sure to read the relevant section before making your contribution. It will make it a lot easier for us maintainers and smooth out the experience for all involved. The community looks forward to your contributions. 🎉
+
+> And if you like the project, but just don't have time to contribute, that's fine. 
+> There are other easy ways to support the project and show your appreciation, which we would also be very happy about:
+>
+> - Star the project
+> - Tweet about it
+> - Refer this project in your project's readme
+> - Mention the project at local meetups and tell your friends/colleagues
+
+
+## Code of Conduct
+
+This project and everyone participating in it is governed by the
+[fakeredis Code of Conduct](https://github.com/cunla/fakeredis-py/blob/main/CODE_OF_CONDUCT.md).
+By participating, you are expected to uphold this code. Please report unacceptable behavior
+to <daniel.maruani@gmail.com>.
+
+
+## I Have a Question
+
+> If you want to ask a question, we assume that you have read the available [Documentation](https://github.com/cunla/fakeredis-py).
+
+Before you ask a question, it is best to search for existing [Issues](https://github.com/cunla/fakeredis-py/issues) that might help you. In case you have found a suitable issue and still need clarification, you can write your question in this issue. It is also advisable to search the internet for answers first.
+
+If you then still feel the need to ask a question and need clarification, we recommend the following:
+
+- Open an [Issue](https://github.com/cunla/fakeredis-py/issues/new).
+- Provide as much context as you can about what you're running into.
+- Provide project and platform versions (Python, fakeredis, redis-py, OS), depending on what seems relevant.
+
+We will then take care of the issue as soon as possible.
+
+<!--
+You might want to create a separate issue tag for questions and include it in this description. People should then tag their issues accordingly.
+
+Depending on how large the project is, you may want to outsource the questioning, e.g. to Stack Overflow or Gitter. You may add additional contact and information possibilities:
+- IRC
+- Slack
+- Gitter
+- Stack Overflow tag
+- Blog
+- FAQ
+- Roadmap
+- E-Mail List
+- Forum
+-->
+
+## I Want To Contribute
+
+> ### Legal Notice <!-- omit in toc -->
+> When contributing to this project, you must agree that you have authored 100% of the content, that you have the necessary rights to the content and that the content you contribute may be provided under the project license.
+
+## Reporting Bugs
+
+### Before Submitting a Bug Report
+
+A good bug report shouldn't leave others needing to chase you up for more information.
+Therefore, we ask you to investigate carefully, collect information and describe the issue in detail in your report.
+Please complete the following steps in advance to help us fix any potential bug as fast as possible.
+
+- Make sure that you are using the latest version.
+- Determine if your bug is really a bug and not an error on your side e.g. using incompatible
+  environment components/versions (Make sure that you have read the [documentation](https://github.com/cunla/fakeredis-py).
+  If you are looking for support, you might want to check [this section](#i-have-a-question)).
+- To see if other users have experienced (and potentially already solved) the same issue you are having,
+  check if there is not already a bug report existing for your bug or error in the [bug tracker](https://github.com/cunla/fakeredis-py/issues?q=label%3Abug).
+- Also make sure to search the internet (including Stack Overflow) to see if users outside the GitHub
+  community have discussed the issue.
+- Collect information about the bug:
+  - Stack trace (Traceback)
+  - OS, Platform and Version (Windows, Linux, macOS, x86, ARM)
+  - Version of the interpreter, compiler, SDK, runtime environment, package manager, depending on what seems relevant.
+  - Possibly your input and the output
+  - Can you reliably reproduce the issue? And can you also reproduce it with older versions?
+
+### How Do I Submit a Good Bug Report?
+
+> You must never report security related issues, vulnerabilities or bugs including sensitive information
+> to the issue tracker, or elsewhere in public.
+> Instead sensitive bugs must be sent by email to <daniel.maruani@gmail.com>.
+
+We use GitHub issues to track bugs and errors. If you run into an issue with the project:
+
+- Open an [Issue](https://github.com/cunla/fakeredis-py/issues/new).
+  (Since we can't be sure at this point whether it is a bug or not, we ask you not to talk about a bug yet and
+  not to label the issue.)
+- Follow the issue template and provide as much context as possible and describe the *reproduction steps* that someone else can follow to recreate the issue on their own.
+  This usually includes your code.
+  For good bug reports you should isolate the problem and create a reduced test case.
+- Provide the information you collected in the previous section.
+
+Once it's filed:
+
+- The project team will label the issue accordingly.
+- A team member will try to reproduce the issue with your provided steps. If there are no reproduction steps or no obvious way to reproduce the issue, the team will ask you for those steps and mark the issue as `needs-repro`. Bugs with the `needs-repro` tag will not be addressed until they are reproduced.
+- If the team is able to reproduce the issue, it will be marked `needs-fix`, as well as possibly other tags (such as `critical`), and the issue will be left to be [implemented by someone](#your-first-code-contribution).
+
+<!-- You might want to create an issue template for bugs and errors that can be used as a guide and that defines the structure of the information to be included. If you do so, reference it here in the description. -->
+
+
+## Suggesting Enhancements
+
+This section guides you through submitting an enhancement suggestion for fakeredis, **including completely new features and minor improvements to existing functionality**. Following these guidelines will help maintainers and the community to understand your suggestion and find related suggestions.
+
+### Before Submitting an Enhancement
+
+- Make sure that you are using the latest version.
+- Read the [documentation](https://github.com/cunla/fakeredis-py) carefully and find out if the functionality is already covered, maybe by an individual configuration.
+- Perform a [search](https://github.com/cunla/fakeredis-py/issues) to see if the enhancement has already been suggested. If it has, add a comment to the existing issue instead of opening a new one.
+- Find out whether your idea fits with the scope and aims of the project. It's up to you to make a strong case to convince the project's developers of the merits of this feature. Keep in mind that we want features that will be useful to the majority of our users and not just a small subset. If you're just targeting a minority of users, consider writing an add-on/plugin library.
+
+
+### How Do I Submit a Good Enhancement Suggestion?
+
+Enhancement suggestions are tracked as [GitHub issues](https://github.com/cunla/fakeredis-py/issues).
+
+- Use a **clear and descriptive title** for the issue to identify the suggestion.
+- Provide a **step-by-step description of the suggested enhancement** in as many details as possible.
+- **Describe the current behavior** and **explain which behavior you expected to see instead** and why. At this point you can also tell which alternatives do not work for you.
+- You may want to **include screenshots and animated GIFs** which help you demonstrate the steps or point out the part which the suggestion is related to. You can use [this tool](https://www.cockos.com/licecap/) to record GIFs on macOS and Windows, and [this tool](https://github.com/colinkeenan/silentcast) or [this tool](https://github.com/GNOME/byzanz) on Linux. <!-- this should only be included if the project has a GUI -->
+- **Explain why this enhancement would be useful** to most fakeredis users. You may also want to point out the other projects that solved it better and which could serve as inspiration.
+
+<!-- You might want to create an issue template for enhancement suggestions that can be used as a guide and that defines the structure of the information to be included. If you do so, reference it here in the description. -->
+
+## Your First Code Contribution
+Unsure where to begin contributing? You can start by looking through
+[help-wanted issues](https://github.com/cunla/fakeredis-py/labels/help%20wanted).
+
+Never contributed to open source before? Here are a couple of friendly
+tutorials:
+
+-   <http://makeapullrequest.com/>
+-   <http://www.firsttimersonly.com/>
+
+### Getting started
+- Create your own fork of the repository.
+- Make your changes in your fork.
+- Set up poetry: `pip install poetry`.
+- Let poetry install everything required for a local environment: `poetry install`.
+- To run all tests, use: `poetry run pytest -v`.
+- Note: In order to run the tests, a real redis server should be running.
+  The tests compare the results of each command between fakeredis and a real redis server.
+  - You can use `docker-compose up redis6` or `docker-compose up redis7` to run redis.
+- Run tests with coverage using `poetry run pytest -v --cov=fakeredis --cov-branch`,
+  and then run `coverage report`.
+
+## Improving The Documentation
+- Create your own fork of the repository.
+- Make your changes in your fork, probably in `README.md`.
+- Create a pull request with the changes.
+
+## Styleguides
+### Commit Messages
+Taken from [The seven rules of a great Git commit message](https://cbea.ms/git-commit/):
+
+1. Separate subject from body with a blank line
+2. Limit the subject line to 50 characters
+3. Capitalize the subject line
+4. Do not end the subject line with a period
+5. Use the imperative mood in the subject line
+6. Wrap the body at 72 characters
+7. Use the body to explain what and why vs. how
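+
+As an illustration, a commit message following these rules might look like this (a hypothetical example):
+
+```text
+Add support for the GETDEL command
+
+GETDEL was introduced in redis 6.2 and is commonly used in tests.
+Implement it in the string commands mixin and add tests comparing
+the results against a real redis server.
+```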
+
+## Join The Project Team
+If you wish to be added to the project team as a collaborator, please send
+a message to daniel.maruani@gmail.com with an explanation.
+
+<!-- omit in toc -->
+## Attribution
+This guide is based on the **contributing-gen**. [Make your own](https://github.com/bttger/contributing-gen)!
diff --git a/docs/about/license.md b/docs/about/license.md
new file mode 100644
index 0000000..384700b
--- /dev/null
+++ b/docs/about/license.md
@@ -0,0 +1,35 @@
+# License
+
+The legal stuff.
+
+---
+
+BSD 3-Clause License
+
+Copyright (c) 2022-, Daniel Moran, 2017-2018, Bruce Merry, 2011 James Saryerwinnie,
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this
+   list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice,
+   this list of conditions and the following disclaimer in the documentation
+   and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its
+   contributors may be used to endorse or promote products derived from
+   this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/docs/guides/implement-command.md b/docs/guides/implement-command.md
new file mode 100644
index 0000000..abf4bd2
--- /dev/null
+++ b/docs/guides/implement-command.md
@@ -0,0 +1,107 @@
+# Implementing support for a command
+
+Support for a new command should be added in the `FakeSocket` class (in `_fakesocket.py`) by creating the method
+and decorating it with `@command`, which specifies the command syntax (you can use existing samples in the file).
+
+For example:
+
+```python
+class FakeSocket(BaseFakeSocket, FakeLuaSocket):
+    # ...
+    @command(name='zscore', fixed=(Key(ZSet), bytes), repeat=(), flags=[])
+    def zscore(self, key, member):
+        try:
+            return self._encodefloat(key.value[member], False)
+        except KeyError:
+            return None
+```
+
+## Parsing command arguments
+The `extract_args` method helps extract arguments from `*args`.
+It determines which of the expected arguments appear in the actual arguments and, where relevant, their values.
+
+Parameters `extract_args` expects:
+- `actual_args`
+    The actual arguments to parse.
+- `expected`
+    Arguments to look for; see the explanation below.
+- `error_on_unexpected` (default: True)
+    Should an error be raised when `actual_args` contains an unexpected argument?
+- `left_from_first_unexpected` (default: True)
+    Should parsing stop once an unexpected argument is reached in `actual_args`?
+
+It returns two lists:
+- List of values for expected arguments.
+- List of remaining args.
+
+### Expected argument structure:
+- If an expected argument has only a name, it will be parsed as a boolean
+  (whether it exists in the actual `*args` or not).
+- In order to parse a numerical value following the expected argument,
+  a `+` prefix is needed, e.g., `+px` will parse `args=('px', '1')` as `px=1`
+- In order to parse a string value following the expected argument,
+  a `*` prefix is needed, e.g., `*type` will parse `args=('type', 'number')` as `type='number'`
+- You can have more than one `+`/`*`, e.g., `++limit` will parse `args=('limit','1','10')`
+  as `limit=(1,10)`
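+
+As an illustration of this convention, here is a simplified sketch (hypothetical code, not fakeredis's actual `extract_args` implementation) that models the meaning of the `+` and `*` prefixes:
+
+```python
+# Simplified sketch of the argument-extraction convention described above.
+# NOT the real fakeredis extract_args; it only illustrates how the `+`
+# (numeric) and `*` (string) prefixes are interpreted.
+def extract_args_sketch(actual_args, expected):
+    results, args = [], list(actual_args)
+    for exp in expected:
+        prefixes = len(exp) - len(exp.lstrip('+*'))
+        name = exp[prefixes:]
+        if name not in args:
+            # Missing argument: False for flags, None for valued arguments.
+            results.append(False if prefixes == 0 else None)
+            continue
+        i = args.index(name)
+        if prefixes == 0:
+            results.append(True)  # bare name => boolean flag
+        else:
+            values = [int(args[i + n + 1]) if kind == '+' else args[i + n + 1]
+                      for n, kind in enumerate(exp[:prefixes])]
+            results.append(values[0] if prefixes == 1 else tuple(values))
+        del args[i:i + prefixes + 1]
+    return results, args
+```
+
+For example, `extract_args_sketch(('px', '1'), ('+px',))` yields `([1], [])`, and `extract_args_sketch(('limit', '1', '10'), ('++limit',))` yields `([(1, 10)], [])`.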
+
+## How to use `@command` decorator
+
+The `@command` decorator registers the method as a redis command and defines its accepted format.
+It creates a `Signature` instance for the command. Whenever the command is triggered, the `Signature.apply(..)`
+method is called to check the validity of the syntax and analyze the command arguments.
+
+By default, it takes the name of the method as the command name.
+
+If the method implements a subcommand (e.g., `SCRIPT LOAD`), a Redis module command (e.g., `JSON.GET`),
+or a command whose name is a Python reserved word and cannot be used as the method name (e.g., `EXEC`),
+then you can supply the name parameter explicitly.
+
+If the implemented command requires certain arguments, they can be supplied in the first parameter as a tuple.
+When receiving the command through the socket, the bytes will be converted to the argument types
+supplied, or remain as `bytes`.
+
+Argument types (All in `_commands.py`):
+
+- `Key(KeyType)` - Will get from the DB the key and validate its value is of `KeyType` (if `KeyType` is supplied).
+  It will generate a `CommandItem` from it which provides access to the database value.
+- `Int` - Decode the `bytes` to `int` and vice versa.
+- `DbIndex`/`BitOffset`/`BitValue`/`Timeout` - Basically the same behavior as `Int`, but with different messages when
+  encoding/decoding fails.
+- `Hash` - A dictionary; usually describes the type of value stored in a key, as in `Key(Hash)`.
+- `Float` - Encodes/decodes `bytes` <-> `float`.
+- `SortFloat` - Similar to `Float`, with different error messages.
+- `ScoreTest` - Argument converter for sorted set score endpoints.
+- `StringTest` - Argument converter for sorted set endpoints (lex).
+- `ZSet` - Sorted Set.
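+
+For instance, a minimal sketch in the spirit of `Int` (hypothetical code; the real classes live in `_commands.py` and differ in detail) could look like this:
+
+```python
+# Hypothetical converter modelled on the description of Int above:
+# decode bytes -> int when parsing a command, encode int -> bytes for
+# the reply, failing with a redis-style message on bad input.
+class IntSketch:
+    DECODE_ERROR = "value is not an integer or out of range"
+
+    @classmethod
+    def decode(cls, value: bytes) -> int:
+        try:
+            return int(value)
+        except ValueError:
+            raise ValueError(cls.DECODE_ERROR)
+
+    @classmethod
+    def encode(cls, value: int) -> bytes:
+        return str(value).encode()
+```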
+
+## Implement a test for it
+
+There are multiple test scenarios, covering different versions of the redis server, redis-py, etc.
+The tests not only assert the validity of the output but also run the same test against a real redis server and compare
+the output to the real server's output.
+
+- Create tests in the relevant test file.
+- If support for the command was introduced in a certain version of redis-py
+  (see the [redis-py release notes](https://github.com/redis/redis-py/releases/tag/v4.3.4)), you can use the
+  decorator `@testtools.run_test_if_redispy_ver` on your tests. Example:
+
+```python
+@testtools.run_test_if_redispy_ver('above', '4.2.0')  # This will run for redis-py 4.2.0 or above.
+def test_expire_should_not_expire__when_no_expire_is_set(r):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    assert r.expire('foo', 1, xx=True) == 0
+```
+
+## Updating documentation
+
+Lastly, from the root of the project, run the script that regenerates the documentation for
+supported and unsupported commands:
+
+```bash
+python scripts/supported.py    
+```
+
+Include the changes in the `docs/` directory in your pull request.
+
diff --git a/docs/guides/test-case.md b/docs/guides/test-case.md
new file mode 100644
index 0000000..a847197
--- /dev/null
+++ b/docs/guides/test-case.md
@@ -0,0 +1,28 @@
+
+# Write a new test case
+
+There are multiple test scenarios, covering different versions of python, redis-py, the redis server, etc.
+The tests not only assert the validity of the expected output with FakeRedis but also compare it against a real redis
+server. That way, parity between real Redis and FakeRedis is ensured.
+
+To write a new test case for a command:
+
+- Determine which mixin the command belongs to and the test file for
+  the mixin (e.g., `string_mixin.py` => `test_string_commands.py`).
+- Tests should support python 3.7 and above.
+- Determine when support for the command was introduced:
+    - To limit the redis-server versions it will run on, use
+      `@pytest.mark.max_server(version)` and `@pytest.mark.min_server(version)`.
+    - To limit the redis-py version, use `@run_test_if_redispy_ver(above/below, version)`.
+- pytest will inject a redis connection to the argument `r` of the test.
+
+A sample test that runs for redis-py v4.2.0 and above, and redis-server 7.0 and above:
+
+```python
+@pytest.mark.min_server('7')
+@testtools.run_test_if_redispy_ver('above', '4.2.0')
+def test_expire_should_not_expire__when_no_expire_is_set(r):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    assert r.expire('foo', 1, xx=True) == 0
+```
diff --git a/docs/index.md b/docs/index.md
new file mode 100644
index 0000000..bacd248
--- /dev/null
+++ b/docs/index.md
@@ -0,0 +1,247 @@
+## fakeredis: A python implementation of redis server
+
+FakeRedis is a pure-Python implementation of the Redis key-value store.
+
+It enables running tests that require a redis server without an actual server.
+
+It provides enhanced versions of the redis-py Python bindings for Redis, which add the following functionality:
+a built-in Redis server that is automatically installed, configured and managed when the Redis bindings are used; a
+single server shared by multiple programs, or multiple independent servers. All the servers provided by
+FakeRedis support all Redis functionality, including advanced features such as RedisJson and GeoCommands.
+
+For a list of supported/unsupported redis commands, see [Supported commands](./redis-commands/implemented_commands.md).
+
+## Installation
+
+To install fakeredis-py, simply:
+
+```bash
+pip install fakeredis        ## No additional modules support
+
+pip install fakeredis[lua]   ## Support for LUA scripts
+
+pip install fakeredis[json]  ## Support for RedisJSON commands
+```
+
+## How to Use
+
+### General usage
+
+FakeRedis can imitate Redis server version 6.x or 7.x.
+If you do not specify the version, version 7 is used by default.
+
+The intent is for fakeredis to act as though you're talking to a real
+redis server. It does this by storing state internally.
+For example:
+
+```pycon
+>>> import fakeredis
+>>> r = fakeredis.FakeStrictRedis(version=6)
+>>> r.set('foo', 'bar')
+True
+>>> r.get('foo')
+'bar'
+>>> r.lpush('bar', 1)
+1
+>>> r.lpush('bar', 2)
+2
+>>> r.lrange('bar', 0, -1)
+[2, 1]
+```
+
+The state is stored in an instance of `FakeServer`. If one is not provided at
+construction, a new instance is automatically created for you, but you can
+explicitly create one to share state:
+
+```pycon
+>>> import fakeredis
+>>> server = fakeredis.FakeServer()
+>>> r1 = fakeredis.FakeStrictRedis(server=server)
+>>> r1.set('foo', 'bar')
+True
+>>> r2 = fakeredis.FakeStrictRedis(server=server)
+>>> r2.get('foo')
+'bar'
+>>> r2.set('bar', 'baz')
+True
+>>> r1.get('bar')
+'baz'
+>>> r2.get('bar')
+'baz'
+```
+
+It is also possible to mock connection errors, so you can effectively test
+your error handling. Simply set the connected attribute of the server to
+`False` after initialization.
+
+```pycon
+>>> import fakeredis
+>>> server = fakeredis.FakeServer()
+>>> server.connected = False
+>>> r = fakeredis.FakeStrictRedis(server=server)
+>>> r.set('foo', 'bar')
+ConnectionError: FakeRedis is emulating a connection error.
+>>> server.connected = True
+>>> r.set('foo', 'bar')
+True
+```
+
+Fakeredis implements the same interface as `redis-py`, the popular
+redis client for python, and models the responses of redis 6.x or 7.x.
+
+### async Redis
+
+The async redis client is supported. Instead of using `fakeredis.FakeRedis`, use `fakeredis.aioredis.FakeRedis`.
+
+```pycon
+>>> from fakeredis import aioredis
+>>> r1 = aioredis.FakeRedis()
+>>> await r1.set('foo', 'bar')
+True
+>>> await r1.get('foo')
+'bar'
+```
+
+### Use to test django cache
+
+Update your cache settings:
+
+```python
+from fakeredis import FakeConnection
+
+CACHES = {
+    'default': {
+        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
+        'LOCATION': '...',
+        'OPTIONS': {
+            'connection_class': FakeConnection
+        }
+    }
+}
+```
+
+You can use the django
+[`@override_settings` decorator](https://docs.djangoproject.com/en/4.1/topics/testing/tools/#django.test.override_settings)
+to apply these settings only in tests.
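+
+For example, a test case might apply the fake backend only for its own scope (a sketch; the `LOCATION` value is a placeholder and the test class is hypothetical):
+
+```python
+from django.test import TestCase, override_settings
+from fakeredis import FakeConnection
+
+@override_settings(CACHES={
+    'default': {
+        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
+        'LOCATION': 'redis://localhost:6379',  # placeholder, never contacted
+        'OPTIONS': {'connection_class': FakeConnection},
+    }
+})
+class CacheRoundtripTest(TestCase):
+    def test_roundtrip(self):
+        from django.core.cache import cache
+        cache.set('key', 'value')
+        self.assertEqual(cache.get('key'), 'value')
+```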
+
+### Use to test django-rq
+
+You need to override `django_rq.queues.get_redis_connection` with
+a method returning the same connection.
+
+```python
+from fakeredis import FakeRedisConnSingleton
+
+django_rq.queues.get_redis_connection = FakeRedisConnSingleton()
+```
+
+## Known Limitations
+
+Apart from unimplemented commands, there are a number of cases where fakeredis
+won't give identical results to real redis. The following are differences that
+are unlikely to ever be fixed; there are also differences that are fixable
+(such as commands that do not support all features) which should be filed as
+bugs in GitHub.
+
+- Hyperloglogs are implemented using sets underneath. This means that the
+  `type` command will return the wrong answer, you can't use `get` to retrieve
+  the encoded value, and counts will be slightly different (they will in fact be
+  exact).
+- When a command has multiple error conditions, such as operating on a key of
+  the wrong type while an integer argument is not well-formed, the choice of
+  error to return may not match redis.
+
+- The `incrbyfloat` and `hincrbyfloat` commands in redis use the C `long
+  double` type, which typically has more precision than Python's `float`
+  type.
+
+- Redis makes guarantees about the order in which clients blocked on blocking
+  commands are woken up. Fakeredis does not honour these guarantees.
+
+- Where redis contains bugs, fakeredis generally does not try to provide exact
+  bug-compatibility. It's not practical for fakeredis to try to match the set
+  of bugs in your specific version of redis.
+
+- There are a number of cases where the behaviour of redis is undefined, such
+  as the order of elements returned by set and hash commands. Fakeredis will
+  generally not produce the same results, and in Python versions before 3.6
+  may produce different results each time the process is re-run.
+
+- SCAN/ZSCAN/HSCAN/SSCAN will not necessarily iterate all items if items are
+  deleted or renamed during iteration. They also won't necessarily iterate in
+  the same chunk sizes or the same order as redis. This is aligned with redis behavior, as
+  can be seen in the test `test_scan_delete_key_while_scanning_should_not_returns_it_in_scan`.
+
+- DUMP/RESTORE will not return or expect data in the RDB format. Instead, the
+  `pickle` module is used to mimic an opaque and non-standard format.
+  **WARNING**: Do not use RESTORE with untrusted data, as a malicious pickle
+  can execute arbitrary code.
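+
+For example, a round-trip works within fakeredis, but the payload is a pickle rather than RDB data (this snippet assumes the fakeredis package is installed):
+
+```python
+import fakeredis
+
+r = fakeredis.FakeStrictRedis()
+r.set('foo', 'bar')
+payload = r.dump('foo')        # an opaque pickle, NOT redis RDB format
+r.restore('copy', 0, payload)  # ttl=0 means no expiration
+assert r.get('copy') == b'bar'
+```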
+
+## Local development environment
+
+To ensure parity with the real redis, there are a set of integration tests
+that mirror the unittests. For every unittest that is written, the same
+test is run against a real redis instance using a real redis-py client
+instance. In order to run these tests you must have a redis server running
+on localhost, port 6379 (the default settings). **WARNING**: the tests will
+completely wipe your database!
+
+First install poetry if you don't have it, and then install all the dependencies:
+
+```bash
+pip install poetry
+poetry install
+``` 
+
+To run all the tests:
+
+```bash
+poetry run pytest -v
+```
+
+If you only want to run tests against fake redis, without a real redis:
+
+```bash
+poetry run pytest -m fake
+```
+
+Because this module attempts to provide the same interface as `redis-py`,
+the python bindings to redis, a reasonable way to test it is to take each
+unittest and run it against a real redis server. fakeredis and the real redis
+server should give the same result. To run tests against a real redis instance
+instead:
+
+```bash
+poetry run pytest -m real
+```
+
+If redis is not running, and you try to run tests against a real redis server,
+these tests will have a result of 's' for skipped.
+
+Some tests exercise redis blocking operations and are somewhat slow.
+If you want to skip these tests during day-to-day development,
+they have all been tagged as 'slow', so you can skip them by running:
+
+```bash
+poetry run pytest -m "not slow"
+```
+
+## Contributing
+
+Contributions are welcome.
+You can contribute in many ways:
+open issues for bugs you found, implement a command that is not yet supported,
+implement a test for a scenario that is not covered yet, write a guide on how to use fakeredis, etc.
+
+Please see the [contributing guide](./about/contributing.md) for more details.
+If you'd like to help out, you can start with any of the issues labeled with `Help wanted`.
+
+There are guides on how to [implement a new command](#implementing-support-for-a-command) and
+how to [write new test cases](#write-a-new-test-case).
+
+New contribution guides are welcome.
+
+## Sponsor
+
+fakeredis-py is developed for free.
+
+You can support this project by becoming a sponsor using [this link](https://github.com/sponsors/cunla).
diff --git a/docs/redis-commands/Redis.md b/docs/redis-commands/Redis.md
new file mode 100644
index 0000000..f5f7b25
--- /dev/null
+++ b/docs/redis-commands/Redis.md
@@ -0,0 +1,1580 @@
+# Redis commands
+
+## server commands
+
+### [BGSAVE](https://redis.io/commands/bgsave/)
+
+Asynchronously saves the database(s) to disk.
+
+### [DBSIZE](https://redis.io/commands/dbsize/)
+
+Returns the number of keys in the database.
+
+### [FLUSHALL](https://redis.io/commands/flushall/)
+
+Removes all keys from all databases.
+
+### [FLUSHDB](https://redis.io/commands/flushdb/)
+
+Remove all keys from the current database.
+
+### [LASTSAVE](https://redis.io/commands/lastsave/)
+
+Returns the Unix timestamp of the last successful save to disk.
+
+### [SAVE](https://redis.io/commands/save/)
+
+Synchronously saves the database(s) to disk.
+
+### [SWAPDB](https://redis.io/commands/swapdb/)
+
+Swaps two Redis databases.
+
+### [TIME](https://redis.io/commands/time/)
+
+Returns the server time.
+
+
+### Unsupported server commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [ACL](https://redis.io/commands/acl/) <small>(not implemented)</small>
+
+A container for Access List Control commands.
+
+#### [ACL CAT](https://redis.io/commands/acl-cat/) <small>(not implemented)</small>
+
+Lists the ACL categories, or the commands inside a category.
+
+#### [ACL DELUSER](https://redis.io/commands/acl-deluser/) <small>(not implemented)</small>
+
+Deletes ACL users, and terminates their connections.
+
+#### [ACL DRYRUN](https://redis.io/commands/acl-dryrun/) <small>(not implemented)</small>
+
+Simulates the execution of a command by a user, without executing the command.
+
+#### [ACL GENPASS](https://redis.io/commands/acl-genpass/) <small>(not implemented)</small>
+
+Generates a pseudorandom, secure password that can be used to identify ACL users.
+
+#### [ACL GETUSER](https://redis.io/commands/acl-getuser/) <small>(not implemented)</small>
+
+Lists the ACL rules of a user.
+
+#### [ACL HELP](https://redis.io/commands/acl-help/) <small>(not implemented)</small>
+
+Returns helpful text about the different subcommands.
+
+#### [ACL LIST](https://redis.io/commands/acl-list/) <small>(not implemented)</small>
+
+Dumps the effective rules in ACL file format.
+
+#### [ACL LOAD](https://redis.io/commands/acl-load/) <small>(not implemented)</small>
+
+Reloads the rules from the configured ACL file.
+
+#### [ACL LOG](https://redis.io/commands/acl-log/) <small>(not implemented)</small>
+
+Lists recent security events generated due to ACL rules.
+
+#### [ACL SAVE](https://redis.io/commands/acl-save/) <small>(not implemented)</small>
+
+Saves the effective ACL rules in the configured ACL file.
+
+#### [ACL SETUSER](https://redis.io/commands/acl-setuser/) <small>(not implemented)</small>
+
+Creates and modifies an ACL user and its rules.
+
+#### [ACL USERS](https://redis.io/commands/acl-users/) <small>(not implemented)</small>
+
+Lists all ACL users.
+
+#### [ACL WHOAMI](https://redis.io/commands/acl-whoami/) <small>(not implemented)</small>
+
+Returns the authenticated username of the current connection.
+
+#### [BGREWRITEAOF](https://redis.io/commands/bgrewriteaof/) <small>(not implemented)</small>
+
+Asynchronously rewrites the append-only file to disk.
+
+#### [COMMAND](https://redis.io/commands/command/) <small>(not implemented)</small>
+
+Returns detailed information about all commands.
+
+#### [COMMAND COUNT](https://redis.io/commands/command-count/) <small>(not implemented)</small>
+
+Returns a count of commands.
+
+#### [COMMAND DOCS](https://redis.io/commands/command-docs/) <small>(not implemented)</small>
+
+Returns documentary information about a command.
+
+#### [COMMAND GETKEYS](https://redis.io/commands/command-getkeys/) <small>(not implemented)</small>
+
+Extracts the key names from an arbitrary command.
+
+#### [COMMAND GETKEYSANDFLAGS](https://redis.io/commands/command-getkeysandflags/) <small>(not implemented)</small>
+
+Extracts the key names and access flags for an arbitrary command.
+
+#### [COMMAND HELP](https://redis.io/commands/command-help/) <small>(not implemented)</small>
+
+Returns helpful text about the different subcommands.
+
+#### [COMMAND INFO](https://redis.io/commands/command-info/) <small>(not implemented)</small>
+
+Returns information about one, multiple or all commands.
+
+#### [COMMAND LIST](https://redis.io/commands/command-list/) <small>(not implemented)</small>
+
+Returns a list of command names.
+
+#### [CONFIG](https://redis.io/commands/config/) <small>(not implemented)</small>
+
+A container for server configuration commands.
+
+#### [CONFIG GET](https://redis.io/commands/config-get/) <small>(not implemented)</small>
+
+Returns the effective values of configuration parameters.
+
+#### [CONFIG HELP](https://redis.io/commands/config-help/) <small>(not implemented)</small>
+
+Returns helpful text about the different subcommands.
+
+#### [CONFIG RESETSTAT](https://redis.io/commands/config-resetstat/) <small>(not implemented)</small>
+
+Resets the server's statistics.
+
+#### [CONFIG REWRITE](https://redis.io/commands/config-rewrite/) <small>(not implemented)</small>
+
+Persists the effective configuration to file.
+
+#### [CONFIG SET](https://redis.io/commands/config-set/) <small>(not implemented)</small>
+
+Sets configuration parameters in-flight.
+
+#### [DEBUG](https://redis.io/commands/debug/) <small>(not implemented)</small>
+
+A container for debugging commands.
+
+#### [FAILOVER](https://redis.io/commands/failover/) <small>(not implemented)</small>
+
+Starts a coordinated failover from a server to one of its replicas.
+
+#### [INFO](https://redis.io/commands/info/) <small>(not implemented)</small>
+
+Returns information and statistics about the server.
+
+#### [LATENCY](https://redis.io/commands/latency/) <small>(not implemented)</small>
+
+A container for latency diagnostics commands.
+
+#### [LATENCY DOCTOR](https://redis.io/commands/latency-doctor/) <small>(not implemented)</small>
+
+Returns a human-readable latency analysis report.
+
+#### [LATENCY GRAPH](https://redis.io/commands/latency-graph/) <small>(not implemented)</small>
+
+Returns a latency graph for an event.
+
+#### [LATENCY HELP](https://redis.io/commands/latency-help/) <small>(not implemented)</small>
+
+Returns helpful text about the different subcommands.
+
+#### [LATENCY HISTOGRAM](https://redis.io/commands/latency-histogram/) <small>(not implemented)</small>
+
+Returns the cumulative distribution of latencies of a subset or all commands.
+
+#### [LATENCY HISTORY](https://redis.io/commands/latency-history/) <small>(not implemented)</small>
+
+Returns timestamp-latency samples for an event.
+
+#### [LATENCY LATEST](https://redis.io/commands/latency-latest/) <small>(not implemented)</small>
+
+Returns the latest latency samples for all events.
+
+#### [LATENCY RESET](https://redis.io/commands/latency-reset/) <small>(not implemented)</small>
+
+Resets the latency data for one or more events.
+
+#### [LOLWUT](https://redis.io/commands/lolwut/) <small>(not implemented)</small>
+
+Displays computer art and the Redis version.
+
+#### [MEMORY](https://redis.io/commands/memory/) <small>(not implemented)</small>
+
+A container for memory diagnostics commands.
+
+#### [MEMORY DOCTOR](https://redis.io/commands/memory-doctor/) <small>(not implemented)</small>
+
+Outputs a memory problems report.
+
+#### [MEMORY HELP](https://redis.io/commands/memory-help/) <small>(not implemented)</small>
+
+Returns helpful text about the different subcommands.
+
+#### [MEMORY MALLOC-STATS](https://redis.io/commands/memory-malloc-stats/) <small>(not implemented)</small>
+
+Returns the allocator statistics.
+
+#### [MEMORY PURGE](https://redis.io/commands/memory-purge/) <small>(not implemented)</small>
+
+Asks the allocator to release memory.
+
+#### [MEMORY STATS](https://redis.io/commands/memory-stats/) <small>(not implemented)</small>
+
+Returns details about memory usage.
+
+#### [MEMORY USAGE](https://redis.io/commands/memory-usage/) <small>(not implemented)</small>
+
+Estimates the memory usage of a key.
+
+#### [MODULE](https://redis.io/commands/module/) <small>(not implemented)</small>
+
+A container for module commands.
+
+#### [MODULE HELP](https://redis.io/commands/module-help/) <small>(not implemented)</small>
+
+Returns helpful text about the different subcommands.
+
+#### [MODULE LIST](https://redis.io/commands/module-list/) <small>(not implemented)</small>
+
+Returns all loaded modules.
+
+#### [MODULE LOAD](https://redis.io/commands/module-load/) <small>(not implemented)</small>
+
+Loads a module.
+
+#### [MODULE LOADEX](https://redis.io/commands/module-loadex/) <small>(not implemented)</small>
+
+Loads a module using extended parameters.
+
+#### [MODULE UNLOAD](https://redis.io/commands/module-unload/) <small>(not implemented)</small>
+
+Unloads a module.
+
+#### [MONITOR](https://redis.io/commands/monitor/) <small>(not implemented)</small>
+
+Listens for all requests received by the server in real-time.
+
+#### [PSYNC](https://redis.io/commands/psync/) <small>(not implemented)</small>
+
+An internal command used in replication.
+
+#### [REPLCONF](https://redis.io/commands/replconf/) <small>(not implemented)</small>
+
+An internal command for configuring the replication stream.
+
+#### [REPLICAOF](https://redis.io/commands/replicaof/) <small>(not implemented)</small>
+
+Configures a server as replica of another, or promotes it to a master.
+
+#### [RESTORE-ASKING](https://redis.io/commands/restore-asking/) <small>(not implemented)</small>
+
+An internal command for migrating keys in a cluster.
+
+#### [ROLE](https://redis.io/commands/role/) <small>(not implemented)</small>
+
+Returns the replication role.
+
+#### [SHUTDOWN](https://redis.io/commands/shutdown/) <small>(not implemented)</small>
+
+Synchronously saves the database(s) to disk and shuts down the Redis server.
+
+#### [SLAVEOF](https://redis.io/commands/slaveof/) <small>(not implemented)</small>
+
+Sets a Redis server as a replica of another, or promotes it to being a master.
+
+#### [SLOWLOG](https://redis.io/commands/slowlog/) <small>(not implemented)</small>
+
+A container for slow log commands.
+
+#### [SLOWLOG GET](https://redis.io/commands/slowlog-get/) <small>(not implemented)</small>
+
+Returns the slow log's entries.
+
+#### [SLOWLOG HELP](https://redis.io/commands/slowlog-help/) <small>(not implemented)</small>
+
+Shows helpful text about the different subcommands.
+
+#### [SLOWLOG LEN](https://redis.io/commands/slowlog-len/) <small>(not implemented)</small>
+
+Returns the number of entries in the slow log.
+
+#### [SLOWLOG RESET](https://redis.io/commands/slowlog-reset/) <small>(not implemented)</small>
+
+Clears all entries from the slow log.
+
+#### [SYNC](https://redis.io/commands/sync/) <small>(not implemented)</small>
+
+An internal command used in replication.
+
+
+## string commands
+
+### [APPEND](https://redis.io/commands/append/)
+
+Appends a string to the value of a key. Creates the key if it doesn't exist.
+
+### [DECR](https://redis.io/commands/decr/)
+
+Decrements the integer value of a key by one. Uses 0 as initial value if the key doesn't exist.
+
+### [DECRBY](https://redis.io/commands/decrby/)
+
+Decrements a number from the integer value of a key. Uses 0 as initial value if the key doesn't exist.
+
+### [GET](https://redis.io/commands/get/)
+
+Returns the string value of a key.
+
+### [GETDEL](https://redis.io/commands/getdel/)
+
+Returns the string value of a key after deleting the key.
+
+### [GETEX](https://redis.io/commands/getex/)
+
+Returns the string value of a key after setting its expiration time.
+
+### [GETRANGE](https://redis.io/commands/getrange/)
+
+Returns a substring of the string stored at a key.
+
+### [GETSET](https://redis.io/commands/getset/)
+
+Returns the previous string value of a key after setting it to a new value.
+
+### [INCR](https://redis.io/commands/incr/)
+
+Increments the integer value of a key by one. Uses 0 as initial value if the key doesn't exist.
+
+### [INCRBY](https://redis.io/commands/incrby/)
+
+Increments the integer value of a key by a number. Uses 0 as initial value if the key doesn't exist.
+
+### [INCRBYFLOAT](https://redis.io/commands/incrbyfloat/)
+
+Increments the floating point value of a key by a number. Uses 0 as initial value if the key doesn't exist.
+
+### [LCS](https://redis.io/commands/lcs/)
+
+Finds the longest common substring.
+
+### [MGET](https://redis.io/commands/mget/)
+
+Atomically returns the string values of one or more keys.
+
+### [MSET](https://redis.io/commands/mset/)
+
+Atomically creates or modifies the string values of one or more keys.
+
+### [MSETNX](https://redis.io/commands/msetnx/)
+
+Atomically modifies the string values of one or more keys only when all keys don't exist.
+
+### [PSETEX](https://redis.io/commands/psetex/)
+
+Sets both string value and expiration time in milliseconds of a key. The key is created if it doesn't exist.
+
+### [SET](https://redis.io/commands/set/)
+
+Sets the string value of a key, ignoring its type. The key is created if it doesn't exist.
+
+### [SETEX](https://redis.io/commands/setex/)
+
+Sets the string value and expiration time of a key. Creates the key if it doesn't exist.
+
+### [SETNX](https://redis.io/commands/setnx/)
+
+Sets the string value of a key only when the key doesn't exist.
+
+### [SETRANGE](https://redis.io/commands/setrange/)
+
+Overwrites a part of a string value with another by an offset. Creates the key if it doesn't exist.
+
+### [STRLEN](https://redis.io/commands/strlen/)
+
+Returns the length of a string value.
+
+### [SUBSTR](https://redis.io/commands/substr/)
+
+Returns a substring from a string value.
+
+
+
+
+### Unsupported cluster commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [ASKING](https://redis.io/commands/asking/) <small>(not implemented)</small>
+
+Signals that a cluster client is following an -ASK redirect.
+
+#### [CLUSTER](https://redis.io/commands/cluster/) <small>(not implemented)</small>
+
+A container for Redis Cluster commands.
+
+#### [CLUSTER ADDSLOTS](https://redis.io/commands/cluster-addslots/) <small>(not implemented)</small>
+
+Assigns new hash slots to a node.
+
+#### [CLUSTER ADDSLOTSRANGE](https://redis.io/commands/cluster-addslotsrange/) <small>(not implemented)</small>
+
+Assigns new hash slot ranges to a node.
+
+#### [CLUSTER BUMPEPOCH](https://redis.io/commands/cluster-bumpepoch/) <small>(not implemented)</small>
+
+Advances the cluster config epoch.
+
+#### [CLUSTER COUNT-FAILURE-REPORTS](https://redis.io/commands/cluster-count-failure-reports/) <small>(not implemented)</small>
+
+Returns the number of active failure reports for a node.
+
+#### [CLUSTER COUNTKEYSINSLOT](https://redis.io/commands/cluster-countkeysinslot/) <small>(not implemented)</small>
+
+Returns the number of keys in a hash slot.
+
+#### [CLUSTER DELSLOTS](https://redis.io/commands/cluster-delslots/) <small>(not implemented)</small>
+
+Sets hash slots as unbound for a node.
+
+#### [CLUSTER DELSLOTSRANGE](https://redis.io/commands/cluster-delslotsrange/) <small>(not implemented)</small>
+
+Sets hash slot ranges as unbound for a node.
+
+#### [CLUSTER FAILOVER](https://redis.io/commands/cluster-failover/) <small>(not implemented)</small>
+
+Forces a replica to perform a manual failover of its master.
+
+#### [CLUSTER FLUSHSLOTS](https://redis.io/commands/cluster-flushslots/) <small>(not implemented)</small>
+
+Deletes all slots information from a node.
+
+#### [CLUSTER FORGET](https://redis.io/commands/cluster-forget/) <small>(not implemented)</small>
+
+Removes a node from the nodes table.
+
+#### [CLUSTER GETKEYSINSLOT](https://redis.io/commands/cluster-getkeysinslot/) <small>(not implemented)</small>
+
+Returns the key names in a hash slot.
+
+#### [CLUSTER HELP](https://redis.io/commands/cluster-help/) <small>(not implemented)</small>
+
+Returns helpful text about the different subcommands.
+
+#### [CLUSTER INFO](https://redis.io/commands/cluster-info/) <small>(not implemented)</small>
+
+Returns information about the state of a node.
+
+#### [CLUSTER KEYSLOT](https://redis.io/commands/cluster-keyslot/) <small>(not implemented)</small>
+
+Returns the hash slot for a key.
+
+#### [CLUSTER LINKS](https://redis.io/commands/cluster-links/) <small>(not implemented)</small>
+
+Returns a list of all TCP links to and from peer nodes.
+
+#### [CLUSTER MEET](https://redis.io/commands/cluster-meet/) <small>(not implemented)</small>
+
+Forces a node to handshake with another node.
+
+#### [CLUSTER MYID](https://redis.io/commands/cluster-myid/) <small>(not implemented)</small>
+
+Returns the ID of a node.
+
+#### [CLUSTER MYSHARDID](https://redis.io/commands/cluster-myshardid/) <small>(not implemented)</small>
+
+Returns the shard ID of a node.
+
+#### [CLUSTER NODES](https://redis.io/commands/cluster-nodes/) <small>(not implemented)</small>
+
+Returns the cluster configuration for a node.
+
+#### [CLUSTER REPLICAS](https://redis.io/commands/cluster-replicas/) <small>(not implemented)</small>
+
+Lists the replica nodes of a master node.
+
+#### [CLUSTER REPLICATE](https://redis.io/commands/cluster-replicate/) <small>(not implemented)</small>
+
+Configures a node as a replica of a master node.
+
+#### [CLUSTER RESET](https://redis.io/commands/cluster-reset/) <small>(not implemented)</small>
+
+Resets a node.
+
+#### [CLUSTER SAVECONFIG](https://redis.io/commands/cluster-saveconfig/) <small>(not implemented)</small>
+
+Forces a node to save the cluster configuration to disk.
+
+#### [CLUSTER SET-CONFIG-EPOCH](https://redis.io/commands/cluster-set-config-epoch/) <small>(not implemented)</small>
+
+Sets the configuration epoch for a new node.
+
+#### [CLUSTER SETSLOT](https://redis.io/commands/cluster-setslot/) <small>(not implemented)</small>
+
+Binds a hash slot to a node.
+
+#### [CLUSTER SHARDS](https://redis.io/commands/cluster-shards/) <small>(not implemented)</small>
+
+Returns the mapping of cluster slots to shards.
+
+#### [CLUSTER SLAVES](https://redis.io/commands/cluster-slaves/) <small>(not implemented)</small>
+
+Lists the replica nodes of a master node.
+
+#### [CLUSTER SLOTS](https://redis.io/commands/cluster-slots/) <small>(not implemented)</small>
+
+Returns the mapping of cluster slots to nodes.
+
+#### [READONLY](https://redis.io/commands/readonly/) <small>(not implemented)</small>
+
+Enables read-only queries for a connection to a Redis Cluster replica node.
+
+#### [READWRITE](https://redis.io/commands/readwrite/) <small>(not implemented)</small>
+
+Enables read-write queries for a connection to a Redis Cluster replica node.
+
+
+## connection commands
+
+### [ECHO](https://redis.io/commands/echo/)
+
+Returns the given string.
+
+### [PING](https://redis.io/commands/ping/)
+
+Returns the server's liveliness response.
+
+### [SELECT](https://redis.io/commands/select/)
+
+Changes the selected database.
+
+
+### Unsupported connection commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [AUTH](https://redis.io/commands/auth/) <small>(not implemented)</small>
+
+Authenticates the connection.
+
+#### [CLIENT](https://redis.io/commands/client/) <small>(not implemented)</small>
+
+A container for client connection commands.
+
+#### [CLIENT CACHING](https://redis.io/commands/client-caching/) <small>(not implemented)</small>
+
+Instructs the server whether to track the keys in the next request.
+
+#### [CLIENT GETNAME](https://redis.io/commands/client-getname/) <small>(not implemented)</small>
+
+Returns the name of the connection.
+
+#### [CLIENT GETREDIR](https://redis.io/commands/client-getredir/) <small>(not implemented)</small>
+
+Returns the client ID to which the connection's tracking notifications are redirected.
+
+#### [CLIENT HELP](https://redis.io/commands/client-help/) <small>(not implemented)</small>
+
+Returns helpful text about the different subcommands.
+
+#### [CLIENT ID](https://redis.io/commands/client-id/) <small>(not implemented)</small>
+
+Returns the unique client ID of the connection.
+
+#### [CLIENT INFO](https://redis.io/commands/client-info/) <small>(not implemented)</small>
+
+Returns information about the connection.
+
+#### [CLIENT KILL](https://redis.io/commands/client-kill/) <small>(not implemented)</small>
+
+Terminates open connections.
+
+#### [CLIENT LIST](https://redis.io/commands/client-list/) <small>(not implemented)</small>
+
+Lists open connections.
+
+#### [CLIENT NO-EVICT](https://redis.io/commands/client-no-evict/) <small>(not implemented)</small>
+
+Sets the client eviction mode of the connection.
+
+#### [CLIENT NO-TOUCH](https://redis.io/commands/client-no-touch/) <small>(not implemented)</small>
+
+Controls whether commands sent by the client affect the LRU/LFU of accessed keys.
+
+#### [CLIENT PAUSE](https://redis.io/commands/client-pause/) <small>(not implemented)</small>
+
+Suspends commands processing.
+
+#### [CLIENT REPLY](https://redis.io/commands/client-reply/) <small>(not implemented)</small>
+
+Instructs the server whether to reply to commands.
+
+#### [CLIENT SETINFO](https://redis.io/commands/client-setinfo/) <small>(not implemented)</small>
+
+Sets information specific to the client or connection.
+
+#### [CLIENT SETNAME](https://redis.io/commands/client-setname/) <small>(not implemented)</small>
+
+Sets the connection name.
+
+#### [CLIENT TRACKING](https://redis.io/commands/client-tracking/) <small>(not implemented)</small>
+
+Controls server-assisted client-side caching for the connection.
+
+#### [CLIENT TRACKINGINFO](https://redis.io/commands/client-trackinginfo/) <small>(not implemented)</small>
+
+Returns information about server-assisted client-side caching for the connection.
+
+#### [CLIENT UNBLOCK](https://redis.io/commands/client-unblock/) <small>(not implemented)</small>
+
+Unblocks a client blocked by a blocking command from a different connection.
+
+#### [CLIENT UNPAUSE](https://redis.io/commands/client-unpause/) <small>(not implemented)</small>
+
+Resumes processing commands from paused clients.
+
+#### [HELLO](https://redis.io/commands/hello/) <small>(not implemented)</small>
+
+Handshakes with the Redis server.
+
+#### [QUIT](https://redis.io/commands/quit/) <small>(not implemented)</small>
+
+Closes the connection.
+
+#### [RESET](https://redis.io/commands/reset/) <small>(not implemented)</small>
+
+Resets the connection.
+
+
+## bitmap commands
+
+### [BITCOUNT](https://redis.io/commands/bitcount/)
+
+Counts the number of set bits (population counting) in a string.
+
+### [BITOP](https://redis.io/commands/bitop/)
+
+Performs bitwise operations on multiple strings, and stores the result.
+
+### [BITPOS](https://redis.io/commands/bitpos/)
+
+Finds the first set (1) or clear (0) bit in a string.
+
+### [GETBIT](https://redis.io/commands/getbit/)
+
+Returns a bit value by offset.
+
+### [SETBIT](https://redis.io/commands/setbit/)
+
+Sets or clears the bit at offset of the string value. Creates the key if it doesn't exist.
+
+
+### Unsupported bitmap commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [BITFIELD](https://redis.io/commands/bitfield/) <small>(not implemented)</small>
+
+Performs arbitrary bitfield integer operations on strings.
+
+#### [BITFIELD_RO](https://redis.io/commands/bitfield_ro/) <small>(not implemented)</small>
+
+Performs arbitrary read-only bitfield integer operations on strings.
+
+
+## list commands
+
+### [BLPOP](https://redis.io/commands/blpop/)
+
+Removes and returns the first element in a list. Blocks until an element is available otherwise. Deletes the list if the last element was popped.
+
+### [BRPOP](https://redis.io/commands/brpop/)
+
+Removes and returns the last element in a list. Blocks until an element is available otherwise. Deletes the list if the last element was popped.
+
+### [BRPOPLPUSH](https://redis.io/commands/brpoplpush/)
+
+Pops an element from a list, pushes it to another list and returns it. Blocks until an element is available otherwise. Deletes the list if the last element was popped.
+
+### [LINDEX](https://redis.io/commands/lindex/)
+
+Returns an element from a list by its index.
+
+### [LINSERT](https://redis.io/commands/linsert/)
+
+Inserts an element before or after another element in a list.
+
+### [LLEN](https://redis.io/commands/llen/)
+
+Returns the length of a list.
+
+### [LMOVE](https://redis.io/commands/lmove/)
+
+Returns an element after popping it from one list and pushing it to another. Deletes the list if the last element was moved.
+
+### [LPOP](https://redis.io/commands/lpop/)
+
+Returns the first elements in a list after removing them. Deletes the list if the last element was popped.
+
+### [LPUSH](https://redis.io/commands/lpush/)
+
+Prepends one or more elements to a list. Creates the key if it doesn't exist.
+
+### [LPUSHX](https://redis.io/commands/lpushx/)
+
+Prepends one or more elements to a list only when the list exists.
+
+### [LRANGE](https://redis.io/commands/lrange/)
+
+Returns a range of elements from a list.
+
+### [LREM](https://redis.io/commands/lrem/)
+
+Removes elements from a list. Deletes the list if the last element was removed.
+
+### [LSET](https://redis.io/commands/lset/)
+
+Sets the value of an element in a list by its index.
+
+### [LTRIM](https://redis.io/commands/ltrim/)
+
+Removes elements from both ends of a list. Deletes the list if all elements were trimmed.
+
+### [RPOP](https://redis.io/commands/rpop/)
+
+Returns and removes the last elements of a list. Deletes the list if the last element was popped.
+
+### [RPOPLPUSH](https://redis.io/commands/rpoplpush/)
+
+Returns the last element of a list after removing and pushing it to another list. Deletes the list if the last element was popped.
+
+### [RPUSH](https://redis.io/commands/rpush/)
+
+Appends one or more elements to a list. Creates the key if it doesn't exist.
+
+### [RPUSHX](https://redis.io/commands/rpushx/)
+
+Appends an element to a list only when the list exists.
+
+
+### Unsupported list commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [BLMOVE](https://redis.io/commands/blmove/) <small>(not implemented)</small>
+
+Pops an element from a list, pushes it to another list and returns it. Blocks until an element is available otherwise. Deletes the list if the last element was moved.
+
+#### [BLMPOP](https://redis.io/commands/blmpop/) <small>(not implemented)</small>
+
+Pops the first element from one of multiple lists. Blocks until an element is available otherwise. Deletes the list if the last element was popped.
+
+#### [LMPOP](https://redis.io/commands/lmpop/) <small>(not implemented)</small>
+
+Returns multiple elements from a list after removing them. Deletes the list if the last element was popped.
+
+#### [LPOS](https://redis.io/commands/lpos/) <small>(not implemented)</small>
+
+Returns the index of matching elements in a list.
+
+
+## sorted-set commands
+
+### [BZPOPMAX](https://redis.io/commands/bzpopmax/)
+
+Removes and returns the member with the highest score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped.
+
+### [BZPOPMIN](https://redis.io/commands/bzpopmin/)
+
+Removes and returns the member with the lowest score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped.
+
+### [ZADD](https://redis.io/commands/zadd/)
+
+Adds one or more members to a sorted set, or updates their scores. Creates the key if it doesn't exist.
+
+### [ZCARD](https://redis.io/commands/zcard/)
+
+Returns the number of members in a sorted set.
+
+### [ZCOUNT](https://redis.io/commands/zcount/)
+
+Returns the count of members in a sorted set that have scores within a range.
+
+### [ZINCRBY](https://redis.io/commands/zincrby/)
+
+Increments the score of a member in a sorted set.
+
+### [ZINTERSTORE](https://redis.io/commands/zinterstore/)
+
+Stores the intersect of multiple sorted sets in a key.
+
+### [ZLEXCOUNT](https://redis.io/commands/zlexcount/)
+
+Returns the number of members in a sorted set within a lexicographical range.
+
+### [ZMSCORE](https://redis.io/commands/zmscore/)
+
+Returns the score of one or more members in a sorted set.
+
+### [ZPOPMAX](https://redis.io/commands/zpopmax/)
+
+Returns the highest-scoring members from a sorted set after removing them. Deletes the sorted set if the last member was popped.
+
+### [ZPOPMIN](https://redis.io/commands/zpopmin/)
+
+Returns the lowest-scoring members from a sorted set after removing them. Deletes the sorted set if the last member was popped.
+
+### [ZRANGE](https://redis.io/commands/zrange/)
+
+Returns members in a sorted set within a range of indexes.
+
+### [ZRANGEBYLEX](https://redis.io/commands/zrangebylex/)
+
+Returns members in a sorted set within a lexicographical range.
+
+### [ZRANGEBYSCORE](https://redis.io/commands/zrangebyscore/)
+
+Returns members in a sorted set within a range of scores.
+
+### [ZRANK](https://redis.io/commands/zrank/)
+
+Returns the index of a member in a sorted set ordered by ascending scores.
+
+### [ZREM](https://redis.io/commands/zrem/)
+
+Removes one or more members from a sorted set. Deletes the sorted set if all members were removed.
+
+### [ZREMRANGEBYLEX](https://redis.io/commands/zremrangebylex/)
+
+Removes members in a sorted set within a lexicographical range. Deletes the sorted set if all members were removed.
+
+### [ZREMRANGEBYRANK](https://redis.io/commands/zremrangebyrank/)
+
+Removes members in a sorted set within a range of indexes. Deletes the sorted set if all members were removed.
+
+### [ZREMRANGEBYSCORE](https://redis.io/commands/zremrangebyscore/)
+
+Removes members in a sorted set within a range of scores. Deletes the sorted set if all members were removed.
+
+### [ZREVRANGE](https://redis.io/commands/zrevrange/)
+
+Returns members in a sorted set within a range of indexes in reverse order.
+
+### [ZREVRANGEBYLEX](https://redis.io/commands/zrevrangebylex/)
+
+Returns members in a sorted set within a lexicographical range in reverse order.
+
+### [ZREVRANGEBYSCORE](https://redis.io/commands/zrevrangebyscore/)
+
+Returns members in a sorted set within a range of scores in reverse order.
+
+### [ZREVRANK](https://redis.io/commands/zrevrank/)
+
+Returns the index of a member in a sorted set ordered by descending scores.
+
+### [ZSCAN](https://redis.io/commands/zscan/)
+
+Iterates over members and scores of a sorted set.
+
+### [ZSCORE](https://redis.io/commands/zscore/)
+
+Returns the score of a member in a sorted set.
+
+### [ZUNIONSTORE](https://redis.io/commands/zunionstore/)
+
+Stores the union of multiple sorted sets in a key.
+
+
+### Unsupported sorted-set commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [BZMPOP](https://redis.io/commands/bzmpop/) <small>(not implemented)</small>
+
+Removes and returns a member by score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped.
+
+#### [ZDIFF](https://redis.io/commands/zdiff/) <small>(not implemented)</small>
+
+Returns the difference between multiple sorted sets.
+
+#### [ZDIFFSTORE](https://redis.io/commands/zdiffstore/) <small>(not implemented)</small>
+
+Stores the difference of multiple sorted sets in a key.
+
+#### [ZINTER](https://redis.io/commands/zinter/) <small>(not implemented)</small>
+
+Returns the intersect of multiple sorted sets.
+
+#### [ZINTERCARD](https://redis.io/commands/zintercard/) <small>(not implemented)</small>
+
+Returns the number of members of the intersect of multiple sorted sets.
+
+#### [ZMPOP](https://redis.io/commands/zmpop/) <small>(not implemented)</small>
+
+Returns the highest- or lowest-scoring members from one or more sorted sets after removing them. Deletes the sorted set if the last member was popped.
+
+#### [ZRANDMEMBER](https://redis.io/commands/zrandmember/) <small>(not implemented)</small>
+
+Returns one or more random members from a sorted set.
+
+#### [ZRANGESTORE](https://redis.io/commands/zrangestore/) <small>(not implemented)</small>
+
+Stores a range of members from a sorted set in a key.
+
+#### [ZUNION](https://redis.io/commands/zunion/) <small>(not implemented)</small>
+
+Returns the union of multiple sorted sets.
+
+
+## generic commands
+
+### [DEL](https://redis.io/commands/del/)
+
+Deletes one or more keys.
+
+### [DUMP](https://redis.io/commands/dump/)
+
+Returns a serialized representation of the value stored at a key.
+
+### [EXISTS](https://redis.io/commands/exists/)
+
+Determines whether one or more keys exist.
+
+### [EXPIRE](https://redis.io/commands/expire/)
+
+Sets the expiration time of a key in seconds.
+
+### [EXPIREAT](https://redis.io/commands/expireat/)
+
+Sets the expiration time of a key to a Unix timestamp.
+
+### [KEYS](https://redis.io/commands/keys/)
+
+Returns all key names that match a pattern.
+
+### [MOVE](https://redis.io/commands/move/)
+
+Moves a key to another database.
+
+### [PERSIST](https://redis.io/commands/persist/)
+
+Removes the expiration time of a key.
+
+### [PEXPIRE](https://redis.io/commands/pexpire/)
+
+Sets the expiration time of a key in milliseconds.
+
+### [PEXPIREAT](https://redis.io/commands/pexpireat/)
+
+Sets the expiration time of a key to a Unix milliseconds timestamp.
+
+### [PTTL](https://redis.io/commands/pttl/)
+
+Returns the expiration time in milliseconds of a key.
+
+### [RANDOMKEY](https://redis.io/commands/randomkey/)
+
+Returns a random key name from the database.
+
+### [RENAME](https://redis.io/commands/rename/)
+
+Renames a key and overwrites the destination.
+
+### [RENAMENX](https://redis.io/commands/renamenx/)
+
+Renames a key only when the target key name doesn't exist.
+
+### [RESTORE](https://redis.io/commands/restore/)
+
+Creates a key from the serialized representation of a value.
+
+### [SCAN](https://redis.io/commands/scan/)
+
+Iterates over the key names in the database.
+
+### [SORT](https://redis.io/commands/sort/)
+
+Sorts the elements in a list, a set, or a sorted set, optionally storing the result.
+
+### [TTL](https://redis.io/commands/ttl/)
+
+Returns the expiration time in seconds of a key.
+
+### [TYPE](https://redis.io/commands/type/)
+
+Determines the type of value stored at a key.
+
+### [UNLINK](https://redis.io/commands/unlink/)
+
+Asynchronously deletes one or more keys.
+
+
+### Unsupported generic commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [COPY](https://redis.io/commands/copy/) <small>(not implemented)</small>
+
+Copies the value of a key to a new key.
+
+#### [EXPIRETIME](https://redis.io/commands/expiretime/) <small>(not implemented)</small>
+
+Returns the expiration time of a key as a Unix timestamp.
+
+#### [MIGRATE](https://redis.io/commands/migrate/) <small>(not implemented)</small>
+
+Atomically transfers a key from one Redis instance to another.
+
+#### [OBJECT](https://redis.io/commands/object/) <small>(not implemented)</small>
+
+A container for object introspection commands.
+
+#### [OBJECT ENCODING](https://redis.io/commands/object-encoding/) <small>(not implemented)</small>
+
+Returns the internal encoding of a Redis object.
+
+#### [OBJECT FREQ](https://redis.io/commands/object-freq/) <small>(not implemented)</small>
+
+Returns the logarithmic access frequency counter of a Redis object.
+
+#### [OBJECT HELP](https://redis.io/commands/object-help/) <small>(not implemented)</small>
+
+Returns helpful text about the different subcommands.
+
+#### [OBJECT IDLETIME](https://redis.io/commands/object-idletime/) <small>(not implemented)</small>
+
+Returns the time since the last access to a Redis object.
+
+#### [OBJECT REFCOUNT](https://redis.io/commands/object-refcount/) <small>(not implemented)</small>
+
+Returns the reference count of a value of a key.
+
+#### [PEXPIRETIME](https://redis.io/commands/pexpiretime/) <small>(not implemented)</small>
+
+Returns the expiration time of a key as a Unix milliseconds timestamp.
+
+#### [SORT_RO](https://redis.io/commands/sort_ro/) <small>(not implemented)</small>
+
+Returns the sorted elements of a list, a set, or a sorted set.
+
+#### [TOUCH](https://redis.io/commands/touch/) <small>(not implemented)</small>
+
+Returns the number of existing keys out of those specified after updating the time they were last accessed.
+
+#### [WAIT](https://redis.io/commands/wait/) <small>(not implemented)</small>
+
+Blocks until the asynchronous replication of all preceding write commands sent by the connection is completed.
+
+#### [WAITAOF](https://redis.io/commands/waitaof/) <small>(not implemented)</small>
+
+Blocks until all of the preceding write commands sent by the connection are written to the append-only file of the master and/or replicas.
+
+
+## transactions commands
+
+### [DISCARD](https://redis.io/commands/discard/)
+
+Discards a transaction.
+
+### [EXEC](https://redis.io/commands/exec/)
+
+Executes all commands in a transaction.
+
+### [MULTI](https://redis.io/commands/multi/)
+
+Starts a transaction.
+
+### [UNWATCH](https://redis.io/commands/unwatch/)
+
+Forgets about watched keys of a transaction.
+
+### [WATCH](https://redis.io/commands/watch/)
+
+Monitors changes to keys to determine the execution of a transaction.
+
+
+
+## scripting commands
+
+### [EVAL](https://redis.io/commands/eval/)
+
+Executes a server-side Lua script.
+
+### [EVALSHA](https://redis.io/commands/evalsha/)
+
+Executes a server-side Lua script by SHA1 digest.
+
+### [SCRIPT](https://redis.io/commands/script/)
+
+A container for Lua scripts management commands.
+
+### [SCRIPT EXISTS](https://redis.io/commands/script-exists/)
+
+Determines whether server-side Lua scripts exist in the script cache.
+
+### [SCRIPT FLUSH](https://redis.io/commands/script-flush/)
+
+Removes all server-side Lua scripts from the script cache.
+
+### [SCRIPT HELP](https://redis.io/commands/script-help/)
+
+Returns helpful text about the different subcommands.
+
+### [SCRIPT LOAD](https://redis.io/commands/script-load/)
+
+Loads a server-side Lua script to the script cache.
+
+
+### Unsupported scripting commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [EVALSHA_RO](https://redis.io/commands/evalsha_ro/) <small>(not implemented)</small>
+
+Executes a read-only server-side Lua script by SHA1 digest.
+
+#### [EVAL_RO](https://redis.io/commands/eval_ro/) <small>(not implemented)</small>
+
+Executes a read-only server-side Lua script.
+
+#### [FCALL](https://redis.io/commands/fcall/) <small>(not implemented)</small>
+
+Invokes a function.
+
+#### [FCALL_RO](https://redis.io/commands/fcall_ro/) <small>(not implemented)</small>
+
+Invokes a read-only function.
+
+#### [FUNCTION](https://redis.io/commands/function/) <small>(not implemented)</small>
+
+A container for function commands.
+
+#### [FUNCTION DELETE](https://redis.io/commands/function-delete/) <small>(not implemented)</small>
+
+Deletes a library and its functions.
+
+#### [FUNCTION DUMP](https://redis.io/commands/function-dump/) <small>(not implemented)</small>
+
+Dumps all libraries into a serialized binary payload.
+
+#### [FUNCTION FLUSH](https://redis.io/commands/function-flush/) <small>(not implemented)</small>
+
+Deletes all libraries and functions.
+
+#### [FUNCTION HELP](https://redis.io/commands/function-help/) <small>(not implemented)</small>
+
+Returns helpful text about the different subcommands.
+
+#### [FUNCTION KILL](https://redis.io/commands/function-kill/) <small>(not implemented)</small>
+
+Terminates a function during execution.
+
+#### [FUNCTION LIST](https://redis.io/commands/function-list/) <small>(not implemented)</small>
+
+Returns information about all libraries.
+
+#### [FUNCTION LOAD](https://redis.io/commands/function-load/) <small>(not implemented)</small>
+
+Creates a library.
+
+#### [FUNCTION RESTORE](https://redis.io/commands/function-restore/) <small>(not implemented)</small>
+
+Restores all libraries from a payload.
+
+#### [FUNCTION STATS](https://redis.io/commands/function-stats/) <small>(not implemented)</small>
+
+Returns information about a function during execution.
+
+#### [SCRIPT DEBUG](https://redis.io/commands/script-debug/) <small>(not implemented)</small>
+
+Sets the debug mode of server-side Lua scripts.
+
+#### [SCRIPT KILL](https://redis.io/commands/script-kill/) <small>(not implemented)</small>
+
+Terminates a server-side Lua script during execution.
+
+
+## geo commands
+
+### [GEOADD](https://redis.io/commands/geoadd/)
+
+Adds one or more members to a geospatial index. The key is created if it doesn't exist.
+
+### [GEODIST](https://redis.io/commands/geodist/)
+
+Returns the distance between two members of a geospatial index.
+
+### [GEOHASH](https://redis.io/commands/geohash/)
+
+Returns members from a geospatial index as geohash strings.
+
+### [GEOPOS](https://redis.io/commands/geopos/)
+
+Returns the longitude and latitude of members from a geospatial index.
+
+### [GEORADIUS](https://redis.io/commands/georadius/)
+
+Queries a geospatial index for members within a distance from a coordinate, optionally stores the result.
+
+### [GEORADIUSBYMEMBER](https://redis.io/commands/georadiusbymember/)
+
+Queries a geospatial index for members within a distance from a member, optionally stores the result.
+
+### [GEORADIUSBYMEMBER_RO](https://redis.io/commands/georadiusbymember_ro/)
+
+Returns members from a geospatial index that are within a distance from a member.
+
+### [GEORADIUS_RO](https://redis.io/commands/georadius_ro/)
+
+Returns members from a geospatial index that are within a distance from a coordinate.
+
+### [GEOSEARCH](https://redis.io/commands/geosearch/)
+
+Queries a geospatial index for members inside an area of a box or a circle.
+
+### [GEOSEARCHSTORE](https://redis.io/commands/geosearchstore/)
+
+Queries a geospatial index for members inside an area of a box or a circle, optionally stores the result.
+
+
+
+## hash commands
+
+### [HDEL](https://redis.io/commands/hdel/)
+
+Deletes one or more fields and their values from a hash. Deletes the hash if no fields remain.
+
+### [HEXISTS](https://redis.io/commands/hexists/)
+
+Determines whether a field exists in a hash.
+
+### [HGET](https://redis.io/commands/hget/)
+
+Returns the value of a field in a hash.
+
+### [HGETALL](https://redis.io/commands/hgetall/)
+
+Returns all fields and values in a hash.
+
+### [HINCRBY](https://redis.io/commands/hincrby/)
+
+Increments the integer value of a field in a hash by a number. Uses 0 as initial value if the field doesn't exist.
+
+### [HINCRBYFLOAT](https://redis.io/commands/hincrbyfloat/)
+
+Increments the floating point value of a field by a number. Uses 0 as initial value if the field doesn't exist.
+
+### [HKEYS](https://redis.io/commands/hkeys/)
+
+Returns all fields in a hash.
+
+### [HLEN](https://redis.io/commands/hlen/)
+
+Returns the number of fields in a hash.
+
+### [HMGET](https://redis.io/commands/hmget/)
+
+Returns the values of all fields in a hash.
+
+### [HMSET](https://redis.io/commands/hmset/)
+
+Sets the values of multiple fields.
+
+### [HSCAN](https://redis.io/commands/hscan/)
+
+Iterates over fields and values of a hash.
+
+### [HSET](https://redis.io/commands/hset/)
+
+Creates or modifies the value of a field in a hash.
+
+### [HSETNX](https://redis.io/commands/hsetnx/)
+
+Sets the value of a field in a hash only when the field doesn't exist.
+
+### [HSTRLEN](https://redis.io/commands/hstrlen/)
+
+Returns the length of the value of a field.
+
+### [HVALS](https://redis.io/commands/hvals/)
+
+Returns all values in a hash.
+
+
+### Unsupported hash commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [HRANDFIELD](https://redis.io/commands/hrandfield/) <small>(not implemented)</small>
+
+Returns one or more random fields from a hash.
+
+
+## hyperloglog commands
+
+### [PFADD](https://redis.io/commands/pfadd/)
+
+Adds elements to a HyperLogLog key. Creates the key if it doesn't exist.
+
+### [PFCOUNT](https://redis.io/commands/pfcount/)
+
+Returns the approximated cardinality of the set(s) observed by the HyperLogLog key(s).
+
+### [PFMERGE](https://redis.io/commands/pfmerge/)
+
+Merges one or more HyperLogLog values into a single key.
+
+
+### Unsupported hyperloglog commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [PFDEBUG](https://redis.io/commands/pfdebug/) <small>(not implemented)</small>
+
+Internal commands for debugging HyperLogLog values.
+
+#### [PFSELFTEST](https://redis.io/commands/pfselftest/) <small>(not implemented)</small>
+
+An internal command for testing HyperLogLog values.
+
+
+## pubsub commands
+
+### [PSUBSCRIBE](https://redis.io/commands/psubscribe/)
+
+Listens for messages published to channels that match one or more patterns.
+
+### [PUBLISH](https://redis.io/commands/publish/)
+
+Posts a message to a channel.
+
+### [PUBSUB](https://redis.io/commands/pubsub/)
+
+A container for Pub/Sub commands.
+
+### [PUBSUB CHANNELS](https://redis.io/commands/pubsub-channels/)
+
+Returns the active channels.
+
+### [PUBSUB HELP](https://redis.io/commands/pubsub-help/)
+
+Returns helpful text about the different subcommands.
+
+### [PUBSUB NUMSUB](https://redis.io/commands/pubsub-numsub/)
+
+Returns a count of subscribers to channels.
+
+### [PUNSUBSCRIBE](https://redis.io/commands/punsubscribe/)
+
+Stops listening to messages published to channels that match one or more patterns.
+
+### [SUBSCRIBE](https://redis.io/commands/subscribe/)
+
+Listens for messages published to channels.
+
+### [UNSUBSCRIBE](https://redis.io/commands/unsubscribe/)
+
+Stops listening to messages posted to channels.
+
+
+### Unsupported pubsub commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [PUBSUB NUMPAT](https://redis.io/commands/pubsub-numpat/) <small>(not implemented)</small>
+
+Returns a count of unique pattern subscriptions.
+
+#### [PUBSUB SHARDCHANNELS](https://redis.io/commands/pubsub-shardchannels/) <small>(not implemented)</small>
+
+Returns the active shard channels.
+
+#### [PUBSUB SHARDNUMSUB](https://redis.io/commands/pubsub-shardnumsub/) <small>(not implemented)</small>
+
+Returns the count of subscribers of shard channels.
+
+#### [SPUBLISH](https://redis.io/commands/spublish/) <small>(not implemented)</small>
+
+Posts a message to a shard channel.
+
+#### [SSUBSCRIBE](https://redis.io/commands/ssubscribe/) <small>(not implemented)</small>
+
+Listens for messages published to shard channels.
+
+#### [SUNSUBSCRIBE](https://redis.io/commands/sunsubscribe/) <small>(not implemented)</small>
+
+Stops listening to messages posted to shard channels.
+
+
+## set commands
+
+### [SADD](https://redis.io/commands/sadd/)
+
+Adds one or more members to a set. Creates the key if it doesn't exist.
+
+### [SCARD](https://redis.io/commands/scard/)
+
+Returns the number of members in a set.
+
+### [SDIFF](https://redis.io/commands/sdiff/)
+
+Returns the difference of multiple sets.
+
+### [SDIFFSTORE](https://redis.io/commands/sdiffstore/)
+
+Stores the difference of multiple sets in a key.
+
+### [SINTER](https://redis.io/commands/sinter/)
+
+Returns the intersect of multiple sets.
+
+### [SINTERCARD](https://redis.io/commands/sintercard/)
+
+Returns the number of members of the intersect of multiple sets.
+
+### [SINTERSTORE](https://redis.io/commands/sinterstore/)
+
+Stores the intersect of multiple sets in a key.
+
+### [SISMEMBER](https://redis.io/commands/sismember/)
+
+Determines whether a member belongs to a set.
+
+### [SMEMBERS](https://redis.io/commands/smembers/)
+
+Returns all members of a set.
+
+### [SMISMEMBER](https://redis.io/commands/smismember/)
+
+Determines whether multiple members belong to a set.
+
+### [SMOVE](https://redis.io/commands/smove/)
+
+Moves a member from one set to another.
+
+### [SPOP](https://redis.io/commands/spop/)
+
+Returns one or more random members from a set after removing them. Deletes the set if the last member was popped.
+
+### [SRANDMEMBER](https://redis.io/commands/srandmember/)
+
+Returns one or more random members from a set.
+
+### [SREM](https://redis.io/commands/srem/)
+
+Removes one or more members from a set. Deletes the set if the last member was removed.
+
+### [SSCAN](https://redis.io/commands/sscan/)
+
+Iterates over members of a set.
+
+### [SUNION](https://redis.io/commands/sunion/)
+
+Returns the union of multiple sets.
+
+### [SUNIONSTORE](https://redis.io/commands/sunionstore/)
+
+Stores the union of multiple sets in a key.
+
+
+
+## stream commands
+
+### [XADD](https://redis.io/commands/xadd/)
+
+Appends a new message to a stream. Creates the key if it doesn't exist.
+
+### [XLEN](https://redis.io/commands/xlen/)
+
+Returns the number of messages in a stream.
+
+### [XRANGE](https://redis.io/commands/xrange/)
+
+Returns the messages from a stream within a range of IDs.
+
+### [XREAD](https://redis.io/commands/xread/)
+
+Returns messages from multiple streams with IDs greater than the ones requested. Blocks until a message is available otherwise.
+
+### [XREVRANGE](https://redis.io/commands/xrevrange/)
+
+Returns the messages from a stream within a range of IDs in reverse order.
+
+### [XTRIM](https://redis.io/commands/xtrim/)
+
+Deletes messages from the beginning of a stream.
+
+
+### Unsupported stream commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [XACK](https://redis.io/commands/xack/) <small>(not implemented)</small>
+
+Returns the number of messages that were successfully acknowledged by the consumer group member of a stream.
+
+#### [XAUTOCLAIM](https://redis.io/commands/xautoclaim/) <small>(not implemented)</small>
+
+Changes, or acquires, ownership of messages in a consumer group, as if the messages were delivered to a consumer group member.
+
+#### [XCLAIM](https://redis.io/commands/xclaim/) <small>(not implemented)</small>
+
+Changes, or acquires, ownership of a message in a consumer group, as if the message was delivered to a consumer group member.
+
+#### [XDEL](https://redis.io/commands/xdel/) <small>(not implemented)</small>
+
+Returns the number of messages after removing them from a stream.
+
+#### [XGROUP](https://redis.io/commands/xgroup/) <small>(not implemented)</small>
+
+A container for consumer groups commands.
+
+#### [XGROUP CREATE](https://redis.io/commands/xgroup-create/) <small>(not implemented)</small>
+
+Creates a consumer group.
+
+#### [XGROUP CREATECONSUMER](https://redis.io/commands/xgroup-createconsumer/) <small>(not implemented)</small>
+
+Creates a consumer in a consumer group.
+
+#### [XGROUP DELCONSUMER](https://redis.io/commands/xgroup-delconsumer/) <small>(not implemented)</small>
+
+Deletes a consumer from a consumer group.
+
+#### [XGROUP DESTROY](https://redis.io/commands/xgroup-destroy/) <small>(not implemented)</small>
+
+Destroys a consumer group.
+
+#### [XGROUP HELP](https://redis.io/commands/xgroup-help/) <small>(not implemented)</small>
+
+Returns helpful text about the different subcommands.
+
+#### [XGROUP SETID](https://redis.io/commands/xgroup-setid/) <small>(not implemented)</small>
+
+Sets the last-delivered ID of a consumer group.
+
+#### [XINFO](https://redis.io/commands/xinfo/) <small>(not implemented)</small>
+
+A container for stream introspection commands.
+
+#### [XINFO CONSUMERS](https://redis.io/commands/xinfo-consumers/) <small>(not implemented)</small>
+
+Returns a list of the consumers in a consumer group.
+
+#### [XINFO GROUPS](https://redis.io/commands/xinfo-groups/) <small>(not implemented)</small>
+
+Returns a list of the consumer groups of a stream.
+
+#### [XINFO HELP](https://redis.io/commands/xinfo-help/) <small>(not implemented)</small>
+
+Returns helpful text about the different subcommands.
+
+#### [XINFO STREAM](https://redis.io/commands/xinfo-stream/) <small>(not implemented)</small>
+
+Returns information about a stream.
+
+#### [XPENDING](https://redis.io/commands/xpending/) <small>(not implemented)</small>
+
+Returns the information and entries from a stream consumer group's pending entries list.
+
+#### [XREADGROUP](https://redis.io/commands/xreadgroup/) <small>(not implemented)</small>
+
+Returns new or historical messages from a stream for a consumer in a group. Blocks until a message is available otherwise.
+
+#### [XSETID](https://redis.io/commands/xsetid/) <small>(not implemented)</small>
+
+An internal command for replicating stream values.
+
+
diff --git a/docs/redis-commands/RedisBloom.md b/docs/redis-commands/RedisBloom.md
new file mode 100644
index 0000000..f4f627d
--- /dev/null
+++ b/docs/redis-commands/RedisBloom.md
@@ -0,0 +1,225 @@
+# Probabilistic commands
+
+Module currently not implemented in fakeredis.
+
+
+### Unsupported bf commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [BF.RESERVE](https://redis.io/commands/bf.reserve/) <small>(not implemented)</small>
+
+Creates a new Bloom Filter
+
+#### [BF.ADD](https://redis.io/commands/bf.add/) <small>(not implemented)</small>
+
+Adds an item to a Bloom Filter
+
+#### [BF.MADD](https://redis.io/commands/bf.madd/) <small>(not implemented)</small>
+
+Adds one or more items to a Bloom Filter. A filter will be created if it does not exist
+
+#### [BF.INSERT](https://redis.io/commands/bf.insert/) <small>(not implemented)</small>
+
+Adds one or more items to a Bloom Filter. A filter will be created if it does not exist
+
+#### [BF.EXISTS](https://redis.io/commands/bf.exists/) <small>(not implemented)</small>
+
+Checks whether an item exists in a Bloom Filter
+
+#### [BF.MEXISTS](https://redis.io/commands/bf.mexists/) <small>(not implemented)</small>
+
+Checks whether one or more items exist in a Bloom Filter
+
+#### [BF.SCANDUMP](https://redis.io/commands/bf.scandump/) <small>(not implemented)</small>
+
+Begins an incremental save of the bloom filter
+
+#### [BF.LOADCHUNK](https://redis.io/commands/bf.loadchunk/) <small>(not implemented)</small>
+
+Restores a filter previously saved using SCANDUMP
+
+#### [BF.INFO](https://redis.io/commands/bf.info/) <small>(not implemented)</small>
+
+Returns information about a Bloom Filter
+
+#### [BF.CARD](https://redis.io/commands/bf.card/) <small>(not implemented)</small>
+
+Returns the cardinality of a Bloom filter
+
+
+
+### Unsupported cf commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [CF.RESERVE](https://redis.io/commands/cf.reserve/) <small>(not implemented)</small>
+
+Creates a new Cuckoo Filter
+
+#### [CF.ADD](https://redis.io/commands/cf.add/) <small>(not implemented)</small>
+
+Adds an item to a Cuckoo Filter
+
+#### [CF.ADDNX](https://redis.io/commands/cf.addnx/) <small>(not implemented)</small>
+
+Adds an item to a Cuckoo Filter if the item did not exist previously.
+
+#### [CF.INSERT](https://redis.io/commands/cf.insert/) <small>(not implemented)</small>
+
+Adds one or more items to a Cuckoo Filter. A filter will be created if it does not exist
+
+#### [CF.INSERTNX](https://redis.io/commands/cf.insertnx/) <small>(not implemented)</small>
+
+Adds one or more items to a Cuckoo Filter if the items did not exist previously. A filter will be created if it does not exist
+
+#### [CF.EXISTS](https://redis.io/commands/cf.exists/) <small>(not implemented)</small>
+
+Checks whether an item exists in a Cuckoo Filter
+
+#### [CF.MEXISTS](https://redis.io/commands/cf.mexists/) <small>(not implemented)</small>
+
+Checks whether one or more items exist in a Cuckoo Filter
+
+#### [CF.DEL](https://redis.io/commands/cf.del/) <small>(not implemented)</small>
+
+Deletes an item from a Cuckoo Filter
+
+#### [CF.COUNT](https://redis.io/commands/cf.count/) <small>(not implemented)</small>
+
+Return the number of times an item might be in a Cuckoo Filter
+
+#### [CF.SCANDUMP](https://redis.io/commands/cf.scandump/) <small>(not implemented)</small>
+
+Begins an incremental save of the cuckoo filter
+
+#### [CF.LOADCHUNK](https://redis.io/commands/cf.loadchunk/) <small>(not implemented)</small>
+
+Restores a filter previously saved using SCANDUMP
+
+#### [CF.INFO](https://redis.io/commands/cf.info/) <small>(not implemented)</small>
+
+Returns information about a Cuckoo Filter
+
+
+
+### Unsupported cms commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [CMS.INITBYDIM](https://redis.io/commands/cms.initbydim/) <small>(not implemented)</small>
+
+Initializes a Count-Min Sketch to dimensions specified by user
+
+#### [CMS.INITBYPROB](https://redis.io/commands/cms.initbyprob/) <small>(not implemented)</small>
+
+Initializes a Count-Min Sketch to accommodate requested tolerances.
+
+#### [CMS.INCRBY](https://redis.io/commands/cms.incrby/) <small>(not implemented)</small>
+
+Increases the count of one or more items by increment
+
+#### [CMS.QUERY](https://redis.io/commands/cms.query/) <small>(not implemented)</small>
+
+Returns the count for one or more items in a sketch
+
+#### [CMS.MERGE](https://redis.io/commands/cms.merge/) <small>(not implemented)</small>
+
+Merges several sketches into one sketch
+
+#### [CMS.INFO](https://redis.io/commands/cms.info/) <small>(not implemented)</small>
+
+Returns information about a sketch
+
+
+
+### Unsupported topk commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [TOPK.RESERVE](https://redis.io/commands/topk.reserve/) <small>(not implemented)</small>
+
+Initializes a TopK with specified parameters
+
+#### [TOPK.ADD](https://redis.io/commands/topk.add/) <small>(not implemented)</small>
+
+Increases the count of one or more items by increment
+
+#### [TOPK.INCRBY](https://redis.io/commands/topk.incrby/) <small>(not implemented)</small>
+
+Increases the count of one or more items by increment
+
+#### [TOPK.QUERY](https://redis.io/commands/topk.query/) <small>(not implemented)</small>
+
+Checks whether one or more items are in a sketch
+
+#### [TOPK.COUNT](https://redis.io/commands/topk.count/) <small>(not implemented)</small>
+
+Returns the count for one or more items in a sketch
+
+#### [TOPK.LIST](https://redis.io/commands/topk.list/) <small>(not implemented)</small>
+
+Returns the full list of items in the Top K list
+
+#### [TOPK.INFO](https://redis.io/commands/topk.info/) <small>(not implemented)</small>
+
+Returns information about a sketch
+
+
+
+### Unsupported tdigest commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [TDIGEST.CREATE](https://redis.io/commands/tdigest.create/) <small>(not implemented)</small>
+
+Allocates memory and initializes a new t-digest sketch
+
+#### [TDIGEST.RESET](https://redis.io/commands/tdigest.reset/) <small>(not implemented)</small>
+
+Resets a t-digest sketch: empty the sketch and re-initializes it.
+
+#### [TDIGEST.ADD](https://redis.io/commands/tdigest.add/) <small>(not implemented)</small>
+
+Adds one or more observations to a t-digest sketch
+
+#### [TDIGEST.MERGE](https://redis.io/commands/tdigest.merge/) <small>(not implemented)</small>
+
+Merges multiple t-digest sketches into a single sketch
+
+#### [TDIGEST.MIN](https://redis.io/commands/tdigest.min/) <small>(not implemented)</small>
+
+Returns the minimum observation value from a t-digest sketch
+
+#### [TDIGEST.MAX](https://redis.io/commands/tdigest.max/) <small>(not implemented)</small>
+
+Returns the maximum observation value from a t-digest sketch
+
+#### [TDIGEST.QUANTILE](https://redis.io/commands/tdigest.quantile/) <small>(not implemented)</small>
+
+Returns, for each input fraction, an estimation of the value (floating point) that is smaller than the given fraction of observations
+
+#### [TDIGEST.CDF](https://redis.io/commands/tdigest.cdf/) <small>(not implemented)</small>
+
+Returns, for each input value, an estimation of the fraction (floating-point) of (observations smaller than the given value + half the observations equal to the given value)
+
+#### [TDIGEST.TRIMMED_MEAN](https://redis.io/commands/tdigest.trimmed_mean/) <small>(not implemented)</small>
+
+Returns an estimation of the mean value from the sketch, excluding observation values outside the low and high cutoff quantiles
+
+#### [TDIGEST.RANK](https://redis.io/commands/tdigest.rank/) <small>(not implemented)</small>
+
+Returns, for each input value (floating-point), the estimated rank of the value (the number of observations in the sketch that are smaller than the value + half the number of observations that are equal to the value)
+
+#### [TDIGEST.REVRANK](https://redis.io/commands/tdigest.revrank/) <small>(not implemented)</small>
+
+Returns, for each input value (floating-point), the estimated reverse rank of the value (the number of observations in the sketch that are larger than the value + half the number of observations that are equal to the value)
+
+#### [TDIGEST.BYRANK](https://redis.io/commands/tdigest.byrank/) <small>(not implemented)</small>
+
+Returns, for each input rank, an estimation of the value (floating-point) with that rank
+
+#### [TDIGEST.BYREVRANK](https://redis.io/commands/tdigest.byrevrank/) <small>(not implemented)</small>
+
+Returns, for each input reverse rank, an estimation of the value (floating-point) with that reverse rank
+
+#### [TDIGEST.INFO](https://redis.io/commands/tdigest.info/) <small>(not implemented)</small>
+
+Returns information and statistics about a t-digest sketch
+
+
diff --git a/docs/redis-commands/RedisGraph.md b/docs/redis-commands/RedisGraph.md
new file mode 100644
index 0000000..0c72644
--- /dev/null
+++ b/docs/redis-commands/RedisGraph.md
@@ -0,0 +1,53 @@
+# Graph commands
+
+Module currently not implemented in fakeredis.
+
+
+### Unsupported graph commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [GRAPH.QUERY](https://redis.io/commands/graph.query/) <small>(not implemented)</small>
+
+Executes the given query against a specified graph
+
+#### [GRAPH.RO_QUERY](https://redis.io/commands/graph.ro_query/) <small>(not implemented)</small>
+
+Executes a given read only query against a specified graph
+
+#### [GRAPH.DELETE](https://redis.io/commands/graph.delete/) <small>(not implemented)</small>
+
+Completely removes the graph and all of its entities
+
+#### [GRAPH.EXPLAIN](https://redis.io/commands/graph.explain/) <small>(not implemented)</small>
+
+Returns a query execution plan without running the query
+
+#### [GRAPH.PROFILE](https://redis.io/commands/graph.profile/) <small>(not implemented)</small>
+
+Executes a query and returns an execution plan augmented with metrics for each operation's execution
+
+#### [GRAPH.SLOWLOG](https://redis.io/commands/graph.slowlog/) <small>(not implemented)</small>
+
+Returns a list containing up to 10 of the slowest queries issued against the given graph
+
+#### [GRAPH.CONFIG GET](https://redis.io/commands/graph.config-get/) <small>(not implemented)</small>
+
+Retrieves a RedisGraph configuration
+
+#### [GRAPH.CONFIG SET](https://redis.io/commands/graph.config-set/) <small>(not implemented)</small>
+
+Updates a RedisGraph configuration
+
+#### [GRAPH.LIST](https://redis.io/commands/graph.list/) <small>(not implemented)</small>
+
+Lists all graph keys in the keyspace
+
+#### [GRAPH.CONSTRAINT DROP](https://redis.io/commands/graph.constraint-drop/) <small>(not implemented)</small>
+
+Deletes a constraint from specified graph
+
+#### [GRAPH.CONSTRAINT CREATE](https://redis.io/commands/graph.constraint-create/) <small>(not implemented)</small>
+
+Creates a constraint on specified graph
+
+
diff --git a/docs/redis-commands/RedisJson.md b/docs/redis-commands/RedisJson.md
new file mode 100644
index 0000000..3e60944
--- /dev/null
+++ b/docs/redis-commands/RedisJson.md
@@ -0,0 +1,105 @@
+# JSON commands
+
+## json commands
+
+### [JSON.DEL](https://redis.io/commands/json.del/)
+
+Deletes a value
+
+### [JSON.FORGET](https://redis.io/commands/json.forget/)
+
+Deletes a value
+
+### [JSON.GET](https://redis.io/commands/json.get/)
+
+Gets the value at one or more paths in JSON serialized form
+
+### [JSON.TOGGLE](https://redis.io/commands/json.toggle/)
+
+Toggles a boolean value
+
+### [JSON.CLEAR](https://redis.io/commands/json.clear/)
+
+Clears all values from an array or an object and sets numeric values to `0`
+
+### [JSON.SET](https://redis.io/commands/json.set/)
+
+Sets or updates the JSON value at a path
+
+### [JSON.MGET](https://redis.io/commands/json.mget/)
+
+Returns the values at a path from one or more keys
+
+### [JSON.NUMINCRBY](https://redis.io/commands/json.numincrby/)
+
+Increments the numeric value at path by a value
+
+### [JSON.NUMMULTBY](https://redis.io/commands/json.nummultby/)
+
+Multiplies the numeric value at path by a value
+
+### [JSON.STRAPPEND](https://redis.io/commands/json.strappend/)
+
+Appends a string to a JSON string value at path
+
+### [JSON.STRLEN](https://redis.io/commands/json.strlen/)
+
+Returns the length of the JSON String at path in key
+
+### [JSON.ARRAPPEND](https://redis.io/commands/json.arrappend/)
+
+Append one or more json values into the array at path after the last element in it.
+
+### [JSON.ARRINDEX](https://redis.io/commands/json.arrindex/)
+
+Returns the index of the first occurrence of a JSON scalar value in the array at path
+
+### [JSON.ARRINSERT](https://redis.io/commands/json.arrinsert/)
+
+Inserts the JSON scalar(s) value at the specified index in the array at path
+
+### [JSON.ARRLEN](https://redis.io/commands/json.arrlen/)
+
+Returns the length of the array at path
+
+### [JSON.ARRPOP](https://redis.io/commands/json.arrpop/)
+
+Removes and returns the element at the specified index in the array at path
+
+### [JSON.ARRTRIM](https://redis.io/commands/json.arrtrim/)
+
+Trims the array at path to contain only the specified inclusive range of indices from start to stop
+
+### [JSON.OBJKEYS](https://redis.io/commands/json.objkeys/)
+
+Returns the JSON keys of the object at path
+
+### [JSON.OBJLEN](https://redis.io/commands/json.objlen/)
+
+Returns the number of keys of the object at path
+
+### [JSON.TYPE](https://redis.io/commands/json.type/)
+
+Returns the type of the JSON value at path
+
+
+### Unsupported json commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [JSON.RESP](https://redis.io/commands/json.resp/) <small>(not implemented)</small>
+
+Returns the JSON value at path in Redis Serialization Protocol (RESP)
+
+#### [JSON.DEBUG](https://redis.io/commands/json.debug/) <small>(not implemented)</small>
+
+Debugging container command
+
+#### [JSON.DEBUG HELP](https://redis.io/commands/json.debug-help/) <small>(not implemented)</small>
+
+Shows helpful information
+
+#### [JSON.DEBUG MEMORY](https://redis.io/commands/json.debug-memory/) <small>(not implemented)</small>
+
+Reports the size in bytes of a key
+
+
diff --git a/docs/redis-commands/RedisSearch.md b/docs/redis-commands/RedisSearch.md
new file mode 100644
index 0000000..2d407cc
--- /dev/null
+++ b/docs/redis-commands/RedisSearch.md
@@ -0,0 +1,130 @@
+# Search commands
+
+Module currently not implemented in fakeredis.
+
+
+### Unsupported search commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [FT.CREATE](https://redis.io/commands/ft.create/) <small>(not implemented)</small>
+
+Creates an index with the given spec
+
+#### [FT.INFO](https://redis.io/commands/ft.info/) <small>(not implemented)</small>
+
+Returns information and statistics on the index
+
+#### [FT.EXPLAIN](https://redis.io/commands/ft.explain/) <small>(not implemented)</small>
+
+Returns the execution plan for a complex query
+
+#### [FT.EXPLAINCLI](https://redis.io/commands/ft.explaincli/) <small>(not implemented)</small>
+
+Returns the execution plan for a complex query
+
+#### [FT.ALTER](https://redis.io/commands/ft.alter/) <small>(not implemented)</small>
+
+Adds a new field to the index
+
+#### [FT.DROPINDEX](https://redis.io/commands/ft.dropindex/) <small>(not implemented)</small>
+
+Deletes the index
+
+#### [FT.ALIASADD](https://redis.io/commands/ft.aliasadd/) <small>(not implemented)</small>
+
+Adds an alias to the index
+
+#### [FT.ALIASUPDATE](https://redis.io/commands/ft.aliasupdate/) <small>(not implemented)</small>
+
+Adds or updates an alias to the index
+
+#### [FT.ALIASDEL](https://redis.io/commands/ft.aliasdel/) <small>(not implemented)</small>
+
+Deletes an alias from the index
+
+#### [FT.TAGVALS](https://redis.io/commands/ft.tagvals/) <small>(not implemented)</small>
+
+Returns the distinct tags indexed in a Tag field
+
+#### [FT.SYNUPDATE](https://redis.io/commands/ft.synupdate/) <small>(not implemented)</small>
+
+Creates or updates a synonym group with additional terms
+
+#### [FT.SYNDUMP](https://redis.io/commands/ft.syndump/) <small>(not implemented)</small>
+
+Dumps the contents of a synonym group
+
+#### [FT.SPELLCHECK](https://redis.io/commands/ft.spellcheck/) <small>(not implemented)</small>
+
+Performs spelling correction on a query, returning suggestions for misspelled terms
+
+#### [FT.DICTADD](https://redis.io/commands/ft.dictadd/) <small>(not implemented)</small>
+
+Adds terms to a dictionary
+
+#### [FT.DICTDEL](https://redis.io/commands/ft.dictdel/) <small>(not implemented)</small>
+
+Deletes terms from a dictionary
+
+#### [FT.DICTDUMP](https://redis.io/commands/ft.dictdump/) <small>(not implemented)</small>
+
+Dumps all terms in the given dictionary
+
+#### [FT._LIST](https://redis.io/commands/ft._list/) <small>(not implemented)</small>
+
+Returns a list of all existing indexes
+
+#### [FT.CONFIG SET](https://redis.io/commands/ft.config-set/) <small>(not implemented)</small>
+
+Sets runtime configuration options
+
+#### [FT.CONFIG GET](https://redis.io/commands/ft.config-get/) <small>(not implemented)</small>
+
+Retrieves runtime configuration options
+
+#### [FT.CONFIG HELP](https://redis.io/commands/ft.config-help/) <small>(not implemented)</small>
+
+Help description of runtime configuration options
+
+#### [FT.SEARCH](https://redis.io/commands/ft.search/) <small>(not implemented)</small>
+
+Searches the index with a textual query, returning either documents or just ids
+
+#### [FT.AGGREGATE](https://redis.io/commands/ft.aggregate/) <small>(not implemented)</small>
+
+Run a search query on an index and perform aggregate transformations on the results
+
+#### [FT.PROFILE](https://redis.io/commands/ft.profile/) <small>(not implemented)</small>
+
+Performs a `FT.SEARCH` or `FT.AGGREGATE` command and collects performance information
+
+#### [FT.CURSOR READ](https://redis.io/commands/ft.cursor-read/) <small>(not implemented)</small>
+
+Reads from a cursor
+
+#### [FT.CURSOR DEL](https://redis.io/commands/ft.cursor-del/) <small>(not implemented)</small>
+
+Deletes a cursor
+
+
+
+### Unsupported suggestion commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [FT.SUGADD](https://redis.io/commands/ft.sugadd/) <small>(not implemented)</small>
+
+Adds a suggestion string to an auto-complete suggestion dictionary
+
+#### [FT.SUGGET](https://redis.io/commands/ft.sugget/) <small>(not implemented)</small>
+
+Gets completion suggestions for a prefix
+
+#### [FT.SUGDEL](https://redis.io/commands/ft.sugdel/) <small>(not implemented)</small>
+
+Deletes a string from a suggestion index
+
+#### [FT.SUGLEN](https://redis.io/commands/ft.suglen/) <small>(not implemented)</small>
+
+Gets the size of an auto-complete suggestion dictionary
+
+
diff --git a/docs/redis-commands/RedisTimeSeries.md b/docs/redis-commands/RedisTimeSeries.md
new file mode 100644
index 0000000..3ac13f9
--- /dev/null
+++ b/docs/redis-commands/RedisTimeSeries.md
@@ -0,0 +1,77 @@
+# Time Series commands
+
+Module currently not implemented in fakeredis.
+
+
+### Unsupported timeseries commands 
+> To implement support for a command, see [here](/guides/implement-command/) 
+
+#### [TS.CREATE](https://redis.io/commands/ts.create/) <small>(not implemented)</small>
+
+Create a new time series
+
+#### [TS.DEL](https://redis.io/commands/ts.del/) <small>(not implemented)</small>
+
+Delete all samples between two timestamps for a given time series
+
+#### [TS.ALTER](https://redis.io/commands/ts.alter/) <small>(not implemented)</small>
+
+Update the retention, chunk size, duplicate policy, and labels of an existing time series
+
+#### [TS.ADD](https://redis.io/commands/ts.add/) <small>(not implemented)</small>
+
+Append a sample to a time series
+
+#### [TS.MADD](https://redis.io/commands/ts.madd/) <small>(not implemented)</small>
+
+Append new samples to one or more time series
+
+#### [TS.INCRBY](https://redis.io/commands/ts.incrby/) <small>(not implemented)</small>
+
+Increase the value of the sample with the maximal existing timestamp, or create a new sample with a value equal to the value of the sample with the maximal existing timestamp with a given increment
+
+#### [TS.DECRBY](https://redis.io/commands/ts.decrby/) <small>(not implemented)</small>
+
+Decrease the value of the sample with the maximal existing timestamp, or create a new sample with a value equal to the value of the sample with the maximal existing timestamp with a given decrement
+
+#### [TS.CREATERULE](https://redis.io/commands/ts.createrule/) <small>(not implemented)</small>
+
+Create a compaction rule
+
+#### [TS.DELETERULE](https://redis.io/commands/ts.deleterule/) <small>(not implemented)</small>
+
+Delete a compaction rule
+
+#### [TS.RANGE](https://redis.io/commands/ts.range/) <small>(not implemented)</small>
+
+Query a range in forward direction
+
+#### [TS.REVRANGE](https://redis.io/commands/ts.revrange/) <small>(not implemented)</small>
+
+Query a range in reverse direction
+
+#### [TS.MRANGE](https://redis.io/commands/ts.mrange/) <small>(not implemented)</small>
+
+Query a range across multiple time series by filters in forward direction
+
+#### [TS.MREVRANGE](https://redis.io/commands/ts.mrevrange/) <small>(not implemented)</small>
+
+Query a range across multiple time series by filters in reverse direction
+
+#### [TS.GET](https://redis.io/commands/ts.get/) <small>(not implemented)</small>
+
+Get the sample with the highest timestamp from a given time series
+
+#### [TS.MGET](https://redis.io/commands/ts.mget/) <small>(not implemented)</small>
+
+Get the sample with the highest timestamp from each time series matching a specific filter
+
+#### [TS.INFO](https://redis.io/commands/ts.info/) <small>(not implemented)</small>
+
+Returns information and statistics for a time series
+
+#### [TS.QUERYINDEX](https://redis.io/commands/ts.queryindex/) <small>(not implemented)</small>
+
+Get all time series keys matching a filter list
+
+
diff --git a/docs/redis-stack.md b/docs/redis-stack.md
new file mode 100644
index 0000000..e9f2154
--- /dev/null
+++ b/docs/redis-stack.md
@@ -0,0 +1,24 @@
+# Support for redis-stack
+
+## RedisJson support
+
+Currently, the Redis JSON module is partially implemented
+(see [supported commands](./redis-commands/implemented_commands.md#json-commands)).
+Support for JSON commands (e.g., [`JSON.GET`](https://redis.io/commands/json.get/)) is implemented using
+[jsonpath-ng](https://github.com/h2non/jsonpath-ng); you can install it with `pip install 'fakeredis[json]'`.
+
+```pycon
+>>> import fakeredis
+>>> from redis.commands.json.path import Path
+>>> r = fakeredis.FakeStrictRedis()
+>>> assert r.json().set("foo", Path.root_path(), {"x": "bar"}, ) == 1
+>>> r.json().get("foo")
+{'x': 'bar'}
+>>> r.json().get("foo", Path("x"))
+'bar'
+```
+
+## Lua support
+
+If you wish to have Lua scripting support (this includes features like ``redis.lock.Lock``, which are implemented in
+Lua), you will need [lupa](https://pypi.org/project/lupa/); you can install it with `pip install 'fakeredis[lua]'`.
diff --git a/docs/requirements.txt b/docs/requirements.txt
new file mode 100644
index 0000000..bf8d93f
--- /dev/null
+++ b/docs/requirements.txt
@@ -0,0 +1,2 @@
+mkdocs==1.4.3
+mkdocs-material==9.1.11
diff --git a/fakeredis/__init__.py b/fakeredis/__init__.py
index 5502f4c..ec48efe 100644
--- a/fakeredis/__init__.py
+++ b/fakeredis/__init__.py
@@ -1,7 +1,10 @@
-from ._server import FakeServer, FakeRedis, FakeStrictRedis, FakeConnection, FakeRedisConnSingleton  # noqa: F401
+from ._server import FakeServer, FakeRedis, FakeStrictRedis, FakeConnection, FakeRedisConnSingleton
 
 try:
     from importlib import metadata
 except ImportError:  # for Python<3.8
-    import importlib_metadata as metadata
+    import importlib_metadata as metadata  # type: ignore
 __version__ = metadata.version("fakeredis")
+
+
+__all__ = ["FakeServer", "FakeRedis", "FakeStrictRedis", "FakeConnection", "FakeRedisConnSingleton"]
diff --git a/fakeredis/_aioredis1.py b/fakeredis/_aioredis1.py
deleted file mode 100644
index 7679f2e..0000000
--- a/fakeredis/_aioredis1.py
+++ /dev/null
@@ -1,181 +0,0 @@
-import asyncio
-import sys
-import warnings
-
-import aioredis
-
-from . import _async, _server
-
-
-class FakeSocket(_async.AsyncFakeSocket):
-    def _decode_error(self, error):
-        return aioredis.ReplyError(error.value)
-
-
-class FakeReader:
-    """Re-implementation of aioredis.stream.StreamReader.
-
-    It does not use a socket, but instead provides a queue that feeds
-    `readobj`.
-    """
-
-    def __init__(self, socket):
-        self._socket = socket
-
-    def set_parser(self, parser):
-        pass       # No parser needed, we get already-parsed data
-
-    async def readobj(self):
-        if self._socket.responses is None:
-            raise asyncio.CancelledError
-        result = await self._socket.responses.get()
-        return result
-
-    def at_eof(self):
-        return self._socket.responses is None
-
-    def feed_obj(self, obj):
-        self._queue.put_nowait(obj)
-
-
-class FakeWriter:
-    """Replaces a StreamWriter for an aioredis connection."""
-
-    def __init__(self, socket):
-        self.transport = socket       # So that aioredis can call writer.transport.close()
-
-    def write(self, data):
-        self.transport.sendall(data)
-
-
-class FakeConnectionsPool(aioredis.ConnectionsPool):
-    def __init__(self, server=None, db=None, password=None, encoding=None,
-                 *, minsize, maxsize, ssl=None, parser=None,
-                 create_connection_timeout=None,
-                 connection_cls=None,
-                 loop=None):
-        super().__init__('fakeredis',
-                         db=db,
-                         password=password,
-                         encoding=encoding,
-                         minsize=minsize,
-                         maxsize=maxsize,
-                         ssl=ssl,
-                         parser=parser,
-                         create_connection_timeout=create_connection_timeout,
-                         connection_cls=connection_cls,
-                         loop=loop)
-        if server is None:
-            server = _server.FakeServer()
-        self._server = server
-
-    def _create_new_connection(self, address):
-        # TODO: what does address do here? Might just be for sentinel?
-        return create_connection(self._server,
-                                 db=self._db,
-                                 password=self._password,
-                                 ssl=self._ssl,
-                                 encoding=self._encoding,
-                                 parser=self._parser_class,
-                                 timeout=self._create_connection_timeout,
-                                 connection_cls=self._connection_cls,
-                                 )
-
-
-async def create_connection(server=None, *, db=None, password=None, ssl=None,
-                            encoding=None, parser=None, loop=None,
-                            timeout=None, connection_cls=None):
-    # This is mostly copied from aioredis.connection.create_connection
-    if timeout is not None and timeout <= 0:
-        raise ValueError("Timeout has to be None or a number greater than 0")
-
-    if connection_cls:
-        assert issubclass(connection_cls, aioredis.abc.AbcConnection),\
-                "connection_class does not meet the AbcConnection contract"
-        cls = connection_cls
-    else:
-        cls = aioredis.connection.RedisConnection
-
-    if loop is not None and sys.version_info >= (3, 8, 0):
-        warnings.warn("The loop argument is deprecated",
-                      DeprecationWarning)
-
-    if server is None:
-        server = _server.FakeServer()
-    socket = FakeSocket(server)
-    reader = FakeReader(socket)
-    writer = FakeWriter(socket)
-    conn = cls(reader, writer, encoding=encoding,
-               address='fakeredis', parser=parser)
-
-    try:
-        if password is not None:
-            await conn.auth(password)
-        if db is not None:
-            await conn.select(db)
-    except Exception:
-        conn.close()
-        await conn.wait_closed()
-        raise
-    return conn
-
-
-async def create_redis(server=None, *, db=None, password=None, ssl=None,
-                       encoding=None, commands_factory=aioredis.Redis,
-                       parser=None, timeout=None,
-                       connection_cls=None, loop=None):
-    conn = await create_connection(server, db=db,
-                                   password=password,
-                                   ssl=ssl,
-                                   encoding=encoding,
-                                   parser=parser,
-                                   timeout=timeout,
-                                   connection_cls=connection_cls,
-                                   loop=loop)
-    return commands_factory(conn)
-
-
-async def create_pool(server=None, *, db=None, password=None, ssl=None,
-                      encoding=None, minsize=1, maxsize=10,
-                      parser=None, loop=None, create_connection_timeout=None,
-                      pool_cls=None, connection_cls=None):
-    # Mostly copied from aioredis.pool.create_pool.
-    if pool_cls:
-        assert issubclass(pool_cls, aioredis.AbcPool),\
-                "pool_class does not meet the AbcPool contract"
-        cls = pool_cls
-    else:
-        cls = FakeConnectionsPool
-
-    pool = cls(server, db, password, encoding,
-               minsize=minsize, maxsize=maxsize,
-               ssl=ssl, parser=parser,
-               create_connection_timeout=create_connection_timeout,
-               connection_cls=connection_cls,
-               loop=loop)
-    try:
-        await pool._fill_free(override_min=False)
-    except Exception:
-        pool.close()
-        await pool.wait_closed()
-        raise
-    return pool
-
-
-async def create_redis_pool(server=None, *, db=None, password=None, ssl=None,
-                            encoding=None, commands_factory=aioredis.Redis,
-                            minsize=1, maxsize=10, parser=None,
-                            timeout=None, pool_cls=None,
-                            connection_cls=None, loop=None):
-    pool = await create_pool(server, db=db,
-                             password=password,
-                             ssl=ssl,
-                             encoding=encoding,
-                             minsize=minsize,
-                             maxsize=maxsize,
-                             parser=parser,
-                             create_connection_timeout=timeout,
-                             pool_cls=pool_cls,
-                             connection_cls=connection_cls,
-                             loop=loop)
-    return commands_factory(pool)
diff --git a/fakeredis/_aioredis2.py b/fakeredis/_aioredis2.py
deleted file mode 100644
index c0f3b91..0000000
--- a/fakeredis/_aioredis2.py
+++ /dev/null
@@ -1,173 +0,0 @@
-import asyncio
-from typing import Union
-
-try:
-    from redis import asyncio as aioredis
-except ImportError:
-    import aioredis
-
-from . import _async, _server
-
-
-class FakeSocket(_async.AsyncFakeSocket):
-    _connection_error_class = aioredis.ConnectionError
-
-    def _decode_error(self, error):
-        return aioredis.connection.BaseParser(1).parse_error(error.value)
-
-
-class FakeReader:
-    pass
-
-
-class FakeWriter:
-    def __init__(self, socket: FakeSocket) -> None:
-        self._socket = socket
-
-    def close(self):
-        self._socket = None
-
-    async def wait_closed(self):
-        pass
-
-    async def drain(self):
-        pass
-
-    def writelines(self, data):
-        for chunk in data:
-            self._socket.sendall(chunk)
-
-
-class FakeConnection(aioredis.Connection):
-    def __init__(self, *args, **kwargs):
-        self._server = kwargs.pop('server')
-        self._sock = None
-        super().__init__(*args, **kwargs)
-
-    async def _connect(self):
-        if not self._server.connected:
-            raise aioredis.ConnectionError(_server.CONNECTION_ERROR_MSG)
-        self._sock = FakeSocket(self._server)
-        self._reader = FakeReader()
-        self._writer = FakeWriter(self._sock)
-
-    async def disconnect(self):
-        await super().disconnect()
-        self._sock = None
-
-    async def can_read(self, timeout: float = 0):
-        if not self.is_connected:
-            await self.connect()
-        if timeout == 0:
-            return not self._sock.responses.empty()
-        # asyncio.Queue doesn't have a way to wait for the queue to be
-        # non-empty without consuming an item, so kludge it with a sleep/poll
-        # loop.
-        loop = asyncio.get_event_loop()
-        start = loop.time()
-        while True:
-            if not self._sock.responses.empty():
-                return True
-            await asyncio.sleep(0.01)
-            now = loop.time()
-            if timeout is not None and now > start + timeout:
-                return False
-
-    def _decode(self, response):
-        if isinstance(response, list):
-            return [self._decode(item) for item in response]
-        elif isinstance(response, bytes):
-            return self.encoder.decode(response, )
-        else:
-            return response
-
-    async def read_response(self):
-        if not self._server.connected:
-            try:
-                response = self._sock.responses.get_nowait()
-            except asyncio.QueueEmpty:
-                raise aioredis.ConnectionError(_server.CONNECTION_ERROR_MSG)
-        else:
-            response = await self._sock.responses.get()
-        if isinstance(response, aioredis.ResponseError):
-            raise response
-        return self._decode(response)
-
-    def repr_pieces(self):
-        pieces = [
-            ('server', self._server),
-            ('db', self.db)
-        ]
-        if self.client_name:
-            pieces.append(('client_name', self.client_name))
-        return pieces
-
-
-class FakeRedis(aioredis.Redis):
-    def __init__(
-            self,
-            *,
-            db: Union[str, int] = 0,
-            password: str = None,
-            socket_timeout: float = None,
-            connection_pool: aioredis.ConnectionPool = None,
-            encoding: str = "utf-8",
-            encoding_errors: str = "strict",
-            decode_responses: bool = False,
-            retry_on_timeout: bool = False,
-            max_connections: int = None,
-            health_check_interval: int = 0,
-            client_name: str = None,
-            username: str = None,
-            server: _server.FakeServer = None,
-            connected: bool = True,
-            **kwargs
-    ):
-        if not connection_pool:
-            # Adapted from aioredis
-            if server is None:
-                server = _server.FakeServer()
-                server.connected = connected
-            connection_kwargs = {
-                "db": db,
-                "username": username,
-                "password": password,
-                "socket_timeout": socket_timeout,
-                "encoding": encoding,
-                "encoding_errors": encoding_errors,
-                "decode_responses": decode_responses,
-                "retry_on_timeout": retry_on_timeout,
-                "max_connections": max_connections,
-                "health_check_interval": health_check_interval,
-                "client_name": client_name,
-                "server": server,
-                "connection_class": FakeConnection
-            }
-            connection_pool = aioredis.ConnectionPool(**connection_kwargs)
-        super().__init__(
-            db=db,
-            password=password,
-            socket_timeout=socket_timeout,
-            connection_pool=connection_pool,
-            encoding=encoding,
-            encoding_errors=encoding_errors,
-            decode_responses=decode_responses,
-            retry_on_timeout=retry_on_timeout,
-            max_connections=max_connections,
-            health_check_interval=health_check_interval,
-            client_name=client_name,
-            username=username,
-            **kwargs
-        )
-
-    @classmethod
-    def from_url(cls, url: str, **kwargs):
-        server = kwargs.pop('server', None)
-        if server is None:
-            server = _server.FakeServer()
-        self = super().from_url(url, **kwargs)
-        # Now override how it creates connections
-        pool = self.connection_pool
-        pool.connection_class = FakeConnection
-        pool.connection_kwargs['server'] = server
-        return self
diff --git a/fakeredis/_async.py b/fakeredis/_async.py
deleted file mode 100644
index 93484b5..0000000
--- a/fakeredis/_async.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import asyncio
-
-import async_timeout
-
-from . import _helpers
-from . import _fakesocket
-
-
-class AsyncFakeSocket(_fakesocket.FakeSocket):
-    def __init__(self, *args, **kwargs):
-        super().__init__(*args, **kwargs)
-        self.responses = asyncio.Queue()
-
-    def put_response(self, msg):
-        self.responses.put_nowait(msg)
-
-    async def _async_blocking(self, timeout, func, event, callback):
-        try:
-            result = None
-            async with async_timeout.timeout(timeout if timeout else None):
-                while True:
-                    await event.wait()
-                    event.clear()
-                    # This is a coroutine outside the normal control flow that
-                    # locks the server, so we have to take our own lock.
-                    with self._server.lock:
-                        ret = func(False)
-                        if ret is not None:
-                            result = self._decode_result(ret)
-                            self.put_response(result)
-                            break
-        except asyncio.TimeoutError:
-            result = None
-        finally:
-            with self._server.lock:
-                self._db.remove_change_callback(callback)
-            self.put_response(result)
-            self.resume()
-
-    def _blocking(self, timeout, func):
-        loop = asyncio.get_event_loop()
-        ret = func(True)
-        if ret is not None or self._in_transaction:
-            return ret
-        event = asyncio.Event()
-
-        def callback():
-            loop.call_soon_threadsafe(event.set)
-        self._db.add_change_callback(callback)
-        self.pause()
-        loop.create_task(self._async_blocking(timeout, func, event, callback))
-        return _helpers.NoResponse()
diff --git a/fakeredis/_basefakesocket.py b/fakeredis/_basefakesocket.py
new file mode 100644
index 0000000..ae2c406
--- /dev/null
+++ b/fakeredis/_basefakesocket.py
@@ -0,0 +1,346 @@
+import itertools
+import queue
+import time
+import weakref
+from typing import List, Any, Tuple
+
+import redis
+
+if redis.VERSION >= (5, 0):
+    from redis.parsers import BaseParser
+else:
+    from redis.connection import BaseParser
+
+from . import _msgs as msgs
+from ._command_args_parsing import extract_args
+from ._commands import (
+    Int, Float, SUPPORTED_COMMANDS, COMMANDS_WITH_SUB, key_value_type)
+from ._helpers import (
+    SimpleError, valid_response_type, SimpleString, NoResponse, casematch,
+    compile_pattern, QUEUED, encode_command)
+
+
+def _extract_command(fields) -> Tuple[Any, List[Any]]:
+    cmd = encode_command(fields[0])
+    if cmd in COMMANDS_WITH_SUB and len(fields) >= 2:
+        cmd += ' ' + encode_command(fields[1])
+        cmd_arguments = fields[2:]
+    else:
+        cmd_arguments = fields[1:]
+    return cmd, cmd_arguments
+
+
+def bin_reverse(x, bits_count):
+    result = 0
+    for i in range(bits_count):
+        if (x >> i) & 1:
+            result |= 1 << (bits_count - 1 - i)
+    return result
+
+
+class BaseFakeSocket:
+    _connection_error_class = redis.ConnectionError
+
+    def __init__(self, server, db, *args, **kwargs):
+        super(BaseFakeSocket, self).__init__(*args, **kwargs)
+        self._server = server
+        self._db_num = db
+        self._db = server.dbs[self._db_num]
+        self.responses = queue.Queue()
+        # Prevents parser from processing commands. Not used in this module,
+        # but set by aioredis module to prevent new commands being processed
+        # while handling a blocking command.
+        self._paused = False
+        self._parser = self._parse_commands()
+        self._parser.send(None)
+        self.version = server.version
+
+    def put_response(self, msg):
+        # redis.Connection.__del__ might call self.close at any time, which
+        # will set self.responses to None. We assume this will happen
+        # atomically, and the code below then protects us against this.
+        responses = self.responses
+        if responses:
+            responses.put(msg)
+
+    def pause(self):
+        self._paused = True
+
+    def resume(self):
+        self._paused = False
+        self._parser.send(b'')
+
+    def shutdown(self, flags):
+        self._parser.close()
+
+    @staticmethod
+    def fileno():
+        # Our fake socket must return an integer from `FakeSocket.fileno()` since a real selector
+        # will be created. The value does not matter since we replace the selector with our own
+        # `FakeSelector` before it is ever used.
+        return 0
+
+    def _cleanup(self, server):
+        """Remove all the references to `self` from `server`.
+
+        This is called with the server lock held, but it may be some time after
+        self.close.
+        """
+        for subs in server.subscribers.values():
+            subs.discard(self)
+        for subs in server.psubscribers.values():
+            subs.discard(self)
+        self._clear_watches()
+
+    def close(self):
+        # Mark ourselves for cleanup. This might be called from
+        # redis.Connection.__del__, which the garbage collection could call
+        # at any time, and hence we can't safely take the server lock.
+        # We rely on list.append being atomic.
+        self._server.closed_sockets.append(weakref.ref(self))
+        self._server = None
+        self._db = None
+        self.responses = None
+
+    @staticmethod
+    def _extract_line(buf):
+        pos = buf.find(b'\n') + 1
+        assert pos > 0
+        line = buf[:pos]
+        buf = buf[pos:]
+        assert line.endswith(b'\r\n')
+        return line, buf
+
+    def _parse_commands(self):
+        """Generator that parses commands.
+
+        It is fed pieces of redis protocol data (via `send`) and calls
+        `_process_command` whenever it has a complete one.
+        """
+        buf = b''
+        while True:
+            while self._paused or b'\n' not in buf:
+                buf += yield
+            line, buf = self._extract_line(buf)
+            assert line[:1] == b'*'  # array
+            n_fields = int(line[1:-2])
+            fields = []
+            for i in range(n_fields):
+                while b'\n' not in buf:
+                    buf += yield
+                line, buf = self._extract_line(buf)
+                assert line[:1] == b'$'  # string
+                length = int(line[1:-2])
+                while len(buf) < length + 2:
+                    buf += yield
+                fields.append(buf[:length])
+                buf = buf[length + 2:]  # +2 to skip the CRLF
+            self._process_command(fields)
+
+    def _run_command(self, func, sig, args, from_script):
+        command_items = {}
+        try:
+            ret = sig.apply(args, self._db, self.version)
+            if len(ret) == 1:
+                result = ret[0]
+            else:
+                args, command_items = ret
+                if from_script and msgs.FLAG_NO_SCRIPT in sig.flags:
+                    raise SimpleError(msgs.COMMAND_IN_SCRIPT_MSG)
+                if self._pubsub and sig.name not in [
+                    'ping',
+                    'subscribe',
+                    'unsubscribe',
+                    'psubscribe',
+                    'punsubscribe',
+                    'quit'
+                ]:
+                    raise SimpleError(msgs.BAD_COMMAND_IN_PUBSUB_MSG)
+                result = func(*args)
+                assert valid_response_type(result)
+        except SimpleError as exc:
+            result = exc
+        for command_item in command_items:
+            command_item.writeback(remove_empty_val=msgs.FLAG_LEAVE_EMPTY_VAL not in sig.flags)
+        return result
+
+    def _decode_error(self, error):
+        return BaseParser().parse_error(error.value)
+
+    def _decode_result(self, result):
+        """Convert SimpleString and SimpleError, recursively"""
+        if isinstance(result, list):
+            return [self._decode_result(r) for r in result]
+        elif isinstance(result, SimpleString):
+            return result.value
+        elif isinstance(result, SimpleError):
+            return self._decode_error(result)
+        else:
+            return result
+
+    def _blocking(self, timeout, func):
+        """Run a function until it succeeds or timeout is reached.
+
+        The timeout must be an integer, and 0 means infinite. The function
+        is called with a boolean to indicate whether this is the first call.
+        If it returns None it is considered to have "failed" and is retried
+        each time the condition variable is notified, until the timeout is
+        reached.
+
+        Returns the function return value, or None if the timeout was reached.
+        """
+        ret = func(True)
+        if ret is not None or self._in_transaction:
+            return ret
+        if timeout:
+            deadline = time.time() + timeout
+        else:
+            deadline = None
+        while True:
+            timeout = deadline - time.time() if deadline is not None else None
+            if timeout is not None and timeout <= 0:
+                return None
+            # Python <3.2 doesn't return a status from wait. On Python 3.2+
+            # we bail out early on False.
+            if self._db.condition.wait(timeout=timeout) is False:
+                return None  # Timeout expired
+            ret = func(False)
+            if ret is not None:
+                return ret
+
+    def _name_to_func(self, cmd_name: str):
+        """Get the signature and the method from the command name.
+        """
+        if cmd_name not in SUPPORTED_COMMANDS:
+            # redis remaps \r or \n in an error to ' ' to make it legal protocol
+            clean_name = cmd_name.replace('\r', ' ').replace('\n', ' ')
+            raise SimpleError(msgs.UNKNOWN_COMMAND_MSG.format(clean_name))
+        sig = SUPPORTED_COMMANDS[cmd_name]
+        func = getattr(self, sig.func_name, None)
+        return func, sig
+
+    def sendall(self, data):
+        if not self._server.connected:
+            raise self._connection_error_class(msgs.CONNECTION_ERROR_MSG)
+        if isinstance(data, str):
+            data = data.encode('ascii')
+        self._parser.send(data)
+
+    def _process_command(self, fields: List[bytes]):
+        if not fields:
+            return
+
+        cmd, cmd_arguments = _extract_command(fields)
+        try:
+            func, sig = self._name_to_func(cmd)
+            with self._server.lock:
+                # Clean out old connections
+                while True:
+                    try:
+                        weak_sock = self._server.closed_sockets.pop()
+                    except IndexError:
+                        break
+                    else:
+                        sock = weak_sock()
+                        if sock:
+                            sock._cleanup(self._server)
+                now = time.time()
+                for db in self._server.dbs.values():
+                    db.time = now
+                sig.check_arity(cmd_arguments, self.version)
+                if self._transaction is not None and msgs.FLAG_TRANSACTION not in sig.flags:
+                    self._transaction.append((func, sig, cmd_arguments))
+                    result = QUEUED
+                else:
+                    result = self._run_command(func, sig, cmd_arguments, False)
+        except SimpleError as exc:
+            if self._transaction is not None:
+                # TODO: should not apply if the exception is from _run_command
+                # e.g. watch inside multi
+                self._transaction_failed = True
+            if cmd == 'exec' and exc.value.startswith('ERR '):
+                exc.value = 'EXECABORT Transaction discarded because of: ' + exc.value[4:]
+                self._transaction = None
+                self._transaction_failed = False
+                self._clear_watches()
+            result = exc
+        result = self._decode_result(result)
+        if not isinstance(result, NoResponse):
+            self.put_response(result)
+
+    def _scan(self, keys, cursor, *args):
+        """This is the basis of most of the ``scan`` methods.
+
+        This implementation is KNOWN to be inefficient: it materializes the full set of keys
+        being scanned on every call, rather than iterating incrementally.
+
+        The SCAN command, and the other commands in the SCAN family, are able to provide to the user a set of
+        guarantees associated to full iterations.
+
+        - A full iteration always retrieves all the elements that were present in the collection from the start to the
+          end of a full iteration. This means that if a given element is inside the collection when an iteration is
+          started, and is still there when an iteration terminates, then at some point SCAN returned it to the user.
+
+        - A full iteration never returns any element that was NOT present in the collection from the start to the end
+          of a full iteration. So if an element was removed before the start of an iteration, and is never added back
+          to the collection for all the time an iteration lasts, SCAN ensures that this element will never be returned.
+
+        However because SCAN has very little state associated (just the cursor) it has the following drawbacks:
+
+        - A given element may be returned multiple times. It is up to the application to handle the case of duplicated
+          elements, for example only using the returned elements in order to perform operations that are safe when
+          re-applied multiple times.
+        - Elements that were not constantly present in the collection during a full iteration, may be returned or not:
+          it is undefined.
+
+        """
+        cursor = int(cursor)
+        (pattern, _type, count), _ = extract_args(args, ('*match', '*type', '+count'))
+        count = 10 if count is None else count
+        data = sorted(keys)
+        bits_len = (len(keys) - 1).bit_length()
+        cursor = bin_reverse(cursor, bits_len)
+        if cursor >= len(keys):
+            return [0, []]
+        result_cursor = cursor + count
+        result_data = []
+
+        regex = compile_pattern(pattern) if pattern is not None else None
+
+        def match_key(key):
+            return regex.match(key) if pattern is not None else True
+
+        def match_type(key):
+            if _type is not None:
+                return casematch(key_value_type(self._db[key]).value, _type)
+            return True
+
+        if pattern is not None or _type is not None:
+            for val in itertools.islice(data, cursor, cursor + count):
+                compare_val = val[0] if isinstance(val, tuple) else val
+                if match_key(compare_val) and match_type(compare_val):
+                    result_data.append(val)
+        else:
+            result_data = data[cursor:cursor + count]
+
+        if result_cursor >= len(data):
+            result_cursor = 0
+        return [str(bin_reverse(result_cursor, bits_len)).encode(), result_data]
+
+    def _ttl(self, key, scale):
+        if not key:
+            return -2
+        elif key.expireat is None:
+            return -1
+        else:
+            return int(round((key.expireat - self._db.time) * scale))
+
+    def _encodefloat(self, value, humanfriendly):
+        if self.version >= 7:
+            value = 0 + value
+        return Float.encode(value, humanfriendly)
+
+    def _encodeint(self, value):
+        if self.version >= 7:
+            value = 0 + value
+        return Int.encode(value)
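The `bin_reverse` helper in the hunk above drives `_scan`'s cursor arithmetic. A standalone sketch (reproducing the function from the diff) shows the reversed-bit mapping that mimics how real Redis walks its hash table:

```python
def bin_reverse(x: int, bits_count: int) -> int:
    """Reverse the low `bits_count` bits of x (as in fakeredis's _scan cursor)."""
    result = 0
    for i in range(bits_count):
        if (x >> i) & 1:
            result |= 1 << (bits_count - 1 - i)
    return result

# With 4-bit cursors, position 1 maps to cursor 8 (0b0001 -> 0b1000),
# and applying the reversal twice returns the original position.
assert bin_reverse(1, 4) == 8
assert bin_reverse(3, 4) == 12
assert bin_reverse(bin_reverse(5, 4), 4) == 5
```

This is why `_scan` reverses the incoming cursor before indexing into the sorted key list and reverses the next position again before returning it to the client.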
diff --git a/fakeredis/_command_args_parsing.py b/fakeredis/_command_args_parsing.py
new file mode 100644
index 0000000..af62008
--- /dev/null
+++ b/fakeredis/_command_args_parsing.py
@@ -0,0 +1,143 @@
+from typing import Tuple, List, Dict, Any
+
+from . import _msgs as msgs
+from ._commands import Int, Float
+from ._helpers import SimpleError, null_terminate
+
+
+def _count_params(s: str):
+    res = 0
+    while s[res] in '.+*~':
+        res += 1
+    return res
+
+
+def _encode_arg(s: str):
+    return s[_count_params(s):].encode()
+
+
+def _default_value(s: str):
+    if s[0] == '~':
+        return None
+    ind = _count_params(s)
+    if ind == 0:
+        return False
+    elif ind == 1:
+        return None
+    else:
+        return [None] * ind
+
+
+def extract_args(
+        actual_args: Tuple[bytes, ...],
+        expected: Tuple[str, ...],
+        error_on_unexpected: bool = True,
+        left_from_first_unexpected: bool = True,
+) -> Tuple[List, List]:
+    """Parse argument values
+
+    Extract from actual arguments which arguments exist and their value if relevant.
+
+    Parameters:
+    - actual_args:
+        The actual arguments to parse
+    - expected:
+        Arguments to look for, see below explanation.
+    - error_on_unexpected:
+        Should an error be raised when actual_args contains an unexpected argument?
+    - left_from_first_unexpected:
+        Once an unexpected argument is reached in actual_args,
+        should parsing stop?
+    Returns:
+    - List of values for expected arguments.
+    - List of remaining args.
+
+    An expected argument can have parameters:
+    - A numerical (Int) parameter is identified with a +.
+    - A float (Float) parameter is identified with a . (dot).
+    - A non-numerical parameter is identified with a *.
+    - An argument that may have ~ or = between the
+      argument name and the value is identified with a ~.
+    - A numerical argument that may have ~ or = between the
+      argument name and the value is identified with a ~+.
+
+    e.g.
+    '++limit' translates to an argument with 2 int parameters.
+
+    >>> extract_args((b'nx', b'ex', b'324', b'xx',), ('nx', 'xx', '+ex', 'keepttl'))
+    ([True, True, 324, False], [])
+
+    >>> extract_args(
+        (b'maxlen', b'10', b'nx', b'ex', b'324', b'xx',),
+        ('~+maxlen', 'nx', 'xx', '+ex', 'keepttl'))
+    ([10, True, True, 324, False], [])
+    """
+    args_info: Dict[bytes, Tuple[int, int]] = {
+        _encode_arg(k): (i, _count_params(k))
+        for (i, k) in enumerate(expected)
+    }
+
+    def _parse_params(
+            key: str,
+            ind: int,
+            actual_args: Tuple[bytes, ...]) -> Tuple[Any, int]:
+        """
+        Parse an argument from actual args.
+        """
+        pos, expected_following = args_info[key]
+        argument_name = expected[pos]
+
+        # Deal with parameters with optional ~/= before numerical value.
+        if argument_name[0] == '~':
+            if ind + 1 >= len(actual_args):
+                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+            if actual_args[ind + 1] != b'~' and actual_args[ind + 1] != b'=':
+                arg, parsed = actual_args[ind + 1], 1
+            elif ind + 2 >= len(actual_args):
+                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+            else:
+                arg, parsed = actual_args[ind + 2], 2
+            if argument_name[1] == '+':
+                arg = Int.decode(arg)
+            return arg, parsed
+        # Boolean parameters
+        if expected_following == 0:
+            return True, 0
+
+        if ind + expected_following >= len(actual_args):
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        temp_res = []
+        for i in range(expected_following):
+            curr_arg = actual_args[ind + i + 1]
+            if argument_name[i] == '+':
+                curr_arg = Int.decode(curr_arg)
+            elif argument_name[i] == '.':
+                curr_arg = Float.decode(curr_arg)
+            temp_res.append(curr_arg)
+
+        if len(temp_res) == 1:
+            return temp_res[0], expected_following
+        else:
+            return temp_res, expected_following
+
+    results: List = [_default_value(key) for key in expected]
+    left_args = []
+    i = 0
+    while i < len(actual_args):
+        found = False
+        for key in args_info:
+            if null_terminate(actual_args[i]) == key:
+                arg_position, _ = args_info[key]
+                results[arg_position], parsed = _parse_params(key, i, actual_args)
+                i += parsed
+                found = True
+                break
+
+        if not found:
+            if error_on_unexpected:
+                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+            if left_from_first_unexpected:
+                return results, actual_args[i:]
+            left_args.append(actual_args[i])
+        i += 1
+    return results, left_args
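The flag grammar that `extract_args` implements can be seen in miniature below. This is a simplified, self-contained stand-in covering only the boolean-flag and `+` (single int) cases, not the fakeredis implementation, and `parse_flags` is a hypothetical name:

```python
def parse_flags(actual, expected):
    """Tiny subset of the extract_args grammar:
    bare 'name' is a boolean flag, '+name' takes one integer parameter."""
    results = [None if e.startswith('+') else False for e in expected]
    names = {e.lstrip('+').encode(): i for i, e in enumerate(expected)}
    i = 0
    while i < len(actual):
        idx = names.get(actual[i].lower())
        if idx is None:
            raise ValueError('syntax error')
        if expected[idx].startswith('+'):
            # Consume the flag name plus its integer parameter.
            results[idx] = int(actual[i + 1])
            i += 2
        else:
            results[idx] = True
            i += 1
    return results

print(parse_flags((b'nx', b'ex', b'324', b'xx'), ('nx', 'xx', '+ex', 'keepttl')))
# → [True, True, 324, False]
```

Centralizing argument parsing this way is what lets each command mixin declare its options as a one-line spec instead of hand-rolling a `while` loop over `args`, as the removed `sort` implementation further down in this diff used to do.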
diff --git a/fakeredis/_commands.py b/fakeredis/_commands.py
index f5c8b99..3626627 100644
--- a/fakeredis/_commands.py
+++ b/fakeredis/_commands.py
@@ -1,10 +1,18 @@
+"""
+Helper classes and methods used in mixins implementing various commands.
+Unlike _helpers.py, here the methods should be used only in mixins.
+"""
 import functools
-import re
-
 import math
+import re
 
 from . import _msgs as msgs
-from ._helpers import MAX_STRING_SIZE, null_terminate, SimpleError
+from ._helpers import null_terminate, SimpleError, SimpleString
+from ._zset import ZSet
+
+MAX_STRING_SIZE = 512 * 1024 * 1024
+SUPPORTED_COMMANDS = dict()  # Dictionary of supported commands name => Signature
+COMMANDS_WITH_SUB = set()  # Commands with sub-commands
 
 
 class Key:
@@ -74,10 +82,11 @@ class CommandItem:
     def updated(self):
         self._modified = True
 
-    def writeback(self):
+    def writeback(self, remove_empty_val=True):
         if self._modified:
             self.db.notify_watch(self.key)
-            if not isinstance(self.value, bytes) and not self.value:
+            if (not isinstance(self.value, bytes) and (
+                    self.value is None or (not self.value and remove_empty_val))):
                 self.db.pop(self.key, None)
                 return
             else:
@@ -94,6 +103,7 @@ class CommandItem:
 
 
 class Hash(dict):
+    DECODE_ERROR = msgs.INVALID_HASH_MSG
     redis_type = b'hash'
 
 
@@ -110,13 +120,13 @@ class Int:
         return cls.MIN_VALUE <= value <= cls.MAX_VALUE
 
     @classmethod
-    def decode(cls, value):
+    def decode(cls, value, decode_error=None):
         try:
             out = int(value)
             if not cls.valid(out) or str(out).encode() != value:
                 raise ValueError
         except ValueError:
-            raise SimpleError(cls.DECODE_ERROR)
+            raise SimpleError(decode_error or cls.DECODE_ERROR)
         return out
 
     @classmethod
@@ -171,7 +181,8 @@ class Float:
                allow_leading_whitespace=False,
                allow_erange=False,
                allow_empty=False,
-               crop_null=False):
+               crop_null=False,
+               decode_error=None):
         # redis has some quirks in float parsing, with several variants.
         # See https://github.com/antirez/redis/issues/5706
         try:
@@ -194,7 +205,7 @@ class Float:
                     raise ValueError
             return out
         except ValueError:
-            raise SimpleError(cls.DECODE_ERROR)
+            raise SimpleError(decode_error or cls.DECODE_ERROR)
 
     @classmethod
     def encode(cls, value, humanfriendly):
@@ -239,13 +250,15 @@ class AfterAny:
 class ScoreTest:
     """Argument converter for sorted set score endpoints."""
 
-    def __init__(self, value, exclusive=False):
+    def __init__(self, value, exclusive=False, bytes_val=None):
         self.value = value
         self.exclusive = exclusive
+        self.bytes_val = bytes_val
 
     @classmethod
     def decode(cls, value):
         try:
+            original_value = value
             exclusive = False
             if value[:1] == b'(':
                 exclusive = True
@@ -253,7 +266,7 @@ class ScoreTest:
             value = Float.decode(
                 value, allow_leading_whitespace=True, allow_erange=True,
                 allow_empty=True, crop_null=True)
-            return cls(value, exclusive)
+            return cls(value, exclusive, original_value)
         except SimpleError:
             raise SimpleError(msgs.INVALID_MIN_MAX_FLOAT_MSG)
 
@@ -294,28 +307,31 @@ class StringTest:
 
 
 class Signature:
-    def __init__(self, name, fixed, repeat=(), flags=""):
+    def __init__(self, name, func_name, fixed, repeat=(), flags=""):
         self.name = name
+        self.func_name = func_name
         self.fixed = fixed
         self.repeat = repeat
-        self.flags = flags
+        self.flags = set(flags)
 
-    def check_arity(self, args):
+    def check_arity(self, args, version):
         if len(args) != len(self.fixed):
             delta = len(args) - len(self.fixed)
             if delta < 0 or not self.repeat:
-                raise SimpleError(msgs.WRONG_ARGS_MSG.format(self.name))
+                msg = msgs.WRONG_ARGS_MSG6.format(self.name)
+                raise SimpleError(msg)
 
-    def apply(self, args, db):
+    def apply(self, args, db, version):
         """Returns a tuple, which is either:
         - transformed args and a dict of CommandItems; or
         - a single containing a short-circuit return value
         """
-        self.check_arity(args)
+        self.check_arity(args, version)
         if self.repeat:
             delta = len(args) - len(self.fixed)
             if delta % len(self.repeat) != 0:
-                raise SimpleError(msgs.WRONG_ARGS_MSG.format(self.name))
+                msg = msgs.WRONG_ARGS_MSG7 if version >= 7 else msgs.WRONG_ARGS_MSG6.format(self.name)
+                raise SimpleError(msg)
 
         types = list(self.fixed)
         for i in range(len(args) - len(types)):
@@ -326,7 +342,7 @@ class Signature:
         for i, (arg, type_) in enumerate(zip(args, types)):
             if isinstance(type_, Key):
                 if type_.missing_return is not Key.UNSPECIFIED and arg not in db:
-                    return (type_.missing_return,)
+                    return type_.missing_return,
             elif type_ != bytes:
                 args[i] = type_.decode(args[i], )
 
@@ -349,9 +365,72 @@ class Signature:
 
 
 def command(*args, **kwargs):
+    def create_signature(func, cmd_name):
+        if ' ' in cmd_name:
+            COMMANDS_WITH_SUB.add(cmd_name.split(' ')[0])
+        SUPPORTED_COMMANDS[cmd_name] = Signature(cmd_name, func.__name__, *args, **kwargs)
+
     def decorator(func):
-        name = kwargs.pop('name', func.__name__)
-        func._fakeredis_sig = Signature(name, *args, **kwargs)
+        cmd_names = kwargs.pop('name', func.__name__)
+        if isinstance(cmd_names, list):  # Support for alias commands
+            for cmd_name in cmd_names:
+                create_signature(func, cmd_name.lower())
+        elif isinstance(cmd_names, str):
+            create_signature(func, cmd_names.lower())
+        else:
+            raise ValueError("command name should be a string or list of strings")
         return func
 
     return decorator
+
+
+def delete_keys(*keys):
+    ans = 0
+    done = set()
+    for key in keys:
+        if key and key.key not in done:
+            key.value = None
+            done.add(key.key)
+            ans += 1
+    return ans
+
+
+def fix_range(start, end, length):
+    # Redis handles negative slightly differently for zrange
+    if start < 0:
+        start = max(0, start + length)
+    if end < 0:
+        end += length
+    if start > end or start >= length:
+        return -1, -1
+    end = min(end, length - 1)
+    return start, end + 1
+
+
+def fix_range_string(start, end, length):
+    # Negative number handling is based on the redis source code
+    if start < 0 and end < 0 and start > end:
+        return -1, -1
+    if start < 0:
+        start = max(0, start + length)
+    if end < 0:
+        end = max(0, end + length)
+    end = min(end, length - 1)
+    return start, end + 1
+
+
+def key_value_type(key):
+    if key.value is None:
+        return SimpleString(b'none')
+    elif isinstance(key.value, bytes):
+        return SimpleString(b'string')
+    elif isinstance(key.value, list):
+        return SimpleString(b'list')
+    elif isinstance(key.value, set):
+        return SimpleString(b'set')
+    elif isinstance(key.value, ZSet):
+        return SimpleString(b'zset')
+    elif isinstance(key.value, dict):
+        return SimpleString(b'hash')
+    else:
+        assert False  # pragma: nocover
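The `fix_range` helper added in this hunk converts Redis-style inclusive (and possibly negative) start/end indices into a half-open Python slice, returning `(-1, -1)` for empty ranges. A quick usage sketch, copying the helper as it appears in the diff:

```python
def fix_range(start, end, length):
    # Redis handles negative indices slightly differently for zrange
    if start < 0:
        start = max(0, start + length)
    if end < 0:
        end += length
    if start > end or start >= length:
        return -1, -1
    end = min(end, length - 1)
    return start, end + 1

# ZRANGE-style indices over a 5-element collection:
print(fix_range(0, -1, 5))   # → (0, 5)   full range, '0 -1' idiom
print(fix_range(-2, -1, 5))  # → (3, 5)   last two elements
print(fix_range(3, 1, 5))    # → (-1, -1) inverted range is empty
```

The half-open result feeds directly into Python slicing (`items[start:stop]`), which is why the function returns `end + 1` rather than the inclusive Redis endpoint.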
diff --git a/fakeredis/_fakesocket.py b/fakeredis/_fakesocket.py
index 3b3a3b2..c839d9e 100644
--- a/fakeredis/_fakesocket.py
+++ b/fakeredis/_fakesocket.py
@@ -1,2090 +1,39 @@
-import functools
-import hashlib
-import itertools
-import pickle
-import queue
-import random
-import time
-import weakref
-
-import math
-import redis
-import six
-
-from . import _msgs as msgs
-from ._commands import (
-    Key, command, DbIndex, Int, CommandItem, BeforeAny, SortFloat, Float, BitOffset, BitValue, Hash,
-    StringTest, ScoreTest, Timeout)
-from ._helpers import (
-    PONG, OK, MAX_STRING_SIZE, SimpleError, valid_response_type, SimpleString, NoResponse, casematch,
-    BGSAVE_STARTED, REDIS_LOG_LEVELS_TO_LOGGING, LOGGER, REDIS_LOG_LEVELS, casenorm, compile_pattern, QUEUED)
-from ._msgs import LUA_COMMAND_ARG_MSG, LUA_COMMAND_ARG_MSG6
-from ._zset import ZSet
-
-
-class FakeSocket:
-    _connection_error_class = redis.ConnectionError
-
-    def __init__(self, server):
-        self._server = server
-        self._db = server.dbs[0]
-        self._db_num = 0
-        # When in a MULTI, set to a list of function calls
-        self._transaction = None
-        self._transaction_failed = False
-        # Set when executing the commands from EXEC
-        self._in_transaction = False
-        self._watch_notified = False
-        self._watches = set()
-        self._pubsub = 0  # Count of subscriptions
-        self.responses = queue.Queue()
-        # Prevents parser from processing commands. Not used in this module,
-        # but set by aioredis module to prevent new commands being processed
-        # while handling a blocking command.
-        self._paused = False
-        self._parser = self._parse_commands()
-        self._parser.send(None)
-        self.version = server.version
-
-    def put_response(self, msg):
-        # redis.Connection.__del__ might call self.close at any time, which
-        # will set self.responses to None. We assume this will happen
-        # atomically, and the code below then protects us against this.
-        responses = self.responses
-        if responses:
-            responses.put(msg)
-
-    def pause(self):
-        self._paused = True
-
-    def resume(self):
-        self._paused = False
-        self._parser.send(b'')
-
-    def shutdown(self, flags):
-        self._parser.close()
-
-    def fileno(self):
-        # Our fake socket must return an integer from `FakeSocket.fileno()` since a real selector
-        # will be created. The value does not matter since we replace the selector with our own
-        # `FakeSelector` before it is ever used.
-        return 0
-
-    def _cleanup(self, server):
-        """Remove all the references to `self` from `server`.
-
-        This is called with the server lock held, but it may be some time after
-        self.close.
-        """
-        for subs in server.subscribers.values():
-            subs.discard(self)
-        for subs in server.psubscribers.values():
-            subs.discard(self)
-        self._clear_watches()
-
-    def close(self):
-        # Mark ourselves for cleanup. This might be called from
-        # redis.Connection.__del__, which the garbage collection could call
-        # at any time, and hence we can't safely take the server lock.
-        # We rely on list.append being atomic.
-        self._server.closed_sockets.append(weakref.ref(self))
-        self._server = None
-        self._db = None
-        self.responses = None
-
-    @staticmethod
-    def _extract_line(buf):
-        pos = buf.find(b'\n') + 1
-        assert pos > 0
-        line = buf[:pos]
-        buf = buf[pos:]
-        assert line.endswith(b'\r\n')
-        return line, buf
-
-    def _parse_commands(self):
-        """Generator that parses commands.
-
-        It is fed pieces of redis protocol data (via `send`) and calls
-        `_process_command` whenever it has a complete one.
-        """
-        buf = b''
-        while True:
-            while self._paused or b'\n' not in buf:
-                buf += yield
-            line, buf = self._extract_line(buf)
-            assert line[:1] == b'*'  # array
-            n_fields = int(line[1:-2])
-            fields = []
-            for i in range(n_fields):
-                while b'\n' not in buf:
-                    buf += yield
-                line, buf = self._extract_line(buf)
-                assert line[:1] == b'$'  # string
-                length = int(line[1:-2])
-                while len(buf) < length + 2:
-                    buf += yield
-                fields.append(buf[:length])
-                buf = buf[length + 2:]  # +2 to skip the CRLF
-            self._process_command(fields)
-
-    def _run_command(self, func, sig, args, from_script):
-        command_items = {}
-        try:
-            ret = sig.apply(args, self._db)
-            if len(ret) == 1:
-                result = ret[0]
-            else:
-                args, command_items = ret
-                if from_script and msgs.FLAG_NO_SCRIPT in sig.flags:
-                    raise SimpleError(msgs.COMMAND_IN_SCRIPT_MSG)
-                if self._pubsub and sig.name not in [
-                    'ping',
-                    'subscribe',
-                    'unsubscribe',
-                    'psubscribe',
-                    'punsubscribe',
-                    'quit'
-                ]:
-                    raise SimpleError(msgs.BAD_COMMAND_IN_PUBSUB_MSG)
-                result = func(*args)
-                assert valid_response_type(result)
-        except SimpleError as exc:
-            result = exc
-        for command_item in command_items:
-            command_item.writeback()
-        return result
-
-    def _decode_error(self, error):
-        return redis.connection.BaseParser().parse_error(error.value)
-
-    def _decode_result(self, result):
-        """Convert SimpleString and SimpleError, recursively"""
-        if isinstance(result, list):
-            return [self._decode_result(r) for r in result]
-        elif isinstance(result, SimpleString):
-            return result.value
-        elif isinstance(result, SimpleError):
-            return self._decode_error(result)
-        else:
-            return result
-
-    def _blocking(self, timeout, func):
-        """Run a function until it succeeds or timeout is reached.
-
-        The timeout must be an integer, and 0 means infinite. The function
-        is called with a boolean to indicate whether this is the first call.
-        If it returns None it is considered to have "failed" and is retried
-        each time the condition variable is notified, until the timeout is
-        reached.
-
-        Returns the function return value, or None if the timeout was reached.
-        """
-        ret = func(True)
-        if ret is not None or self._in_transaction:
-            return ret
-        if timeout:
-            deadline = time.time() + timeout
-        else:
-            deadline = None
-        while True:
-            timeout = deadline - time.time() if deadline is not None else None
-            if timeout is not None and timeout <= 0:
-                return None
-            # Python <3.2 doesn't return a status from wait. On Python 3.2+
-            # we bail out early on False.
-            if self._db.condition.wait(timeout=timeout) is False:
-                return None  # Timeout expired
-            ret = func(False)
-            if ret is not None:
-                return ret
-
-    def _name_to_func(self, name):
-        name = six.ensure_str(name, encoding='utf-8', errors='replace')
-        func_name = name.lower()
-        func = getattr(self, func_name, None)
-        if name.startswith('_') or not func or not hasattr(func, '_fakeredis_sig'):
-            # redis remaps \r or \n in an error to ' ' to make it legal protocol
-            clean_name = name.replace('\r', ' ').replace('\n', ' ')
-            raise SimpleError(msgs.UNKNOWN_COMMAND_MSG.format(clean_name))
-        return func, func_name
-
-    def sendall(self, data):
-        if not self._server.connected:
-            raise self._connection_error_class(msgs.CONNECTION_ERROR_MSG)
-        if isinstance(data, str):
-            data = data.encode('ascii')
-        self._parser.send(data)
-
-    def _process_command(self, fields):
-        if not fields:
-            return
-        func_name = None
-        try:
-            func, func_name = self._name_to_func(fields[0])
-            sig = func._fakeredis_sig
-            with self._server.lock:
-                # Clean out old connections
-                while True:
-                    try:
-                        weak_sock = self._server.closed_sockets.pop()
-                    except IndexError:
-                        break
-                    else:
-                        sock = weak_sock()
-                        if sock:
-                            sock._cleanup(self._server)
-                now = time.time()
-                for db in self._server.dbs.values():
-                    db.time = now
-                sig.check_arity(fields[1:])
-                # TODO: make a signature attribute for transactions
-                if self._transaction is not None \
-                        and func_name not in ('exec', 'discard', 'multi', 'watch'):
-                    self._transaction.append((func, sig, fields[1:]))
-                    result = QUEUED
-                else:
-                    result = self._run_command(func, sig, fields[1:], False)
-        except SimpleError as exc:
-            if self._transaction is not None:
-                # TODO: should not apply if the exception is from _run_command
-                # e.g. watch inside multi
-                self._transaction_failed = True
-            if func_name == 'exec' and exc.value.startswith('ERR '):
-                exc.value = 'EXECABORT Transaction discarded because of: ' + exc.value[4:]
-                self._transaction = None
-                self._transaction_failed = False
-                self._clear_watches()
-            result = exc
-        result = self._decode_result(result)
-        if not isinstance(result, NoResponse):
-            self.put_response(result)
-
-    def notify_watch(self):
-        self._watch_notified = True
-
-    # redis has inconsistent handling of negative indices, hence two versions
-    # of this code.
-
-    @staticmethod
-    def _fix_range_string(start, end, length):
-        # Negative number handling is based on the redis source code
-        if start < 0 and end < 0 and start > end:
-            return -1, -1
-        if start < 0:
-            start = max(0, start + length)
-        if end < 0:
-            end = max(0, end + length)
-        end = min(end, length - 1)
-        return start, end + 1
-
-    @staticmethod
-    def _fix_range(start, end, length):
-        # Redis handles negative slightly differently for zrange
-        if start < 0:
-            start = max(0, start + length)
-        if end < 0:
-            end += length
-        if start > end or start >= length:
-            return -1, -1
-        end = min(end, length - 1)
-        return start, end + 1
-
-    def _scan(self, keys, cursor, *args):
-        """
-        This is the basis of most of the ``scan`` methods.
-
-        This implementation is KNOWN to be un-performant, as it requires
-        grabbing the full set of keys over which we are investigating subsets.
-
-        It also doesn't adhere to the guarantee that every key will be iterated
-        at least once even if the database is modified during the scan.
-        However, provided the database is not modified, every key will be
-        returned exactly once.
-        """
-        cursor = int(cursor)
-        pattern = None
-        type = None
-        count = 10
-        if len(args) % 2 != 0:
-            raise SimpleError(msgs.msgs.SYNTAX_ERROR_MSG)
-        for i in range(0, len(args), 2):
-            if casematch(args[i], b'match'):
-                pattern = args[i + 1]
-            elif casematch(args[i], b'count'):
-                count = Int.decode(args[i + 1])
-                if count <= 0:
-                    raise SimpleError(msgs.msgs.SYNTAX_ERROR_MSG)
-            elif casematch(args[i], b'type'):
-                type = args[i + 1]
-            else:
-                raise SimpleError(msgs.msgs.SYNTAX_ERROR_MSG)
-
-        if cursor >= len(keys):
-            return [0, []]
-        data = sorted(keys)
-        result_cursor = cursor + count
-        result_data = []
-
-        regex = compile_pattern(pattern) if pattern is not None else None
-
-        def match_key(key):
-            return regex.match(key) if pattern is not None else True
-
-        def match_type(key):
-            if type is not None:
-                return casematch(self.type(self._db[key]).value, type)
-            return True
-
-        if pattern is not None or type is not None:
-            for val in itertools.islice(data, cursor, result_cursor):
-                compare_val = val[0] if isinstance(val, tuple) else val
-                if match_key(compare_val) and match_type(compare_val):
-                    result_data.append(val)
-        else:
-            result_data = data[cursor:result_cursor]
-
-        if result_cursor >= len(data):
-            result_cursor = 0
-        return [str(result_cursor).encode(), result_data]
-
-    # Connection commands
-    # TODO: auth, quit
-
-    @command((bytes,))
-    def echo(self, message):
-        return message
-
-    @command((), (bytes,))
-    def ping(self, *args):
-        if len(args) > 1:
-            raise SimpleError(msgs.WRONG_ARGS_MSG.format('ping'))
-        if self._pubsub:
-            return [b'pong', args[0] if args else b'']
-        else:
-            return args[0] if args else PONG
-
-    @command((DbIndex,))
-    def select(self, index):
-        self._db = self._server.dbs[index]
-        self._db_num = index
-        return OK
-
-    @command((DbIndex, DbIndex))
-    def swapdb(self, index1, index2):
-        if index1 != index2:
-            db1 = self._server.dbs[index1]
-            db2 = self._server.dbs[index2]
-            db1.swap(db2)
-        return OK
-
-    # Key commands
-    # TODO: lots
-
-    def _delete(self, *keys):
-        ans = 0
-        done = set()
-        for key in keys:
-            if key and key.key not in done:
-                key.value = None
-                done.add(key.key)
-                ans += 1
-        return ans
-
-    @command((Key(),), (Key(),), name='del')
-    def del_(self, *keys):
-        return self._delete(*keys)
-
-    @command((Key(),), (Key(),), name='unlink')
-    def unlink(self, *keys):
-        return self._delete(*keys)
-
-    @command((Key(),), (Key(),))
-    def exists(self, *keys):
-        ret = 0
-        for key in keys:
-            if key:
-                ret += 1
-        return ret
-
-    def _expireat(self, key, timestamp):
-        if not key:
-            return 0
-        else:
-            key.expireat = timestamp
-            return 1
-
-    def _ttl(self, key, scale):
-        if not key:
-            return -2
-        elif key.expireat is None:
-            return -1
-        else:
-            return int(round((key.expireat - self._db.time) * scale))
-
-    @command((Key(), Int))
-    def expire(self, key, seconds):
-        return self._expireat(key, self._db.time + seconds)
-
-    @command((Key(), Int))
-    def expireat(self, key, timestamp):
-        return self._expireat(key, float(timestamp))
-
-    @command((Key(), Int))
-    def pexpire(self, key, ms):
-        return self._expireat(key, self._db.time + ms / 1000.0)
-
-    @command((Key(), Int))
-    def pexpireat(self, key, ms_timestamp):
-        return self._expireat(key, ms_timestamp / 1000.0)
-
-    @command((Key(),))
-    def ttl(self, key):
-        return self._ttl(key, 1.0)
-
-    @command((Key(),))
-    def pttl(self, key):
-        return self._ttl(key, 1000.0)
-
-    @command((Key(),))
-    def type(self, key):
-        if key.value is None:
-            return SimpleString(b'none')
-        elif isinstance(key.value, bytes):
-            return SimpleString(b'string')
-        elif isinstance(key.value, list):
-            return SimpleString(b'list')
-        elif isinstance(key.value, set):
-            return SimpleString(b'set')
-        elif isinstance(key.value, ZSet):
-            return SimpleString(b'zset')
-        elif isinstance(key.value, dict):
-            return SimpleString(b'hash')
-        else:
-            assert False  # pragma: nocover
-
-    @command((Key(),))
-    def persist(self, key):
-        if key.expireat is None:
-            return 0
-        key.expireat = None
-        return 1
-
-    @command((bytes,))
-    def keys(self, pattern):
-        if pattern == b'*':
-            return list(self._db)
-        else:
-            regex = compile_pattern(pattern)
-            return [key for key in self._db if regex.match(key)]
-
-    @command((Key(), DbIndex))
-    def move(self, key, db):
-        if db == self._db_num:
-            raise SimpleError(msgs.SRC_DST_SAME_MSG)
-        if not key or key.key in self._server.dbs[db]:
-            return 0
-        # TODO: what is the interaction with expiry?
-        self._server.dbs[db][key.key] = self._server.dbs[self._db_num][key.key]
-        key.value = None  # Causes deletion
-        return 1
-
-    @command(())
-    def randomkey(self):
-        keys = list(self._db.keys())
-        if not keys:
-            return None
-        return random.choice(keys)
-
-    @command((Key(), Key()))
-    def rename(self, key, newkey):
-        if not key:
-            raise SimpleError(msgs.NO_KEY_MSG)
-        # TODO: check interaction with WATCH
-        if newkey.key != key.key:
-            newkey.value = key.value
-            newkey.expireat = key.expireat
-            key.value = None
-        return OK
-
-    @command((Key(), Key()))
-    def renamenx(self, key, newkey):
-        if not key:
-            raise SimpleError(msgs.NO_KEY_MSG)
-        if newkey:
-            return 0
-        self.rename(key, newkey)
-        return 1
-
-    @command((Int,), (bytes, bytes))
-    def scan(self, cursor, *args):
-        return self._scan(list(self._db), cursor, *args)
-
-    def _lookup_key(self, key, pattern):
-        """Python implementation of lookupKeyByPattern from redis"""
-        if pattern == b'#':
-            return key
-        p = pattern.find(b'*')
-        if p == -1:
-            return None
-        prefix = pattern[:p]
-        suffix = pattern[p + 1:]
-        arrow = suffix.find(b'->', 0, -1)
-        if arrow != -1:
-            field = suffix[arrow + 2:]
-            suffix = suffix[:arrow]
-        else:
-            field = None
-        new_key = prefix + key + suffix
-        item = CommandItem(new_key, self._db, item=self._db.get(new_key))
-        if item.value is None:
-            return None
-        if field is not None:
-            if not isinstance(item.value, dict):
-                return None
-            return item.value.get(field)
-        else:
-            if not isinstance(item.value, bytes):
-                return None
-            return item.value
-
-    @command((Key(),), (bytes,))
-    def sort(self, key, *args):
-        i = 0
-        desc = False
-        alpha = False
-        limit_start = 0
-        limit_count = -1
-        store = None
-        sortby = None
-        dontsort = False
-        get = []
-        if key.value is not None:
-            if not isinstance(key.value, (set, list, ZSet)):
-                raise SimpleError(msgs.WRONGTYPE_MSG)
-
-        while i < len(args):
-            arg = args[i]
-            if casematch(arg, b'asc'):
-                desc = False
-            elif casematch(arg, b'desc'):
-                desc = True
-            elif casematch(arg, b'alpha'):
-                alpha = True
-            elif casematch(arg, b'limit') and i + 2 < len(args):
-                try:
-                    limit_start = Int.decode(args[i + 1])
-                    limit_count = Int.decode(args[i + 2])
-                except SimpleError:
-                    raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-                else:
-                    i += 2
-            elif casematch(arg, b'store') and i + 1 < len(args):
-                store = args[i + 1]
-                i += 1
-            elif casematch(arg, b'by') and i + 1 < len(args):
-                sortby = args[i + 1]
-                if b'*' not in sortby:
-                    dontsort = True
-                i += 1
-            elif casematch(arg, b'get') and i + 1 < len(args):
-                get.append(args[i + 1])
-                i += 1
-            else:
-                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-            i += 1
-
-        # TODO: force sorting if the object is a set and either in Lua or
-        # storing to a key, to match redis behaviour.
-        items = list(key.value) if key.value is not None else []
-
-        # These transformations are based on the redis implementation, but
-        # changed to produce a half-open range.
-        start = max(limit_start, 0)
-        end = len(items) if limit_count < 0 else start + limit_count
-        if start >= len(items):
-            start = end = len(items) - 1
-        end = min(end, len(items))
-
-        if not get:
-            get.append(b'#')
-        if sortby is None:
-            sortby = b'#'
-
-        if not dontsort:
-            if alpha:
-                def sort_key(v):
-                    byval = self._lookup_key(v, sortby)
-                    # TODO: use locale.strxfrm when not storing? But then need
-                    # to decode too.
-                    if byval is None:
-                        byval = BeforeAny()
-                    return byval
-
-            else:
-                def sort_key(v):
-                    byval = self._lookup_key(v, sortby)
-                    score = SortFloat.decode(byval, ) if byval is not None else 0.0
-                    return (score, v)
-
-            items.sort(key=sort_key, reverse=desc)
-        elif isinstance(key.value, (list, ZSet)):
-            items.reverse()
-
-        out = []
-        for row in items[start:end]:
-            for g in get:
-                v = self._lookup_key(row, g)
-                if store is not None and v is None:
-                    v = b''
-                out.append(v)
-        if store is not None:
-            item = CommandItem(store, self._db, item=self._db.get(store))
-            item.value = out
-            item.writeback()
-            return len(out)
-        else:
-            return out
-
-    @command((Key(missing_return=None),))
-    def dump(self, key):
-        value = pickle.dumps(key.value)
-        checksum = hashlib.sha1(value).digest()
-        return checksum + value
-
-    @command((Key(), Int, bytes), (bytes,))
-    def restore(self, key, ttl, value, *args):
-        replace = False
-        i = 0
-        while i < len(args):
-            if casematch(args[i], b'replace'):
-                replace = True
-                i += 1
-            else:
-                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-        if key and not replace:
-            raise SimpleError(msgs.RESTORE_KEY_EXISTS)
-        checksum, value = value[:20], value[20:]
-        if hashlib.sha1(value).digest() != checksum:
-            raise SimpleError(msgs.RESTORE_INVALID_CHECKSUM_MSG)
-        if ttl < 0:
-            raise SimpleError(msgs.RESTORE_INVALID_TTL_MSG)
-        if ttl == 0:
-            expireat = None
-        else:
-            expireat = self._db.time + ttl / 1000.0
-        key.value = pickle.loads(value)
-        key.expireat = expireat
-        return OK
-
-    # Transaction commands
-
-    def _clear_watches(self):
-        self._watch_notified = False
-        while self._watches:
-            (key, db) = self._watches.pop()
-            db.remove_watch(key, self)
-
-    @command((), flags='s')
-    def multi(self):
-        if self._transaction is not None:
-            raise SimpleError(msgs.MULTI_NESTED_MSG)
-        self._transaction = []
-        self._transaction_failed = False
-        return OK
-
-    @command((), flags='s')
-    def discard(self):
-        if self._transaction is None:
-            raise SimpleError(msgs.WITHOUT_MULTI_MSG.format('DISCARD'))
-        self._transaction = None
-        self._transaction_failed = False
-        self._clear_watches()
-        return OK
-
-    @command((), name='exec', flags='s')
-    def exec_(self):
-        if self._transaction is None:
-            raise SimpleError(msgs.WITHOUT_MULTI_MSG.format('EXEC'))
-        if self._transaction_failed:
-            self._transaction = None
-            self._clear_watches()
-            raise SimpleError(msgs.EXECABORT_MSG)
-        transaction = self._transaction
-        self._transaction = None
-        self._transaction_failed = False
-        watch_notified = self._watch_notified
-        self._clear_watches()
-        if watch_notified:
-            return None
-        result = []
-        for func, sig, args in transaction:
-            try:
-                self._in_transaction = True
-                ans = self._run_command(func, sig, args, False)
-            except SimpleError as exc:
-                ans = exc
-            finally:
-                self._in_transaction = False
-            result.append(ans)
-        return result
-
-    @command((Key(),), (Key(),), flags='s')
-    def watch(self, *keys):
-        if self._transaction is not None:
-            raise SimpleError(msgs.WATCH_INSIDE_MULTI_MSG)
-        for key in keys:
-            if key not in self._watches:
-                self._watches.add((key.key, self._db))
-                self._db.add_watch(key.key, self)
-        return OK
-
-    @command((), flags='s')
-    def unwatch(self):
-        self._clear_watches()
-        return OK
-
-    # String commands
-    # TODO: bitfield, bitop, bitpos
-
-    @command((Key(bytes), bytes))
-    def append(self, key, value):
-        old = key.get(b'')
-        if len(old) + len(value) > MAX_STRING_SIZE:
-            raise SimpleError(msgs.STRING_OVERFLOW_MSG)
-        key.update(key.get(b'') + value)
-        return len(key.value)
-
-    @command((Key(bytes, 0),), (bytes,))
-    def bitcount(self, key, *args):
-        # Redis checks the argument count before decoding integers. That's why
-        # we can't declare them as Int.
-        if args:
-            if len(args) != 2:
-                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-            start = Int.decode(args[0])
-            end = Int.decode(args[1])
-            start, end = self._fix_range_string(start, end, len(key.value))
-            value = key.value[start:end]
-        else:
-            value = key.value
-        return bin(int.from_bytes(value, 'little')).count('1')
-
-    @command((Key(bytes), Int))
-    def decrby(self, key, amount):
-        return self.incrby(key, -amount)
-
-    @command((Key(bytes),))
-    def decr(self, key):
-        return self.incrby(key, -1)
-
-    @command((Key(bytes), Int))
-    def incrby(self, key, amount):
-        c = Int.decode(key.get(b'0')) + amount
-        key.update(self._encodeint(c))
-        return c
-
-    @command((Key(bytes),))
-    def incr(self, key):
-        return self.incrby(key, 1)
-
-    @command((Key(bytes), bytes))
-    def incrbyfloat(self, key, amount):
-        # TODO: introduce convert_order so that we can specify amount is Float
-        c = Float.decode(key.get(b'0')) + Float.decode(amount)
-        if not math.isfinite(c):
-            raise SimpleError(msgs.NONFINITE_MSG)
-        encoded = self._encodefloat(c, True, )
-        key.update(encoded)
-        return encoded
-
-    @command((Key(bytes),))
-    def get(self, key):
-        return key.get(None)
-
-    @command((Key(bytes), BitOffset))
-    def getbit(self, key, offset):
-        value = key.get(b'')
-        byte = offset // 8
-        remaining = offset % 8
-        actual_bitoffset = 7 - remaining
-        try:
-            actual_val = value[byte]
-        except IndexError:
-            return 0
-        return 1 if (1 << actual_bitoffset) & actual_val else 0
-
-    @command((Key(bytes), BitOffset, BitValue))
-    def setbit(self, key, offset, value):
-        val = key.get(b'\x00')
-        byte = offset // 8
-        remaining = offset % 8
-        actual_bitoffset = 7 - remaining
-        if len(val) - 1 < byte:
-            # We need to expand val so that we can set the appropriate
-            # bit.
-            needed = byte - (len(val) - 1)
-            val += b'\x00' * needed
-        old_byte = val[byte]
-        if value == 1:
-            new_byte = old_byte | (1 << actual_bitoffset)
-        else:
-            new_byte = old_byte & ~(1 << actual_bitoffset)
-        old_value = value if old_byte == new_byte else 1 - value
-        reconstructed = bytearray(val)
-        reconstructed[byte] = new_byte
-        key.update(bytes(reconstructed))
-        return old_value
-
-    @command((Key(bytes), Int, Int))
-    def getrange(self, key, start, end):
-        value = key.get(b'')
-        start, end = self._fix_range_string(start, end, len(value))
-        return value[start:end]
-
-    # substr is a deprecated alias for getrange
-    @command((Key(bytes), Int, Int))
-    def substr(self, key, start, end):
-        return self.getrange(key, start, end)
-
-    @command((Key(bytes), bytes))
-    def getset(self, key, value):
-        old = key.value
-        key.value = value
-        return old
-
-    @command((Key(),), (Key(),))
-    def mget(self, *keys):
-        return [key.value if isinstance(key.value, bytes) else None for key in keys]
-
-    @command((Key(), bytes), (Key(), bytes))
-    def mset(self, *args):
-        for i in range(0, len(args), 2):
-            args[i].value = args[i + 1]
-        return OK
-
-    @command((Key(), bytes), (Key(), bytes))
-    def msetnx(self, *args):
-        for i in range(0, len(args), 2):
-            if args[i]:
-                return 0
-        for i in range(0, len(args), 2):
-            args[i].value = args[i + 1]
-        return 1
-
-    @command((Key(), bytes), (bytes,), name='set')
-    def set_(self, key, value, *args):
-        i = 0
-        ex = None
-        px = None
-        xx = False
-        nx = False
-        keepttl = False
-        get = False
-        while i < len(args):
-            if casematch(args[i], b'nx'):
-                nx = True
-                i += 1
-            elif casematch(args[i], b'xx'):
-                xx = True
-                i += 1
-            elif casematch(args[i], b'ex') and i + 1 < len(args):
-                ex = Int.decode(args[i + 1])
-                if ex <= 0 or (self._db.time + ex) * 1000 >= 2 ** 63:
-                    raise SimpleError(msgs.INVALID_EXPIRE_MSG.format('set'))
-                i += 2
-            elif casematch(args[i], b'px') and i + 1 < len(args):
-                px = Int.decode(args[i + 1])
-                if px <= 0 or self._db.time * 1000 + px >= 2 ** 63:
-                    raise SimpleError(msgs.INVALID_EXPIRE_MSG.format('set'))
-                i += 2
-            elif casematch(args[i], b'keepttl'):
-                keepttl = True
-                i += 1
-            elif casematch(args[i], b'get'):
-                get = True
-                i += 1
-            else:
-                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-        if (xx and nx) or ((px is not None) + (ex is not None) + keepttl > 1):
-            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-        if nx and get and self.version < 7:
-            # The command docs say this is allowed from Redis 7.0.
-            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-
-        old_value = None
-        if get:
-            if key.value is not None and type(key.value) is not bytes:
-                raise SimpleError(msgs.WRONGTYPE_MSG)
-            old_value = key.value
-
-        if nx and key:
-            return old_value
-        if xx and not key:
-            return old_value
-        if not keepttl:
-            key.value = value
-        else:
-            key.update(value)
-        if ex is not None:
-            key.expireat = self._db.time + ex
-        if px is not None:
-            key.expireat = self._db.time + px / 1000.0
-        return OK if not get else old_value
-
-    @command((Key(), Int, bytes))
-    def setex(self, key, seconds, value):
-        if seconds <= 0 or (self._db.time + seconds) * 1000 >= 2 ** 63:
-            raise SimpleError(msgs.INVALID_EXPIRE_MSG.format('setex'))
-        key.value = value
-        key.expireat = self._db.time + seconds
-        return OK
-
-    @command((Key(), Int, bytes))
-    def psetex(self, key, ms, value):
-        if ms <= 0 or self._db.time * 1000 + ms >= 2 ** 63:
-            raise SimpleError(msgs.INVALID_EXPIRE_MSG.format('psetex'))
-        key.value = value
-        key.expireat = self._db.time + ms / 1000.0
-        return OK
-
-    @command((Key(), bytes))
-    def setnx(self, key, value):
-        if key:
-            return 0
-        key.value = value
-        return 1
-
-    @command((Key(bytes), Int, bytes))
-    def setrange(self, key, offset, value):
-        if offset < 0:
-            raise SimpleError(msgs.INVALID_OFFSET_MSG)
-        elif not value:
-            return len(key.get(b''))
-        elif offset + len(value) > MAX_STRING_SIZE:
-            raise SimpleError(msgs.STRING_OVERFLOW_MSG)
-        else:
-            out = key.get(b'')
-            if len(out) < offset:
-                out += b'\x00' * (offset - len(out))
-            out = out[0:offset] + value + out[offset + len(value):]
-            key.update(out)
-            return len(out)
-
-    @command((Key(bytes),))
-    def strlen(self, key):
-        return len(key.get(b''))
-
-    # Hash commands
-
-    @command((Key(Hash), bytes), (bytes,))
-    def hdel(self, key, *fields):
-        h = key.value
-        rem = 0
-        for field in fields:
-            if field in h:
-                del h[field]
-                key.updated()
-                rem += 1
-        return rem
-
-    @command((Key(Hash), bytes))
-    def hexists(self, key, field):
-        return int(field in key.value)
-
-    @command((Key(Hash), bytes))
-    def hget(self, key, field):
-        return key.value.get(field)
-
-    @command((Key(Hash),))
-    def hgetall(self, key):
-        return list(itertools.chain(*key.value.items()))
-
-    @command((Key(Hash), bytes, Int))
-    def hincrby(self, key, field, amount):
-        c = Int.decode(key.value.get(field, b'0')) + amount
-        key.value[field] = self._encodeint(c)
-        key.updated()
-        return c
-
-    @command((Key(Hash), bytes, bytes))
-    def hincrbyfloat(self, key, field, amount):
-        c = Float.decode(key.value.get(field, b'0')) + Float.decode(amount)
-        if not math.isfinite(c):
-            raise SimpleError(msgs.NONFINITE_MSG)
-        encoded = self._encodefloat(c, True)
-        key.value[field] = encoded
-        key.updated()
-        return encoded
-
-    @command((Key(Hash),))
-    def hkeys(self, key):
-        return list(key.value.keys())
-
-    @command((Key(Hash),))
-    def hlen(self, key):
-        return len(key.value)
-
-    @command((Key(Hash), bytes), (bytes,))
-    def hmget(self, key, *fields):
-        return [key.value.get(field) for field in fields]
-
-    @command((Key(Hash), bytes, bytes), (bytes, bytes))
-    def hmset(self, key, *args):
-        self.hset(key, *args)
-        return OK
-
-    @command((Key(Hash), Int,), (bytes, bytes))
-    def hscan(self, key, cursor, *args):
-        cursor, keys = self._scan(key.value, cursor, *args)
-        items = []
-        for k in keys:
-            items.append(k)
-            items.append(key.value[k])
-        return [cursor, items]
-
-    @command((Key(Hash), bytes, bytes), (bytes, bytes))
-    def hset(self, key, *args):
-        h = key.value
-        created = 0
-        for i in range(0, len(args), 2):
-            if args[i] not in h:
-                created += 1
-            h[args[i]] = args[i + 1]
-        key.updated()
-        return created
-
-    @command((Key(Hash), bytes, bytes))
-    def hsetnx(self, key, field, value):
-        if field in key.value:
-            return 0
-        return self.hset(key, field, value)
-
-    @command((Key(Hash), bytes))
-    def hstrlen(self, key, field):
-        return len(key.value.get(field, b''))
-
-    @command((Key(Hash),))
-    def hvals(self, key):
-        return list(key.value.values())
-
-    # List commands
-
-    def _bpop_pass(self, keys, op, first_pass):
-        for key in keys:
-            item = CommandItem(key, self._db, item=self._db.get(key), default=[])
-            if not isinstance(item.value, list):
-                if first_pass:
-                    raise SimpleError(msgs.WRONGTYPE_MSG)
-                else:
-                    continue
-            if item.value:
-                ret = op(item.value)
-                item.updated()
-                item.writeback()
-                return [key, ret]
-        return None
-
-    def _bpop(self, args, op):
-        keys = args[:-1]
-        timeout = Timeout.decode(args[-1])
-        return self._blocking(timeout, functools.partial(self._bpop_pass, keys, op))
-
-    @command((bytes, bytes), (bytes,), flags='s')
-    def blpop(self, *args):
-        return self._bpop(args, lambda lst: lst.pop(0))
-
-    @command((bytes, bytes), (bytes,), flags='s')
-    def brpop(self, *args):
-        return self._bpop(args, lambda lst: lst.pop())
-
-    def _brpoplpush_pass(self, source, destination, first_pass):
-        src = CommandItem(source, self._db, item=self._db.get(source), default=[])
-        if not isinstance(src.value, list):
-            if first_pass:
-                raise SimpleError(msgs.WRONGTYPE_MSG)
-            else:
-                return None
-        if not src.value:
-            return None  # Empty list
-        dst = CommandItem(destination, self._db, item=self._db.get(destination), default=[])
-        if not isinstance(dst.value, list):
-            raise SimpleError(msgs.WRONGTYPE_MSG)
-        el = src.value.pop()
-        dst.value.insert(0, el)
-        src.updated()
-        src.writeback()
-        if destination != source:
-            # Ensure writeback only happens once
-            dst.updated()
-            dst.writeback()
-        return el
-
-    @command((bytes, bytes, Timeout), flags='s')
-    def brpoplpush(self, source, destination, timeout):
-        return self._blocking(timeout,
-                              functools.partial(self._brpoplpush_pass, source, destination))
-
-    @command((Key(list, None), Int))
-    def lindex(self, key, index):
-        try:
-            return key.value[index]
-        except IndexError:
-            return None
-
-    @command((Key(list), bytes, bytes, bytes))
-    def linsert(self, key, where, pivot, value):
-        if not casematch(where, b'before') and not casematch(where, b'after'):
-            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-        if not key:
-            return 0
-        else:
-            try:
-                index = key.value.index(pivot)
-            except ValueError:
-                return -1
-            if casematch(where, b'after'):
-                index += 1
-            key.value.insert(index, value)
-            key.updated()
-            return len(key.value)
-
-    @command((Key(list),))
-    def llen(self, key):
-        return len(key.value)
-
-    @command((Key(list, None), Key(list), SimpleString, SimpleString))
-    def lmove(self, first_list, second_list, src, dst):
-        if src not in [b'LEFT', b'RIGHT']:
-            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-        if dst not in [b'LEFT', b'RIGHT']:
-            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-        el = self.rpop(first_list) if src == b'RIGHT' else self.lpop(first_list)
-        self.lpush(second_list, el) if dst == b'LEFT' else self.rpush(second_list, el)
-        return el
-
-    def _list_pop(self, get_slice, key, *args):
-        """Implements lpop and rpop.
-
-        `get_slice` must take a count and return a slice expression for the
-        range to pop.
-        """
-        # This implementation is somewhat contorted to match the odd
-        # behaviours described in https://github.com/redis/redis/issues/9680.
-        count = 1
-        if len(args) > 1:
-            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-        elif len(args) == 1:
-            count = args[0]
-            if count < 0:
-                raise SimpleError(msgs.INDEX_ERROR_MSG)
-            elif count == 0 and self.version == 6:
-                return None
-        if not key:
-            return None
-        elif type(key.value) != list:
-            raise SimpleError(msgs.WRONGTYPE_MSG)
-        slc = get_slice(count)
-        ret = key.value[slc]
-        del key.value[slc]
-        key.updated()
-        if not args:
-            ret = ret[0]
-        return ret
-
-    @command((Key(),), (Int(),))
-    def lpop(self, key, *args):
-        return self._list_pop(lambda count: slice(None, count), key, *args)
-
-    @command((Key(list), bytes), (bytes,))
-    def lpush(self, key, *values):
-        for value in values:
-            key.value.insert(0, value)
-        key.updated()
-        return len(key.value)
-
-    @command((Key(list), bytes), (bytes,))
-    def lpushx(self, key, *values):
-        if not key:
-            return 0
-        return self.lpush(key, *values)
-
-    @command((Key(list), Int, Int))
-    def lrange(self, key, start, stop):
-        start, stop = self._fix_range(start, stop, len(key.value))
-        return key.value[start:stop]
-
-    @command((Key(list), Int, bytes))
-    def lrem(self, key, count, value):
-        a_list = key.value
-        found = []
-        for i, el in enumerate(a_list):
-            if el == value:
-                found.append(i)
-        if count > 0:
-            indices_to_remove = found[:count]
-        elif count < 0:
-            indices_to_remove = found[count:]
-        else:
-            indices_to_remove = found
-        # Iterating in reverse order to ensure the indices
-        # remain valid during deletion.
-        for index in reversed(indices_to_remove):
-            del a_list[index]
-        if indices_to_remove:
-            key.updated()
-        return len(indices_to_remove)
-
-    @command((Key(list), Int, bytes))
-    def lset(self, key, index, value):
-        if not key:
-            raise SimpleError(msgs.NO_KEY_MSG)
-        try:
-            key.value[index] = value
-            key.updated()
-        except IndexError:
-            raise SimpleError(msgs.INDEX_ERROR_MSG)
-        return OK
-
-    @command((Key(list), Int, Int))
-    def ltrim(self, key, start, stop):
-        if key:
-            if stop == -1:
-                stop = None
-            else:
-                stop += 1
-            new_value = key.value[start:stop]
-            # TODO: check if this should actually be conditional
-            if len(new_value) != len(key.value):
-                key.update(new_value)
-        return OK
-
-    @command((Key(),), (Int(),))
-    def rpop(self, key, *args):
-        return self._list_pop(lambda count: slice(None, -count - 1, -1), key, *args)
-
-    @command((Key(list, None), Key(list)))
-    def rpoplpush(self, src, dst):
-        el = self.rpop(src)
-        self.lpush(dst, el)
-        return el
-
-    @command((Key(list), bytes), (bytes,))
-    def rpush(self, key, *values):
-        for value in values:
-            key.value.append(value)
-        key.updated()
-        return len(key.value)
-
-    @command((Key(list), bytes), (bytes,))
-    def rpushx(self, key, *values):
-        if not key:
-            return 0
-        return self.rpush(key, *values)
-
-    # Set commands
-
-    @command((Key(set), bytes), (bytes,))
-    def sadd(self, key, *members):
-        old_size = len(key.value)
-        key.value.update(members)
-        key.updated()
-        return len(key.value) - old_size
-
-    @command((Key(set),))
-    def scard(self, key):
-        return len(key.value)
-
-    @staticmethod
-    def _calc_setop(op, stop_if_missing, key, *keys):
-        if stop_if_missing and not key.value:
-            return set()
-        ans = key.value.copy()
-        for other in keys:
-            value = other.value if other.value is not None else set()
-            if not isinstance(value, set):
-                raise SimpleError(msgs.WRONGTYPE_MSG)
-            if stop_if_missing and not value:
-                return set()
-            ans = op(ans, value)
-        return ans
-
-    def _setop(self, op, stop_if_missing, dst, key, *keys):
-        """Apply one of SINTER[STORE], SUNION[STORE], SDIFF[STORE].
-
-        If `stop_if_missing`, the output will be made an empty set as soon as
-        an empty input set is encountered (use for SINTER[STORE]). May assume
-        that `key` is a set (or empty), but `keys` could be anything.
-        """
-        ans = self._calc_setop(op, stop_if_missing, key, *keys)
-        if dst is None:
-            return list(ans)
-        else:
-            dst.value = ans
-            return len(dst.value)
-
-    @command((Key(set),), (Key(set),))
-    def sdiff(self, *keys):
-        return self._setop(lambda a, b: a - b, False, None, *keys)
-
-    @command((Key(), Key(set)), (Key(set),))
-    def sdiffstore(self, dst, *keys):
-        return self._setop(lambda a, b: a - b, False, dst, *keys)
-
-    @command((Key(set),), (Key(set),))
-    def sinter(self, *keys):
-        return self._setop(lambda a, b: a & b, True, None, *keys)
-
-    @command((Key(), Key(set)), (Key(set),))
-    def sinterstore(self, dst, *keys):
-        return self._setop(lambda a, b: a & b, True, dst, *keys)
-
-    @command((Key(set), bytes))
-    def sismember(self, key, member):
-        return int(member in key.value)
-
-    @command((Key(set), bytes), (bytes,))
-    def smismember(self, key, *members):
-        return [self.sismember(key, member) for member in members]
-
-    @command((Key(set),))
-    def smembers(self, key):
-        return list(key.value)
-
-    @command((Key(set, 0), Key(set), bytes))
-    def smove(self, src, dst, member):
-        try:
-            src.value.remove(member)
-            src.updated()
-        except KeyError:
-            return 0
-        else:
-            dst.value.add(member)
-            dst.updated()  # TODO: is it updated if member was already present?
-            return 1
-
-    @command((Key(set),), (Int,))
-    def spop(self, key, count=None):
-        if count is None:
-            if not key.value:
-                return None
-            item = random.sample(list(key.value), 1)[0]
-            key.value.remove(item)
-            key.updated()
-            return item
-        else:
-            if count < 0:
-                raise SimpleError(msgs.INDEX_ERROR_MSG)
-            items = self.srandmember(key, count)
-            for item in items:
-                key.value.remove(item)
-                key.updated()  # Inside the loop because redis special-cases count=0
-            return items
-
-    @command((Key(set),), (Int,))
-    def srandmember(self, key, count=None):
-        if count is None:
-            if not key.value:
-                return None
-            else:
-                return random.sample(list(key.value), 1)[0]
-        elif count >= 0:
-            count = min(count, len(key.value))
-            return random.sample(list(key.value), count)
-        else:
-            items = list(key.value)
-            return [random.choice(items) for _ in range(-count)]
-
-    @command((Key(set), bytes), (bytes,))
-    def srem(self, key, *members):
-        old_size = len(key.value)
-        for member in members:
-            key.value.discard(member)
-        deleted = old_size - len(key.value)
-        if deleted:
-            key.updated()
-        return deleted
-
-    @command((Key(set), Int), (bytes, bytes))
-    def sscan(self, key, cursor, *args):
-        return self._scan(key.value, cursor, *args)
-
-    @command((Key(set),), (Key(set),))
-    def sunion(self, *keys):
-        return self._setop(lambda a, b: a | b, False, None, *keys)
-
-    @command((Key(), Key(set)), (Key(set),))
-    def sunionstore(self, dst, *keys):
-        return self._setop(lambda a, b: a | b, False, dst, *keys)
-
-    # Hyperloglog commands
-    # These are not quite the same as the real redis ones, which are
-    # approximate and store the results in a string. Instead, it is implemented
-    # on top of sets.
-
-    @command((Key(set),), (bytes,))
-    def pfadd(self, key, *elements):
-        result = self.sadd(key, *elements)
-        # Per the documentation:
-        # - 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise.
-        return 1 if result > 0 else 0
-
-    @command((Key(set),), (Key(set),))
-    def pfcount(self, *keys):
-        """
-        Return the approximated cardinality of
-        the set observed by the HyperLogLog at key(s).
-        """
-        return len(self.sunion(*keys))
-
-    @command((Key(set), Key(set)), (Key(set),))
-    def pfmerge(self, dest, *sources):
-        "Merge N different HyperLogLogs into a single one."
-        self.sunionstore(dest, *sources)
-        return OK
-
-    # Sorted set commands
-    # TODO: [b]zpopmin/zpopmax,
-
-    @staticmethod
-    def _limit_items(items, offset, count):
-        out = []
-        for item in items:
-            if offset:  # Note: not offset > 0, in order to match redis
-                offset -= 1
-                continue
-            if count == 0:
-                break
-            count -= 1
-            out.append(item)
-        return out
-
-    def _apply_withscores(self, items, withscores):
-        if withscores:
-            out = []
-            for item in items:
-                out.append(item[1])
-                out.append(self._encodefloat(item[0], False))
-        else:
-            out = [item[1] for item in items]
-        return out
-
-    @command((Key(ZSet), bytes, bytes), (bytes,))
-    def zadd(self, key, *args):
-        zset = key.value
-
-        i = 0
-        ch = False
-        nx = False
-        xx = False
-        incr = False
-        while i < len(args):
-            if casematch(args[i], b'ch'):
-                ch = True
-                i += 1
-            elif casematch(args[i], b'nx'):
-                nx = True
-                i += 1
-            elif casematch(args[i], b'xx'):
-                xx = True
-                i += 1
-            elif casematch(args[i], b'incr'):
-                incr = True
-                i += 1
-            else:
-                # First argument not matching flags indicates the start of
-                # score pairs.
-                break
-
-        if nx and xx:
-            raise SimpleError(msgs.ZADD_NX_XX_ERROR_MSG)
-
-        elements = args[i:]
-        if not elements or len(elements) % 2 != 0:
-            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-        if incr and len(elements) != 2:
-            raise SimpleError(msgs.ZADD_INCR_LEN_ERROR_MSG)
-        # Parse all scores first, before updating
-        items = [
-            (0.0 + Float.decode(elements[j]) if self.version >= 7 else Float.decode(elements[j]), elements[j + 1])
-            for j in range(0, len(elements), 2)
-        ]
-        old_len = len(zset)
-        changed_items = 0
-
-        if incr:
-            item_score, item_name = items[0]
-            if (nx and item_name in zset) or (xx and item_name not in zset):
-                return None
-            return self.zincrby(key, item_score, item_name)
-
-        for item_score, item_name in items:
-            if (
-                    (not nx or item_name not in zset)
-                    and (not xx or item_name in zset)
-            ):
-                if zset.add(item_name, item_score):
-                    changed_items += 1
-
-        if changed_items:
-            key.updated()
-
-        if ch:
-            return changed_items
-        return len(zset) - old_len
-
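The option scan at the top of `zadd` above reads flags until the first token that is not CH/NX/XX/INCR, which marks the start of the score/member pairs. A minimal sketch of that parse (names here are illustrative, not fakeredis internals, and the real code compares raw bytes case-insensitively):

```python
# Sketch of ZADD option scanning: leading flags, then (score, member) pairs.
def parse_zadd_args(args):
    flags = {'ch': False, 'nx': False, 'xx': False, 'incr': False}
    i = 0
    while i < len(args):
        token = args[i].lower()
        if token in flags:
            flags[token] = True
            i += 1
        else:
            break  # first non-flag token starts the score/member pairs
    if flags['nx'] and flags['xx']:
        raise ValueError("XX and NX options at the same time are not compatible")
    pairs = args[i:]
    if not pairs or len(pairs) % 2 != 0:
        raise ValueError("syntax error")
    return flags, list(zip(pairs[0::2], pairs[1::2]))

flags, pairs = parse_zadd_args(['NX', 'CH', '1.5', 'alice', '2.0', 'bob'])
```

As in the real implementation, scores are all parsed before any update is applied, so a bad score midway through the pair list leaves the zset untouched.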
-    @command((Key(ZSet),))
-    def zcard(self, key):
-        return len(key.value)
-
-    @command((Key(ZSet), ScoreTest, ScoreTest))
-    def zcount(self, key, min, max):
-        return key.value.zcount(min.lower_bound, max.upper_bound)
-
-    @command((Key(ZSet), Float, bytes))
-    def zincrby(self, key, increment, member):
-        # Can't just default the old score to 0.0, because in IEEE754, adding
-        # 0.0 to something isn't a nop (e.g. 0.0 + -0.0 == 0.0).
-        try:
-            score = key.value.get(member, None) + increment
-        except TypeError:
-            score = increment
-        if math.isnan(score):
-            raise SimpleError(msgs.SCORE_NAN_MSG)
-        key.value[member] = score
-        key.updated()
-        return self._encodefloat(score, False)
-
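The comment inside `zincrby` above is worth seeing concretely: defaulting a missing score to `0.0` and adding would be wrong because in IEEE 754 adding `0.0` is not a no-op for negative zero.

```python
import math

# Why zincrby can't default a missing score to 0.0: adding 0.0 erases
# the sign of negative zero under IEEE 754, so ZINCRBY key -0.0 member
# on a fresh member must store -0.0, not +0.0.
print(math.copysign(1.0, 0.0 + -0.0))   # 1.0: the negative sign is lost
print(math.copysign(1.0, -0.0))         # -1.0: using the raw increment keeps it
```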
-    @command((Key(ZSet), StringTest, StringTest))
-    def zlexcount(self, key, min, max):
-        return key.value.zlexcount(min.value, min.exclusive, max.value, max.exclusive)
-
-    def _zrange(self, key, start, stop, reverse, *args):
-        zset = key.value
-        withscores = False
-        for arg in args:
-            if casematch(arg, b'withscores'):
-                withscores = True
-            else:
-                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-        start, stop = self._fix_range(start, stop, len(zset))
-        if reverse:
-            start, stop = len(zset) - stop, len(zset) - start
-        items = zset.islice_score(start, stop, reverse)
-        items = self._apply_withscores(items, withscores)
-        return items
-
-    @command((Key(ZSet), Int, Int), (bytes,))
-    def zrange(self, key, start, stop, *args):
-        return self._zrange(key, start, stop, False, *args)
-
-    @command((Key(ZSet), Int, Int), (bytes,))
-    def zrevrange(self, key, start, stop, *args):
-        return self._zrange(key, start, stop, True, *args)
-
-    def _zrangebylex(self, key, _min, _max, reverse, *args):
-        if args:
-            if len(args) != 3 or not casematch(args[0], b'limit'):
-                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-            offset = Int.decode(args[1])
-            count = Int.decode(args[2])
-        else:
-            offset = 0
-            count = -1
-        zset = key.value
-        items = zset.irange_lex(_min.value, _max.value,
-                                inclusive=(not _min.exclusive, not _max.exclusive),
-                                reverse=reverse)
-        items = self._limit_items(items, offset, count)
-        return items
-
-    @command((Key(ZSet), StringTest, StringTest), (bytes,))
-    def zrangebylex(self, key, _min, _max, *args):
-        return self._zrangebylex(key, _min, _max, False, *args)
-
-    @command((Key(ZSet), StringTest, StringTest), (bytes,))
-    def zrevrangebylex(self, key, _max, _min, *args):
-        return self._zrangebylex(key, _min, _max, True, *args)
-
-    def _zrangebyscore(self, key, _min, _max, reverse, *args):
-        withscores = False
-        offset = 0
-        count = -1
-        i = 0
-        while i < len(args):
-            if casematch(args[i], b'withscores'):
-                withscores = True
-                i += 1
-            elif casematch(args[i], b'limit') and i + 2 < len(args):
-                offset = Int.decode(args[i + 1])
-                count = Int.decode(args[i + 2])
-                i += 3
-            else:
-                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-        zset = key.value
-        items = list(zset.irange_score(_min.lower_bound, _max.upper_bound, reverse=reverse))
-        items = self._limit_items(items, offset, count)
-        items = self._apply_withscores(items, withscores)
-        return items
-
-    @command((Key(ZSet), ScoreTest, ScoreTest), (bytes,))
-    def zrangebyscore(self, key, _min, _max, *args):
-        return self._zrangebyscore(key, _min, _max, False, *args)
-
-    @command((Key(ZSet), ScoreTest, ScoreTest), (bytes,))
-    def zrevrangebyscore(self, key, _max, _min, *args):
-        return self._zrangebyscore(key, _min, _max, True, *args)
-
-    @command((Key(ZSet), bytes))
-    def zrank(self, key, member):
-        try:
-            return key.value.rank(member)
-        except KeyError:
-            return None
-
-    @command((Key(ZSet), bytes))
-    def zrevrank(self, key, member):
-        try:
-            return len(key.value) - 1 - key.value.rank(member)
-        except KeyError:
-            return None
-
-    @command((Key(ZSet), bytes), (bytes,))
-    def zrem(self, key, *members):
-        old_size = len(key.value)
-        for member in members:
-            key.value.discard(member)
-        deleted = old_size - len(key.value)
-        if deleted:
-            key.updated()
-        return deleted
-
-    @command((Key(ZSet), StringTest, StringTest))
-    def zremrangebylex(self, key, min, max):
-        items = key.value.irange_lex(min.value, max.value,
-                                     inclusive=(not min.exclusive, not max.exclusive))
-        return self.zrem(key, *items)
-
-    @command((Key(ZSet), ScoreTest, ScoreTest))
-    def zremrangebyscore(self, key, min, max):
-        items = key.value.irange_score(min.lower_bound, max.upper_bound)
-        return self.zrem(key, *[item[1] for item in items])
-
-    @command((Key(ZSet), Int, Int))
-    def zremrangebyrank(self, key, start, stop):
-        zset = key.value
-        start, stop = self._fix_range(start, stop, len(zset))
-        items = zset.islice_score(start, stop)
-        return self.zrem(key, *[item[1] for item in items])
-
-    @command((Key(ZSet), Int), (bytes, bytes))
-    def zscan(self, key, cursor, *args):
-        new_cursor, ans = self._scan(key.value.items(), cursor, *args)
-        flat = []
-        for (key, score) in ans:
-            flat.append(key)
-            flat.append(self._encodefloat(score, False))
-        return [new_cursor, flat]
-
-    @command((Key(ZSet), bytes))
-    def zscore(self, key, member):
-        try:
-            return self._encodefloat(key.value[member], False)
-        except KeyError:
-            return None
-
-    @staticmethod
-    def _get_zset(value):
-        if isinstance(value, set):
-            zset = ZSet()
-            for item in value:
-                zset[item] = 1.0
-            return zset
-        elif isinstance(value, ZSet):
-            return value
-        else:
-            raise SimpleError(msgs.WRONGTYPE_MSG)
-
-    def _zunioninter(self, func, dest, numkeys, *args):
-        if numkeys < 1:
-            raise SimpleError(msgs.ZUNIONSTORE_KEYS_MSG)
-        if numkeys > len(args):
-            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-        aggregate = b'sum'
-        sets = []
-        for i in range(numkeys):
-            item = CommandItem(args[i], self._db, item=self._db.get(args[i]), default=ZSet())
-            sets.append(self._get_zset(item.value))
-        weights = [1.0] * numkeys
-
-        i = numkeys
-        while i < len(args):
-            arg = args[i]
-            if casematch(arg, b'weights') and i + numkeys < len(args):
-                weights = [Float.decode(x) for x in args[i + 1:i + numkeys + 1]]
-                i += numkeys + 1
-            elif casematch(arg, b'aggregate') and i + 1 < len(args):
-                aggregate = casenorm(args[i + 1])
-                if aggregate not in (b'sum', b'min', b'max'):
-                    raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-                i += 2
-            else:
-                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-
-        out_members = set(sets[0])
-        for s in sets[1:]:
-            if func == 'ZUNIONSTORE':
-                out_members |= set(s)
-            else:
-                out_members.intersection_update(s)
-
-        # We first build a regular dict and turn it into a ZSet. The
-        # reason is subtle: a ZSet won't update a score from -0 to +0
-        # (or vice versa) through assignment, but a regular dict will.
-        out = {}
-        # The sort affects the order of floating-point operations.
-        # Note that redis uses qsort(1), which has no stability guarantees,
-        # so we can't be sure to match it in all cases.
-        for s, w in sorted(zip(sets, weights), key=lambda x: len(x[0])):
-            for member, score in s.items():
-                score *= w
-                # Redis only does this step for ZUNIONSTORE. See
-                # https://github.com/antirez/redis/issues/3954.
-                if func == 'ZUNIONSTORE' and math.isnan(score):
-                    score = 0.0
-                if member not in out_members:
-                    continue
-                if member in out:
-                    old = out[member]
-                    if aggregate == b'sum':
-                        score += old
-                        if math.isnan(score):
-                            score = 0.0
-                    elif aggregate == b'max':
-                        score = max(old, score)
-                    elif aggregate == b'min':
-                        score = min(old, score)
-                    else:
-                        assert False  # pragma: nocover
-                if math.isnan(score):
-                    score = 0.0
-                out[member] = score
-
-        out_zset = ZSet()
-        for member, score in out.items():
-            out_zset[member] = score
-
-        dest.value = out_zset
-        return len(out_zset)
-
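The core of the WEIGHTS/AGGREGATE handling in `_zunioninter` can be illustrated with plain dicts standing in for ZSets: each score is multiplied by its set's weight, then duplicate members are merged with sum, min, or max. This toy version only shows the aggregation step; it omits the intersection filtering, NaN handling, and size-based sort that the real method performs.

```python
# Toy illustration of WEIGHTS + AGGREGATE merging (union only).
def union_scores(zsets, weights, aggregate='sum'):
    out = {}
    for s, w in zip(zsets, weights):
        for member, score in s.items():
            score *= w
            if member in out:
                if aggregate == 'sum':
                    score += out[member]
                elif aggregate == 'max':
                    score = max(out[member], score)
                else:  # 'min'
                    score = min(out[member], score)
            out[member] = score
    return out

s1 = {b'x': 1.0, b'y': 2.0}
s2 = {b'y': 3.0, b'z': 4.0}
summed = union_scores([s1, s2], [1.0, 2.0])   # y: 2*1 + 3*2 = 8.0
```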
-    @command((Key(), Int, bytes), (bytes,))
-    def zunionstore(self, dest, numkeys, *args):
-        return self._zunioninter('ZUNIONSTORE', dest, numkeys, *args)
-
-    @command((Key(), Int, bytes), (bytes,))
-    def zinterstore(self, dest, numkeys, *args):
-        return self._zunioninter('ZINTERSTORE', dest, numkeys, *args)
-
-    # Server commands
-    # TODO: lots
-
-    @command((), (bytes,), flags='s')
-    def bgsave(self, *args):
-        if len(args) > 1 or (len(args) == 1 and not casematch(args[0], b'schedule')):
-            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-        self._server.lastsave = int(time.time())
-        return BGSAVE_STARTED
-
-    @command(())
-    def dbsize(self):
-        return len(self._db)
-
-    @command((), (bytes,))
-    def flushdb(self, *args):
-        if args:
-            if len(args) != 1 or not casematch(args[0], b'async'):
-                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-        self._db.clear()
-        return OK
-
-    @command((), (bytes,))
-    def flushall(self, *args):
-        if args:
-            if len(args) != 1 or not casematch(args[0], b'async'):
-                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
-        for db in self._server.dbs.values():
-            db.clear()
-        # TODO: clear watches and/or pubsub as well?
-        return OK
-
-    @command(())
-    def lastsave(self):
-        return self._server.lastsave
-
-    @command((), flags='s')
-    def save(self):
-        self._server.lastsave = int(time.time())
-        return OK
-
-    @command(())
-    def time(self):
-        now_us = round(time.time() * 1000000)
-        now_s = now_us // 1000000
-        now_us %= 1000000
-        return [str(now_s).encode(), str(now_us).encode()]
-
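The `time` command above splits a microsecond timestamp into whole seconds plus leftover microseconds using floor division and modulo; `divmod` performs both steps at once, as this equivalent sketch shows:

```python
import time

# Equivalent to the TIME reply construction above: seconds and
# microseconds as two bulk strings.
now_us = round(time.time() * 1_000_000)
now_s, rem_us = divmod(now_us, 1_000_000)
reply = [str(now_s).encode(), str(rem_us).encode()]
```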
-    # Script commands
-    # script debug and script kill will probably not be supported
-
-    def _convert_redis_arg(self, lua_runtime, value):
-        # Type checks are exact to avoid issues like bool being a subclass of int.
-        if type(value) is bytes:
-            return value
-        elif type(value) in {int, float}:
-            return '{:.17g}'.format(value).encode()
-        else:
-            # TODO: add the context
-            msg = LUA_COMMAND_ARG_MSG6 if self.version < 7 else LUA_COMMAND_ARG_MSG
-            raise SimpleError(msg)
-
-    def _convert_redis_result(self, lua_runtime, result):
-        if isinstance(result, (bytes, int)):
-            return result
-        elif isinstance(result, SimpleString):
-            return lua_runtime.table_from({b"ok": result.value})
-        elif result is None:
-            return False
-        elif isinstance(result, list):
-            converted = [
-                self._convert_redis_result(lua_runtime, item)
-                for item in result
-            ]
-            return lua_runtime.table_from(converted)
-        elif isinstance(result, SimpleError):
-            raise result
-        else:
-            raise RuntimeError("Unexpected return type from redis: {}".format(type(result)))
-
-    def _convert_lua_result(self, result, nested=True):
-        from lupa import lua_type
-        if lua_type(result) == 'table':
-            for key in (b'ok', b'err'):
-                if key in result:
-                    msg = self._convert_lua_result(result[key])
-                    if not isinstance(msg, bytes):
-                        raise SimpleError(msgs.LUA_WRONG_NUMBER_ARGS_MSG)
-                    if key == b'ok':
-                        return SimpleString(msg)
-                    elif nested:
-                        return SimpleError(msg.decode('utf-8', 'replace'))
-                    else:
-                        raise SimpleError(msg.decode('utf-8', 'replace'))
-            # Convert Lua tables into lists, starting from index 1, mimicking the behavior of StrictRedis.
-            result_list = []
-            for index in itertools.count(1):
-                if index not in result:
-                    break
-                item = result[index]
-                result_list.append(self._convert_lua_result(item))
-            return result_list
-        elif isinstance(result, str):
-            return result.encode()
-        elif isinstance(result, float):
-            return int(result)
-        elif isinstance(result, bool):
-            return 1 if result else None
-        return result
-
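The scalar conversion rules at the bottom of `_convert_lua_result` mirror how real Redis maps Lua values onto the wire protocol: strings become bytes, numbers are truncated to integers, `true` becomes `1`, and `false` becomes a nil reply. A sketch of just that scalar tail (the real method additionally handles Lua tables and the `ok`/`err` fields):

```python
# Scalar Lua -> RESP conversion, as in the tail of _convert_lua_result.
def convert_scalar(result):
    if isinstance(result, bool):    # bool first: bool is a subclass of int
        return 1 if result else None
    if isinstance(result, str):
        return result.encode()
    if isinstance(result, float):
        return int(result)          # Lua numbers are truncated, not rounded
    return result
```

The `bool` check must come before the numeric ones, since `isinstance(True, int)` is true in Python.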
-    def _check_for_lua_globals(self, lua_runtime, expected_globals):
-        actual_globals = set(lua_runtime.globals().keys())
-        if actual_globals != expected_globals:
-            unexpected = [six.ensure_str(var, 'utf-8', 'replace')
-                          for var in actual_globals - expected_globals]
-            raise SimpleError(msgs.GLOBAL_VARIABLE_MSG.format(", ".join(unexpected)))
-
-    def _lua_redis_call(self, lua_runtime, expected_globals, op, *args):
-        # Check if we've set any global variables before making any change.
-        self._check_for_lua_globals(lua_runtime, expected_globals)
-        func, func_name = self._name_to_func(op)
-        args = [self._convert_redis_arg(lua_runtime, arg) for arg in args]
-        result = self._run_command(func, func._fakeredis_sig, args, True)
-        return self._convert_redis_result(lua_runtime, result)
-
-    def _lua_redis_pcall(self, lua_runtime, expected_globals, op, *args):
-        try:
-            return self._lua_redis_call(lua_runtime, expected_globals, op, *args)
-        except Exception as ex:
-            return lua_runtime.table_from({b"err": str(ex)})
-
-    def _lua_redis_log(self, lua_runtime, expected_globals, lvl, *args):
-        self._check_for_lua_globals(lua_runtime, expected_globals)
-        if len(args) < 1:
-            raise SimpleError(msgs.REQUIRES_MORE_ARGS_MSG.format("redis.log()", "two"))
-        if lvl not in REDIS_LOG_LEVELS.values():
-            raise SimpleError(msgs.LOG_INVALID_DEBUG_LEVEL_MSG)
-        msg = ' '.join([x.decode('utf-8')
-                        if isinstance(x, bytes) else str(x)
-                        for x in args if not isinstance(x, bool)])
-        LOGGER.log(REDIS_LOG_LEVELS_TO_LOGGING[lvl], msg)
-
-    @command((bytes, Int), (bytes,), flags='s')
-    def eval(self, script, numkeys, *keys_and_args):
-        from lupa import LuaError, LuaRuntime, as_attrgetter
-
-        if numkeys > len(keys_and_args):
-            raise SimpleError(msgs.TOO_MANY_KEYS_MSG)
-        if numkeys < 0:
-            raise SimpleError(msgs.NEGATIVE_KEYS_MSG)
-        sha1 = hashlib.sha1(script).hexdigest().encode()
-        self._server.script_cache[sha1] = script
-        lua_runtime = LuaRuntime(encoding=None, unpack_returned_tuples=True)
-
-        set_globals = lua_runtime.eval(
-            """
-            function(keys, argv, redis_call, redis_pcall, redis_log, redis_log_levels)
-                redis = {}
-                redis.call = redis_call
-                redis.pcall = redis_pcall
-                redis.log = redis_log
-                for level, pylevel in python.iterex(redis_log_levels.items()) do
-                    redis[level] = pylevel
-                end
-                redis.error_reply = function(msg) return {err=msg} end
-                redis.status_reply = function(msg) return {ok=msg} end
-                KEYS = keys
-                ARGV = argv
-            end
-            """
-        )
-        expected_globals = set()
-        set_globals(
-            lua_runtime.table_from(keys_and_args[:numkeys]),
-            lua_runtime.table_from(keys_and_args[numkeys:]),
-            functools.partial(self._lua_redis_call, lua_runtime, expected_globals),
-            functools.partial(self._lua_redis_pcall, lua_runtime, expected_globals),
-            functools.partial(self._lua_redis_log, lua_runtime, expected_globals),
-            as_attrgetter(REDIS_LOG_LEVELS)
-        )
-        expected_globals.update(lua_runtime.globals().keys())
-
-        try:
-            result = lua_runtime.execute(script)
-        except SimpleError as ex:
-            if self.version == 6:
-                raise SimpleError(msgs.SCRIPT_ERROR_MSG.format(sha1.decode(), ex))
-            raise SimpleError(ex.value)
-        except LuaError as ex:
-            raise SimpleError(msgs.SCRIPT_ERROR_MSG.format(sha1.decode(), ex))
-
-        self._check_for_lua_globals(lua_runtime, expected_globals)
-
-        return self._convert_lua_result(result, nested=False)
-
-    @command((bytes, Int), (bytes,), flags='s')
-    def evalsha(self, sha1, numkeys, *keys_and_args):
-        try:
-            script = self._server.script_cache[sha1]
-        except KeyError:
-            raise SimpleError(msgs.NO_MATCHING_SCRIPT_MSG)
-        return self.eval(script, numkeys, *keys_and_args)
-
-    @command((bytes,), (bytes,), flags='s')
-    def script(self, subcmd, *args):
-        if casematch(subcmd, b'load'):
-            if len(args) != 1:
-                raise SimpleError(msgs.BAD_SUBCOMMAND_MSG.format('SCRIPT'))
-            script = args[0]
-            sha1 = hashlib.sha1(script).hexdigest().encode()
-            self._server.script_cache[sha1] = script
-            return sha1
-        elif casematch(subcmd, b'exists'):
-            if self.version >= 7 and len(args) == 0:
-                raise SimpleError(msgs.WRONG_ARGS_MSG.format('script|exists'))
-            return [int(sha1 in self._server.script_cache) for sha1 in args]
-        elif casematch(subcmd, b'flush'):
-            if len(args) > 1 or (len(args) == 1 and casenorm(args[0]) not in {b'sync', b'async'}):
-                raise SimpleError(msgs.BAD_SUBCOMMAND_MSG.format('SCRIPT'))
-            self._server.script_cache = {}
-            return OK
-        else:
-            raise SimpleError(msgs.BAD_SUBCOMMAND_MSG.format('SCRIPT'))
-
-    # Pubsub commands
-    # TODO: pubsub command
-
-    def _subscribe(self, channels, subscribers, mtype):
-        for channel in channels:
-            subs = subscribers[channel]
-            if self not in subs:
-                subs.add(self)
-                self._pubsub += 1
-            msg = [mtype, channel, self._pubsub]
-            self.put_response(msg)
-        return NoResponse()
-
-    def _unsubscribe(self, channels, subscribers, mtype):
-        if not channels:
-            channels = []
-            for (channel, subs) in subscribers.items():
-                if self in subs:
-                    channels.append(channel)
-        for channel in channels:
-            subs = subscribers.get(channel, set())
-            if self in subs:
-                subs.remove(self)
-                if not subs:
-                    del subscribers[channel]
-                self._pubsub -= 1
-            msg = [mtype, channel, self._pubsub]
-            self.put_response(msg)
-        return NoResponse()
-
-    @command((bytes,), (bytes,), flags='s')
-    def psubscribe(self, *patterns):
-        return self._subscribe(patterns, self._server.psubscribers, b'psubscribe')
-
-    @command((bytes,), (bytes,), flags='s')
-    def subscribe(self, *channels):
-        return self._subscribe(channels, self._server.subscribers, b'subscribe')
-
-    @command((), (bytes,), flags='s')
-    def punsubscribe(self, *patterns):
-        return self._unsubscribe(patterns, self._server.psubscribers, b'punsubscribe')
-
-    @command((), (bytes,), flags='s')
-    def unsubscribe(self, *channels):
-        return self._unsubscribe(channels, self._server.subscribers, b'unsubscribe')
-
-    @command((bytes, bytes))
-    def publish(self, channel, message):
-        receivers = 0
-        msg = [b'message', channel, message]
-        subs = self._server.subscribers.get(channel, set())
-        for sock in subs:
-            sock.put_response(msg)
-            receivers += 1
-        for (pattern, socks) in self._server.psubscribers.items():
-            regex = compile_pattern(pattern)
-            if regex.match(channel):
-                msg = [b'pmessage', pattern, channel, message]
-                for sock in socks:
-                    sock.put_response(msg)
-                    receivers += 1
-        return receivers
-
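The `publish` fan-out above delivers a `message` to every direct subscriber of the channel and a `pmessage` to every pattern subscriber whose glob matches. A sketch of that delivery logic, with `fnmatch` standing in for fakeredis's `compile_pattern` (Redis glob syntax is close to, but not identical to, `fnmatch`, so this is illustrative only):

```python
import fnmatch

# Sketch of PUBLISH fan-out: direct subscribers, then pattern subscribers.
def publish(subscribers, psubscribers, channel, message):
    deliveries = []
    for sock in subscribers.get(channel, set()):
        deliveries.append((sock, [b'message', channel, message]))
    for pattern, socks in psubscribers.items():
        if fnmatch.fnmatchcase(channel.decode(), pattern.decode()):
            for sock in socks:
                deliveries.append((sock, [b'pmessage', pattern, channel, message]))
    return deliveries

subs = {b'news.tech': {'sock1'}}
psubs = {b'news.*': {'sock2'}}
out = publish(subs, psubs, b'news.tech', b'hello')
```

The return value of the real command is the receiver count, i.e. `len(deliveries)` here.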
-    def _encodefloat(self, value, humanfriendly):
-        if self.version >= 7:
-            value = 0 + value
-        return Float.encode(value, humanfriendly)
-
-    def _encodeint(self, value):
-        if self.version >= 7:
-            value = 0 + value
-        return Int.encode(value)
-
-
-setattr(FakeSocket, 'del', FakeSocket.del_)
-delattr(FakeSocket, 'del_')
-setattr(FakeSocket, 'set', FakeSocket.set_)
-delattr(FakeSocket, 'set_')
-setattr(FakeSocket, 'exec', FakeSocket.exec_)
-delattr(FakeSocket, 'exec_')
+from fakeredis.stack import JSONCommandsMixin
+from ._basefakesocket import BaseFakeSocket
+from .commands_mixins.bitmap_mixin import BitmapCommandsMixin
+from .commands_mixins.connection_mixin import ConnectionCommandsMixin
+from .commands_mixins.generic_mixin import GenericCommandsMixin
+from .commands_mixins.geo_mixin import GeoCommandsMixin
+from .commands_mixins.hash_mixin import HashCommandsMixin
+from .commands_mixins.list_mixin import ListCommandsMixin
+from .commands_mixins.pubsub_mixin import PubSubCommandsMixin
+from .commands_mixins.scripting_mixin import ScriptingCommandsMixin
+from .commands_mixins.server_mixin import ServerCommandsMixin
+from .commands_mixins.set_mixin import SetCommandsMixin
+from .commands_mixins.sortedset_mixin import SortedSetCommandsMixin
+from .commands_mixins.streams_mixin import StreamsCommandsMixin
+from .commands_mixins.string_mixin import StringCommandsMixin
+from .commands_mixins.transactions_mixin import TransactionsCommandsMixin
+
+
+class FakeSocket(
+    BaseFakeSocket,
+    GenericCommandsMixin,
+    ScriptingCommandsMixin,
+    HashCommandsMixin,
+    ConnectionCommandsMixin,
+    ListCommandsMixin,
+    ServerCommandsMixin,
+    StringCommandsMixin,
+    TransactionsCommandsMixin,
+    PubSubCommandsMixin,
+    SetCommandsMixin,
+    BitmapCommandsMixin,
+    SortedSetCommandsMixin,
+    StreamsCommandsMixin,
+    JSONCommandsMixin,
+    GeoCommandsMixin,
+):
+
+    def __init__(self, server, db):
+        super(FakeSocket, self).__init__(server, db)
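The refactor above replaces the monolithic `FakeSocket` with cooperative mixins, one per command family, all sharing the state set up by `BaseFakeSocket`. The pattern itself is ordinary Python multiple inheritance; a generic illustration (class names here are invented, not fakeredis's):

```python
# Generic sketch of the mixin composition used by the new FakeSocket.
class BaseSocket:
    def __init__(self, server, db):
        self.server, self.db = server, db

class StringCommands:               # one mixin per command family
    def get(self, key):
        return self.db.get(key)

class ServerCommands:
    def dbsize(self):
        return len(self.db)

class Socket(BaseSocket, StringCommands, ServerCommands):
    pass

s = Socket(server=None, db={'k': b'v'})
```

Each mixin only touches attributes the base class owns, so command families can be developed and tested in separate modules while the composed class behaves like the old single implementation.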
diff --git a/fakeredis/_helpers.py b/fakeredis/_helpers.py
index 9f64b73..23e40c2 100644
--- a/fakeredis/_helpers.py
+++ b/fakeredis/_helpers.py
@@ -1,4 +1,3 @@
-import logging
 import re
 import threading
 import time
@@ -6,22 +5,6 @@ import weakref
 from collections import defaultdict
 from collections.abc import MutableMapping
 
-LOGGER = logging.getLogger('fakeredis')
-REDIS_LOG_LEVELS = {
-    b'LOG_DEBUG': 0,
-    b'LOG_VERBOSE': 1,
-    b'LOG_NOTICE': 2,
-    b'LOG_WARNING': 3
-}
-REDIS_LOG_LEVELS_TO_LOGGING = {
-    0: logging.DEBUG,
-    1: logging.INFO,
-    2: logging.INFO,
-    3: logging.WARNING
-}
-
-MAX_STRING_SIZE = 512 * 1024 * 1024
-
 
 class SimpleString:
     def __init__(self, value):
@@ -48,24 +31,24 @@ class NoResponse:
 
 OK = SimpleString(b'OK')
 QUEUED = SimpleString(b'QUEUED')
-PONG = SimpleString(b'PONG')
 BGSAVE_STARTED = SimpleString(b'Background saving started')
 
 
 def null_terminate(s):
     # Redis uses C functions on some strings, which means they stop at the
     # first NULL.
-    if b'\0' in s:
-        return s[:s.find(b'\0')]
-    return s
+    ind = s.find(b'\0')
+    if ind > -1:
+        return s[:ind].lower()
+    return s.lower()
 
 
-def casenorm(s):
-    return null_terminate(s).lower()
+def casematch(a, b):
+    return null_terminate(a) == null_terminate(b)
 
 
-def casematch(a, b):
-    return casenorm(a) == casenorm(b)
+def encode_command(s):
+    return s.decode(encoding='utf-8', errors='replace').lower()
 
 
 def compile_pattern(pattern):
@@ -214,8 +197,8 @@ class Database(MutableMapping):
 def valid_response_type(value, nested=False):
     if isinstance(value, NoResponse) and not nested:
         return True
-    if value is not None and not isinstance(value, (bytes, SimpleString, SimpleError,
-                                                    int, list)):
+    if (value is not None
+            and not isinstance(value, (bytes, SimpleString, SimpleError, float, int, list))):
         return False
     if isinstance(value, list):
         if any(not valid_response_type(item, True) for item in value):
@@ -223,27 +206,10 @@ def valid_response_type(value, nested=False):
     return True
 
 
-class _DummyParser:
-    def __init__(self, socket_read_size):
-        self.socket_read_size = socket_read_size
-
-    def on_disconnect(self):
-        pass
-
-    def on_connect(self, connection):
-        pass
-
-
-# Redis <3.2 will not have a selector
-try:
-    from redis.selector import BaseSelector
-except ImportError:
-    class BaseSelector:
-        def __init__(self, sock):
-            self.sock = sock
-
+class FakeSelector(object):
+    def __init__(self, sock):
+        self.sock = sock
 
-class FakeSelector(BaseSelector):
     def check_can_read(self, timeout):
         if self.sock.responses.qsize():
             return True
@@ -261,5 +227,6 @@ class FakeSelector(BaseSelector):
             if timeout is not None and now > start + timeout:
                 return False
 
-    def check_is_ready_for_command(self, timeout):
+    @staticmethod
+    def check_is_ready_for_command(timeout):
         return True
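The reworked helpers in this hunk fold the old `casenorm` into `null_terminate`: strings are truncated at the first NUL (Redis uses C string functions internally) and lowercased, so `casematch` is both case-insensitive and NUL-terminated. Reproducing the two functions verbatim shows the behavior:

```python
# The new null_terminate/casematch pair from fakeredis/_helpers.py.
def null_terminate(s):
    ind = s.find(b'\0')
    if ind > -1:
        return s[:ind].lower()
    return s.lower()

def casematch(a, b):
    return null_terminate(a) == null_terminate(b)

print(casematch(b'WITHSCORES', b'withscores'))   # True: case-insensitive
print(casematch(b'xx\0junk', b'XX'))             # True: stops at the NUL
```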
diff --git a/fakeredis/_msgs.py b/fakeredis/_msgs.py
index f957e59..e270e84 100644
--- a/fakeredis/_msgs.py
+++ b/fakeredis/_msgs.py
@@ -1,15 +1,20 @@
 INVALID_EXPIRE_MSG = "ERR invalid expire time in {}"
 WRONGTYPE_MSG = "WRONGTYPE Operation against a key holding the wrong kind of value"
 SYNTAX_ERROR_MSG = "ERR syntax error"
+SYNTAX_ERROR_LIMIT_ONLY_WITH_MSG = (
+    "ERR syntax error, LIMIT is only supported in combination with either BYSCORE or BYLEX")
+INVALID_HASH_MSG = "ERR hash value is not an integer"
 INVALID_INT_MSG = "ERR value is not an integer or out of range"
 INVALID_FLOAT_MSG = "ERR value is not a valid float"
+INVALID_WEIGHT_MSG = "ERR weight value is not a float"
 INVALID_OFFSET_MSG = "ERR offset is out of range"
 INVALID_BIT_OFFSET_MSG = "ERR bit offset is not an integer or out of range"
 INVALID_BIT_VALUE_MSG = "ERR bit is not an integer or out of range"
+BITOP_NOT_ONE_KEY_ONLY = "ERR BITOP NOT must be called with a single source key"
 INVALID_DB_MSG = "ERR DB index is out of range"
 INVALID_MIN_MAX_FLOAT_MSG = "ERR min or max is not a float"
 INVALID_MIN_MAX_STR_MSG = "ERR min or max not a valid string range item"
-STRING_OVERFLOW_MSG = "ERR string exceeds maximum allowed size (512MB)"
+STRING_OVERFLOW_MSG = "ERR string exceeds maximum allowed size (proto-max-bulk-len)"
 OVERFLOW_MSG = "ERR increment or decrement would overflow"
 NONFINITE_MSG = "ERR increment would produce NaN or Infinity"
 SCORE_NAN_MSG = "ERR resulting score is not a number (NaN)"
@@ -17,11 +22,17 @@ INVALID_SORT_FLOAT_MSG = "ERR One or more scores can't be converted into double"
 SRC_DST_SAME_MSG = "ERR source and destination objects are the same"
 NO_KEY_MSG = "ERR no such key"
 INDEX_ERROR_MSG = "ERR index out of range"
-ZADD_NX_XX_ERROR_MSG = "ERR ZADD allows either 'nx' or 'xx', not both"
+INDEX_NEGATIVE_ERROR_MSG = "ERR value is out of range, must be positive"
+# ZADD_NX_XX_ERROR_MSG6 = "ERR ZADD allows either 'nx' or 'xx', not both"
+ZADD_NX_XX_ERROR_MSG = "ERR XX and NX options at the same time are not compatible"
 ZADD_INCR_LEN_ERROR_MSG = "ERR INCR option supports a single increment-element pair"
-ZUNIONSTORE_KEYS_MSG = "ERR at least 1 input key is needed for ZUNIONSTORE/ZINTERSTORE"
-WRONG_ARGS_MSG = "ERR wrong number of arguments for '{}' command"
-UNKNOWN_COMMAND_MSG = "ERR unknown command '{}'"
+ZADD_NX_GT_LT_ERROR_MSG = "ERR GT, LT, and/or NX options at the same time are not compatible"
+NX_XX_GT_LT_ERROR_MSG = "ERR NX and XX, GT or LT options at the same time are not compatible"
+EXPIRE_UNSUPPORTED_OPTION = "ERR Unsupported option {}"
+ZUNIONSTORE_KEYS_MSG = "ERR at least 1 input key is needed for {}"
+WRONG_ARGS_MSG7 = "ERR Wrong number of args calling Redis command from script"
+WRONG_ARGS_MSG6 = "ERR wrong number of arguments for '{}' command"
+UNKNOWN_COMMAND_MSG = "ERR unknown command `{}`, with args beginning with: "
 EXECABORT_MSG = "EXECABORT Transaction discarded because of previous errors."
 MULTI_NESTED_MSG = "ERR MULTI calls can not be nested"
 WITHOUT_MULTI_MSG = "ERR {0} without MULTI"
@@ -45,5 +56,15 @@ SCRIPT_ERROR_MSG = "ERR Error running script (call to f_{}): @user_script:?: {}"
 RESTORE_KEY_EXISTS = "BUSYKEY Target key name already exists."
 RESTORE_INVALID_CHECKSUM_MSG = "ERR DUMP payload version or checksum are wrong"
 RESTORE_INVALID_TTL_MSG = "ERR Invalid TTL value, must be >= 0"
-
+JSON_WRONG_REDIS_TYPE = "ERR Existing key has wrong Redis type"
+JSON_KEY_NOT_FOUND = "ERR could not perform this operation on a key that doesn't exist"
+JSON_PATH_NOT_FOUND_OR_NOT_STRING = "ERR Path '{}' does not exist or not a string"
+JSON_PATH_DOES_NOT_EXIST = "ERR Path '{}' does not exist"
+LCS_CANT_HAVE_BOTH_LEN_AND_IDX = "ERR If you want both the length and indexes, please just use IDX."
+BIT_ARG_MUST_BE_ZERO_OR_ONE = "ERR The bit argument must be 1 or 0."
+XADD_ID_LOWER_THAN_LAST = "The ID specified in XADD is equal or smaller than the target stream top item"
+XADD_INVALID_ID = 'Invalid stream ID specified as stream command argument'
 FLAG_NO_SCRIPT = 's'  # Command not allowed in scripts
+FLAG_LEAVE_EMPTY_VAL = 'v'
+FLAG_TRANSACTION = 't'
+GEO_UNSUPPORTED_UNIT = 'unsupported unit provided. please use M, KM, FT, MI'
diff --git a/fakeredis/_server.py b/fakeredis/_server.py
index 12e245e..aed81ec 100644
--- a/fakeredis/_server.py
+++ b/fakeredis/_server.py
@@ -1,27 +1,29 @@
 import inspect
+import logging
 import queue
 import threading
 import time
+import uuid
 import warnings
 import weakref
 from collections import defaultdict
+from typing import Dict
 
 import redis
 
 from fakeredis._fakesocket import FakeSocket
-from fakeredis._helpers import (
-    Database, FakeSelector, LOGGER)
-from fakeredis._msgs import CONNECTION_ERROR_MSG
+from fakeredis._helpers import (Database, FakeSelector)
+from . import _msgs as msgs
 
-LOGGER = LOGGER
+LOGGER = logging.getLogger('fakeredis')
 
 
 class FakeServer:
+    _servers_map: Dict[str, 'FakeServer'] = dict()
+
     def __init__(self, version=7):
         self.lock = threading.Lock()
         self.dbs = defaultdict(lambda: Database(self.lock))
-        # Maps SHA1 to script source
-        self.script_cache = {}
         # Maps channel/pattern to weak set of sockets
         self.subscribers = defaultdict(weakref.WeakSet)
         self.psubscribers = defaultdict(weakref.WeakSet)
@@ -31,18 +33,34 @@ class FakeServer:
         self.closed_sockets = []
         self.version = version
 
+    @staticmethod
+    def get_server(key, version: int):
+        return FakeServer._servers_map.setdefault(key, FakeServer(version=version))
 
-class FakeConnection(redis.Connection):
-    description_format = "FakeConnection<db=%(db)s>"
 
+class FakeBaseConnectionMixin:
     def __init__(self, *args, **kwargs):
-        self.encoder = None
         self.client_name = None
         self._sock = None
         self._selector = None
-        self._server = kwargs.pop('server')
+        self._server = kwargs.pop('server', None)
+        path = kwargs.pop('path', None)
+        version = kwargs.pop('version', 7)
+        connected = kwargs.pop('connected', True)
+        if self._server is None:
+            if path:
+                self.server_key = path
+            else:
+                host, port = kwargs.get('host'), kwargs.get('port')
+                self.server_key = uuid.uuid4().hex if host is None or port is None else f'{host}:{port}'
+            self.server_key += f'v{version}'
+            self._server = FakeServer.get_server(self.server_key, version=version)
+            self._server.connected = connected
         super().__init__(*args, **kwargs)
 
+
+class FakeConnection(FakeBaseConnectionMixin, redis.Connection):
+
     def connect(self):
         super().connect()
         # The selector is set in redis.Connection.connect() after _connect() is called
@@ -50,8 +68,8 @@ class FakeConnection(redis.Connection):
 
     def _connect(self):
         if not self._server.connected:
-            raise redis.ConnectionError(CONNECTION_ERROR_MSG)
-        return FakeSocket(self._server)
+            raise redis.ConnectionError(msgs.CONNECTION_ERROR_MSG)
+        return FakeSocket(self._server, db=self.db)
 
     def can_read(self, timeout=0):
         if not self._server.connected:
@@ -73,17 +91,19 @@ class FakeConnection(redis.Connection):
         else:
             return response
 
-    def read_response(self, disable_decoding=False):
+    def read_response(self, **kwargs):
         if not self._server.connected:
             try:
                 response = self._sock.responses.get_nowait()
             except queue.Empty:
-                raise redis.ConnectionError(CONNECTION_ERROR_MSG)
+                if kwargs.get('disconnect_on_error', True):
+                    self.disconnect()
+                raise redis.ConnectionError(msgs.CONNECTION_ERROR_MSG)
         else:
             response = self._sock.responses.get()
         if isinstance(response, redis.ResponseError):
             raise response
-        if disable_decoding:
+        if kwargs.get('disable_decoding', False):
             return response
         else:
             return self._decode(response)
@@ -97,6 +117,9 @@ class FakeConnection(redis.Connection):
             pieces.append(('client_name', self.client_name))
         return pieces
 
+    def __str__(self):
+        return self.server_key
+
 
 class FakeRedisMixin:
     def __init__(self, *args, server=None, connected=True, version=7, **kwargs):
@@ -105,7 +128,11 @@ class FakeRedisMixin:
         parameters = inspect.signature(redis.Redis.__init__).parameters
         parameter_names = list(parameters.keys())
         default_args = parameters.values()
-        kwds = {p.name: p.default for p in default_args if p.default != inspect.Parameter.empty}
+        ignore_default_param_values = {'host', 'port', 'db'}
+        kwds = {p.name: p.default
+                for p in default_args
+                if (p.default != inspect.Parameter.empty
+                    and p.name not in ignore_default_param_values)}
         kwds.update(kwargs)
         if not kwds.get('connection_pool', None):
             charset = kwds.get('charset', None)
@@ -119,15 +146,9 @@ class FakeRedisMixin:
                 warnings.warn(DeprecationWarning(
                     '"errors" is deprecated. Use "encoding_errors" instead'))
                 kwds['encoding_errors'] = errors
-
-            if server is None:
-                server = FakeServer(version=version)
-                server.connected = connected
-            kwargs = {
-                'connection_class': FakeConnection,
-                'server': server
-            }
-            conn_pool_args = [
+            conn_pool_args = {
+                'host',
+                'port',
                 'db',
                 # Ignoring because AUTH is not implemented
                 # 'username',
@@ -139,12 +160,15 @@ class FakeRedisMixin:
                 'retry_on_timeout',
                 'max_connections',
                 'health_check_interval',
-                'client_name'
-            ]
-            for arg in conn_pool_args:
-                if arg in kwds:
-                    kwargs[arg] = kwds[arg]
-            kwds['connection_pool'] = redis.connection.ConnectionPool(**kwargs)
+                'client_name',
+            }
+            connection_kwargs = {
+                'connection_class': FakeConnection,
+                'server': server,
+                'version': version,
+            }
+            connection_kwargs.update({arg: kwds[arg] for arg in conn_pool_args if arg in kwds})
+            kwds['connection_pool'] = redis.connection.ConnectionPool(**connection_kwargs)
         kwds.pop('server', None)
         kwds.pop('connected', None)
         kwds.pop('version', None)
@@ -155,16 +179,9 @@ class FakeRedisMixin:
 
     @classmethod
     def from_url(cls, *args, **kwargs):
-        server = kwargs.pop('server', None)
-        if server is None:
-            server = FakeServer()
         pool = redis.ConnectionPool.from_url(*args, **kwargs)
         # Now override how it creates connections
         pool.connection_class = FakeConnection
-        pool.connection_kwargs['server'] = server
-        # FakeConnection cannot handle the path kwarg (present when from_url
-        # is called with a unix socket)
-        pool.connection_kwargs.pop('path', None)
         # Using username and password fails since AUTH is not implemented.
         # https://github.com/cunla/fakeredis-py/issues/9
         pool.connection_kwargs.pop('username', None)
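The new `FakeServer._servers_map` / `get_server` pair means two clients constructed with the same host and port (the key is suffixed with the server version) transparently share one in-memory server, instead of each getting a private `FakeServer`. A minimal standalone sketch of that keyed-singleton pattern (class and attribute names here are illustrative, not the fakeredis API):

```python
from typing import Dict


class SharedServer:
    """Keyed singleton, mirroring FakeServer._servers_map/get_server:
    the same key always yields the same server instance."""
    _servers_map: Dict[str, 'SharedServer'] = {}

    def __init__(self, version: int = 7):
        self.version = version
        self.data: dict = {}

    @classmethod
    def get_server(cls, key: str, version: int) -> 'SharedServer':
        # setdefault creates an instance only on first use of the key
        return cls._servers_map.setdefault(key, cls(version=version))


# Two "clients" naming the same host:port (plus version) share state
a = SharedServer.get_server('localhost:6379v7', 7)
b = SharedServer.get_server('localhost:6379v7', 7)
a.data['k'] = 'v'
```

This is why `FakeRedis(host='localhost', port=6379)` instances created in different tests now see each other's data unless an explicit `server=` is passed.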
diff --git a/fakeredis/_stream.py b/fakeredis/_stream.py
new file mode 100644
index 0000000..7cd1e99
--- /dev/null
+++ b/fakeredis/_stream.py
@@ -0,0 +1,150 @@
+import bisect
+import time
+from typing import List, Union, Tuple, Optional
+
+from fakeredis._commands import BeforeAny, AfterAny
+
+
+class StreamRangeTest:
+    """Argument converter for sorted set LEX endpoints."""
+
+    def __init__(self, value, exclusive):
+        self.value = value
+        self.exclusive = exclusive
+
+    @staticmethod
+    def parse_id(id_str: str):
+        if isinstance(id_str, bytes):
+            id_str = id_str.decode()
+        try:
+            timestamp, sequence = (int(x) for x in id_str.split('-'))
+        except ValueError:
+            return -1, -1
+        return timestamp, sequence
+
+    @classmethod
+    def decode(cls, value, exclusive=False):
+        if value == b'-':
+            return cls(BeforeAny(), True)
+        elif value == b'+':
+            return cls(AfterAny(), True)
+        elif value[:1] == b'(':
+            return cls(cls.parse_id(value[1:]), True)
+        return cls(cls.parse_id(value), exclusive)
+
+
+class XStream:
+    def __init__(self):
+        # Values:
+        # [
+        #    ((timestamp,sequence), [field1, value1, field2, value2, ...])
+        #    ((timestamp,sequence), [field1, value1, field2, value2, ...])
+        # ]
+        self._values = list()
+
+    def delete(self, lst: List[str]) -> int:
+        """Delete items from stream
+
+        :param lst: list of IDs to delete, in the form of timestamp-sequence.
+        :returns: Number of items deleted
+        """
+        res = 0
+        for item in lst:
+            ind, found = self.find_index(item)
+            if found:
+                del self._values[ind]
+                res += 1
+        return res
+
+    def add(self, fields: List, id_str: str = '*') -> Union[None, bytes]:
+        assert len(fields) % 2 == 0
+        if isinstance(id_str, bytes):
+            id_str = id_str.decode()
+
+        if id_str is None or id_str == '*':
+            ts, seq = int(1000 * time.time()), 0
+            if (len(self._values) > 0
+                    and self._values[-1][0][0] == ts
+                    and self._values[-1][0][1] >= seq):
+                seq = self._values[-1][0][1] + 1
+            ts_seq = (ts, seq)
+        elif id_str[-1] == '*':
+            split = id_str.split('-')
+            if len(split) != 2:
+                return None
+            ts = int(split[0])  # the sequence part is chosen below
+            if len(self._values) > 0 and ts == self._values[-1][0][0]:
+                seq = self._values[-1][0][1] + 1
+            else:
+                seq = 0
+            ts_seq = (ts, seq)
+        else:
+            ts_seq = StreamRangeTest.parse_id(id_str)
+
+        if len(self._values) > 0 and self._values[-1][0] >= ts_seq:
+            return None
+        new_val = (ts_seq, list(fields))
+        self._values.append(new_val)
+        return f'{ts_seq[0]}-{ts_seq[1]}'.encode()
+
+    def __len__(self):
+        return len(self._values)
+
+    def __iter__(self):
+        def gen():
+            for record in self._values:
+                yield self._format_record(record)
+
+        return gen()
+
+    def find_index(self, id_str: str) -> Tuple[int, bool]:
+        if len(self._values) == 0:
+            return 0, False
+        ts_seq = StreamRangeTest.parse_id(id_str)
+        ind = bisect.bisect_left([item[0] for item in self._values], ts_seq)
+        return ind, ind < len(self._values) and self._values[ind][0] == ts_seq
+
+    @staticmethod
+    def _encode_id(record):
+        return f'{record[0][0]}-{record[0][1]}'.encode()
+
+    @staticmethod
+    def _format_record(record):
+        results = list(record[1:][0])
+        return [XStream._encode_id(record), results]
+
+    def trim(self,
+             maxlen: Optional[int] = None,
+             minid: Optional[str] = None,
+             limit: Optional[int] = None) -> int:
+        if maxlen is not None and minid is not None:
+            raise ValueError('MAXLEN and MINID cannot be combined')
+        start_ind = None
+        if maxlen is not None:
+            start_ind = len(self._values) - maxlen
+        elif minid is not None:
+            ind, exact = self.find_index(minid)
+            start_ind = ind
+        res = max(start_ind, 0)
+        if limit is not None:
+            res = min(res, limit)
+        self._values = self._values[res:]
+        return res
+
+    def irange(self,
+               start, stop,
+               exclusive: Tuple[bool, bool] = (True, True),
+               reverse=False):
+        def match(record):
+            result = stop > record[0] > start
+            result = result or (not exclusive[0] and record[0] == start)
+            result = result or (not exclusive[1] and record[0] == stop)
+            return result
+
+        matches = map(self._format_record, filter(match, self._values))
+        if reverse:
+            return list(reversed(tuple(matches)))
+        return list(matches)
+
+    def last_item_key(self):
+        return XStream._encode_id(self._values[-1]) if self._values else None
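`XStream.add` resolves three ID shapes — `*` (fully automatic), `ts-*` (fixed timestamp, automatic sequence), and an explicit `ts-seq` — and rejects anything not strictly above the current top entry. A standalone sketch of that resolution logic (function names are illustrative; matching Redis semantics, an ID equal to the top entry is rejected too):

```python
import time


def parse_id(id_str: str):
    # Mirrors StreamRangeTest.parse_id: "ts-seq" -> (ts, seq); invalid -> (-1, -1)
    try:
        ts, seq = (int(x) for x in id_str.split('-'))
    except ValueError:
        return -1, -1
    return ts, seq


def next_id(last, requested='*'):
    """Resolve an XADD-style id. `last` is the (ts, seq) of the stream's top
    entry, or None when the stream is empty. Returns the new (ts, seq), or
    None when the id is not strictly greater than the top entry."""
    if requested == '*':                      # fully auto: ms timestamp + seq
        ts, seq = int(1000 * time.time()), 0
        if last and last[0] == ts:
            seq = last[1] + 1
        new = (ts, seq)
    elif requested.endswith('-*'):            # fixed timestamp, auto sequence
        ts = int(requested[:-2])
        seq = last[1] + 1 if last and last[0] == ts else 0
        new = (ts, seq)
    else:                                     # explicit "ts-seq"
        new = parse_id(requested)
    if last is not None and new <= last:
        return None                           # XADD_ID_LOWER_THAN_LAST
    return new
```

Because IDs are (timestamp, sequence) tuples, ordinary tuple comparison gives the stream's total ordering, which is also what lets `find_index` use `bisect`.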
diff --git a/fakeredis/_zset.py b/fakeredis/_zset.py
index 47d1169..a09b3dc 100644
--- a/fakeredis/_zset.py
+++ b/fakeredis/_zset.py
@@ -3,7 +3,7 @@ import sortedcontainers
 
 class ZSet:
     def __init__(self):
-        self._bylex = {}     # Maps value to score
+        self._bylex = {}  # Maps value to score
         self._byscore = sortedcontainers.SortedList()
 
     def __contains__(self, value):
diff --git a/fakeredis/aioredis.py b/fakeredis/aioredis.py
index 5104f4c..cdebce0 100644
--- a/fakeredis/aioredis.py
+++ b/fakeredis/aioredis.py
@@ -1,19 +1,253 @@
+from __future__ import annotations
+
+import asyncio
+import sys
+from typing import Union, Optional
+
 import redis
-import packaging.version
 
-# aioredis was integrated into redis in version 4.2.0 as redis.asyncio
-if packaging.version.Version(redis.__version__) >= packaging.version.Version("4.2.0"):
-    import redis.asyncio as aioredis
-    from ._aioredis2 import FakeConnection, FakeRedis  # noqa: F401
+from ._server import FakeBaseConnectionMixin
+
+if sys.version_info >= (3, 8):
+    from typing import Type, TypedDict
+else:
+    from typing_extensions import Type, TypedDict
+
+if sys.version_info >= (3, 11):
+    from asyncio import timeout as async_timeout
 else:
-    try:
-        import aioredis
-    except ImportError as e:
-        raise ImportError("aioredis is required for redis-py below 4.2.0") from e
-
-    if packaging.version.Version(aioredis.__version__) >= packaging.version.Version('2.0.0a1'):
-        from ._aioredis2 import FakeConnection, FakeRedis  # noqa: F401
-    else:
-        from ._aioredis1 import (  # noqa: F401
-            FakeConnectionsPool, create_connection, create_redis, create_pool, create_redis_pool
+    from async_timeout import timeout as async_timeout
+
+import redis.asyncio as redis_async  # aioredis was integrated into redis in version 4.2.0 as redis.asyncio
+from redis.asyncio.connection import BaseParser
+
+from . import _fakesocket
+from . import _helpers
+from . import _msgs as msgs
+from . import _server
+
+
+class AsyncFakeSocket(_fakesocket.FakeSocket):
+    _connection_error_class = redis_async.ConnectionError
+
+    def __init__(self, *args, **kwargs):
+        super().__init__(*args, **kwargs)
+        self.responses = asyncio.Queue()
+
+    def _decode_error(self, error):
+        parser = BaseParser(1) if redis.VERSION < (5, 0) else BaseParser()
+        return parser.parse_error(error.value)
+
+    def put_response(self, msg):
+        self.responses.put_nowait(msg)
+
+    async def _async_blocking(self, timeout, func, event, callback):
+        result = None
+        try:
+            async with async_timeout(timeout if timeout else None):
+                while True:
+                    await event.wait()
+                    event.clear()
+                    # This is a coroutine outside the normal control flow that
+                    # locks the server, so we have to take our own lock.
+                    with self._server.lock:
+                        ret = func(False)
+                        if ret is not None:
+                            result = self._decode_result(ret)
+                            self.put_response(result)
+                            break
+        except asyncio.TimeoutError:
+            pass
+        finally:
+            with self._server.lock:
+                self._db.remove_change_callback(callback)
+            self.put_response(result)
+            self.resume()
+
+    def _blocking(self, timeout, func):
+        loop = asyncio.get_event_loop()
+        ret = func(True)
+        if ret is not None or self._in_transaction:
+            return ret
+        event = asyncio.Event()
+
+        def callback():
+            loop.call_soon_threadsafe(event.set)
+
+        self._db.add_change_callback(callback)
+        self.pause()
+        loop.create_task(self._async_blocking(timeout, func, event, callback))
+        return _helpers.NoResponse()
+
+
+class FakeReader:
+    def __init__(self, socket: AsyncFakeSocket) -> None:
+        self._socket = socket
+
+    async def read(self, length: int) -> bytes:
+        return await self._socket.responses.get()
+
+
+class FakeWriter:
+    def __init__(self, socket: AsyncFakeSocket) -> None:
+        self._socket = socket
+
+    def close(self):
+        self._socket = None
+
+    async def wait_closed(self):
+        pass
+
+    async def drain(self):
+        pass
+
+    def writelines(self, data):
+        for chunk in data:
+            self._socket.sendall(chunk)
+
+
+class FakeConnection(FakeBaseConnectionMixin, redis_async.Connection):
+
+    async def _connect(self):
+        if not self._server.connected:
+            raise redis_async.ConnectionError(msgs.CONNECTION_ERROR_MSG)
+        self._sock = AsyncFakeSocket(self._server, self.db)
+        self._reader = FakeReader(self._sock)
+        self._writer = FakeWriter(self._sock)
+
+    async def disconnect(self, **kwargs):
+        await super().disconnect(**kwargs)
+        self._sock = None
+
+    async def can_read(self, timeout: float = 0):
+        if not self.is_connected:
+            await self.connect()
+        if timeout == 0:
+            return not self._sock.responses.empty()
+        # asyncio.Queue doesn't have a way to wait for the queue to be
+        # non-empty without consuming an item, so kludge it with a sleep/poll
+        # loop.
+        loop = asyncio.get_event_loop()
+        start = loop.time()
+        while True:
+            if not self._sock.responses.empty():
+                return True
+            await asyncio.sleep(0.01)
+            now = loop.time()
+            if timeout is not None and now > start + timeout:
+                return False
+
+    def _decode(self, response):
+        if isinstance(response, list):
+            return [self._decode(item) for item in response]
+        elif isinstance(response, bytes):
+            return self.encoder.decode(response)
+        else:
+            return response
+
+    async def read_response(self, **kwargs):
+        if not self._server.connected:
+            try:
+                response = self._sock.responses.get_nowait()
+            except asyncio.QueueEmpty:
+                if kwargs.get('disconnect_on_error', True):
+                    await self.disconnect()
+                raise redis_async.ConnectionError(msgs.CONNECTION_ERROR_MSG)
+        else:
+            timeout = kwargs.pop('timeout', None)
+            can_read = await self.can_read(timeout)
+            response = await self._reader.read(0) if can_read else None
+        if isinstance(response, redis_async.ResponseError):
+            raise response
+        return self._decode(response)
+
+    def repr_pieces(self):
+        pieces = [
+            ('server', self._server),
+            ('db', self.db)
+        ]
+        if self.client_name:
+            pieces.append(('client_name', self.client_name))
+        return pieces
+
+
+class ConnectionKwargs(TypedDict, total=False):
+    db: Union[str, int]
+    username: Optional[str]
+    password: Optional[str]
+    socket_timeout: Optional[float]
+    encoding: str
+    encoding_errors: str
+    decode_responses: bool
+    retry_on_timeout: bool
+    health_check_interval: int
+    client_name: Optional[str]
+    server: Optional[_server.FakeServer]
+    connection_class: Type[redis_async.Connection]
+    max_connections: Optional[int]
+
+
+class FakeRedis(redis_async.Redis):
+    def __init__(
+            self,
+            *,
+            db: Union[str, int] = 0,
+            password: Optional[str] = None,
+            socket_timeout: Optional[float] = None,
+            connection_pool: Optional[redis_async.ConnectionPool] = None,
+            encoding: str = "utf-8",
+            encoding_errors: str = "strict",
+            decode_responses: bool = False,
+            retry_on_timeout: bool = False,
+            max_connections: Optional[int] = None,
+            health_check_interval: int = 0,
+            client_name: Optional[str] = None,
+            username: Optional[str] = None,
+            server: Optional[_server.FakeServer] = None,
+            connected: bool = True,
+            **kwargs,
+    ):
+        if not connection_pool:
+            # Adapted from aioredis
+            connection_kwargs: ConnectionKwargs = {
+                "db": db,
+                # Ignoring because AUTH is not implemented
+                # 'username',
+                # 'password',
+                "socket_timeout": socket_timeout,
+                "encoding": encoding,
+                "encoding_errors": encoding_errors,
+                "decode_responses": decode_responses,
+                "retry_on_timeout": retry_on_timeout,
+                "health_check_interval": health_check_interval,
+                "client_name": client_name,
+                "server": server,
+                "connected": connected,
+                "connection_class": FakeConnection,
+                "max_connections": max_connections,
+            }
+            connection_pool = redis_async.ConnectionPool(**connection_kwargs)
+        super().__init__(
+            db=db,
+            password=password,
+            socket_timeout=socket_timeout,
+            connection_pool=connection_pool,
+            encoding=encoding,
+            encoding_errors=encoding_errors,
+            decode_responses=decode_responses,
+            retry_on_timeout=retry_on_timeout,
+            max_connections=max_connections,
+            health_check_interval=health_check_interval,
+            client_name=client_name,
+            username=username,
+            **kwargs,
         )
+
+    @classmethod
+    def from_url(cls, url: str, **kwargs):
+        self = super().from_url(url, **kwargs)
+        pool = self.connection_pool  # Now override how it creates connections
+        pool.connection_class = FakeConnection
+        pool.connection_kwargs.pop('username', None)
+        pool.connection_kwargs.pop('password', None)
+        return self
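`AsyncFakeSocket._blocking`/`_async_blocking` implement blocking commands (e.g. BLPOP) by registering a database change callback that sets an `asyncio.Event`, then re-running the command each time the event fires until it yields a result or the timeout elapses. A minimal sketch of that wait-loop, using `asyncio.wait_for` in place of the `async_timeout` shim above (names are illustrative):

```python
import asyncio


async def blocking_wait(state: list, event: asyncio.Event, timeout: float):
    """Re-check shared state each time the change-event fires, the way
    _async_blocking re-runs `func` after every db callback."""
    async def poll():
        while True:
            await event.wait()
            event.clear()
            if state:                # the "command" found a result
                return state.pop(0)

    try:
        return await asyncio.wait_for(poll(), timeout)
    except asyncio.TimeoutError:
        return None                  # blocked command timed out -> nil reply


async def demo():
    state, event = [], asyncio.Event()

    async def producer():            # another client pushing a value
        await asyncio.sleep(0.01)
        state.append(b'value')
        event.set()

    asyncio.create_task(producer())
    return await blocking_wait(state, event, timeout=1.0)
```

The event-based wakeup is what avoids busy-polling the database while a blocking command is parked.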
diff --git a/fakeredis/commands_mixins/__init__.py b/fakeredis/commands_mixins/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/fakeredis/commands_mixins/bitmap_mixin.py b/fakeredis/commands_mixins/bitmap_mixin.py
new file mode 100644
index 0000000..f16ddbe
--- /dev/null
+++ b/fakeredis/commands_mixins/bitmap_mixin.py
@@ -0,0 +1,138 @@
+from fakeredis import _msgs as msgs
+from fakeredis._commands import (command, Key, Int, BitOffset, BitValue, fix_range_string, fix_range)
+from fakeredis._helpers import SimpleError, casematch
+
+
+class BitmapCommandsMixin:
+    # TODO: bitfield, bitfield_ro, bitpos
+    @staticmethod
+    def _bytes_as_bin_string(value):
+        return ''.join(format(i, '08b') for i in value)
+
+    @command((Key(bytes), Int), (bytes,))
+    def bitpos(self, key, bit, *args):
+        if bit != 0 and bit != 1:
+            raise SimpleError(msgs.BIT_ARG_MUST_BE_ZERO_OR_ONE)
+        if len(args) > 3:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        if len(args) == 3 and self.version < 7:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        bit_mode = False
+        if len(args) == 3 and self.version >= 7:
+            bit_mode = casematch(args[2], b'bit')
+            if not bit_mode and not casematch(args[2], b'byte'):
+                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        start = 0 if len(args) == 0 else Int.decode(args[0])
+        bit_chr = str(bit)
+        key_value = key.value if key.value else b''
+
+        if bit_mode:
+            value = self._bytes_as_bin_string(key_value)
+            end = len(value) if len(args) <= 1 else Int.decode(args[1])
+            start, end = fix_range(start, end, len(value))
+            value = value[start:end]
+        else:
+            end = len(key_value) if len(args) <= 1 else Int.decode(args[1])
+            start, end = fix_range(start, end, len(key_value))
+            value = self._bytes_as_bin_string(key_value[start:end])
+
+        result = value.find(bit_chr)
+        if result != -1:
+            result += start if bit_mode else (start * 8)
+        return result
+
+    @command((Key(bytes, 0),), (bytes,))
+    def bitcount(self, key, *args):
+        # Redis checks the argument count before decoding integers. That's why
+        # we can't declare them as Int.
+        if len(args) == 0:
+            value = key.value
+            return bin(int.from_bytes(value, 'little')).count('1')
+
+        if not 2 <= len(args) <= 3:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        start = Int.decode(args[0])
+        end = Int.decode(args[1])
+        bit_mode = False
+        if len(args) == 3 and self.version < 7:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        if len(args) == 3 and self.version >= 7:
+            bit_mode = casematch(args[2], b'bit')
+            if not bit_mode and not casematch(args[2], b'byte'):
+                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+
+        if bit_mode:
+            value = self._bytes_as_bin_string(key.value if key.value else b'')
+            start, end = fix_range_string(start, end, len(value))
+            return value[start:end].count('1')
+        start, end = fix_range_string(start, end, len(key.value))
+        value = key.value[start:end]
+
+        return bin(int.from_bytes(value, 'little')).count('1')
+
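`bitcount` above supports the Redis 7 third argument `BYTE|BIT`: the inclusive (possibly negative) range indexes either whole bytes or individual bits. A standalone sketch of the range handling (the helper name is illustrative; `fix_range_string` performs the equivalent normalisation in the mixin):

```python
def bitcount_range(value: bytes, start: int, end: int, bit_mode: bool) -> int:
    # Inclusive [start, end] range over bytes (BYTE mode) or bits (BIT mode),
    # with negative indexes counted from the end, as in Redis BITCOUNT.
    bits = ''.join(format(b, '08b') for b in value)
    length = len(bits) if bit_mode else len(value)
    if start < 0:
        start += length
    if end < 0:
        end += length
    start, end = max(start, 0), min(end, length - 1)
    if start > end:
        return 0
    if bit_mode:
        return bits[start:end + 1].count('1')
    return bits[start * 8:(end + 1) * 8].count('1')
```

The `b'foobar'` values used in the Redis documentation make convenient checks: bytes 0..0 hold 4 set bits, bytes 1..1 hold 6, and bits 5..30 hold 17.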
+    @command((Key(bytes), BitOffset))
+    def getbit(self, key, offset):
+        value = key.get(b'')
+        byte = offset // 8
+        remaining = offset % 8
+        actual_bitoffset = 7 - remaining
+        try:
+            actual_val = value[byte]
+        except IndexError:
+            return 0
+        return 1 if (1 << actual_bitoffset) & actual_val else 0
+
+    @command((Key(bytes), BitOffset, BitValue))
+    def setbit(self, key, offset, value):
+        val = key.value if key.value is not None else b'\x00'
+        byte = offset // 8
+        remaining = offset % 8
+        actual_bitoffset = 7 - remaining
+        if len(val) - 1 < byte:
+            # We need to expand val so that we can set the appropriate
+            # bit.
+            needed = byte - (len(val) - 1)
+            val += b'\x00' * needed
+        old_byte = val[byte]
+        if value == 1:
+            new_byte = old_byte | (1 << actual_bitoffset)
+        else:
+            new_byte = old_byte & ~(1 << actual_bitoffset)
+        old_value = value if old_byte == new_byte else 1 - value
+        reconstructed = bytearray(val)
+        reconstructed[byte] = new_byte
+        if (bytes(reconstructed) != key.value
+                or (self.version == 6 and old_byte != new_byte)):
+            key.update(bytes(reconstructed))
+        return old_value
+
+    @staticmethod
+    def _bitop(op, *keys):
+        ans = keys[0].value
+        for key in keys[1:]:
+            value = key.value if key.value is not None else b''
+            ans = bytes(op(a, b) for a, b in zip(ans, value))
+        return ans
+
+    @command((bytes, Key()), (Key(bytes),))
+    def bitop(self, op_name, dst, *keys):
+        if len(keys) == 0:
+            raise SimpleError(msgs.WRONG_ARGS_MSG6.format('bitop'))
+        if casematch(op_name, b'and'):
+            res = self._bitop(lambda a, b: a & b, *keys)
+        elif casematch(op_name, b'or'):
+            res = self._bitop(lambda a, b: a | b, *keys)
+        elif casematch(op_name, b'xor'):
+            res = self._bitop(lambda a, b: a ^ b, *keys)
+        elif casematch(op_name, b'not'):
+            if len(keys) != 1:
+                raise SimpleError(msgs.BITOP_NOT_ONE_KEY_ONLY)
+            val = keys[0].value
+            res = bytes([((1 << 8) - 1 - val[i]) for i in range(len(val))])
+        else:
+            raise SimpleError(msgs.WRONG_ARGS_MSG6.format('bitop'))
+        dst.value = res
+        return len(dst.value)
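`getbit`/`setbit` above index bits most-significant-first within each byte: offset 0 addresses the top bit of byte 0, matching Redis. A pure-Python sketch of just that addressing (illustrative, stdlib only):

```python
def get_bit(value: bytes, offset: int) -> int:
    # Same math as getbit above: locate the byte, then test bit
    # 7 - (offset % 8), so offset 0 is the most significant bit of byte 0.
    byte, remaining = divmod(offset, 8)
    if byte >= len(value):
        return 0  # reads past the end return 0, as if zero-padded
    return 1 if value[byte] & (1 << (7 - remaining)) else 0
```

`setbit` uses the same mask in reverse, extending the value with `b'\x00'` padding when the target byte lies past the current end.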
diff --git a/fakeredis/commands_mixins/connection_mixin.py b/fakeredis/commands_mixins/connection_mixin.py
new file mode 100644
index 0000000..772df87
--- /dev/null
+++ b/fakeredis/commands_mixins/connection_mixin.py
@@ -0,0 +1,30 @@
+from fakeredis import _msgs as msgs
+from fakeredis._commands import (command, DbIndex)
+from fakeredis._helpers import (SimpleError, OK, SimpleString)
+
+PONG = SimpleString(b'PONG')
+
+
+class ConnectionCommandsMixin:
+    # Connection commands
+    # TODO: auth, quit
+
+    @command((bytes,))
+    def echo(self, message):
+        return message
+
+    @command((), (bytes,))
+    def ping(self, *args):
+        if len(args) > 1:
+            msg = msgs.WRONG_ARGS_MSG6.format('ping')
+            raise SimpleError(msg)
+        if self._pubsub:
+            return [b'pong', args[0] if args else b'']
+        else:
+            return args[0] if args else PONG
+
+    @command((DbIndex,))
+    def select(self, index):
+        self._db = self._server.dbs[index]
+        self._db_num = index
+        return OK
diff --git a/fakeredis/commands_mixins/generic_mixin.py b/fakeredis/commands_mixins/generic_mixin.py
new file mode 100644
index 0000000..a0382f5
--- /dev/null
+++ b/fakeredis/commands_mixins/generic_mixin.py
@@ -0,0 +1,270 @@
+import hashlib
+import pickle
+import random
+
+from fakeredis import _msgs as msgs
+from fakeredis._command_args_parsing import extract_args
+from fakeredis._commands import (
+    command, Key, Int, DbIndex, BeforeAny, CommandItem, SortFloat,
+    delete_keys, key_value_type, )
+from fakeredis._helpers import (compile_pattern, SimpleError, OK, casematch)
+from fakeredis._zset import ZSet
+
+
+class GenericCommandsMixin:
+    def _lookup_key(self, key, pattern):
+        """Python implementation of lookupKeyByPattern from redis"""
+        if pattern == b'#':
+            return key
+        p = pattern.find(b'*')
+        if p == -1:
+            return None
+        prefix = pattern[:p]
+        suffix = pattern[p + 1:]
+        arrow = suffix.find(b'->', 0, -1)
+        if arrow != -1:
+            field = suffix[arrow + 2:]
+            suffix = suffix[:arrow]
+        else:
+            field = None
+        new_key = prefix + key + suffix
+        item = CommandItem(new_key, self._db, item=self._db.get(new_key))
+        if item.value is None:
+            return None
+        if field is not None:
+            if not isinstance(item.value, dict):
+                return None
+            return item.value.get(field)
+        else:
+            if not isinstance(item.value, bytes):
+                return None
+            return item.value
+
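`_lookup_key` above ports redis's `lookupKeyByPattern`, used by SORT's BY/GET options: `#` means the element itself, the first `*` is replaced by the element, and a trailing `->field` dereferences a hash field. A standalone sketch over a plain dict (names are illustrative, not the fakeredis API):

```python
def lookup_by_pattern(pattern: bytes, key: bytes, db: dict):
    """Substitute the first '*' in `pattern` with `key`, optionally
    dereferencing a hash field after '->', like lookupKeyByPattern."""
    if pattern == b'#':
        return key                       # '#' refers to the element itself
    star = pattern.find(b'*')
    if star == -1:
        return None                      # no '*': pattern never matches
    prefix, suffix = pattern[:star], pattern[star + 1:]
    arrow = suffix.find(b'->', 0, -1)    # '->' must not be at the very end
    field = None
    if arrow != -1:
        suffix, field = suffix[:arrow], suffix[arrow + 2:]
    value = db.get(prefix + key + suffix)
    if field is not None:
        return value.get(field) if isinstance(value, dict) else None
    return value if isinstance(value, bytes) else None
```

So `SORT mylist BY weight_* GET data_*->f` resolves each element `a` through keys `weight_a` and hash field `f` of `data_a`.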
+    def _expireat(self, key, timestamp, *args):
+        nx = False
+        xx = False
+        gt = False
+        lt = False
+        for arg in args:
+            if casematch(b'nx', arg):
+                nx = True
+            elif casematch(b'xx', arg):
+                xx = True
+            elif casematch(b'gt', arg):
+                gt = True
+            elif casematch(b'lt', arg):
+                lt = True
+            else:
+                raise SimpleError(msgs.EXPIRE_UNSUPPORTED_OPTION.format(arg))
+        if self.version < 7 and any((nx, xx, gt, lt)):
+            raise SimpleError(msgs.WRONG_ARGS_MSG6.format('expire'))
+        counter = (nx, gt, lt).count(True)
+        if (counter > 1) or (nx and xx):
+            raise SimpleError(msgs.NX_XX_GT_LT_ERROR_MSG)
+        if (not key
+                or (xx and key.expireat is None)
+                or (nx and key.expireat is not None)
+                or (gt and key.expireat is not None and timestamp < key.expireat)
+                or (lt and key.expireat is not None and timestamp > key.expireat)):
+            return 0
+        key.expireat = timestamp
+        return 1
+
+    @command((Key(),), (Key(),), name='del')
+    def del_(self, *keys):
+        return delete_keys(*keys)
+
+    @command((Key(missing_return=None),))
+    def dump(self, key):
+        value = pickle.dumps(key.value)
+        checksum = hashlib.sha1(value).digest()
+        return checksum + value
+
+    @command((Key(),), (Key(),))
+    def exists(self, *keys):
+        ret = 0
+        for key in keys:
+            if key:
+                ret += 1
+        return ret
+
+    @command((Key(), Int,), (bytes,), name='expire')
+    def expire(self, key, seconds, *args):
+        res = self._expireat(key, self._db.time + seconds, *args)
+        return res
+
+    @command((Key(), Int))
+    def expireat(self, key, timestamp):
+        return self._expireat(key, float(timestamp))
+
+    @command((bytes,))
+    def keys(self, pattern):
+        if pattern == b'*':
+            return list(self._db)
+        else:
+            regex = compile_pattern(pattern)
+            return [key for key in self._db if regex.match(key)]
+
+    @command((Key(), DbIndex))
+    def move(self, key, db):
+        if db == self._db_num:
+            raise SimpleError(msgs.SRC_DST_SAME_MSG)
+        if not key or key.key in self._server.dbs[db]:
+            return 0
+        # TODO: what is the interaction with expiry?
+        self._server.dbs[db][key.key] = self._server.dbs[self._db_num][key.key]
+        key.value = None  # Causes deletion
+        return 1
+
+    @command((Key(),))
+    def persist(self, key):
+        if key.expireat is None:
+            return 0
+        key.expireat = None
+        return 1
+
+    @command((Key(), Int))
+    def pexpire(self, key, ms):
+        return self._expireat(key, self._db.time + ms / 1000.0)
+
+    @command((Key(), Int))
+    def pexpireat(self, key, ms_timestamp):
+        return self._expireat(key, ms_timestamp / 1000.0)
+
+    @command((Key(),))
+    def pttl(self, key):
+        return self._ttl(key, 1000.0)
+
+    @command(())
+    def randomkey(self):
+        keys = list(self._db.keys())
+        if not keys:
+            return None
+        return random.choice(keys)
+
+    @command((Key(), Key()))
+    def rename(self, key, newkey):
+        if not key:
+            raise SimpleError(msgs.NO_KEY_MSG)
+        # TODO: check interaction with WATCH
+        if newkey.key != key.key:
+            newkey.value = key.value
+            newkey.expireat = key.expireat
+            key.value = None
+        return OK
+
+    @command((Key(), Key()))
+    def renamenx(self, key, newkey):
+        if not key:
+            raise SimpleError(msgs.NO_KEY_MSG)
+        if newkey:
+            return 0
+        self.rename(key, newkey)
+        return 1
+
+    @command((Key(), Int, bytes), (bytes,))
+    def restore(self, key, ttl, value, *args):
+        (replace,), _ = extract_args(args, ('replace',))
+        if key and not replace:
+            raise SimpleError(msgs.RESTORE_KEY_EXISTS)
+        checksum, value = value[:20], value[20:]
+        if hashlib.sha1(value).digest() != checksum:
+            raise SimpleError(msgs.RESTORE_INVALID_CHECKSUM_MSG)
+        if ttl < 0:
+            raise SimpleError(msgs.RESTORE_INVALID_TTL_MSG)
+        if ttl == 0:
+            expireat = None
+        else:
+            expireat = self._db.time + ttl / 1000.0
+        key.value = pickle.loads(value)
+        key.expireat = expireat
+        return OK
+
+    @command((Int,), (bytes, bytes))
+    def scan(self, cursor, *args):
+        return self._scan(list(self._db), cursor, *args)
+
+    @command((Key(),), (bytes,))
+    def sort(self, key, *args):
+        if key.value is not None and not isinstance(key.value, (set, list, ZSet)):
+            raise SimpleError(msgs.WRONGTYPE_MSG)
+        (asc, desc, alpha, store, sortby, (limit_start, limit_count)), args = extract_args(
+            args, ('asc', 'desc', 'alpha', '*store', '*by', '++limit'),
+            error_on_unexpected=False,
+            left_from_first_unexpected=False,
+        )
+        limit_start = limit_start or 0
+        limit_count = -1 if limit_count is None else limit_count
+        dontsort = (sortby is not None and b'*' not in sortby)
+
+        i = 0
+        get = []
+        while i < len(args):
+            if casematch(args[i], b'get') and i + 1 < len(args):
+                get.append(args[i + 1])
+                i += 2
+            else:
+                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+
+        # TODO: force sorting if the object is a set and either in Lua or
+        #  storing to a key, to match redis behaviour.
+        items = list(key.value) if key.value is not None else []
+
+        # These transformations are based on the redis implementation, but
+        # changed to produce a half-open range.
+        start = max(limit_start, 0)
+        end = len(items) if limit_count < 0 else start + limit_count
+        if start >= len(items):
+            start = end = len(items) - 1
+        end = min(end, len(items))
+
+        if not get:
+            get.append(b'#')
+        if sortby is None:
+            sortby = b'#'
+
+        if not dontsort:
+            if alpha:
+                def sort_key(val):
+                    byval = self._lookup_key(val, sortby)
+                    # TODO: use locale.strxfrm when not storing? But then need to decode too.
+                    if byval is None:
+                        byval = BeforeAny()
+                    return byval
+
+            else:
+                def sort_key(val):
+                    byval = self._lookup_key(val, sortby)
+                    score = SortFloat.decode(byval) if byval is not None else 0.0
+                    return score, val
+
+            items.sort(key=sort_key, reverse=desc)
+        elif isinstance(key.value, (list, ZSet)):
+            items.reverse()
+
+        out = []
+        for row in items[start:end]:
+            for g in get:
+                v = self._lookup_key(row, g)
+                if store is not None and v is None:
+                    v = b''
+                out.append(v)
+        if store is not None:
+            item = CommandItem(store, self._db, item=self._db.get(store))
+            item.value = out
+            item.writeback()
+            return len(out)
+        else:
+            return out
+
+    @command((Key(),))
+    def ttl(self, key):
+        return self._ttl(key, 1.0)
+
+    @command((Key(),))
+    def type(self, key):
+        return key_value_type(key)
+
+    @command((Key(),), (Key(),), name='unlink')
+    def unlink(self, *keys):
+        return delete_keys(*keys)
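The `_lookup_key` helper above mirrors Redis's `lookupKeyByPattern`, used by `SORT ... BY/GET`: `#` returns the element itself, the `*` in a pattern is replaced by the element, and a `->field` suffix reaches into a hash. A minimal standalone sketch of the same resolution logic (the `db` dict and its keys are illustrative, not part of fakeredis):

```python
def lookup_key(key: bytes, pattern: bytes, db: dict):
    """Resolve a SORT BY/GET pattern against a plain dict, mimicking _lookup_key."""
    if pattern == b'#':          # '#' means "the element itself"
        return key
    p = pattern.find(b'*')
    if p == -1:                  # no '*': pattern never matches anything
        return None
    prefix, suffix = pattern[:p], pattern[p + 1:]
    # 'prefix*suffix->field' reaches into a hash field; a trailing '->' is ignored
    arrow = suffix.find(b'->', 0, -1)
    field = None
    if arrow != -1:
        suffix, field = suffix[:arrow], suffix[arrow + 2:]
    value = db.get(prefix + key + suffix)
    if field is not None:
        return value.get(field) if isinstance(value, dict) else None
    return value if isinstance(value, bytes) else None

db = {b'weight_a': b'10', b'obj_a': {b'score': b'7'}}
print(lookup_key(b'a', b'weight_*', db))      # b'10'
print(lookup_key(b'a', b'obj_*->score', db))  # b'7'
print(lookup_key(b'a', b'#', db))             # b'a'
```

This is why `SORT mylist BY weight_* GET obj_*->score` can sort one key's elements by values stored under other keys.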
diff --git a/fakeredis/commands_mixins/geo_mixin.py b/fakeredis/commands_mixins/geo_mixin.py
new file mode 100644
index 0000000..1f6cb9c
--- /dev/null
+++ b/fakeredis/commands_mixins/geo_mixin.py
@@ -0,0 +1,218 @@
+import sys
+from collections import namedtuple
+from typing import List, Any
+
+from fakeredis import _msgs as msgs
+from fakeredis._command_args_parsing import extract_args
+from fakeredis._commands import command, Key, Float, CommandItem
+from fakeredis._helpers import SimpleError
+from fakeredis._zset import ZSet
+from fakeredis.geo import geohash
+from fakeredis.geo.haversine import distance
+
+UNIT_TO_M = {'km': 0.001, 'mi': 0.000621371, 'ft': 3.28084, 'm': 1}
+
+
+def translate_meters_to_unit(unit_arg: bytes) -> float:
+    """number of meters in a unit.
+    :param unit_arg: unit name (km, mi, ft, m)
+    :returns: number of meters in unit
+    """
+    unit = UNIT_TO_M.get(unit_arg.decode().lower())
+    if unit is None:
+        raise SimpleError(msgs.GEO_UNSUPPORTED_UNIT)
+    return unit
+
+
+GeoResult = namedtuple('GeoResult', 'name long lat hash distance')
+
+
+def _parse_results(
+        items: List[GeoResult],
+        withcoord: bool, withdist: bool) -> List[Any]:
+    """Parse list of GeoResults to redis response
+    :param withcoord: include coordinates in response
+    :param withdist: include distance in response
+    :returns: Parsed list
+    """
+    res = list()
+    for item in items:
+        new_item = [item.name, ]
+        if withdist:
+            new_item.append(Float.encode(item.distance, False))
+        if withcoord:
+            new_item.append([Float.encode(item.long, False),
+                             Float.encode(item.lat, False)])
+        if len(new_item) == 1:
+            new_item = new_item[0]
+        res.append(new_item)
+    return res
+
+
+def _find_near(
+        zset: ZSet,
+        lat: float, long: float, radius: float,
+        conv: float, count: int, count_any: bool, desc: bool) -> List[GeoResult]:
+    """Find items within area (lat,long)+radius
+    :param zset: list of items to check
+    :param lat: latitude
+    :param long: longitude
+    :param radius: radius in whatever units
+    :param conv: conversion of radius to meters
+    :param count: number of results to give
+    :param count_any: should we return any results that match? (vs. sorted)
+    :param desc: should results be sorted in descending order?
+    :returns: List of GeoResults
+    """
+    results = list()
+    for name, _hash in zset.items():
+        p_lat, p_long, _, _ = geohash.decode(_hash)
+        dist = distance((p_lat, p_long), (lat, long)) * conv
+        if dist < radius:
+            results.append(GeoResult(name, p_long, p_lat, _hash, dist))
+            if count_any and len(results) >= count:
+                break
+    results = sorted(results, key=lambda x: x.distance, reverse=desc)
+    if count:
+        results = results[:count]
+    return results
+
+
+class GeoCommandsMixin:
+    def _store_geo_results(self, item_name: bytes, geo_results: List[GeoResult], scoredist: bool) -> int:
+        db_item = CommandItem(item_name, self._db, item=self._db.get(item_name), default=ZSet())
+        db_item.value = ZSet()
+        for item in geo_results:
+            val = item.distance if scoredist else item.hash
+            db_item.value.add(item.name, val)
+        db_item.writeback()
+        return len(geo_results)
+
+    @command(name='GEOADD', fixed=(Key(ZSet),), repeat=(bytes,))
+    def geoadd(self, key, *args):
+        (xx, nx, ch), data = extract_args(
+            args, ('nx', 'xx', 'ch'),
+            error_on_unexpected=False, left_from_first_unexpected=True)
+        if xx and nx:
+            raise SimpleError(msgs.NX_XX_GT_LT_ERROR_MSG)
+        if len(data) == 0 or len(data) % 3 != 0:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        zset = key.value
+        old_len, changed_items = len(zset), 0
+        for i in range(0, len(data), 3):
+            long, lat, name = Float.decode(data[i + 0]), Float.decode(data[i + 1]), data[i + 2]
+            if (name in zset and not xx) or (name not in zset and not nx):
+                if zset.add(name, geohash.encode(lat, long, 10)):
+                    changed_items += 1
+        if changed_items:
+            key.updated()
+        if ch:
+            return changed_items
+        return len(zset) - old_len
+
+    @command(name='GEOHASH', fixed=(Key(ZSet), bytes), repeat=(bytes,))
+    def geohash(self, key, *members):
+        hashes = map(key.value.get, members)
+        geohash_list = [((x + '0').encode() if x is not None else x) for x in hashes]
+        return geohash_list
+
+    @command(name='GEOPOS', fixed=(Key(ZSet), bytes), repeat=(bytes,))
+    def geopos(self, key, *members):
+        geopositions = map(
+            lambda x: geohash.decode(x) if x is not None else x,
+            map(key.value.get, members))
+        res = [([self._encodefloat(x[1], humanfriendly=False),
+                 self._encodefloat(x[0], humanfriendly=False)]
+                if x is not None else None)
+               for x in geopositions]
+        return res
+
+    @command(name='GEODIST', fixed=(Key(ZSet), bytes, bytes), repeat=(bytes,))
+    def geodist(self, key, m1, m2, *args):
+        geohashes = [key.value.get(m1), key.value.get(m2)]
+        if any(elem is None for elem in geohashes):
+            return None
+        geo_locs = [geohash.decode(x) for x in geohashes]
+        res = distance((geo_locs[0][0], geo_locs[0][1]),
+                       (geo_locs[1][0], geo_locs[1][1]))
+        unit = translate_meters_to_unit(args[0]) if len(args) == 1 else 1
+        return res * unit
+
+    def _search(
+            self, key, long, lat, radius, conv,
+            withcoord, withdist, _, count, count_any, desc, store, storedist):
+        zset = key.value
+        geo_results = _find_near(zset, lat, long, radius, conv, count, count_any, desc)
+
+        if store:
+            self._store_geo_results(store, geo_results, scoredist=False)
+            return len(geo_results)
+        if storedist:
+            self._store_geo_results(storedist, geo_results, scoredist=True)
+            return len(geo_results)
+        ret = _parse_results(geo_results, withcoord, withdist)
+        return ret
+
+    @command(name='GEORADIUS_RO', fixed=(Key(ZSet), Float, Float, Float), repeat=(bytes,))
+    def georadius_ro(self, key, long, lat, radius, *args):
+        (withcoord, withdist, withhash, count, count_any, desc), left_args = extract_args(
+            args, ('withcoord', 'withdist', 'withhash', '+count', 'any', 'desc',),
+            error_on_unexpected=False, left_from_first_unexpected=False)
+        count = count or sys.maxsize
+        conv = translate_meters_to_unit(args[0]) if len(args) >= 1 else 1
+        return self._search(
+            key, long, lat, radius, conv,
+            withcoord, withdist, withhash, count, count_any, desc, False, False)
+
+    @command(name='GEORADIUS', fixed=(Key(ZSet), Float, Float, Float), repeat=(bytes,))
+    def georadius(self, key, long, lat, radius, *args):
+        (withcoord, withdist, withhash, count, count_any, desc, store, storedist), left_args = extract_args(
+            args, ('withcoord', 'withdist', 'withhash', '+count', 'any', 'desc', '*store', '*storedist'),
+            error_on_unexpected=False, left_from_first_unexpected=False)
+        count = count or sys.maxsize
+        conv = translate_meters_to_unit(args[0]) if len(args) >= 1 else 1
+        return self._search(
+            key, long, lat, radius, conv,
+            withcoord, withdist, withhash, count, count_any, desc, store, storedist)
+
+    @command(name='GEORADIUSBYMEMBER', fixed=(Key(ZSet), bytes, Float), repeat=(bytes,))
+    def georadiusbymember(self, key, member_name, radius, *args):
+        member_score = key.value.get(member_name)
+        lat, long, _, _ = geohash.decode(member_score)
+        return self.georadius(key, long, lat, radius, *args)
+
+    @command(name='GEORADIUSBYMEMBER_RO', fixed=(Key(ZSet), bytes, Float), repeat=(bytes,))
+    def georadiusbymember_ro(self, key, member_name, radius, *args):
+        member_score = key.value.get(member_name)
+        lat, long, _, _ = geohash.decode(member_score)
+        return self.georadius_ro(key, long, lat, radius, *args)
+
+    @command(name='GEOSEARCH', fixed=(Key(ZSet),), repeat=(bytes,))
+    def geosearch(self, key, *args):
+        (frommember, (long, lat), radius), left_args = extract_args(
+            args, ('*frommember', '..fromlonlat', '.byradius'),
+            error_on_unexpected=False, left_from_first_unexpected=False)
+        if frommember is None and long is None:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        if frommember is not None and long is not None:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        if frommember:
+            return self.georadiusbymember_ro(key, frommember, radius, *left_args)
+        else:
+            return self.georadius_ro(key, long, lat, radius, *left_args)
+
+    @command(name='GEOSEARCHSTORE', fixed=(bytes, Key(ZSet),), repeat=(bytes,))
+    def geosearchstore(self, dst, src, *args):
+        (frommember, (long, lat), radius, storedist), left_args = extract_args(
+            args, ('*frommember', '..fromlonlat', '.byradius', 'storedist'),
+            error_on_unexpected=False, left_from_first_unexpected=False)
+        if frommember is None and long is None:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        if frommember is not None and long is not None:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        additional = [b'storedist', dst] if storedist else [b'store', dst]
+
+        if frommember:
+            return self.georadiusbymember(src, frommember, radius, *left_args, *additional)
+        else:
+            return self.georadius(src, long, lat, radius, *left_args, *additional)
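GEODIST above computes the great-circle distance in meters (via `fakeredis.geo.haversine.distance`) and then multiplies by `UNIT_TO_M[unit]`. A self-contained haversine sketch of that flow — the Earth-radius constant here is an assumption for illustration, not taken from the module above:

```python
import math

EARTH_RADIUS_M = 6372797.560856  # assumed mean Earth radius in meters

def haversine(a, b):
    """Great-circle distance in meters between two (lat, long) pairs in degrees."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(h))

# GEODIST then scales the meter distance by the unit factor (e.g. 0.001 for km).
UNIT_TO_M = {'km': 0.001, 'mi': 0.000621371, 'ft': 3.28084, 'm': 1}
d_m = haversine((38.115556, 13.361389), (37.502669, 15.087269))  # Palermo-Catania
print(round(d_m * UNIT_TO_M['km'], 1))  # roughly 166 km
```

The `_find_near` helper uses the same distance, comparing it against `radius` after unit conversion.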
diff --git a/fakeredis/commands_mixins/hash_mixin.py b/fakeredis/commands_mixins/hash_mixin.py
new file mode 100644
index 0000000..5226f40
--- /dev/null
+++ b/fakeredis/commands_mixins/hash_mixin.py
@@ -0,0 +1,100 @@
+import itertools
+import math
+
+from fakeredis import _msgs as msgs
+from fakeredis._commands import (command, Key, Hash, Int, Float)
+from fakeredis._helpers import (SimpleError, OK)
+
+
+class HashCommandsMixin:
+    @command((Key(Hash), bytes), (bytes,))
+    def hdel(self, key, *fields):
+        h = key.value
+        rem = 0
+        for field in fields:
+            if field in h:
+                del h[field]
+                key.updated()
+                rem += 1
+        return rem
+
+    @command((Key(Hash), bytes))
+    def hexists(self, key, field):
+        return int(field in key.value)
+
+    @command((Key(Hash), bytes))
+    def hget(self, key, field):
+        return key.value.get(field)
+
+    @command((Key(Hash),))
+    def hgetall(self, key):
+        return list(itertools.chain(*key.value.items()))
+
+    @command(fixed=(Key(Hash), bytes, bytes))
+    def hincrby(self, key, field, amount):
+        amount = Int.decode(amount)
+        field_value = Int.decode(key.value.get(field, b'0'), decode_error=msgs.INVALID_HASH_MSG)
+        c = field_value + amount
+        key.value[field] = self._encodeint(c)
+        key.updated()
+        return c
+
+    @command((Key(Hash), bytes, bytes))
+    def hincrbyfloat(self, key, field, amount):
+        c = Float.decode(key.value.get(field, b'0')) + Float.decode(amount)
+        if not math.isfinite(c):
+            raise SimpleError(msgs.NONFINITE_MSG)
+        encoded = self._encodefloat(c, True)
+        key.value[field] = encoded
+        key.updated()
+        return encoded
+
+    @command((Key(Hash),))
+    def hkeys(self, key):
+        return list(key.value.keys())
+
+    @command((Key(Hash),))
+    def hlen(self, key):
+        return len(key.value)
+
+    @command((Key(Hash), bytes), (bytes,))
+    def hmget(self, key, *fields):
+        return [key.value.get(field) for field in fields]
+
+    @command((Key(Hash), bytes, bytes), (bytes, bytes))
+    def hmset(self, key, *args):
+        self.hset(key, *args)
+        return OK
+
+    @command((Key(Hash), Int,), (bytes, bytes))
+    def hscan(self, key, cursor, *args):
+        cursor, keys = self._scan(key.value, cursor, *args)
+        items = []
+        for k in keys:
+            items.append(k)
+            items.append(key.value[k])
+        return [cursor, items]
+
+    @command((Key(Hash), bytes, bytes), (bytes, bytes))
+    def hset(self, key, *args):
+        h = key.value
+        keys_count = len(h.keys())
+        h.update(dict(zip(*[iter(args)] * 2)))  # https://stackoverflow.com/a/12739974/1056460
+        created = len(h.keys()) - keys_count
+
+        key.updated()
+        return created
+
+    @command((Key(Hash), bytes, bytes))
+    def hsetnx(self, key, field, value):
+        if field in key.value:
+            return 0
+        return self.hset(key, field, value)
+
+    @command((Key(Hash), bytes))
+    def hstrlen(self, key, field):
+        return len(key.value.get(field, b''))
+
+    @command((Key(Hash),))
+    def hvals(self, key):
+        return list(key.value.values())
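`hset` above pairs up its flat `field value field value ...` argument list with `dict(zip(*[iter(args)] * 2))` and reports how many fields were newly created. The trick in isolation:

```python
args = (b'f1', b'v1', b'f2', b'v2')
# [iter(args)] * 2 is the SAME iterator twice; zip pulls from it alternately,
# so the flat sequence is consumed two items at a time.
pairs = dict(zip(*[iter(args)] * 2))
print(pairs)  # {b'f1': b'v1', b'f2': b'v2'}

# HSET returns the number of fields that were created (not merely updated):
h = {b'f1': b'old'}
before = len(h)
h.update(pairs)
created = len(h) - before
print(created)  # 1  (f2 is new; f1 already existed)
```

`hmset` differs only in its return value: it always answers `OK`, matching the deprecated Redis command.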
diff --git a/fakeredis/commands_mixins/list_mixin.py b/fakeredis/commands_mixins/list_mixin.py
new file mode 100644
index 0000000..5686b42
--- /dev/null
+++ b/fakeredis/commands_mixins/list_mixin.py
@@ -0,0 +1,218 @@
+import functools
+
+from fakeredis import _msgs as msgs
+from fakeredis._commands import (Key, command, Int, CommandItem, Timeout, fix_range)
+from fakeredis._helpers import (OK, SimpleError, SimpleString, casematch)
+
+
+def _list_pop(get_slice, key, *args):
+    """Implements lpop and rpop.
+
+    `get_slice` must take a count and return a slice expression for the range to pop.
+    """
+    # This implementation is somewhat contorted to match the odd
+    # behaviours described in https://github.com/redis/redis/issues/9680.
+    count = 1
+    if len(args) > 1:
+        raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+    elif len(args) == 1:
+        count = Int.decode(args[0], msgs.INDEX_NEGATIVE_ERROR_MSG)
+        if count < 0:
+            raise SimpleError(msgs.INDEX_NEGATIVE_ERROR_MSG)
+    if not key:
+        return None
+    elif type(key.value) != list:
+        raise SimpleError(msgs.WRONGTYPE_MSG)
+    slc = get_slice(count)
+    ret = key.value[slc]
+    del key.value[slc]
+    key.updated()
+    if not args:
+        ret = ret[0]
+    return ret
+
+
+class ListCommandsMixin:
+    def _bpop_pass(self, keys, op, first_pass):
+        for key in keys:
+            item = CommandItem(key, self._db, item=self._db.get(key), default=[])
+            if not isinstance(item.value, list):
+                if first_pass:
+                    raise SimpleError(msgs.WRONGTYPE_MSG)
+                else:
+                    continue
+            if item.value:
+                ret = op(item.value)
+                item.updated()
+                item.writeback()
+                return [key, ret]
+        return None
+
+    def _bpop(self, args, op):
+        keys = args[:-1]
+        timeout = Timeout.decode(args[-1])
+        return self._blocking(timeout, functools.partial(self._bpop_pass, keys, op))
+
+    @command((bytes, bytes), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
+    def blpop(self, *args):
+        return self._bpop(args, lambda lst: lst.pop(0))
+
+    @command((bytes, bytes), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
+    def brpop(self, *args):
+        return self._bpop(args, lambda lst: lst.pop())
+
+    def _brpoplpush_pass(self, source, destination, first_pass):
+        src = CommandItem(source, self._db, item=self._db.get(source), default=[])
+        if not isinstance(src.value, list):
+            if first_pass:
+                raise SimpleError(msgs.WRONGTYPE_MSG)
+            else:
+                return None
+        if not src.value:
+            return None  # Empty list
+        dst = CommandItem(destination, self._db, item=self._db.get(destination), default=[])
+        if not isinstance(dst.value, list):
+            raise SimpleError(msgs.WRONGTYPE_MSG)
+        el = src.value.pop()
+        dst.value.insert(0, el)
+        src.updated()
+        src.writeback()
+        if destination != source:
+            # Ensure writeback only happens once
+            dst.updated()
+            dst.writeback()
+        return el
+
+    @command((bytes, bytes, Timeout), flags=msgs.FLAG_NO_SCRIPT)
+    def brpoplpush(self, source, destination, timeout):
+        return self._blocking(timeout,
+                              functools.partial(self._brpoplpush_pass, source, destination))
+
+    @command((Key(list, None), Int))
+    def lindex(self, key, index):
+        try:
+            return key.value[index]
+        except IndexError:
+            return None
+
+    @command((Key(list), bytes, bytes, bytes))
+    def linsert(self, key, where, pivot, value):
+        if not casematch(where, b'before') and not casematch(where, b'after'):
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        if not key:
+            return 0
+        else:
+            try:
+                index = key.value.index(pivot)
+            except ValueError:
+                return -1
+            if casematch(where, b'after'):
+                index += 1
+            key.value.insert(index, value)
+            key.updated()
+            return len(key.value)
+
+    @command((Key(list),))
+    def llen(self, key):
+        return len(key.value)
+
+    @command((Key(list, None), Key(list), SimpleString, SimpleString))
+    def lmove(self, first_list, second_list, src, dst):
+        if src not in [b'LEFT', b'RIGHT']:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        if dst not in [b'LEFT', b'RIGHT']:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        el = self.rpop(first_list) if src == b'RIGHT' else self.lpop(first_list)
+        self.lpush(second_list, el) if dst == b'LEFT' else self.rpush(second_list, el)
+        return el
+
+    @command(fixed=(Key(),), repeat=(bytes,))
+    def lpop(self, key, *args):
+        return _list_pop(lambda count: slice(None, count), key, *args)
+
+    @command((Key(list), bytes), (bytes,))
+    def lpush(self, key, *values):
+        for value in values:
+            key.value.insert(0, value)
+        key.updated()
+        return len(key.value)
+
+    @command((Key(list), bytes), (bytes,))
+    def lpushx(self, key, *values):
+        if not key:
+            return 0
+        return self.lpush(key, *values)
+
+    @command((Key(list), Int, Int))
+    def lrange(self, key, start, stop):
+        start, stop = fix_range(start, stop, len(key.value))
+        return key.value[start:stop]
+
+    @command((Key(list), Int, bytes))
+    def lrem(self, key, count, value):
+        a_list = key.value
+        found = []
+        for i, el in enumerate(a_list):
+            if el == value:
+                found.append(i)
+        if count > 0:
+            indices_to_remove = found[:count]
+        elif count < 0:
+            indices_to_remove = found[count:]
+        else:
+            indices_to_remove = found
+        # Iterating in reverse order to ensure the indices
+        # remain valid during deletion.
+        for index in reversed(indices_to_remove):
+            del a_list[index]
+        if indices_to_remove:
+            key.updated()
+        return len(indices_to_remove)
+
+    @command((Key(list), bytes, bytes))
+    def lset(self, key, index, value):
+        if not key:
+            raise SimpleError(msgs.NO_KEY_MSG)
+        index = Int.decode(index)
+        try:
+            key.value[index] = value
+            key.updated()
+        except IndexError:
+            raise SimpleError(msgs.INDEX_ERROR_MSG)
+        return OK
+
+    @command((Key(list), Int, Int))
+    def ltrim(self, key, start, stop):
+        if key:
+            if stop == -1:
+                stop = None
+            else:
+                stop += 1
+            new_value = key.value[start:stop]
+            # TODO: check if this should actually be conditional
+            if len(new_value) != len(key.value):
+                key.update(new_value)
+        return OK
+
+    @command(fixed=(Key(),), repeat=(bytes,))
+    def rpop(self, key, *args):
+        return _list_pop(lambda count: slice(None, -count - 1, -1), key, *args)
+
+    @command((Key(list, None), Key(list)))
+    def rpoplpush(self, src, dst):
+        el = self.rpop(src)
+        self.lpush(dst, el)
+        return el
+
+    @command((Key(list), bytes), (bytes,))
+    def rpush(self, key, *values):
+        for value in values:
+            key.value.append(value)
+        key.updated()
+        return len(key.value)
+
+    @command((Key(list), bytes), (bytes,))
+    def rpushx(self, key, *values):
+        if not key:
+            return 0
+        return self.rpush(key, *values)
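`lpop` and `rpop` above share `_list_pop` and differ only in the slice they hand it: `slice(None, count)` takes from the head, while `slice(None, -count - 1, -1)` walks backwards from the tail. The two slices side by side:

```python
count = 2

# RPOP path: pop `count` items from the tail, newest first.
items = [b'a', b'b', b'c', b'd']
rslc = slice(None, -count - 1, -1)
rpopped = items[rslc]
del items[rslc]
print(rpopped, items)  # [b'd', b'c'] [b'a', b'b']

# LPOP path: pop `count` items from the head.
items2 = [b'a', b'b', b'c', b'd']
lslc = slice(None, count)
lpopped = items2[lslc]
del items2[lslc]
print(lpopped, items2)  # [b'a', b'b'] [b'c', b'd']
```

Because the same slice selects and deletes the popped range, the returned order matches real Redis (RPOP with a count yields elements tail-first).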
diff --git a/fakeredis/commands_mixins/pubsub_mixin.py b/fakeredis/commands_mixins/pubsub_mixin.py
new file mode 100644
index 0000000..4f4e5de
--- /dev/null
+++ b/fakeredis/commands_mixins/pubsub_mixin.py
@@ -0,0 +1,128 @@
+from fakeredis import _msgs as msgs
+from fakeredis._commands import (command)
+from fakeredis._helpers import (NoResponse, compile_pattern, SimpleError)
+
+
+class PubSubCommandsMixin:
+    def __init__(self, *args, **kwargs):
+        super(PubSubCommandsMixin, self).__init__(*args, **kwargs)
+        self._pubsub = 0  # Count of subscriptions
+
+    def _subscribe(self, channels, subscribers, mtype):
+        for channel in channels:
+            subs = subscribers[channel]
+            if self not in subs:
+                subs.add(self)
+                self._pubsub += 1
+            msg = [mtype, channel, self._pubsub]
+            self.put_response(msg)
+        return NoResponse()
+
+    def _unsubscribe(self, channels, subscribers, mtype):
+        if not channels:
+            channels = []
+            for (channel, subs) in subscribers.items():
+                if self in subs:
+                    channels.append(channel)
+        for channel in channels:
+            subs = subscribers.get(channel, set())
+            if self in subs:
+                subs.remove(self)
+                if not subs:
+                    del subscribers[channel]
+                self._pubsub -= 1
+            msg = [mtype, channel, self._pubsub]
+            self.put_response(msg)
+        return NoResponse()
+
+    @command((bytes,), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
+    def psubscribe(self, *patterns):
+        return self._subscribe(patterns, self._server.psubscribers, b'psubscribe')
+
+    @command((bytes,), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
+    def subscribe(self, *channels):
+        return self._subscribe(channels, self._server.subscribers, b'subscribe')
+
+    @command((), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
+    def punsubscribe(self, *patterns):
+        return self._unsubscribe(patterns, self._server.psubscribers, b'punsubscribe')
+
+    @command((), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
+    def unsubscribe(self, *channels):
+        return self._unsubscribe(channels, self._server.subscribers, b'unsubscribe')
+
+    @command((bytes, bytes))
+    def publish(self, channel, message):
+        receivers = 0
+        msg = [b'message', channel, message]
+        subs = self._server.subscribers.get(channel, set())
+        for sock in subs:
+            sock.put_response(msg)
+            receivers += 1
+        for (pattern, socks) in self._server.psubscribers.items():
+            regex = compile_pattern(pattern)
+            if regex.match(channel):
+                msg = [b'pmessage', pattern, channel, message]
+                for sock in socks:
+                    sock.put_response(msg)
+                    receivers += 1
+        return receivers
+
+    @command(name='PUBSUB CHANNELS', fixed=(), repeat=(bytes,))
+    def pubsub_channels(self, *args):
+        channels = list(self._server.subscribers.keys())
+        if len(args) > 0:
+            regex = compile_pattern(args[0])
+            channels = [ch for ch in channels if regex.match(ch)]
+        return channels
+
+    @command(name='PUBSUB NUMSUB', fixed=(), repeat=(bytes,))
+    def pubsub_numsub(self, *args):
+        channels = args
+        tuples_list = [
+            (ch, len(self._server.subscribers.get(ch, [])))
+            for ch in channels
+        ]
+        return [item for sublist in tuples_list for item in sublist]
+
+    @command(name='PUBSUB', fixed=())
+    def pubsub(self, *args):
+        raise SimpleError(msgs.WRONG_ARGS_MSG6.format('pubsub'))
+
+    @command(name='PUBSUB HELP', fixed=())
+    def pubsub_help(self, *args):
+        if self.version >= 7:
+            help_strings = [
+                'PUBSUB <subcommand> [<arg> [value] [opt] ...]. Subcommands are:',
+                'CHANNELS [<pattern>]',
+                "    Return the currently active channels matching a <pattern> (default: '*')"
+                '.',
+                'NUMPAT',
+                '    Return number of subscriptions to patterns.',
+                'NUMSUB [<channel> ...]',
+                '    Return the number of subscribers for the specified channels, excluding',
+                '    pattern subscriptions (default: no channels).',
+                'SHARDCHANNELS [<pattern>]',
+                '    Return the currently active shard level channels matching a <pattern> (d'
+                "efault: '*').",
+                'SHARDNUMSUB [<shardchannel> ...]',
+                '    Return the number of subscribers for the specified shard level channel(s'
+                ')',
+                'HELP',
+                '    Prints this help.'
+            ]
+        else:
+            help_strings = [
+                'PUBSUB <subcommand> [<arg> [value] [opt] ...]. Subcommands are:',
+                'CHANNELS [<pattern>]',
+                "    Return the currently active channels matching a <pattern> (default: '*')"
+                '.',
+                'NUMPAT',
+                '    Return number of subscriptions to patterns.',
+                'NUMSUB [<channel> ...]',
+                '    Return the number of subscribers for the specified channels, excluding',
+                '    pattern subscriptions (default: no channels).',
+                'HELP',
+                '    Prints this help.'
+            ]
+        return [s.encode() for s in help_strings]
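The `publish` and `pubsub_channels` paths above both rely on `compile_pattern` to turn a Redis-style glob into a regex that is matched against channel names. A minimal stdlib-only sketch of that idea, using `fnmatch.translate` (note: real Redis glob semantics differ from `fnmatch` in edge cases such as character-class negation and escaping, and `compile_glob`/`matching_channels` are hypothetical names, not the fakeredis API):

```python
import fnmatch
import re


def compile_glob(pattern: bytes):
    # fnmatch.translate works on str, so round-trip through latin-1,
    # which maps every byte to one code point and back losslessly.
    # The translated regex is anchored, so match() checks the full name.
    regex = fnmatch.translate(pattern.decode('latin-1'))
    return re.compile(regex.encode('latin-1'))


def matching_channels(pattern: bytes, channels):
    # Mirrors the PUBSUB CHANNELS filter: keep channels the glob matches.
    regex = compile_glob(pattern)
    return [ch for ch in channels if regex.match(ch)]


channels = [b'news.sport', b'news.tech', b'chat']
print(matching_channels(b'news.*', channels))  # → [b'news.sport', b'news.tech']
```

The same compiled regex serves both paths: `publish` applies it per pattern subscriber to decide whether to deliver a `pmessage`, and `PUBSUB CHANNELS` applies it to filter the active channel list.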
diff --git a/fakeredis/commands_mixins/scripting_mixin.py b/fakeredis/commands_mixins/scripting_mixin.py
new file mode 100644
index 0000000..febb1cb
--- /dev/null
+++ b/fakeredis/commands_mixins/scripting_mixin.py
@@ -0,0 +1,248 @@
+import functools
+import hashlib
+import itertools
+import logging
+
+from fakeredis import _msgs as msgs
+from fakeredis._commands import (command, Int)
+from fakeredis._helpers import (SimpleError, SimpleString, null_terminate, OK, encode_command)
+
+LOGGER = logging.getLogger('fakeredis')
+REDIS_LOG_LEVELS = {
+    b'LOG_DEBUG': 0,
+    b'LOG_VERBOSE': 1,
+    b'LOG_NOTICE': 2,
+    b'LOG_WARNING': 3
+}
+REDIS_LOG_LEVELS_TO_LOGGING = {
+    0: logging.DEBUG,
+    1: logging.INFO,
+    2: logging.INFO,
+    3: logging.WARNING
+}
+
+
+def _ensure_str(s, encoding, replaceerr):
+    if isinstance(s, bytes):
+        res = s.decode(encoding=encoding, errors=replaceerr)
+    else:
+        res = str(s).encode(encoding=encoding, errors=replaceerr)
+    return res
+
+
+def _check_for_lua_globals(lua_runtime, expected_globals):
+    unexpected_globals = set(lua_runtime.globals().keys()) - expected_globals
+    if len(unexpected_globals) > 0:
+        unexpected = [_ensure_str(var, 'utf-8', 'replace') for var in unexpected_globals]
+        raise SimpleError(msgs.GLOBAL_VARIABLE_MSG.format(", ".join(unexpected)))
+
+
+def _lua_redis_log(lua_runtime, expected_globals, lvl, *args):
+    _check_for_lua_globals(lua_runtime, expected_globals)
+    if len(args) < 1:
+        raise SimpleError(msgs.REQUIRES_MORE_ARGS_MSG.format("redis.log()", "two"))
+    if lvl not in REDIS_LOG_LEVELS_TO_LOGGING.keys():
+        raise SimpleError(msgs.LOG_INVALID_DEBUG_LEVEL_MSG)
+    msg = ' '.join([x.decode('utf-8')
+                    if isinstance(x, bytes) else str(x)
+                    for x in args if not isinstance(x, bool)])
+    LOGGER.log(REDIS_LOG_LEVELS_TO_LOGGING[lvl], msg)
+
+
+class ScriptingCommandsMixin:
+
+    # Script commands
+    # script debug and script kill will probably not be supported
+
+    def __init__(self, *args, **kwargs):
+        super(ScriptingCommandsMixin, self).__init__(*args, **kwargs)
+        # Maps SHA1 to script source
+        self.script_cache = {}
+
+    def _convert_redis_arg(self, lua_runtime, value):
+        # Type checks are exact to avoid issues like bool being a subclass of int.
+        if type(value) is bytes:
+            return value
+        elif type(value) in {int, float}:
+            return '{:.17g}'.format(value).encode()
+        else:
+            # TODO: add the context
+            msg = msgs.LUA_COMMAND_ARG_MSG6 if self.version < 7 else msgs.LUA_COMMAND_ARG_MSG
+            raise SimpleError(msg)
+
+    def _convert_redis_result(self, lua_runtime, result):
+        if isinstance(result, (bytes, int)):
+            return result
+        elif isinstance(result, SimpleString):
+            return lua_runtime.table_from({b"ok": result.value})
+        elif result is None:
+            return False
+        elif isinstance(result, list):
+            converted = [
+                self._convert_redis_result(lua_runtime, item)
+                for item in result
+            ]
+            return lua_runtime.table_from(converted)
+        elif isinstance(result, SimpleError):
+            if result.value.startswith('ERR wrong number of arguments'):
+                raise SimpleError(msgs.WRONG_ARGS_MSG7)
+            raise result
+        else:
+            raise RuntimeError("Unexpected return type from redis: {}".format(type(result)))
+
+    def _convert_lua_result(self, result, nested=True):
+        from lupa import lua_type
+        if lua_type(result) == 'table':
+            for key in (b'ok', b'err'):
+                if key in result:
+                    msg = self._convert_lua_result(result[key])
+                    if not isinstance(msg, bytes):
+                        raise SimpleError(msgs.LUA_WRONG_NUMBER_ARGS_MSG)
+                    if key == b'ok':
+                        return SimpleString(msg)
+                    elif nested:
+                        return SimpleError(msg.decode('utf-8', 'replace'))
+                    else:
+                        raise SimpleError(msg.decode('utf-8', 'replace'))
+            # Convert Lua tables into lists, starting from index 1, mimicking the behavior of StrictRedis.
+            result_list = []
+            for index in itertools.count(1):
+                if index not in result:
+                    break
+                item = result[index]
+                result_list.append(self._convert_lua_result(item))
+            return result_list
+        elif isinstance(result, str):
+            return result.encode()
+        elif isinstance(result, float):
+            return int(result)
+        elif isinstance(result, bool):
+            return 1 if result else None
+        return result
+
+    def _lua_redis_call(self, lua_runtime, expected_globals, op, *args):
+        # Check if we've set any global variables before making any change.
+        _check_for_lua_globals(lua_runtime, expected_globals)
+        func, sig = self._name_to_func(encode_command(op))
+        args = [self._convert_redis_arg(lua_runtime, arg) for arg in args]
+        result = self._run_command(func, sig, args, True)
+        return self._convert_redis_result(lua_runtime, result)
+
+    def _lua_redis_pcall(self, lua_runtime, expected_globals, op, *args):
+        try:
+            return self._lua_redis_call(lua_runtime, expected_globals, op, *args)
+        except Exception as ex:
+            return lua_runtime.table_from({b"err": str(ex)})
+
+    @command((bytes, Int), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
+    def eval(self, script, numkeys, *keys_and_args):
+        from lupa import LuaError, LuaRuntime, as_attrgetter
+
+        if numkeys > len(keys_and_args):
+            raise SimpleError(msgs.TOO_MANY_KEYS_MSG)
+        if numkeys < 0:
+            raise SimpleError(msgs.NEGATIVE_KEYS_MSG)
+        sha1 = hashlib.sha1(script).hexdigest().encode()
+        self.script_cache[sha1] = script
+        lua_runtime = LuaRuntime(encoding=None, unpack_returned_tuples=True)
+
+        set_globals = lua_runtime.eval(
+            """
+            function(keys, argv, redis_call, redis_pcall, redis_log, redis_log_levels)
+                redis = {}
+                redis.call = redis_call
+                redis.pcall = redis_pcall
+                redis.log = redis_log
+                for level, pylevel in python.iterex(redis_log_levels.items()) do
+                    redis[level] = pylevel
+                end
+                redis.error_reply = function(msg) return {err=msg} end
+                redis.status_reply = function(msg) return {ok=msg} end
+                KEYS = keys
+                ARGV = argv
+            end
+            """
+        )
+        expected_globals = set()
+        set_globals(
+            lua_runtime.table_from(keys_and_args[:numkeys]),
+            lua_runtime.table_from(keys_and_args[numkeys:]),
+            functools.partial(self._lua_redis_call, lua_runtime, expected_globals),
+            functools.partial(self._lua_redis_pcall, lua_runtime, expected_globals),
+            functools.partial(_lua_redis_log, lua_runtime, expected_globals),
+            as_attrgetter(REDIS_LOG_LEVELS)
+        )
+        expected_globals.update(lua_runtime.globals().keys())
+
+        try:
+            result = lua_runtime.execute(script)
+        except SimpleError as ex:
+            if self.version <= 6:
+                raise SimpleError(msgs.SCRIPT_ERROR_MSG.format(sha1.decode(), ex))
+            raise SimpleError(ex.value)
+        except LuaError as ex:
+            raise SimpleError(msgs.SCRIPT_ERROR_MSG.format(sha1.decode(), ex))
+
+        _check_for_lua_globals(lua_runtime, expected_globals)
+
+        return self._convert_lua_result(result, nested=False)
+
+    @command((bytes, Int), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
+    def evalsha(self, sha1, numkeys, *keys_and_args):
+        try:
+            script = self.script_cache[sha1]
+        except KeyError:
+            raise SimpleError(msgs.NO_MATCHING_SCRIPT_MSG)
+        return self.eval(script, numkeys, *keys_and_args)
+
+    @command(name='script load', fixed=(bytes,), repeat=(bytes,), flags=msgs.FLAG_NO_SCRIPT, )
+    def script_load(self, *args):
+        if len(args) != 1:
+            raise SimpleError(msgs.BAD_SUBCOMMAND_MSG.format('SCRIPT'))
+        script = args[0]
+        sha1 = hashlib.sha1(script).hexdigest().encode()
+        self.script_cache[sha1] = script
+        return sha1
+
+    @command(name='script exists', fixed=(), repeat=(bytes,), flags=msgs.FLAG_NO_SCRIPT, )
+    def script_exists(self, *args):
+        if self.version >= 7 and len(args) == 0:
+            raise SimpleError(msgs.WRONG_ARGS_MSG7)
+        return [int(sha1 in self.script_cache) for sha1 in args]
+
+    @command(name='script flush', fixed=(), repeat=(bytes,), flags=msgs.FLAG_NO_SCRIPT, )
+    def script_flush(self, *args):
+        if len(args) > 1 or (len(args) == 1 and null_terminate(args[0]) not in {b'sync', b'async'}):
+            raise SimpleError(msgs.BAD_SUBCOMMAND_MSG.format('SCRIPT'))
+        self.script_cache = {}
+        return OK
+
+    @command((), flags=msgs.FLAG_NO_SCRIPT)
+    def script(self, *args):
+        raise SimpleError(msgs.BAD_SUBCOMMAND_MSG.format('SCRIPT'))
+
+    @command(name='SCRIPT HELP', fixed=())
+    def script_help(self, *args):
+        help_strings = [
+            'SCRIPT <subcommand> [<arg> [value] [opt] ...]. Subcommands are:',
+            'DEBUG (YES|SYNC|NO)',
+            '    Set the debug mode for subsequent scripts executed.',
+            'EXISTS <sha1> [<sha1> ...]',
+            '    Return information about the existence of the scripts in the script cach'
+            'e.',
+            'FLUSH [ASYNC|SYNC]',
+            '    Flush the Lua scripts cache. Very dangerous on replicas.',
+            '    When called without the optional mode argument, the behavior is determin'
+            'ed by the',
+            '    lazyfree-lazy-user-flush configuration directive. Valid modes are:',
+            '    * ASYNC: Asynchronously flush the scripts cache.',
+            '    * SYNC: Synchronously flush the scripts cache.',
+            'KILL',
+            '    Kill the currently executing Lua script.',
+            'LOAD <script>',
+            '    Load a script into the scripts cache without executing it.',
+            'HELP',
+            '    Prints this help.'
+        ]
+
+        return [s.encode() for s in help_strings]
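Both `eval` and `script_load` above key the script cache by the hex SHA-1 of the script body, encoded to bytes, and `evalsha` looks the source back up by that digest. A toy sketch of that cache contract, under the assumption that `ScriptCache` is an illustrative name (the mixin stores the mapping directly in `self.script_cache`):

```python
import hashlib


class ScriptCache:
    """Toy version of the SHA1-keyed cache behind SCRIPT LOAD / EVALSHA."""

    def __init__(self):
        self._cache = {}

    def load(self, script: bytes) -> bytes:
        # Same digest scheme as the mixin: hex SHA-1, encoded to bytes.
        sha1 = hashlib.sha1(script).hexdigest().encode()
        self._cache[sha1] = script
        return sha1

    def get(self, sha1: bytes) -> bytes:
        # EVALSHA with an unknown digest maps to a NOSCRIPT error.
        try:
            return self._cache[sha1]
        except KeyError:
            raise KeyError('NOSCRIPT No matching script')


cache = ScriptCache()
sha = cache.load(b'return 1')
print(sha)             # 40-character hex digest, as bytes
print(cache.get(sha))  # b'return 1'
```

Because `eval` itself inserts into the cache before executing, a script run once via `EVAL` can afterwards be invoked by digest via `EVALSHA`, matching real Redis behavior.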
diff --git a/fakeredis/commands_mixins/server_mixin.py b/fakeredis/commands_mixins/server_mixin.py
new file mode 100644
index 0000000..3532d84
--- /dev/null
+++ b/fakeredis/commands_mixins/server_mixin.py
@@ -0,0 +1,60 @@
+import time
+
+from fakeredis import _msgs as msgs
+from fakeredis._commands import (command, DbIndex)
+from fakeredis._helpers import (OK, SimpleError, casematch, BGSAVE_STARTED)
+
+
+class ServerCommandsMixin:
+    # TODO: lots
+
+    @command((), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
+    def bgsave(self, *args):
+        if len(args) > 1 or (len(args) == 1 and not casematch(args[0], b'schedule')):
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        self._server.lastsave = int(time.time())
+        return BGSAVE_STARTED
+
+    @command(())
+    def dbsize(self):
+        return len(self._db)
+
+    @command((), (bytes,))
+    def flushdb(self, *args):
+        if len(args) > 0 and (len(args) != 1 or not casematch(args[0], b'async')):
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        self._db.clear()
+        return OK
+
+    @command((), (bytes,))
+    def flushall(self, *args):
+        if len(args) > 0 and (len(args) != 1 or not casematch(args[0], b'async')):
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        for db in self._server.dbs.values():
+            db.clear()
+        # TODO: clear watches and/or pubsub as well?
+        return OK
+
+    @command(())
+    def lastsave(self):
+        return self._server.lastsave
+
+    @command((), flags=msgs.FLAG_NO_SCRIPT)
+    def save(self):
+        self._server.lastsave = int(time.time())
+        return OK
+
+    @command(())
+    def time(self):
+        now_us = round(time.time() * 1_000_000)
+        now_s = now_us // 1_000_000
+        now_us %= 1_000_000
+        return [str(now_s).encode(), str(now_us).encode()]
+
+    @command((DbIndex, DbIndex))
+    def swapdb(self, index1, index2):
+        if index1 != index2:
+            db1 = self._server.dbs[index1]
+            db2 = self._server.dbs[index2]
+            db1.swap(db2)
+        return OK
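The `time` command above splits a microsecond-resolution timestamp into whole seconds and the leftover microseconds, returning both as byte strings (the RESP reply shape for `TIME`). A standalone sketch of that split; `redis_time` is a hypothetical helper name, and the optional `now` parameter is added here only to make the example deterministic:

```python
import time


def redis_time(now=None):
    """Mimic TIME's reply: [seconds, microseconds] as byte strings."""
    now_us = round((time.time() if now is None else now) * 1_000_000)
    # divmod performs the same split as `// 1_000_000` and `% 1_000_000`.
    now_s, now_us = divmod(now_us, 1_000_000)
    return [str(now_s).encode(), str(now_us).encode()]


print(redis_time(1_700_000_000.25))  # [b'1700000000', b'250000']
```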
diff --git a/fakeredis/commands_mixins/set_mixin.py b/fakeredis/commands_mixins/set_mixin.py
new file mode 100644
index 0000000..88c8ef0
--- /dev/null
+++ b/fakeredis/commands_mixins/set_mixin.py
@@ -0,0 +1,192 @@
+import random
+
+from fakeredis import _msgs as msgs
+from fakeredis._commands import (command, Key, Int, CommandItem)
+from fakeredis._helpers import (OK, SimpleError, casematch)
+
+
+def _calc_setop(op, stop_if_missing, key, *keys):
+    if stop_if_missing and not key.value:
+        return set()
+    value = key.value
+    if not isinstance(value, set):
+        raise SimpleError(msgs.WRONGTYPE_MSG)
+    ans = value.copy()
+    for other in keys:
+        value = other.value if other.value is not None else set()
+        if not isinstance(value, set):
+            raise SimpleError(msgs.WRONGTYPE_MSG)
+        if stop_if_missing and not value:
+            return set()
+        ans = op(ans, value)
+    return ans
+
+
+def _setop(op, stop_if_missing, dst, key, *keys):
+    """Apply one of SINTER[STORE], SUNION[STORE], SDIFF[STORE].
+
+    If `stop_if_missing`, the output will be made an empty set as soon as
+    an empty input set is encountered (use for SINTER[STORE]). May assume
+    that `key` is a set (or empty), but `keys` could be anything.
+    """
+    ans = _calc_setop(op, stop_if_missing, key, *keys)
+    if dst is None:
+        return list(ans)
+    else:
+        dst.value = ans
+        return len(dst.value)
+
+
+class SetCommandsMixin:
+    # Set and Hyperloglog commands
+
+    # Set commands
+    @command((Key(set), bytes), (bytes,))
+    def sadd(self, key, *members):
+        old_size = len(key.value)
+        key.value.update(members)
+        key.updated()
+        return len(key.value) - old_size
+
+    @command((Key(set),))
+    def scard(self, key):
+        return len(key.value)
+
+    @command((Key(set),), (Key(set),))
+    def sdiff(self, *keys):
+        return _setop(lambda a, b: a - b, False, None, *keys)
+
+    @command((Key(), Key(set)), (Key(set),))
+    def sdiffstore(self, dst, *keys):
+        return _setop(lambda a, b: a - b, False, dst, *keys)
+
+    @command((Key(set),), (Key(set),))
+    def sinter(self, *keys):
+        res = _setop(lambda a, b: a & b, True, None, *keys)
+        return res
+
+    @command((Int, bytes), (bytes,))
+    def sintercard(self, numkeys, *args):
+        if self.version < 7:
+            raise SimpleError(msgs.UNKNOWN_COMMAND_MSG.format('sintercard'))
+        if numkeys < 1:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        limit = 0
+        if casematch(args[-2], b'limit'):
+            limit = Int.decode(args[-1])
+            args = args[:-2]
+        if numkeys != len(args):
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        keys = [CommandItem(args[i], self._db, item=self._db.get(args[i], default=None))
+                for i in range(numkeys)]
+
+        res = _setop(lambda a, b: a & b, False, None, *keys)
+        return len(res) if limit == 0 else min(limit, len(res))
+
+    @command((Key(), Key(set)), (Key(set),))
+    def sinterstore(self, dst, *keys):
+        return _setop(lambda a, b: a & b, True, dst, *keys)
+
+    @command((Key(set), bytes))
+    def sismember(self, key, member):
+        return int(member in key.value)
+
+    @command((Key(set), bytes), (bytes,))
+    def smismember(self, key, *members):
+        return [self.sismember(key, member) for member in members]
+
+    @command((Key(set),))
+    def smembers(self, key):
+        return list(key.value)
+
+    @command((Key(set, 0), Key(set), bytes))
+    def smove(self, src, dst, member):
+        try:
+            src.value.remove(member)
+            src.updated()
+        except KeyError:
+            return 0
+        else:
+            dst.value.add(member)
+            dst.updated()  # TODO: is it updated if member was already present?
+            return 1
+
+    @command((Key(set),), (Int,))
+    def spop(self, key, count=None):
+        if count is None:
+            if not key.value:
+                return None
+            item = random.sample(list(key.value), 1)[0]
+            key.value.remove(item)
+            key.updated()
+            return item
+        else:
+            if count < 0:
+                raise SimpleError(msgs.INDEX_ERROR_MSG)
+            items = self.srandmember(key, count)
+            for item in items:
+                key.value.remove(item)
+                key.updated()  # Inside the loop because redis special-cases count=0
+            return items
+
+    @command((Key(set),), (Int,))
+    def srandmember(self, key, count=None):
+        if count is None:
+            if not key.value:
+                return None
+            else:
+                return random.sample(list(key.value), 1)[0]
+        elif count >= 0:
+            count = min(count, len(key.value))
+            return random.sample(list(key.value), count)
+        else:
+            items = list(key.value)
+            return [random.choice(items) for _ in range(-count)]
+
+    @command((Key(set), bytes), (bytes,))
+    def srem(self, key, *members):
+        old_size = len(key.value)
+        for member in members:
+            key.value.discard(member)
+        deleted = old_size - len(key.value)
+        if deleted:
+            key.updated()
+        return deleted
+
+    @command((Key(set), Int), (bytes, bytes))
+    def sscan(self, key, cursor, *args):
+        return self._scan(key.value, cursor, *args)
+
+    @command((Key(set),), (Key(set),))
+    def sunion(self, *keys):
+        return _setop(lambda a, b: a | b, False, None, *keys)
+
+    @command((Key(), Key(set)), (Key(set),))
+    def sunionstore(self, dst, *keys):
+        return _setop(lambda a, b: a | b, False, dst, *keys)
+
+    # Hyperloglog commands
+    # These are not quite the same as the real redis ones, which are
+    # approximate and store the results in a string. Instead, it is implemented
+    # on top of sets.
+
+    @command((Key(set),), (bytes,))
+    def pfadd(self, key, *elements):
+        result = self.sadd(key, *elements)
+        # Per the documentation:
+        # - 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise.
+        return 1 if result > 0 else 0
+
+    @command((Key(set),), (Key(set),))
+    def pfcount(self, *keys):
+        """
+        Return the approximated cardinality of
+        the set observed by the HyperLogLog at key(s).
+        """
+        return len(self.sunion(*keys))
+
+    @command((Key(set), Key(set)), (Key(set),))
+    def pfmerge(self, dest, *sources):
+        """Merge N different HyperLogLogs into a single one."""
+        self.sunionstore(dest, *sources)
+        return OK
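The `_calc_setop` helper above folds a binary set operation across all input keys, and `stop_if_missing` gives intersections their fast path: as soon as any input is empty, the result is empty, so there is no point computing further. A stdlib sketch of that fold on plain sets (`calc_setop` is an illustrative name; the real helper operates on `CommandItem` values and raises `WRONGTYPE` for non-set inputs, which is omitted here):

```python
def calc_setop(op, stop_if_missing, first, *rest):
    """Fold a binary set op over inputs; short-circuit to empty when an
    empty input appears and stop_if_missing is set (the SINTER fast path)."""
    if stop_if_missing and not first:
        return set()
    ans = set(first)
    for value in rest:
        # A missing key behaves like an empty set, as in the mixin.
        value = value or set()
        if stop_if_missing and not value:
            return set()
        ans = op(ans, value)
    return ans


a, b = {b'x', b'y'}, {b'y', b'z'}
print(calc_setop(lambda s, t: s & t, True, a, b))      # {b'y'}
print(calc_setop(lambda s, t: s & t, True, a, set()))  # set(), via short-circuit
```

The three set-family commands differ only in the lambda passed in: `&` with `stop_if_missing=True` for SINTER, `|` for SUNION, and `-` for SDIFF, with the `*STORE` variants writing `ans` into the destination key instead of returning it.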
diff --git a/fakeredis/commands_mixins/sortedset_mixin.py b/fakeredis/commands_mixins/sortedset_mixin.py
new file mode 100644
index 0000000..886048f
--- /dev/null
+++ b/fakeredis/commands_mixins/sortedset_mixin.py
@@ -0,0 +1,417 @@
+from __future__ import annotations
+
+import functools
+import itertools
+import math
+from typing import Union, Optional
+
+from fakeredis import _msgs as msgs
+from fakeredis._command_args_parsing import extract_args
+from fakeredis._commands import (command, Key, Int, Float, CommandItem, Timeout, ScoreTest, StringTest, fix_range)
+from fakeredis._helpers import (SimpleError, casematch, null_terminate, )
+from fakeredis._zset import ZSet
+
+
+class SortedSetCommandsMixin:
+    # Sorted set commands
+    def _zpop(self, key, count, reverse):
+        zset = key.value
+        members = list(zset)
+        if reverse:
+            members.reverse()
+        members = members[:count]
+        res = [(bytes(member), self._encodefloat(zset.get(member), True)) for member in members]
+        res = list(itertools.chain.from_iterable(res))
+        for item in members:
+            zset.discard(item)
+        return res
+
+    def _bzpop(self, keys, reverse, first_pass):
+        for key in keys:
+            item = CommandItem(key, self._db, item=self._db.get(key), default=[])
+            temp_res = self._zpop(item, 1, reverse)
+            if temp_res:
+                return [key, temp_res[0], temp_res[1]]
+        return None
+
+    @command((Key(ZSet),), (Int,))
+    def zpopmin(self, key, count=1):
+        return self._zpop(key, count, False)
+
+    @command((Key(ZSet),), (Int,))
+    def zpopmax(self, key, count=1):
+        return self._zpop(key, count, True)
+
+    @command((bytes, bytes), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
+    def bzpopmin(self, *args):
+        keys = args[:-1]
+        timeout = Timeout.decode(args[-1])
+        return self._blocking(timeout, functools.partial(self._bzpop, keys, False))
+
+    @command((bytes, bytes), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
+    def bzpopmax(self, *args):
+        keys = args[:-1]
+        timeout = Timeout.decode(args[-1])
+        return self._blocking(timeout, functools.partial(self._bzpop, keys, True))
+
+    @staticmethod
+    def _limit_items(items, offset, count):
+        out = []
+        for item in items:
+            if offset:  # Note: not offset > 0, in order to match redis
+                offset -= 1
+                continue
+            if count == 0:
+                break
+            count -= 1
+            out.append(item)
+        return out
+
+    def _apply_withscores(self, items, withscores):
+        if withscores:
+            out = []
+            for item in items:
+                out.append(item[1])
+                out.append(self._encodefloat(item[0], False))
+        else:
+            out = [item[1] for item in items]
+        return out
+
+    @command((Key(ZSet), bytes, bytes), (bytes,))
+    def zadd(self, key, *args):
+        zset = key.value
+
+        (nx, xx, ch, incr, gt, lt), left_args = extract_args(
+            args, ('nx', 'xx', 'ch', 'incr', 'gt', 'lt',), error_on_unexpected=False)
+
+        elements = left_args
+        if not elements or len(elements) % 2 != 0:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        if nx and xx:
+            raise SimpleError(msgs.ZADD_NX_XX_ERROR_MSG)
+        if [nx, gt, lt].count(True) > 1:
+            raise SimpleError(msgs.ZADD_NX_GT_LT_ERROR_MSG)
+        if incr and len(elements) != 2:
+            raise SimpleError(msgs.ZADD_INCR_LEN_ERROR_MSG)
+        # Parse all scores first, before updating
+        items = [
+            (0.0 + Float.decode(elements[j]) if self.version >= 7 else Float.decode(elements[j]), elements[j + 1])
+            for j in range(0, len(elements), 2)
+        ]
+        old_len = len(zset)
+        changed_items = 0
+
+        if incr:
+            item_score, item_name = items[0]
+            if (nx and item_name in zset) or (xx and item_name not in zset):
+                return None
+            return self.zincrby(key, item_score, item_name)
+        count = [nx, gt, lt, xx].count(True)
+        for item_score, item_name in items:
+            update = count == 0
+            update = update or (count == 1 and nx and item_name not in zset)
+            update = update or (count == 1 and xx and item_name in zset)
+            update = update or (gt and ((item_name in zset and zset.get(item_name) < item_score)
+                                        or (not xx and item_name not in zset)))
+            update = update or (lt and ((item_name in zset and zset.get(item_name) > item_score)
+                                        or (not xx and item_name not in zset)))
+
+            if update:
+                if zset.add(item_name, item_score):
+                    changed_items += 1
+
+        if changed_items:
+            key.updated()
+
+        if ch:
+            return changed_items
+        return len(zset) - old_len
+
+    @command((Key(ZSet),))
+    def zcard(self, key):
+        return len(key.value)
+
+    @command((Key(ZSet), ScoreTest, ScoreTest))
+    def zcount(self, key, _min, _max):
+        return key.value.zcount(_min.lower_bound, _max.upper_bound)
+
+    @command((Key(ZSet), Float, bytes))
+    def zincrby(self, key, increment, member):
+        # Can't just default the old score to 0.0, because in IEEE754, adding
+        # 0.0 to something isn't a nop (e.g. 0.0 + -0.0 == 0.0).
+        try:
+            score = key.value.get(member, None) + increment
+        except TypeError:
+            score = increment
+        if math.isnan(score):
+            raise SimpleError(msgs.SCORE_NAN_MSG)
+        key.value[member] = score
+        key.updated()
+        # For some reason, here it does not ignore the version
+        # https://github.com/cunla/fakeredis-py/actions/runs/3377186364/jobs/5605815202
+        return Float.encode(score, False)
+        # return self._encodefloat(score, False)
+
+    @command((Key(ZSet), StringTest, StringTest))
+    def zlexcount(self, key, _min, _max):
+        return key.value.zlexcount(_min.value, _min.exclusive, _max.value, _max.exclusive)
+
+    def _zrangebyscore(self, key, _min, _max, reverse, withscores, offset, count):
+        zset = key.value
+        items = list(zset.irange_score(_min.lower_bound, _max.upper_bound, reverse=reverse))
+        items = self._limit_items(items, offset, count)
+        items = self._apply_withscores(items, withscores)
+        return items
+
+    def _zrange(self, key, start, stop, reverse, withscores, byscore):
+        zset = key.value
+        if byscore:
+            items = zset.irange_score(start.lower_bound, stop.upper_bound, reverse=reverse)
+        else:
+            start, stop = Int.decode(start.bytes_val), Int.decode(stop.bytes_val)
+            start, stop = fix_range(start, stop, len(zset))
+            if reverse:
+                start, stop = len(zset) - stop, len(zset) - start
+            items = zset.islice_score(start, stop, reverse)
+        items = self._apply_withscores(items, withscores)
+        return items
+
+    def _zrangebylex(self, key, _min, _max, reverse, offset, count):
+        zset = key.value
+        if reverse:
+            _min, _max = _max, _min
+        items = zset.irange_lex(_min.value, _max.value,
+                                inclusive=(not _min.exclusive, not _max.exclusive),
+                                reverse=reverse)
+        items = self._limit_items(items, offset, count)
+        return items
+
+    def _zrange_args(self, key, start, stop, *args):
+        (bylex, byscore, rev, (offset, count), withscores), _ = extract_args(
+            args, ('bylex', 'byscore', 'rev', '++limit', 'withscores'))
+        if offset is not None and not bylex and not byscore:
+            raise SimpleError(msgs.SYNTAX_ERROR_LIMIT_ONLY_WITH_MSG)
+        if bylex and byscore:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+
+        offset = offset or 0
+        count = -1 if count is None else count
+
+        if bylex:
+            res = self._zrangebylex(
+                key, StringTest.decode(start), StringTest.decode(stop), rev, offset, count)
+        elif byscore:
+            res = self._zrangebyscore(
+                key, ScoreTest.decode(start), ScoreTest.decode(stop), rev, withscores, offset, count)
+        else:
+            res = self._zrange(
+                key, ScoreTest.decode(start), ScoreTest.decode(stop), rev, withscores, byscore)
+        return res
+
+    @command((Key(ZSet), bytes, bytes), (bytes,))
+    def zrange(self, key, start, stop, *args):
+        return self._zrange_args(key, start, stop, *args)
+
+    @command((Key(ZSet), ScoreTest, ScoreTest), (bytes,))
+    def zrevrange(self, key, start, stop, *args):
+        (withscores, byscore), _ = extract_args(args, ('withscores', 'byscore'))
+        return self._zrange(key, start, stop, True, withscores, byscore)
+
+    @command((Key(ZSet), StringTest, StringTest), (bytes,))
+    def zrangebylex(self, key, _min, _max, *args):
+        ((offset, count),), _ = extract_args(args, ('++limit',))
+        offset = offset or 0
+        count = -1 if count is None else count
+        return self._zrangebylex(key, _min, _max, False, offset, count)
+
+    @command((Key(ZSet), StringTest, StringTest), (bytes,))
+    def zrevrangebylex(self, key, _min, _max, *args):
+        ((offset, count),), _ = extract_args(args, ('++limit',))
+        offset = offset or 0
+        count = -1 if count is None else count
+        return self._zrangebylex(key, _min, _max, True, offset, count)
+
+    @command((Key(ZSet), ScoreTest, ScoreTest), (bytes,))
+    def zrangebyscore(self, key, _min, _max, *args):
+        (withscores, (offset, count)), _ = extract_args(args, ('withscores', '++limit'))
+        offset = offset or 0
+        count = -1 if count is None else count
+        return self._zrangebyscore(key, _min, _max, False, withscores, offset, count)
+
+    @command((Key(ZSet), ScoreTest, ScoreTest), (bytes,))
+    def zrevrangebyscore(self, key, _max, _min, *args):
+        (withscores, (offset, count)), _ = extract_args(args, ('withscores', '++limit'))
+        offset = offset or 0
+        count = -1 if count is None else count
+        return self._zrangebyscore(key, _min, _max, True, withscores, offset, count)
+
+    @command((Key(ZSet), bytes))
+    def zrank(self, key, member):
+        try:
+            return key.value.rank(member)
+        except KeyError:
+            return None
+
+    @command((Key(ZSet), bytes))
+    def zrevrank(self, key, member):
+        try:
+            return len(key.value) - 1 - key.value.rank(member)
+        except KeyError:
+            return None
+
+    @command((Key(ZSet), bytes), (bytes,))
+    def zrem(self, key, *members):
+        old_size = len(key.value)
+        for member in members:
+            key.value.discard(member)
+        deleted = old_size - len(key.value)
+        if deleted:
+            key.updated()
+        return deleted
+
+    @command((Key(ZSet), StringTest, StringTest))
+    def zremrangebylex(self, key, _min, _max):
+        items = key.value.irange_lex(_min.value, _max.value,
+                                     inclusive=(not _min.exclusive, not _max.exclusive))
+        return self.zrem(key, *items)
+
+    @command((Key(ZSet), ScoreTest, ScoreTest))
+    def zremrangebyscore(self, key, _min, _max):
+        items = key.value.irange_score(_min.lower_bound, _max.upper_bound)
+        return self.zrem(key, *[item[1] for item in items])
+
+    @command((Key(ZSet), Int, Int))
+    def zremrangebyrank(self, key, start, stop):
+        zset = key.value
+        start, stop = fix_range(start, stop, len(zset))
+        items = zset.islice_score(start, stop)
+        return self.zrem(key, *[item[1] for item in items])
+
+    @command((Key(ZSet), Int), (bytes, bytes))
+    def zscan(self, key, cursor, *args):
+        new_cursor, ans = self._scan(key.value.items(), cursor, *args)
+        flat = []
+        for (key, score) in ans:
+            flat.append(key)
+            flat.append(self._encodefloat(score, False))
+        return [new_cursor, flat]
+
+    @command((Key(ZSet), bytes))
+    def zscore(self, key, member):
+        try:
+            return self._encodefloat(key.value[member], False)
+        except KeyError:
+            return None
+
+    @staticmethod
+    def _get_zset(value):
+        if isinstance(value, set):
+            zset = ZSet()
+            for item in value:
+                zset[item] = 1.0
+            return zset
+        elif isinstance(value, ZSet):
+            return value
+        else:
+            raise SimpleError(msgs.WRONGTYPE_MSG)
+
+    def _zunioninter(self, func, dest, numkeys, *args):
+        if numkeys < 1:
+            raise SimpleError(msgs.ZUNIONSTORE_KEYS_MSG.format(func.lower()))
+        if numkeys > len(args):
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        aggregate = b'sum'
+        weights = [1.0] * numkeys
+
+        i = numkeys
+        while i < len(args):
+            arg = args[i]
+            if casematch(arg, b'weights') and i + numkeys < len(args):
+                weights = [
+                    Float.decode(x, decode_error=msgs.INVALID_WEIGHT_MSG)
+                    for x in args[i + 1:i + numkeys + 1]
+                ]
+                i += numkeys + 1
+            elif casematch(arg, b'aggregate') and i + 1 < len(args):
+                aggregate = null_terminate(args[i + 1])
+                if aggregate not in (b'sum', b'min', b'max'):
+                    raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+                i += 2
+            else:
+                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+
+        sets = []
+        for i in range(numkeys):
+            item = CommandItem(args[i], self._db, item=self._db.get(args[i]), default=ZSet())
+            sets.append(self._get_zset(item.value))
+
+        out_members = set(sets[0])
+        for s in sets[1:]:
+            if func == 'ZUNIONSTORE':
+                out_members |= set(s)
+            else:
+                out_members.intersection_update(s)
+
+        # We first build a regular dict and turn it into a ZSet. The
+        # reason is subtle: a ZSet won't update a score from -0 to +0
+        # (or vice versa) through assignment, but a regular dict will.
+        out = {}
+        # The sort affects the order of floating-point operations.
+        # Note that redis uses qsort, which has no stability guarantees,
+        # so we can't be sure to match it in all cases.
+        for s, w in sorted(zip(sets, weights), key=lambda x: len(x[0])):
+            for member, score in s.items():
+                score *= w
+                # Redis only does this step for ZUNIONSTORE. See
+                # https://github.com/antirez/redis/issues/3954.
+                if func == 'ZUNIONSTORE' and math.isnan(score):
+                    score = 0.0
+                if member not in out_members:
+                    continue
+                if member in out:
+                    old = out[member]
+                    if aggregate == b'sum':
+                        score += old
+                        if math.isnan(score):
+                            score = 0.0
+                    elif aggregate == b'max':
+                        score = max(old, score)
+                    elif aggregate == b'min':
+                        score = min(old, score)
+                    else:
+                        assert False  # pragma: nocover
+                if math.isnan(score):
+                    score = 0.0
+                out[member] = score
+
+        out_zset = ZSet()
+        for member, score in out.items():
+            out_zset[member] = score
+
+        dest.value = out_zset
+        return len(out_zset)
+
+    @command((Key(), Int, bytes), (bytes,))
+    def zunionstore(self, dest, numkeys, *args):
+        return self._zunioninter('ZUNIONSTORE', dest, numkeys, *args)
+
+    @command((Key(), Int, bytes), (bytes,))
+    def zinterstore(self, dest, numkeys, *args):
+        return self._zunioninter('ZINTERSTORE', dest, numkeys, *args)
+
+    @command(name="ZMSCORE", fixed=(Key(ZSet), bytes), repeat=(bytes,))
+    def zmscore(self, key: CommandItem, *members: Union[str, bytes]) -> list[Optional[float]]:
+        """Get the scores associated with the specified members in the sorted set
+        stored at key.
+
+        For every member that does not exist in the sorted set, a nil value
+        is returned.
+        """
+        scores = map(
+            lambda score: score if score is None else self._encodefloat(score, humanfriendly=False),
+            map(key.value.get, members),
+        )
+        return list(scores)
+
+    def _encodefloat(self, value, humanfriendly):
+        raise NotImplementedError  # Implemented in BaseFakeSocket
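The `_zunioninter` helper above mirrors Redis's `ZUNIONSTORE`/`ZINTERSTORE`: each input set's scores are multiplied by its `WEIGHTS` entry, then combined per member with the `SUM`/`MIN`/`MAX` aggregate. A minimal pure-Python sketch of that combine step (names here are illustrative, not fakeredis API, and the NaN and -0/+0 subtleties handled above are omitted):

```python
def combine_scores(sets, weights, union=True, aggregate='sum'):
    """Combine weighted member->score dicts the way ZUNIONSTORE/ZINTERSTORE do."""
    members = set(sets[0])
    for s in sets[1:]:
        members = members | set(s) if union else members & set(s)
    agg = {'sum': lambda a, b: a + b, 'min': min, 'max': max}[aggregate]
    out = {}
    for s, w in zip(sets, weights):
        for member, score in s.items():
            if member not in members:
                continue  # intersection dropped this member
            weighted = score * w
            out[member] = agg(out[member], weighted) if member in out else weighted
    return out
```

For example, a union of `{'a': 1, 'b': 2}` and `{'b': 3}` with weights `[1, 2]` sums `b` to `2*1 + 3*2 = 8`.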
diff --git a/fakeredis/commands_mixins/streams_mixin.py b/fakeredis/commands_mixins/streams_mixin.py
new file mode 100644
index 0000000..fbc3323
--- /dev/null
+++ b/fakeredis/commands_mixins/streams_mixin.py
@@ -0,0 +1,121 @@
+import functools
+from typing import List
+
+import fakeredis._msgs as msgs
+from fakeredis._command_args_parsing import extract_args
+from fakeredis._commands import Key, command, CommandItem
+from fakeredis._helpers import SimpleError, casematch
+from fakeredis._stream import XStream, StreamRangeTest
+
+
+class StreamsCommandsMixin:
+    @command(name="XADD", fixed=(Key(),), repeat=(bytes,), )
+    def xadd(self, key, *args):
+        (nomkstream, limit, maxlen, minid), left_args = extract_args(
+            args, ('nomkstream', '+limit', '~+maxlen', '~minid'), error_on_unexpected=False)
+        if nomkstream and key.value is None:
+            return None
+        id_str = left_args[0]
+        elements = left_args[1:]
+        if not elements or len(elements) % 2 != 0:
+            raise SimpleError(msgs.WRONG_ARGS_MSG6.format('XADD'))
+        stream = key.value or XStream()
+        if self.version < 7 and id_str != b'*' and StreamRangeTest.parse_id(id_str) == (-1, -1):
+            raise SimpleError(msgs.XADD_INVALID_ID)
+        id_str = stream.add(elements, id_str=id_str)
+        if id_str is None:
+            if StreamRangeTest.parse_id(left_args[0]) == (-1, -1):
+                raise SimpleError(msgs.XADD_INVALID_ID)
+            raise SimpleError(msgs.XADD_ID_LOWER_THAN_LAST)
+        if maxlen is not None or minid is not None:
+            stream.trim(maxlen=maxlen, minid=minid, limit=limit)
+        key.update(stream)
+        return id_str
+
+    @command(name='XTRIM', fixed=(Key(XStream),), repeat=(bytes,), )
+    def xtrim(self, key, *args):
+        (limit, maxlen, minid), _ = extract_args(
+            args, ('+limit', '~+maxlen', '~minid'))
+        if maxlen is not None and minid is not None:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        if maxlen is None and minid is None:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        stream = key.value or XStream()
+
+        res = stream.trim(maxlen=maxlen, minid=minid, limit=limit)
+
+        key.update(stream)
+        return res
+
+    @command(name="XLEN", fixed=(Key(XStream),))
+    def xlen(self, key):
+        if key.value is None:
+            return 0
+        return len(key.value)
+
+    def _xrange(self, key, _min, _max, reverse, count, ):
+        if key.value is None:
+            return None
+        if count is None:
+            count = len(key.value)
+        res = key.value.irange(
+            _min.value, _max.value,
+            exclusive=(_min.exclusive, _max.exclusive),
+            reverse=reverse)
+        return res[:count]
+
+    @command(name="XRANGE", fixed=(Key(XStream), StreamRangeTest, StreamRangeTest), repeat=(bytes,))
+    def xrange(self, key, _min, _max, *args):
+        (count,), _ = extract_args(args, ('+count',))
+        return self._xrange(key, _min, _max, False, count)
+
+    @command(name="XREVRANGE", fixed=(Key(XStream), StreamRangeTest, StreamRangeTest), repeat=(bytes,))
+    def xrevrange(self, key, _min, _max, *args):
+        (count,), _ = extract_args(args, ('+count',))
+        return self._xrange(key, _max, _min, True, count)
+
+    def _xread(self, stream_start_id_list: List, count: int, first_pass: bool):
+        max_inf = StreamRangeTest.decode(b'+')
+        res = list()
+        for (item, start_id) in stream_start_id_list:
+            stream_results = self._xrange(item, start_id, max_inf, False, count)
+            if first_pass and (count is None or len(stream_results) < count):
+                raise SimpleError(msgs.WRONGTYPE_MSG)
+            if len(stream_results) > 0:
+                res.append([item.key, stream_results])
+        return res
+
+    @staticmethod
+    def _parse_start_id(key: CommandItem, s: bytes) -> StreamRangeTest:
+        if s == b'$':
+            return StreamRangeTest.decode(key.value.last_item_key(), exclusive=True)
+        return StreamRangeTest.decode(s, exclusive=True)
+
+    @command(name="XREAD", fixed=(bytes,), repeat=(bytes,))
+    def xread(self, *args):
+        (count, timeout,), left_args = extract_args(args, ('+count', '+block',), error_on_unexpected=False)
+        if (len(left_args) < 3
+                or not casematch(left_args[0], b'STREAMS')
+                or len(left_args) % 2 != 1):
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        left_args = left_args[1:]
+        num_streams = len(left_args) // 2
+
+        stream_start_id_list = list()
+        for i in range(num_streams):
+            item = CommandItem(left_args[i], self._db, item=self._db.get(left_args[i]), default=None)
+            start_id = self._parse_start_id(item, left_args[i + num_streams])
+            stream_start_id_list.append((item, start_id,))
+        if timeout is None:
+            return self._xread(stream_start_id_list, count, False)
+        else:
+            return self._blocking(timeout, functools.partial(self._xread, stream_start_id_list, count))
+
+    @command(name="XDEL", fixed=(Key(XStream),), repeat=(bytes,), )
+    def xdel(self, key, *args):
+        if len(args) == 0:
+            raise SimpleError(msgs.WRONG_ARGS_MSG6.format('xdel'))
+        if key.value is None:
+            return 0
+        res = key.value.delete(args)
+        return res
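`XREAD`'s argument tail is `STREAMS key [key ...] id [id ...]`: keys and ids are split down the middle, which is what the `num_streams` arithmetic in `xread` above does. A standalone sketch of that pairing (illustrative helper, not part of fakeredis):

```python
def pair_streams(left_args):
    """Pair [b'STREAMS', k1, k2, ..., id1, id2, ...] into [(key, start_id), ...]."""
    # Must have the STREAMS keyword plus an even number of key/id arguments.
    if len(left_args) < 3 or left_args[0].upper() != b'STREAMS' or len(left_args) % 2 != 1:
        raise ValueError('syntax error')
    rest = left_args[1:]
    n = len(rest) // 2
    return list(zip(rest[:n], rest[n:]))
```

So `STREAMS a b 0 $` pairs `a` with start id `0` and `b` with `$`.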
diff --git a/fakeredis/commands_mixins/string_mixin.py b/fakeredis/commands_mixins/string_mixin.py
new file mode 100644
index 0000000..0312fc1
--- /dev/null
+++ b/fakeredis/commands_mixins/string_mixin.py
@@ -0,0 +1,271 @@
+import math
+
+from fakeredis import _msgs as msgs
+from fakeredis._command_args_parsing import extract_args
+from fakeredis._commands import (command, Key, Int, Float, MAX_STRING_SIZE, delete_keys, fix_range_string)
+from fakeredis._helpers import (OK, SimpleError, casematch)
+
+
+def _lcs(s1, s2):
+    l1 = len(s1)
+    l2 = len(s2)
+
+    # opt[i][j] holds the length of the LCS of the prefixes s1[:i] and s2[:j]
+    opt = [[0] * (l2 + 1) for _ in range(0, l1 + 1)]
+
+    # pi[i][j] records which neighbour opt[i][j] came from, for backtracking the sequence
+    pi = [[0] * (l2 + 1) for _ in range(0, l1 + 1)]
+
+    # Algorithm to calculate the length of the longest common subsequence
+    for r in range(1, l1 + 1):
+        for c in range(1, l2 + 1):
+            if s1[r - 1] == s2[c - 1]:
+                opt[r][c] = opt[r - 1][c - 1] + 1
+                pi[r][c] = 0
+            elif opt[r][c - 1] >= opt[r - 1][c]:
+                opt[r][c] = opt[r][c - 1]
+                pi[r][c] = 1
+            else:
+                opt[r][c] = opt[r - 1][c]
+                pi[r][c] = 2
+    # Length of the longest common subsequence is saved at opt[n][m]
+
+    # Algorithm to calculate the longest common subsequence using the Pi array
+    # Also calculate the list of matches
+    r, c = l1, l2
+    result = ''
+    matches = list()
+    s1ind, s2ind, curr_length = None, None, 0
+
+    while r > 0 and c > 0:
+        if pi[r][c] == 0:
+            result = chr(s1[r - 1]) + result
+            r -= 1
+            c -= 1
+            curr_length += 1
+        elif pi[r][c] == 2:
+            r -= 1
+        else:
+            c -= 1
+
+        if pi[r][c] == 0 and curr_length == 1:
+            s1ind = r
+            s2ind = c
+        elif pi[r][c] > 0 and curr_length > 0:
+            matches.append([[r, s1ind], [c, s2ind], curr_length])
+            s1ind, s2ind, curr_length = None, None, 0
+    if curr_length:
+        matches.append([[s1ind, r], [s2ind, c], curr_length])
+
+    return opt[l1][l2], result.encode(), matches
+
+
+class StringCommandsMixin:
+    # String commands
+
+    @command((Key(bytes), bytes))
+    def append(self, key, value):
+        old = key.get(b'')
+        if len(old) + len(value) > MAX_STRING_SIZE:
+            raise SimpleError(msgs.STRING_OVERFLOW_MSG)
+        key.update(key.get(b'') + value)
+        return len(key.value)
+
+    @command((Key(bytes),))
+    def decr(self, key):
+        return self.incrby(key, -1)
+
+    @command((Key(bytes), Int))
+    def decrby(self, key, amount):
+        return self.incrby(key, -amount)
+
+    @command((Key(bytes),))
+    def get(self, key):
+        return key.get(None)
+
+    @command((Key(bytes),))
+    def getdel(self, key):
+        res = key.get(None)
+        delete_keys(key)
+        return res
+
+    @command(name=['GETRANGE', 'SUBSTR'], fixed=(Key(bytes), Int, Int))
+    def getrange(self, key, start, end):
+        value = key.get(b'')
+        start, end = fix_range_string(start, end, len(value))
+        return value[start:end]
+
+    @command((Key(bytes), bytes))
+    def getset(self, key, value):
+        old = key.value
+        key.value = value
+        return old
+
+    @command((Key(bytes), Int))
+    def incrby(self, key, amount):
+        c = Int.decode(key.get(b'0')) + amount
+        key.update(self._encodeint(c))
+        return c
+
+    @command((Key(bytes),))
+    def incr(self, key):
+        return self.incrby(key, 1)
+
+    @command((Key(bytes), bytes))
+    def incrbyfloat(self, key, amount):
+        # TODO: introduce convert_order so that we can specify amount is Float
+        c = Float.decode(key.get(b'0')) + Float.decode(amount)
+        if not math.isfinite(c):
+            raise SimpleError(msgs.NONFINITE_MSG)
+        encoded = self._encodefloat(c, True)
+        key.update(encoded)
+        return encoded
+
+    @command((Key(),), (Key(),))
+    def mget(self, *keys):
+        return [key.value if isinstance(key.value, bytes) else None for key in keys]
+
+    @command((Key(), bytes), (Key(), bytes))
+    def mset(self, *args):
+        for i in range(0, len(args), 2):
+            args[i].value = args[i + 1]
+        return OK
+
+    @command((Key(), bytes), (Key(), bytes))
+    def msetnx(self, *args):
+        for i in range(0, len(args), 2):
+            if args[i]:
+                return 0
+        for i in range(0, len(args), 2):
+            args[i].value = args[i + 1]
+        return 1
+
+    @command((Key(), Int, bytes))
+    def psetex(self, key, ms, value):
+        if ms <= 0 or self._db.time * 1000 + ms >= 2 ** 63:
+            raise SimpleError(msgs.INVALID_EXPIRE_MSG.format('psetex'))
+        key.value = value
+        key.expireat = self._db.time + ms / 1000.0
+        return OK
+
+    @command(name="set", fixed=(Key(), bytes), repeat=(bytes,))
+    def set_(self, key, value, *args):
+        (ex, px, xx, nx, keepttl, get), _ = extract_args(args, ('+ex', '+px', 'xx', 'nx', 'keepttl', 'get'))
+        if ex is not None and (ex <= 0 or (self._db.time + ex) * 1000 >= 2 ** 63):
+            raise SimpleError(msgs.INVALID_EXPIRE_MSG.format('set'))
+        if px is not None and (px <= 0 or self._db.time * 1000 + px >= 2 ** 63):
+            raise SimpleError(msgs.INVALID_EXPIRE_MSG.format('set'))
+
+        if (xx and nx) or ((px is not None) + (ex is not None) + keepttl > 1):
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        if nx and get and self.version < 7:
+            # The command docs say this is allowed from Redis 7.0.
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+
+        old_value = None
+        if get:
+            if key.value is not None and type(key.value) is not bytes:
+                raise SimpleError(msgs.WRONGTYPE_MSG)
+            old_value = key.value
+
+        if nx and key:
+            return old_value
+        if xx and not key:
+            return old_value
+        if not keepttl:
+            key.value = value
+        else:
+            key.update(value)
+        if ex is not None:
+            key.expireat = self._db.time + ex
+        if px is not None:
+            key.expireat = self._db.time + px / 1000.0
+        return OK if not get else old_value
+
+    @command((Key(), Int, bytes))
+    def setex(self, key, seconds, value):
+        if seconds <= 0 or (self._db.time + seconds) * 1000 >= 2 ** 63:
+            raise SimpleError(msgs.INVALID_EXPIRE_MSG.format('setex'))
+        key.value = value
+        key.expireat = self._db.time + seconds
+        return OK
+
+    @command((Key(), bytes))
+    def setnx(self, key, value):
+        if key:
+            return 0
+        key.value = value
+        return 1
+
+    @command((Key(bytes), Int, bytes))
+    def setrange(self, key, offset, value):
+        if offset < 0:
+            raise SimpleError(msgs.INVALID_OFFSET_MSG)
+        elif not value:
+            return len(key.get(b''))
+        elif offset + len(value) > MAX_STRING_SIZE:
+            raise SimpleError(msgs.STRING_OVERFLOW_MSG)
+        out = key.get(b'')
+        if len(out) < offset:
+            out += b'\x00' * (offset - len(out))
+        out = out[0:offset] + value + out[offset + len(value):]
+        key.update(out)
+        return len(out)
+
+    @command((Key(bytes),))
+    def strlen(self, key):
+        return len(key.get(b''))
+
+    @command((Key(bytes),), (bytes,))
+    def getex(self, key, *args):
+        i, count_options, expire_time, diff = 0, 0, None, None
+
+        while i < len(args):
+            count_options += 1
+            if casematch(args[i], b'ex') and i + 1 < len(args):
+                diff = Int.decode(args[i + 1])
+                expire_time = self._db.time + diff
+                i += 2
+            elif casematch(args[i], b'px') and i + 1 < len(args):
+                diff = Int.decode(args[i + 1])
+                expire_time = (self._db.time * 1000 + diff) / 1000.0
+                i += 2
+            elif casematch(args[i], b'exat') and i + 1 < len(args):
+                expire_time = Int.decode(args[i + 1])
+                i += 2
+            elif casematch(args[i], b'pxat') and i + 1 < len(args):
+                expire_time = Int.decode(args[i + 1]) / 1000.0
+                i += 2
+            elif casematch(args[i], b'persist'):
+                expire_time = None
+                i += 1
+            else:
+                raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        if ((expire_time is not None and (expire_time <= 0 or expire_time * 1000 >= 2 ** 63))
+                or (diff is not None and (diff <= 0 or diff * 1000 >= 2 ** 63))):
+            raise SimpleError(msgs.INVALID_EXPIRE_MSG.format('getex'))
+        if count_options > 1:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+
+        key.expireat = expire_time
+        return key.get(None)
+
+    @command((Key(bytes), Key(bytes),), (bytes,))
+    def lcs(self, k1, k2, *args):
+        s1 = k1.value or b''
+        s2 = k2.value or b''
+
+        (arg_idx, arg_len, arg_minmatchlen, arg_withmatchlen), _ = extract_args(
+            args, ('idx', 'len', '+minmatchlen', 'withmatchlen'))
+        if arg_idx and arg_len:
+            raise SimpleError(msgs.LCS_CANT_HAVE_BOTH_LEN_AND_IDX)
+        lcs_len, lcs_val, matches = _lcs(s1, s2)
+        if not arg_idx and not arg_len:
+            return lcs_val
+        if arg_len:
+            return lcs_len
+        arg_minmatchlen = arg_minmatchlen if arg_minmatchlen else 0
+        results = list(filter(lambda x: x[2] >= arg_minmatchlen, matches))
+        if not arg_withmatchlen:
+            results = list(map(lambda x: [x[0], x[1]], results))
+        return [b'matches', results, b'len', lcs_len]
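`_lcs` above is the textbook O(len(s1) * len(s2)) dynamic program: `opt[i][j]` is the LCS length of the two prefixes, and `pi` stores backtracking directions to recover the subsequence and its match ranges. The length recurrence alone can be sketched with a rolling row (a simplification, not the fakeredis implementation):

```python
def lcs_length(s1: bytes, s2: bytes) -> int:
    """Length of the longest common subsequence (same recurrence as opt[][] above)."""
    prev = [0] * (len(s2) + 1)
    for a in s1:
        curr = [0]
        for j, b in enumerate(s2, start=1):
            # Extend the diagonal on a match, otherwise take the best neighbour.
            curr.append(prev[j - 1] + 1 if a == b else max(curr[j - 1], prev[j]))
        prev = curr
    return prev[len(s2)]
```

With the LCS example from the Redis docs, `lcs_length(b'ohmytext', b'mynewtext')` is 6 (the subsequence `mytext`).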
diff --git a/fakeredis/commands_mixins/transactions_mixin.py b/fakeredis/commands_mixins/transactions_mixin.py
new file mode 100644
index 0000000..19141c7
--- /dev/null
+++ b/fakeredis/commands_mixins/transactions_mixin.py
@@ -0,0 +1,84 @@
+from fakeredis import _msgs as msgs
+from fakeredis._commands import (command, Key)
+from fakeredis._helpers import (OK, SimpleError)
+
+
+class TransactionsCommandsMixin:
+    def __init__(self, *args, **kwargs):
+        super(TransactionsCommandsMixin, self).__init__(*args, **kwargs)
+        self._watches = set()
+        # When in a MULTI, set to a list of function calls
+        self._transaction = None
+        self._transaction_failed = False
+        # Set when executing the commands from EXEC
+        self._in_transaction = False
+        self._watch_notified = False
+
+    def _clear_watches(self):
+        self._watch_notified = False
+        while self._watches:
+            (key, db) = self._watches.pop()
+            db.remove_watch(key, self)
+
+    # Transaction commands
+    @command((), flags=[msgs.FLAG_NO_SCRIPT, msgs.FLAG_TRANSACTION])
+    def discard(self):
+        if self._transaction is None:
+            raise SimpleError(msgs.WITHOUT_MULTI_MSG.format('DISCARD'))
+        self._transaction = None
+        self._transaction_failed = False
+        self._clear_watches()
+        return OK
+
+    @command(name='exec', fixed=(), repeat=(), flags=[msgs.FLAG_NO_SCRIPT, msgs.FLAG_TRANSACTION])
+    def exec_(self):
+        if self._transaction is None:
+            raise SimpleError(msgs.WITHOUT_MULTI_MSG.format('EXEC'))
+        if self._transaction_failed:
+            self._transaction = None
+            self._clear_watches()
+            raise SimpleError(msgs.EXECABORT_MSG)
+        transaction = self._transaction
+        self._transaction = None
+        self._transaction_failed = False
+        watch_notified = self._watch_notified
+        self._clear_watches()
+        if watch_notified:
+            return None
+        result = []
+        for func, sig, args in transaction:
+            try:
+                self._in_transaction = True
+                ans = self._run_command(func, sig, args, False)
+            except SimpleError as exc:
+                ans = exc
+            finally:
+                self._in_transaction = False
+            result.append(ans)
+        return result
+
+    @command((), flags=[msgs.FLAG_NO_SCRIPT, msgs.FLAG_TRANSACTION])
+    def multi(self):
+        if self._transaction is not None:
+            raise SimpleError(msgs.MULTI_NESTED_MSG)
+        self._transaction = []
+        self._transaction_failed = False
+        return OK
+
+    @command((), flags=msgs.FLAG_NO_SCRIPT)
+    def unwatch(self):
+        self._clear_watches()
+        return OK
+
+    @command((Key(),), (Key(),), flags=[msgs.FLAG_NO_SCRIPT, msgs.FLAG_TRANSACTION])
+    def watch(self, *keys):
+        if self._transaction is not None:
+            raise SimpleError(msgs.WATCH_INSIDE_MULTI_MSG)
+        for key in keys:
+            if key not in self._watches:
+                self._watches.add((key.key, self._db))
+                self._db.add_watch(key.key, self)
+        return OK
+
+    def notify_watch(self):
+        self._watch_notified = True
diff --git a/fakeredis/geo/__init__.py b/fakeredis/geo/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/fakeredis/geo/geohash.py b/fakeredis/geo/geohash.py
new file mode 100644
index 0000000..e8f14b3
--- /dev/null
+++ b/fakeredis/geo/geohash.py
@@ -0,0 +1,72 @@
+#  Note: the alphabet in geohash differs from the common base32
+#  alphabet described in IETF's RFC 4648
+#  (http://tools.ietf.org/html/rfc4648)
+from typing import Tuple
+
+base32 = '0123456789bcdefghjkmnpqrstuvwxyz'
+decodemap = {base32[i]: i for i in range(len(base32))}
+
+
+def decode(geohash: str) -> Tuple[float, float, float, float]:
+    """
+    Decode the geohash to its exact values, including the error
+    margins of the result.  Returns four float values: latitude,
+    longitude, the plus/minus error for latitude (as a positive
+    number) and the plus/minus error for longitude (as a positive
+    number).
+    """
+    lat_interval, lon_interval = (-90.0, 90.0), (-180.0, 180.0)
+    lat_err, lon_err = 90.0, 180.0
+    is_longitude = True
+    for c in geohash:
+        cd = decodemap[c]
+        for mask in [16, 8, 4, 2, 1]:
+            if is_longitude:  # adds longitude info
+                lon_err /= 2
+                if cd & mask:
+                    lon_interval = ((lon_interval[0] + lon_interval[1]) / 2, lon_interval[1])
+                else:
+                    lon_interval = (lon_interval[0], (lon_interval[0] + lon_interval[1]) / 2)
+            else:  # adds latitude info
+                lat_err /= 2
+                if cd & mask:
+                    lat_interval = ((lat_interval[0] + lat_interval[1]) / 2, lat_interval[1])
+                else:
+                    lat_interval = (lat_interval[0], (lat_interval[0] + lat_interval[1]) / 2)
+            is_longitude = not is_longitude
+    lat = (lat_interval[0] + lat_interval[1]) / 2
+    lon = (lon_interval[0] + lon_interval[1]) / 2
+    return lat, lon, lat_err, lon_err
+
+
+def encode(latitude: float, longitude: float, precision=12) -> str:
+    """
+    Encode a position given in float arguments latitude, longitude to
+    a geohash which will have the character count precision.
+    """
+    lat_interval, lon_interval = (-90.0, 90.0), (-180.0, 180.0)
+    geohash, bits = [], [16, 8, 4, 2, 1]
+    bit, ch = 0, 0
+    is_longitude = True
+
+    def next_interval(curr: float, interval: Tuple[float, float], ch: int) -> Tuple[Tuple[float, float], int]:
+        mid = (interval[0] + interval[1]) / 2
+        if curr > mid:
+            ch |= bits[bit]
+            return (mid, interval[1]), ch
+        else:
+            return (interval[0], mid), ch
+
+    while len(geohash) < precision:
+        if is_longitude:
+            lon_interval, ch = next_interval(longitude, lon_interval, ch)
+        else:
+            lat_interval, ch = next_interval(latitude, lat_interval, ch)
+        is_longitude = not is_longitude
+        if bit < 4:
+            bit += 1
+        else:
+            geohash += base32[ch]
+            bit = 0
+            ch = 0
+    return ''.join(geohash)
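The encoder above interleaves longitude and latitude bits (longitude first), emitting one base32 character per five bits. A compact, dependency-free restatement of the same algorithm, checked against the classic geohash example where (42.6, -5.6) encodes to `ezs42`:

```python
# Same non-RFC-4648 alphabet as the module above.
base32 = '0123456789bcdefghjkmnpqrstuvwxyz'

def encode(latitude: float, longitude: float, precision: int = 12) -> str:
    lat_iv, lon_iv = (-90.0, 90.0), (-180.0, 180.0)
    bits = [16, 8, 4, 2, 1]
    chars, bit, ch = [], 0, 0
    is_longitude = True
    while len(chars) < precision:
        iv = lon_iv if is_longitude else lat_iv
        curr = longitude if is_longitude else latitude
        mid = (iv[0] + iv[1]) / 2
        if curr > mid:          # upper half: set the bit, narrow upward
            ch |= bits[bit]
            iv = (mid, iv[1])
        else:                   # lower half: leave the bit clear
            iv = (iv[0], mid)
        if is_longitude:
            lon_iv = iv
        else:
            lat_iv = iv
        is_longitude = not is_longitude
        if bit < 4:
            bit += 1
        else:                   # five bits collected: emit one character
            chars.append(base32[ch])
            bit, ch = 0, 0
    return ''.join(chars)
```

Each extra character roughly quintuples the number of bits and so halves the error interval five more times.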
diff --git a/fakeredis/geo/haversine.py b/fakeredis/geo/haversine.py
new file mode 100644
index 0000000..99a7216
--- /dev/null
+++ b/fakeredis/geo/haversine.py
@@ -0,0 +1,34 @@
+import math
+from typing import Tuple
+
+
+# class GeoMember:
+#     def __init__(self, name: bytes, lat: float, long: float):
+#         self.name = name
+#         self.long = long
+#         self.lat = lat
+#
+#     @staticmethod
+#     def from_bytes_tuple(t: Tuple[bytes, bytes, bytes]) -> 'GeoMember':
+#         long = Float.decode(t[0])
+#         lat = Float.decode(t[1])
+#         name = t[2]
+#         return GeoMember(name, lat, long)
+#
+#     def geohash(self):
+#         return geohash.encode(self.lat, self.long)
+
+
+def distance(origin: Tuple[float, float], destination: Tuple[float, float]) -> float:
+    """Calculate the Haversine distance in meters."""
+    radius = 6372797.560856  # Earth's quadratic mean radius for WGS-84
+
+    lat1, lon1, lat2, lon2 = map(
+        math.radians, [origin[0], origin[1], destination[0], destination[1]])
+
+    dlon = lon2 - lon1
+    dlat = lat2 - lat1
+    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
+    c = 2 * math.asin(math.sqrt(a))
+
+    return c * radius
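`distance()` is the standard haversine great-circle formula on a sphere of radius ~6,372,797 m (the quadratic mean radius used for WGS-84). A quick sanity check: two points a quarter of a great circle apart, such as (0, 0) and (0, 90), are exactly `radius * pi / 2` metres apart:

```python
import math

RADIUS = 6372797.560856  # same sphere radius as above

def haversine(origin, destination):
    """Great-circle distance in meters between two (lat, lon) pairs in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, [*origin, *destination])
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * RADIUS * math.asin(math.sqrt(a))
```

On the equator, `a` reduces to `sin(dlon / 2) ** 2`, so 90 degrees of longitude gives `2 * RADIUS * asin(sin(pi/4)) = RADIUS * pi / 2`.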
diff --git a/fakeredis/py.typed b/fakeredis/py.typed
new file mode 100644
index 0000000..e69de29
diff --git a/fakeredis/stack/__init__.py b/fakeredis/stack/__init__.py
new file mode 100644
index 0000000..20ebd73
--- /dev/null
+++ b/fakeredis/stack/__init__.py
@@ -0,0 +1,10 @@
+try:
+    from jsonpath_ng.ext import parse  # noqa: F401
+    from redis.commands.json.path import Path  # noqa: F401
+    from ._json_mixin import JSONCommandsMixin, JSONObject  # noqa: F401
+except ImportError as e:
+    if e.name == 'fakeredis.stack._json_mixin':
+        raise e
+
+    class JSONCommandsMixin:
+        pass
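`stack/__init__.py` uses a common optional-dependency pattern: attempt the import, and on `ImportError` fall back to a no-op stub class so the rest of the package still imports when `jsonpath-ng` is absent. The same pattern in isolation (the module name here is hypothetical):

```python
try:
    import some_optional_module  # hypothetical optional dependency
    HAS_EXTRA = True
except ImportError:
    HAS_EXTRA = False

    class FallbackMixin:
        """No-op stand-in so class hierarchies still compose without the extra."""
        pass
```

Re-raising when `e.name` points at the package's own submodule, as the real code does, distinguishes "optional dependency missing" from a genuine bug inside `_json_mixin`.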
diff --git a/fakeredis/stack/_json_mixin.py b/fakeredis/stack/_json_mixin.py
new file mode 100644
index 0000000..fbc3ecd
--- /dev/null
+++ b/fakeredis/stack/_json_mixin.py
@@ -0,0 +1,427 @@
+"""Command mixin for emulating `redis-py`'s JSON functionality."""
+
+# Future Imports
+from __future__ import annotations
+
+from json import JSONDecodeError
+
+import copy
+# Standard Library Imports
+import json
+from jsonpath_ng import Root, JSONPath
+from jsonpath_ng.exceptions import JsonPathParserError
+from jsonpath_ng.ext import parse
+from redis.commands.json.commands import JsonType
+from typing import Any, Optional, Union
+
+from fakeredis import _helpers as helpers, _msgs as msgs
+from fakeredis._command_args_parsing import extract_args
+from fakeredis._commands import Key, command, delete_keys, CommandItem, Int, Float
+from fakeredis._helpers import SimpleError, casematch
+from fakeredis._zset import ZSet
+
+
+def _format_path(path) -> str:
+    if isinstance(path, bytes):
+        path = path.decode()
+    if path == '.':
+        return '$'
+    elif path.startswith('.'):
+        return '$' + path
+    elif path.startswith('$'):
+        return path
+    else:
+        return '$.' + path
+
+
+def _parse_jsonpath(path: Union[str, bytes]):
+    path = _format_path(path)
+    try:
+        return parse(path)
+    except JsonPathParserError:
+        raise SimpleError(msgs.JSON_PATH_DOES_NOT_EXIST.format(path))
+
+
+def _path_is_root(path: JSONPath) -> bool:
+    return path == Root()
+
+
+class JSONObject:
+    """Argument converter for JSON objects."""
+
+    DECODE_ERROR = msgs.JSON_WRONG_REDIS_TYPE
+    ENCODE_ERROR = msgs.JSON_WRONG_REDIS_TYPE
+
+    @classmethod
+    def decode(cls, value: bytes) -> Any:
+        """Deserialize the supplied bytes into a valid Python object."""
+        try:
+            return json.loads(value)
+        except JSONDecodeError:
+            raise SimpleError(cls.DECODE_ERROR)
+
+    @classmethod
+    def encode(cls, value: Any) -> bytes:
+        """Serialize the supplied Python object into a valid, JSON-formatted
+        byte-encoded string."""
+        return json.dumps(value, default=str).encode() if value is not None else None
+
+
+def _json_write_iterate(method, key, path_str, **kwargs):
+    """Implement json.* write commands.
+    Iterate over values with path_str in key and running method to get new value for path item.
+    """
+    if key.value is None:
+        raise SimpleError(msgs.JSON_KEY_NOT_FOUND)
+    path = _parse_jsonpath(path_str)
+    found_matches = path.find(key.value)
+    if len(found_matches) == 0:
+        raise SimpleError(msgs.JSON_PATH_NOT_FOUND_OR_NOT_STRING.format(path_str))
+
+    curr_value = copy.deepcopy(key.value)
+    res = list()
+    for item in found_matches:
+        new_value, res_val, update = method(item.value)
+        if update:
+            curr_value = item.full_path.update(curr_value, new_value)
+        res.append(res_val)
+
+    key.update(curr_value)
+
+    if len(path_str) > 1 and path_str[0] == ord(b'.'):
+        if kwargs.get('allow_result_none', False):
+            return res[-1]
+        else:
+            return next(x for x in reversed(res) if x is not None)
+    if len(res) == 1 and path_str[0] != ord(b'$'):
+        return res[0]
+    return res
+
+
+def _json_read_iterate(method, key, *args, error_on_zero_matches=False):
+    path_str = args[0] if len(args) > 0 else '$'
+    if key.value is None:
+        if path_str[0] == ord(b'$'):
+            raise SimpleError(msgs.JSON_KEY_NOT_FOUND)
+        else:
+            return None
+
+    path = _parse_jsonpath(path_str)
+    found_matches = path.find(key.value)
+    if error_on_zero_matches and len(found_matches) == 0 and path_str[0] != ord(b'$'):
+        raise SimpleError(msgs.JSON_PATH_NOT_FOUND_OR_NOT_STRING.format(path_str))
+    res = list()
+    for item in found_matches:
+        res.append(method(item.value))
+
+    if path_str[0] == ord(b'.'):
+        return res[0] if len(res) > 0 else None
+    if len(res) == 1 and (len(args) == 0 or (len(args) == 1 and args[0][0] == ord(b'.'))):
+        return res[0]
+
+    return res
+
+
+class JSONCommandsMixin:
+    """`CommandsMixin` for enabling RedisJSON compatibility in `fakeredis`."""
+    NoneType = type(None)
+    TYPES_EMPTY_VAL_DICT = {
+        dict: {},
+        int: 0,
+        float: 0.0,
+        list: [],
+    }
+    TYPE_NAMES = {
+        dict: b'object',
+        int: b'integer',
+        float: b'number',
+        bytes: b'string',
+        list: b'array',
+        set: b'set',
+        str: b'string',
+        bool: b'boolean',
+        NoneType: b'null',
+        ZSet: b'zset',
+    }
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+
+    @staticmethod
+    def _get_single(key, path_str: str, always_return_list: bool = False, empty_list_as_none: bool = False):
+        path = _parse_jsonpath(path_str)
+        path_value = path.find(key.value)
+        val = [i.value for i in path_value]
+        if empty_list_as_none and len(val) == 0:
+            val = None
+        elif len(val) == 1 and not always_return_list:
+            val = val[0]
+        return val
+
+    @command(name=["JSON.DEL", "JSON.FORGET"], fixed=(Key(),), repeat=(bytes,), flags=msgs.FLAG_LEAVE_EMPTY_VAL)
+    def json_del(self, key, path_str) -> int:
+        if key.value is None:
+            return 0
+
+        path = _parse_jsonpath(path_str)
+        if _path_is_root(path):
+            delete_keys(key)
+            return 1
+        curr_value = copy.deepcopy(key.value)
+
+        found_matches = path.find(curr_value)
+        res = 0
+        while len(found_matches) > 0:
+            item = found_matches[0]
+            curr_value = item.full_path.filter(lambda _: True, curr_value)
+            res += 1
+            found_matches = path.find(curr_value)
+
+        key.update(curr_value)
+        return res
+
+    @command(name="JSON.SET", fixed=(Key(), bytes, JSONObject), repeat=(bytes,), flags=msgs.FLAG_LEAVE_EMPTY_VAL)
+    def json_set(self, key, path_str: bytes, value: JsonType, *args) -> Optional[helpers.SimpleString]:
+        """Set the JSON value at key `name` under the `path` to `obj`.
+
+        For more information see `JSON.SET <https://redis.io/commands/json.set>`_.
+        """
+        path = _parse_jsonpath(path_str)
+        if key.value is not None and (type(key.value) is not dict) and not _path_is_root(path):
+            raise SimpleError(msgs.JSON_WRONG_REDIS_TYPE)
+        old_value = path.find(key.value)
+        (nx, xx), _ = extract_args(args, ('nx', 'xx'))
+        if xx and nx:
+            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
+        if (nx and old_value) or (xx and not old_value):
+            return None
+        new_value = path.update_or_create(key.value, value)
+        key.update(new_value)
+
+        return helpers.OK
+
+    @command(name="JSON.GET", fixed=(Key(),), repeat=(bytes,), flags=msgs.FLAG_LEAVE_EMPTY_VAL)
+    def json_get(self, key, *args) -> bytes:
+        paths = [arg for arg in args if not casematch(b'noescape', arg)]
+        no_wrapping_array = (len(paths) == 1 and paths[0][0] == ord(b'.'))
+
+        formatted_paths = [_format_path(path) for path in paths]
+        path_values = [self._get_single(key, path, len(formatted_paths) > 1) for path in formatted_paths]
+
+        # Emulate the behavior of `redis-py`:
+        #   - if only one path was supplied => return a single value
+        #   - if more than one path was specified => return one value for each specified path
+        if (no_wrapping_array or
+                (len(path_values) == 1 and isinstance(path_values[0], list))):
+            return JSONObject.encode(path_values[0])
+        if len(path_values) == 1:
+            return JSONObject.encode(path_values)
+        return JSONObject.encode(dict(zip(formatted_paths, path_values)))
+
+    @command(name="JSON.MGET", fixed=(bytes,), repeat=(bytes,), flags=msgs.FLAG_LEAVE_EMPTY_VAL)
+    def json_mget(self, *args):
+        if len(args) < 2:
+            raise SimpleError(msgs.WRONG_ARGS_MSG6.format('json.mget'))
+        path_str = args[-1]
+        keys = [CommandItem(key, self._db, item=self._db.get(key), default=[])
+                for key in args[:-1]]
+
+        result = [JSONObject.encode(self._get_single(key, path_str, empty_list_as_none=True)) for key in keys]
+        return result
+
+    @command(name="JSON.TOGGLE", fixed=(Key(),), repeat=(bytes,), flags=msgs.FLAG_LEAVE_EMPTY_VAL)
+    def json_toggle(self, key, *args):
+        if key.value is None:
+            raise SimpleError(msgs.JSON_KEY_NOT_FOUND)
+        path_str = args[0] if len(args) > 0 else '$'
+        path = _parse_jsonpath(path_str)
+        found_matches = path.find(key.value)
+
+        curr_value = copy.deepcopy(key.value)
+        res = list()
+        for item in found_matches:
+            if type(item.value) == bool:
+                curr_value = item.full_path.update(curr_value, not item.value)
+                res.append(not item.value)
+            else:
+                res.append(None)
+        if all([x is None for x in res]):
+            raise SimpleError(msgs.JSON_KEY_NOT_FOUND)
+        key.update(curr_value)
+
+        if len(res) == 1 and (len(args) == 0 or (len(args) == 1 and args[0] == b'.')):
+            return res[0]
+
+        return res
+
+    @command(name="JSON.CLEAR", fixed=(Key(),), repeat=(bytes,), flags=msgs.FLAG_LEAVE_EMPTY_VAL)
+    def json_clear(self, key, *args):
+        if key.value is None:
+            raise SimpleError(msgs.JSON_KEY_NOT_FOUND)
+        path_str = args[0] if len(args) > 0 else '$'
+        path = _parse_jsonpath(path_str)
+        found_matches = path.find(key.value)
+        curr_value = copy.deepcopy(key.value)
+        res = 0
+        for item in found_matches:
+            new_val = self.TYPES_EMPTY_VAL_DICT.get(type(item.value), None)
+            if new_val is not None:
+                curr_value = item.full_path.update(curr_value, new_val)
+                res += 1
+
+        key.update(curr_value)
+        return res
+
+    @command(name="JSON.STRAPPEND", fixed=(Key(), bytes), repeat=(bytes,), flags=msgs.FLAG_LEAVE_EMPTY_VAL)
+    def json_strappend(self, key, path_str, *args):
+        if len(args) == 0:
+            raise SimpleError(msgs.WRONG_ARGS_MSG6.format('json.strappend'))
+        addition = JSONObject.decode(args[0])
+
+        def strappend(val):
+            if type(val) == str:
+                new_value = val + addition
+                return new_value, len(new_value), True
+            else:
+                return None, None, False
+
+        return _json_write_iterate(strappend, key, path_str)
+
+    @command(name="JSON.ARRAPPEND", fixed=(Key(), bytes,), repeat=(bytes,), flags=msgs.FLAG_LEAVE_EMPTY_VAL)
+    def json_arrappend(self, key, path_str, *args):
+        if len(args) == 0:
+            raise SimpleError(msgs.WRONG_ARGS_MSG6.format('json.arrappend'))
+
+        addition = [JSONObject.decode(item) for item in args]
+
+        def arrappend(val):
+            if type(val) == list:
+                new_value = val + addition
+                return new_value, len(new_value), True
+            else:
+                return None, None, False
+
+        return _json_write_iterate(arrappend, key, path_str)
+
+    @command(name="JSON.ARRINSERT", fixed=(Key(), bytes, Int), repeat=(bytes,), flags=msgs.FLAG_LEAVE_EMPTY_VAL)
+    def json_arrinsert(self, key, path_str, index, *args):
+        if len(args) == 0:
+            raise SimpleError(msgs.WRONG_ARGS_MSG6.format('json.arrinsert'))
+
+        addition = [JSONObject.decode(item) for item in args]
+
+        def arrinsert(val):
+            if type(val) == list:
+                new_value = val[:index] + addition + val[index:]
+                return new_value, len(new_value), True
+            else:
+                return None, None, False
+
+        return _json_write_iterate(arrinsert, key, path_str)
+
+    @command(name="JSON.ARRPOP", fixed=(Key(),), repeat=(bytes,), flags=msgs.FLAG_LEAVE_EMPTY_VAL)
+    def json_arrpop(self, key, *args):
+        path_str = args[0] if len(args) > 0 else '$'
+        index = Int.decode(args[1]) if len(args) > 1 else -1
+
+        def arrpop(val):
+            if type(val) == list and len(val) > 0:
+                ind = index if index < len(val) else -1
+                res = val.pop(ind)
+                return val, JSONObject.encode(res), True
+            else:
+                return None, None, False
+
+        return _json_write_iterate(arrpop, key, path_str, allow_result_none=True)
+
+    @command(name="JSON.ARRTRIM", fixed=(Key(),), repeat=(bytes,), flags=msgs.FLAG_LEAVE_EMPTY_VAL)
+    def json_arrtrim(self, key, *args):
+        path_str = args[0] if len(args) > 0 else '$'
+        start = Int.decode(args[1]) if len(args) > 1 else 0
+        stop = Int.decode(args[2]) if len(args) > 2 else None
+
+        def arrtrim(val):
+            if type(val) == list:
+                start_ind = min(start, len(val))
+                stop_ind = len(val) if stop is None or stop == -1 else stop + 1
+                if stop_ind < 0:
+                    stop_ind = len(val) + stop_ind + 1
+                new_val = val[start_ind:stop_ind]
+                return new_val, len(new_val), True
+            else:
+                return None, None, False
+
+        return _json_write_iterate(arrtrim, key, path_str)
+
+    @command(name="JSON.NUMINCRBY", fixed=(Key(), bytes, Float), repeat=(bytes,), flags=msgs.FLAG_LEAVE_EMPTY_VAL)
+    def json_numincrby(self, key, path_str, inc_by, *args):
+
+        def numincrby(val):
+            if type(val) in {int, float}:
+                new_value = val + inc_by
+                return new_value, new_value, True
+            else:
+                return None, None, False
+
+        return _json_write_iterate(numincrby, key, path_str)
+
+    @command(name="JSON.NUMMULTBY", fixed=(Key(), bytes, Float), repeat=(bytes,), flags=msgs.FLAG_LEAVE_EMPTY_VAL)
+    def json_nummultby(self, key, path_str, mult_by, *args):
+
+        def nummultby(val):
+            if type(val) in {int, float}:
+                new_value = val * mult_by
+                return new_value, new_value, True
+            else:
+                return None, None, False
+
+        return _json_write_iterate(nummultby, key, path_str)
+
+    # Read operations
+    @command(name="JSON.ARRINDEX", fixed=(Key(), bytes, bytes), repeat=(bytes,), flags=msgs.FLAG_LEAVE_EMPTY_VAL)
+    def json_arrindex(self, key, path_str, encoded_value, *args):
+        start = max(0, Int.decode(args[0]) if len(args) > 0 else 0)
+        end = Int.decode(args[1]) if len(args) > 1 else -1
+        end = end if end > 0 else -1
+        expected_value = JSONObject.decode(encoded_value)
+
+        def check_index(value):
+            if type(value) != list:
+                return None
+            try:
+                ind = next(filter(
+                    lambda x: x[1] == expected_value and type(x[1]) == type(expected_value),
+                    enumerate(value[start:end])))
+                return ind[0] + start
+            except StopIteration:
+                return -1
+
+        return _json_read_iterate(check_index, key, path_str, *args, error_on_zero_matches=True)
+
+    @command(name="JSON.STRLEN", fixed=(Key(),), repeat=(bytes,))
+    def json_strlen(self, key, *args):
+        return _json_read_iterate(
+            lambda val: len(val) if type(val) == str else None, key, *args)
+
+    @command(name="JSON.ARRLEN", fixed=(Key(),), repeat=(bytes,))
+    def json_arrlen(self, key, *args):
+        return _json_read_iterate(
+            lambda val: len(val) if type(val) == list else None, key, *args)
+
+    @command(name="JSON.OBJLEN", fixed=(Key(),), repeat=(bytes,))
+    def json_objlen(self, key, *args):
+        return _json_read_iterate(
+            lambda val: len(val) if type(val) == dict else None, key, *args)
+
+    @command(name="JSON.TYPE", fixed=(Key(),), repeat=(bytes,), flags=msgs.FLAG_LEAVE_EMPTY_VAL)
+    def json_type(self, key, *args):
+        return _json_read_iterate(
+            lambda val: self.TYPE_NAMES.get(type(val), None), key, *args)
+
+    @command(name="JSON.OBJKEYS", fixed=(Key(),), repeat=(bytes,))
+    def json_objkeys(self, key, *args):
+        return _json_read_iterate(
+            lambda val: [i.encode() for i in val.keys()] if type(val) == dict else None, key, *args)
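The `_format_path` helper added above normalizes RedisJSON's legacy `.`-style paths into JSONPath `$` syntax before parsing. A minimal standalone sketch of that normalization (the name `format_path` is illustrative; it mirrors the logic in the diff):

```python
from typing import Union


def format_path(path: Union[str, bytes]) -> str:
    """Normalize a RedisJSON legacy path ('.', '.a.b', 'a.b') to JSONPath ('$', '$.a.b')."""
    if isinstance(path, bytes):
        path = path.decode()
    if path == '.':
        return '$'          # bare root
    elif path.startswith('.'):
        return '$' + path   # legacy '.a.b' -> '$.a.b'
    elif path.startswith('$'):
        return path         # already JSONPath
    else:
        return '$.' + path  # bare 'a.b' -> '$.a.b'


print(format_path(b'.'))    # $
print(format_path('.a.b'))  # $.a.b
print(format_path('$.x'))   # $.x
print(format_path('name'))  # $.name
```

In the diff, `_parse_jsonpath` then hands the normalized string to the jsonpath parser and maps any `JsonPathParserError` to a Redis-style `SimpleError`.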
diff --git a/mkdocs.yml b/mkdocs.yml
new file mode 100644
index 0000000..f418176
--- /dev/null
+++ b/mkdocs.yml
@@ -0,0 +1,116 @@
+---
+site_name: fakeredis
+site_author: Daniel Moran
+site_description: >-
+  Documentation for fakeredis python library
+# Repository
+repo_name: cunla/fakeredis
+repo_url: https://github.com/cunla/fakeredis
+
+# Copyright
+copyright: Copyright &copy; 2022 - 2023 Daniel Moran
+
+extra:
+  generator: false
+  analytics:
+    provider: google
+    property: G-GJBJBKXT19
+markdown_extensions:
+  - abbr
+  - admonition
+  - attr_list
+  - def_list
+  - footnotes
+  - md_in_html
+  - pymdownx.arithmatex:
+      generic: true
+  - pymdownx.betterem:
+      smart_enable: all
+  - pymdownx.caret
+  - pymdownx.details
+  - pymdownx.emoji:
+      emoji_generator: !!python/name:materialx.emoji.to_svg
+      emoji_index: !!python/name:materialx.emoji.twemoji
+  - pymdownx.highlight:
+      anchor_linenums: true
+  - pymdownx.inlinehilite
+  - pymdownx.keys
+  - pymdownx.magiclink:
+      repo_url_shorthand: true
+      user: cunla
+      repo: fakeredis
+  - pymdownx.mark
+  - pymdownx.smartsymbols
+  - pymdownx.superfences:
+      custom_fences:
+        - name: mermaid
+          class: mermaid
+          format: !!python/name:pymdownx.superfences.fence_code_format
+  - pymdownx.tabbed:
+      alternate_style: true
+  - pymdownx.tasklist:
+      custom_checkbox: true
+  - pymdownx.tilde
+  - toc:
+      permalink: true
+      toc_depth: 2
+
+nav:
+  - Home: index.md
+  - Redis stack: redis-stack.md
+  - Redis commands:
+      - Redis commands: redis-commands/Redis.md
+      - RedisJSON commands: redis-commands/RedisJson.md
+      - Search commands: redis-commands/RedisSearch.md
+      - Graph commands: redis-commands/RedisGraph.md
+      - Time Series commands: redis-commands/RedisTimeSeries.md
+      - Probabilistic commands: redis-commands/RedisBloom.md
+  - Guides:
+      - Implementing support for a command: guides/implement-command.md
+      - Write a new test case: guides/test-case.md
+  - About:
+      - Release Notes: about/changelog.md
+      - Contributing: about/contributing.md
+      - License: about/license.md
+
+theme:
+  name: material
+  palette:
+    - scheme: default
+      primary: indigo
+      accent: indigo
+      toggle:
+        icon: material/brightness-7
+        name: Switch to dark mode
+    - scheme: slate
+      primary: indigo
+      accent: indigo
+      toggle:
+        icon: material/brightness-4
+        name: Switch to light mode
+  features:
+    # - announce.dismiss
+    - content.action.edit
+    - content.action.view
+    - content.code.annotate
+    - content.code.copy
+    # - content.tabs.link
+    - content.tooltips
+    # - header.autohide
+    # - navigation.expand
+    - navigation.footer
+    - navigation.indexes
+    # - navigation.instant
+    # - navigation.prune
+    - navigation.sections
+    # - navigation.tabs.sticky
+    - navigation.tracking
+    - search.highlight
+    - search.share
+    - search.suggest
+    - toc.follow
+    # - toc.integrate
+  highlightjs: true
+  hljs_languages:
+    - yaml
+    - django
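The new mkdocs.yml above defines the documentation site. A sketch of a local preview workflow, assuming the usual PyPI package names (`mkdocs-material` supplies the `material` theme, `pymdown-extensions` supplies the `pymdownx.*` extensions referenced in the config):

```shell
pip install mkdocs mkdocs-material pymdown-extensions
mkdocs serve   # live preview at http://127.0.0.1:8000
mkdocs build   # render the static site into ./site/
```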
diff --git a/poetry.lock b/poetry.lock
index d78f762..809ec2e 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -1,17 +1,4 @@
-[[package]]
-name = "aioredis"
-version = "2.0.1"
-description = "asyncio (PEP 3156) Redis support"
-category = "main"
-optional = true
-python-versions = ">=3.6"
-
-[package.dependencies]
-async-timeout = "*"
-typing-extensions = "*"
-
-[package.extras]
-hiredis = ["hiredis (>=1.0)"]
+# This file is automatically @generated by Poetry 1.4.2 and should not be changed by hand.
 
 [[package]]
 name = "async-timeout"
@@ -20,39 +7,47 @@ description = "Timeout context manager for asyncio programs"
 category = "main"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "async-timeout-4.0.2.tar.gz", hash = "sha256:2163e1640ddb52b7a8c80d0a67a08587e5d245cc9c553a74a847056bc2976b15"},
+    {file = "async_timeout-4.0.2-py3-none-any.whl", hash = "sha256:8ca1e4fcf50d07413d66d1a5e416e42cfdf5851c981d679a09851a6853383b3c"},
+]
 
 [package.dependencies]
 typing-extensions = {version = ">=3.6.5", markers = "python_version < \"3.8\""}
 
-[[package]]
-name = "atomicwrites"
-version = "1.4.1"
-description = "Atomic file writes."
-category = "dev"
-optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
-
 [[package]]
 name = "attrs"
-version = "22.1.0"
+version = "23.1.0"
 description = "Classes Without Boilerplate"
 category = "dev"
 optional = false
-python-versions = ">=3.5"
+python-versions = ">=3.7"
+files = [
+    {file = "attrs-23.1.0-py3-none-any.whl", hash = "sha256:1f28b4522cdc2fb4256ac1a020c78acf9cba2c6b461ccd2c126f3aa8e8335d04"},
+    {file = "attrs-23.1.0.tar.gz", hash = "sha256:6279836d581513a26f1bf235f9acd333bc9115683f14f7e8fae46c98fc50e015"},
+]
+
+[package.dependencies]
+importlib-metadata = {version = "*", markers = "python_version < \"3.8\""}
 
 [package.extras]
-dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit", "cloudpickle"]
-docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"]
-tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "cloudpickle"]
-tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "cloudpickle"]
+cov = ["attrs[tests]", "coverage[toml] (>=5.3)"]
+dev = ["attrs[docs,tests]", "pre-commit"]
+docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier", "zope-interface"]
+tests = ["attrs[tests-no-zope]", "zope-interface"]
+tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
 
 [[package]]
 name = "bleach"
-version = "5.0.1"
+version = "6.0.0"
 description = "An easy safelist-based HTML-sanitizing tool."
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "bleach-6.0.0-py3-none-any.whl", hash = "sha256:33c16e3353dbd13028ab4799a0f89a83f113405c766e9c122df8a06f5b85b3f4"},
+    {file = "bleach-6.0.0.tar.gz", hash = "sha256:1a1a85c1595e07d8db14c5f09f09e6433502c51c595970edc090551f0db99414"},
+]
 
 [package.dependencies]
 six = ">=1.9.0"
@@ -60,15 +55,30 @@ webencodings = "*"
 
 [package.extras]
 css = ["tinycss2 (>=1.1.0,<1.2)"]
-dev = ["build (==0.8.0)", "flake8 (==4.0.1)", "hashin (==0.17.0)", "pip-tools (==6.6.2)", "pytest (==7.1.2)", "Sphinx (==4.3.2)", "tox (==3.25.0)", "twine (==4.0.1)", "wheel (==0.37.1)", "black (==22.3.0)", "mypy (==0.961)"]
+
+[[package]]
+name = "cachetools"
+version = "5.3.0"
+description = "Extensible memoizing collections and decorators"
+category = "dev"
+optional = false
+python-versions = "~=3.7"
+files = [
+    {file = "cachetools-5.3.0-py3-none-any.whl", hash = "sha256:429e1a1e845c008ea6c85aa35d4b98b65d6a9763eeef3e37e92728a12d1de9d4"},
+    {file = "cachetools-5.3.0.tar.gz", hash = "sha256:13dfddc7b8df938c21a940dfa6557ce6e94a2f1cdfa58eb90c805721d58f2c14"},
+]
 
 [[package]]
 name = "certifi"
-version = "2022.6.15"
+version = "2023.5.7"
 description = "Python package for providing Mozilla's CA Bundle."
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "certifi-2023.5.7-py3-none-any.whl", hash = "sha256:c6c2e98f5c7869efca1f8916fed228dd91539f9f1b444c314c06eef02980c716"},
+    {file = "certifi-2023.5.7.tar.gz", hash = "sha256:0f0d56dc5a6ad56fd4ba36484d6cc34451e1c6548c61daad8c320169f91eddc7"},
+]
 
 [[package]]
 name = "cffi"
@@ -77,47 +87,245 @@ description = "Foreign Function Interface for Python calling C code."
 category = "dev"
 optional = false
 python-versions = "*"
+files = [
+    {file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"},
+    {file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"},
+    {file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"},
+    {file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"},
+    {file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"},
+    {file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"},
+    {file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"},
+    {file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"},
+    {file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"},
+    {file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"},
+    {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"},
+    {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"},
+    {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"},
+    {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"},
+    {file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"},
+    {file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"},
+    {file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"},
+    {file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"},
+    {file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"},
+    {file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"},
+    {file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"},
+    {file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"},
+    {file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"},
+    {file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"},
+    {file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"},
+    {file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"},
+    {file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"},
+    {file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"},
+    {file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"},
+    {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"},
+    {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"},
+    {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"},
+    {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"},
+    {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"},
+    {file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"},
+    {file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"},
+    {file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"},
+    {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"},
+    {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"},
+    {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"},
+    {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"},
+    {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"},
+    {file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"},
+    {file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"},
+    {file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"},
+    {file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"},
+    {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"},
+    {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"},
+    {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"},
+    {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"},
+    {file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"},
+    {file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"},
+    {file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"},
+    {file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"},
+    {file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"},
+    {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"},
+    {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"},
+    {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"},
+    {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"},
+    {file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"},
+    {file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"},
+    {file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"},
+    {file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"},
+    {file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"},
+]
 
 [package.dependencies]
 pycparser = "*"
 
 [[package]]
-name = "charset-normalizer"
-version = "2.1.0"
-description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
+name = "chardet"
+version = "5.1.0"
+description = "Universal encoding detector for Python 3"
 category = "dev"
 optional = false
-python-versions = ">=3.6.0"
-
-[package.extras]
-unicode_backport = ["unicodedata2"]
+python-versions = ">=3.7"
+files = [
+    {file = "chardet-5.1.0-py3-none-any.whl", hash = "sha256:362777fb014af596ad31334fde1e8c327dfdb076e1960d1694662d46a6917ab9"},
+    {file = "chardet-5.1.0.tar.gz", hash = "sha256:0d62712b956bc154f85fb0a266e2a3c5913c2967e00348701b32411d6def31e5"},
+]
 
 [[package]]
-name = "colorama"
-version = "0.4.5"
-description = "Cross-platform colored terminal text."
+name = "charset-normalizer"
+version = "3.1.0"
+description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
 category = "dev"
 optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+python-versions = ">=3.7.0"
+files = [
+    {file = "charset-normalizer-3.1.0.tar.gz", hash = "sha256:34e0a2f9c370eb95597aae63bf85eb5e96826d81e3dcf88b8886012906f509b5"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e0ac8959c929593fee38da1c2b64ee9778733cdf03c482c9ff1d508b6b593b2b"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d7fc3fca01da18fbabe4625d64bb612b533533ed10045a2ac3dd194bfa656b60"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:04eefcee095f58eaabe6dc3cc2262f3bcd776d2c67005880894f447b3f2cb9c1"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:20064ead0717cf9a73a6d1e779b23d149b53daf971169289ed2ed43a71e8d3b0"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1435ae15108b1cb6fffbcea2af3d468683b7afed0169ad718451f8db5d1aff6f"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c84132a54c750fda57729d1e2599bb598f5fa0344085dbde5003ba429a4798c0"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:75f2568b4189dda1c567339b48cba4ac7384accb9c2a7ed655cd86b04055c795"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:11d3bcb7be35e7b1bba2c23beedac81ee893ac9871d0ba79effc7fc01167db6c"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:891cf9b48776b5c61c700b55a598621fdb7b1e301a550365571e9624f270c203"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:5f008525e02908b20e04707a4f704cd286d94718f48bb33edddc7d7b584dddc1"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:b06f0d3bf045158d2fb8837c5785fe9ff9b8c93358be64461a1089f5da983137"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:49919f8400b5e49e961f320c735388ee686a62327e773fa5b3ce6721f7e785ce"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:22908891a380d50738e1f978667536f6c6b526a2064156203d418f4856d6e86a"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-win32.whl", hash = "sha256:12d1a39aa6b8c6f6248bb54550efcc1c38ce0d8096a146638fd4738e42284448"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:65ed923f84a6844de5fd29726b888e58c62820e0769b76565480e1fdc3d062f8"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9a3267620866c9d17b959a84dd0bd2d45719b817245e49371ead79ed4f710d19"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6734e606355834f13445b6adc38b53c0fd45f1a56a9ba06c2058f86893ae8017"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f8303414c7b03f794347ad062c0516cee0e15f7a612abd0ce1e25caf6ceb47df"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aaf53a6cebad0eae578f062c7d462155eada9c172bd8c4d250b8c1d8eb7f916a"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3dc5b6a8ecfdc5748a7e429782598e4f17ef378e3e272eeb1340ea57c9109f41"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e1b25e3ad6c909f398df8921780d6a3d120d8c09466720226fc621605b6f92b1"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0ca564606d2caafb0abe6d1b5311c2649e8071eb241b2d64e75a0d0065107e62"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b82fab78e0b1329e183a65260581de4375f619167478dddab510c6c6fb04d9b6"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:bd7163182133c0c7701b25e604cf1611c0d87712e56e88e7ee5d72deab3e76b5"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:11d117e6c63e8f495412d37e7dc2e2fff09c34b2d09dbe2bee3c6229577818be"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:cf6511efa4801b9b38dc5546d7547d5b5c6ef4b081c60b23e4d941d0eba9cbeb"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:abc1185d79f47c0a7aaf7e2412a0eb2c03b724581139193d2d82b3ad8cbb00ac"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cb7b2ab0188829593b9de646545175547a70d9a6e2b63bf2cd87a0a391599324"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-win32.whl", hash = "sha256:c36bcbc0d5174a80d6cccf43a0ecaca44e81d25be4b7f90f0ed7bcfbb5a00909"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:cca4def576f47a09a943666b8f829606bcb17e2bc2d5911a46c8f8da45f56755"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0c95f12b74681e9ae127728f7e5409cbbef9cd914d5896ef238cc779b8152373"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fca62a8301b605b954ad2e9c3666f9d97f63872aa4efcae5492baca2056b74ab"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ac0aa6cd53ab9a31d397f8303f92c42f534693528fafbdb997c82bae6e477ad9"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c3af8e0f07399d3176b179f2e2634c3ce9c1301379a6b8c9c9aeecd481da494f"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a5fc78f9e3f501a1614a98f7c54d3969f3ad9bba8ba3d9b438c3bc5d047dd28"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:628c985afb2c7d27a4800bfb609e03985aaecb42f955049957814e0491d4006d"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:74db0052d985cf37fa111828d0dd230776ac99c740e1a758ad99094be4f1803d"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:1e8fcdd8f672a1c4fc8d0bd3a2b576b152d2a349782d1eb0f6b8e52e9954731d"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:04afa6387e2b282cf78ff3dbce20f0cc071c12dc8f685bd40960cc68644cfea6"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:dd5653e67b149503c68c4018bf07e42eeed6b4e956b24c00ccdf93ac79cdff84"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:d2686f91611f9e17f4548dbf050e75b079bbc2a82be565832bc8ea9047b61c8c"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-win32.whl", hash = "sha256:4155b51ae05ed47199dc5b2a4e62abccb274cee6b01da5b895099b61b1982974"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-win_amd64.whl", hash = "sha256:322102cdf1ab682ecc7d9b1c5eed4ec59657a65e1c146a0da342b78f4112db23"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e633940f28c1e913615fd624fcdd72fdba807bf53ea6925d6a588e84e1151531"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:3a06f32c9634a8705f4ca9946d667609f52cf130d5548881401f1eb2c39b1e2c"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:7381c66e0561c5757ffe616af869b916c8b4e42b367ab29fedc98481d1e74e14"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3573d376454d956553c356df45bb824262c397c6e26ce43e8203c4c540ee0acb"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e89df2958e5159b811af9ff0f92614dabf4ff617c03a4c1c6ff53bf1c399e0e1"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:78cacd03e79d009d95635e7d6ff12c21eb89b894c354bd2b2ed0b4763373693b"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:de5695a6f1d8340b12a5d6d4484290ee74d61e467c39ff03b39e30df62cf83a0"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1c60b9c202d00052183c9be85e5eaf18a4ada0a47d188a83c8f5c5b23252f649"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:f645caaf0008bacf349875a974220f1f1da349c5dbe7c4ec93048cdc785a3326"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:ea9f9c6034ea2d93d9147818f17c2a0860d41b71c38b9ce4d55f21b6f9165a11"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:80d1543d58bd3d6c271b66abf454d437a438dff01c3e62fdbcd68f2a11310d4b"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:73dc03a6a7e30b7edc5b01b601e53e7fc924b04e1835e8e407c12c037e81adbd"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6f5c2e7bc8a4bf7c426599765b1bd33217ec84023033672c1e9a8b35eaeaaaf8"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-win32.whl", hash = "sha256:12a2b561af122e3d94cdb97fe6fb2bb2b82cef0cdca131646fdb940a1eda04f0"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:3160a0fd9754aab7d47f95a6b63ab355388d890163eb03b2d2b87ab0a30cfa59"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:38e812a197bf8e71a59fe55b757a84c1f946d0ac114acafaafaf21667a7e169e"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6baf0baf0d5d265fa7944feb9f7451cc316bfe30e8df1a61b1bb08577c554f31"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:8f25e17ab3039b05f762b0a55ae0b3632b2e073d9c8fc88e89aca31a6198e88f"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3747443b6a904001473370d7810aa19c3a180ccd52a7157aacc264a5ac79265e"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b116502087ce8a6b7a5f1814568ccbd0e9f6cfd99948aa59b0e241dc57cf739f"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d16fd5252f883eb074ca55cb622bc0bee49b979ae4e8639fff6ca3ff44f9f854"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21fa558996782fc226b529fdd2ed7866c2c6ec91cee82735c98a197fae39f706"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6f6c7a8a57e9405cad7485f4c9d3172ae486cfef1344b5ddd8e5239582d7355e"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ac3775e3311661d4adace3697a52ac0bab17edd166087d493b52d4f4f553f9f0"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:10c93628d7497c81686e8e5e557aafa78f230cd9e77dd0c40032ef90c18f2230"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:6f4f4668e1831850ebcc2fd0b1cd11721947b6dc7c00bf1c6bd3c929ae14f2c7"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:0be65ccf618c1e7ac9b849c315cc2e8a8751d9cfdaa43027d4f6624bd587ab7e"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:53d0a3fa5f8af98a1e261de6a3943ca631c526635eb5817a87a59d9a57ebf48f"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-win32.whl", hash = "sha256:a04f86f41a8916fe45ac5024ec477f41f886b3c435da2d4e3d2709b22ab02af1"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:830d2948a5ec37c386d3170c483063798d7879037492540f10a475e3fd6f244b"},
+    {file = "charset_normalizer-3.1.0-py3-none-any.whl", hash = "sha256:3d9098b479e78c85080c98e1e35ff40b4a31d8953102bb0fd7d1b6f8a2111a3d"},
+]
 
 [[package]]
-name = "commonmark"
-version = "0.9.1"
-description = "Python parser for the CommonMark Markdown spec"
+name = "colorama"
+version = "0.4.6"
+description = "Cross-platform colored terminal text."
 category = "dev"
 optional = false
-python-versions = "*"
-
-[package.extras]
-test = ["hypothesis (==3.55.3)", "flake8 (==3.7.8)"]
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
+files = [
+    {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
+    {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
+]
 
 [[package]]
 name = "coverage"
-version = "6.4.2"
+version = "7.2.5"
 description = "Code coverage measurement for Python"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "coverage-7.2.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:883123d0bbe1c136f76b56276074b0c79b5817dd4238097ffa64ac67257f4b6c"},
+    {file = "coverage-7.2.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d2fbc2a127e857d2f8898aaabcc34c37771bf78a4d5e17d3e1f5c30cd0cbc62a"},
+    {file = "coverage-7.2.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5f3671662dc4b422b15776cdca89c041a6349b4864a43aa2350b6b0b03bbcc7f"},
+    {file = "coverage-7.2.5-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:780551e47d62095e088f251f5db428473c26db7829884323e56d9c0c3118791a"},
+    {file = "coverage-7.2.5-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:066b44897c493e0dcbc9e6a6d9f8bbb6607ef82367cf6810d387c09f0cd4fe9a"},
+    {file = "coverage-7.2.5-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b9a4ee55174b04f6af539218f9f8083140f61a46eabcaa4234f3c2a452c4ed11"},
+    {file = "coverage-7.2.5-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:706ec567267c96717ab9363904d846ec009a48d5f832140b6ad08aad3791b1f5"},
+    {file = "coverage-7.2.5-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:ae453f655640157d76209f42c62c64c4d4f2c7f97256d3567e3b439bd5c9b06c"},
+    {file = "coverage-7.2.5-cp310-cp310-win32.whl", hash = "sha256:f81c9b4bd8aa747d417407a7f6f0b1469a43b36a85748145e144ac4e8d303cb5"},
+    {file = "coverage-7.2.5-cp310-cp310-win_amd64.whl", hash = "sha256:dc945064a8783b86fcce9a0a705abd7db2117d95e340df8a4333f00be5efb64c"},
+    {file = "coverage-7.2.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:40cc0f91c6cde033da493227797be2826cbf8f388eaa36a0271a97a332bfd7ce"},
+    {file = "coverage-7.2.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a66e055254a26c82aead7ff420d9fa8dc2da10c82679ea850d8feebf11074d88"},
+    {file = "coverage-7.2.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c10fbc8a64aa0f3ed136b0b086b6b577bc64d67d5581acd7cc129af52654384e"},
+    {file = "coverage-7.2.5-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9a22cbb5ede6fade0482111fa7f01115ff04039795d7092ed0db43522431b4f2"},
+    {file = "coverage-7.2.5-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:292300f76440651529b8ceec283a9370532f4ecba9ad67d120617021bb5ef139"},
+    {file = "coverage-7.2.5-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:7ff8f3fb38233035028dbc93715551d81eadc110199e14bbbfa01c5c4a43f8d8"},
+    {file = "coverage-7.2.5-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:a08c7401d0b24e8c2982f4e307124b671c6736d40d1c39e09d7a8687bddf83ed"},
+    {file = "coverage-7.2.5-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:ef9659d1cda9ce9ac9585c045aaa1e59223b143f2407db0eaee0b61a4f266fb6"},
+    {file = "coverage-7.2.5-cp311-cp311-win32.whl", hash = "sha256:30dcaf05adfa69c2a7b9f7dfd9f60bc8e36b282d7ed25c308ef9e114de7fc23b"},
+    {file = "coverage-7.2.5-cp311-cp311-win_amd64.whl", hash = "sha256:97072cc90f1009386c8a5b7de9d4fc1a9f91ba5ef2146c55c1f005e7b5c5e068"},
+    {file = "coverage-7.2.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:bebea5f5ed41f618797ce3ffb4606c64a5de92e9c3f26d26c2e0aae292f015c1"},
+    {file = "coverage-7.2.5-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:828189fcdda99aae0d6bf718ea766b2e715eabc1868670a0a07bf8404bf58c33"},
+    {file = "coverage-7.2.5-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6e8a95f243d01ba572341c52f89f3acb98a3b6d1d5d830efba86033dd3687ade"},
+    {file = "coverage-7.2.5-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e8834e5f17d89e05697c3c043d3e58a8b19682bf365048837383abfe39adaed5"},
+    {file = "coverage-7.2.5-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d1f25ee9de21a39b3a8516f2c5feb8de248f17da7eead089c2e04aa097936b47"},
+    {file = "coverage-7.2.5-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:1637253b11a18f453e34013c665d8bf15904c9e3c44fbda34c643fbdc9d452cd"},
+    {file = "coverage-7.2.5-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8e575a59315a91ccd00c7757127f6b2488c2f914096077c745c2f1ba5b8c0969"},
+    {file = "coverage-7.2.5-cp37-cp37m-win32.whl", hash = "sha256:509ecd8334c380000d259dc66feb191dd0a93b21f2453faa75f7f9cdcefc0718"},
+    {file = "coverage-7.2.5-cp37-cp37m-win_amd64.whl", hash = "sha256:12580845917b1e59f8a1c2ffa6af6d0908cb39220f3019e36c110c943dc875b0"},
+    {file = "coverage-7.2.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b5016e331b75310610c2cf955d9f58a9749943ed5f7b8cfc0bb89c6134ab0a84"},
+    {file = "coverage-7.2.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:373ea34dca98f2fdb3e5cb33d83b6d801007a8074f992b80311fc589d3e6b790"},
+    {file = "coverage-7.2.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a063aad9f7b4c9f9da7b2550eae0a582ffc7623dca1c925e50c3fbde7a579771"},
+    {file = "coverage-7.2.5-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:38c0a497a000d50491055805313ed83ddba069353d102ece8aef5d11b5faf045"},
+    {file = "coverage-7.2.5-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a2b3b05e22a77bb0ae1a3125126a4e08535961c946b62f30985535ed40e26614"},
+    {file = "coverage-7.2.5-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:0342a28617e63ad15d96dca0f7ae9479a37b7d8a295f749c14f3436ea59fdcb3"},
+    {file = "coverage-7.2.5-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:cf97ed82ca986e5c637ea286ba2793c85325b30f869bf64d3009ccc1a31ae3fd"},
+    {file = "coverage-7.2.5-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:c2c41c1b1866b670573657d584de413df701f482574bad7e28214a2362cb1fd1"},
+    {file = "coverage-7.2.5-cp38-cp38-win32.whl", hash = "sha256:10b15394c13544fce02382360cab54e51a9e0fd1bd61ae9ce012c0d1e103c813"},
+    {file = "coverage-7.2.5-cp38-cp38-win_amd64.whl", hash = "sha256:a0b273fe6dc655b110e8dc89b8ec7f1a778d78c9fd9b4bda7c384c8906072212"},
+    {file = "coverage-7.2.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5c587f52c81211d4530fa6857884d37f514bcf9453bdeee0ff93eaaf906a5c1b"},
+    {file = "coverage-7.2.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4436cc9ba5414c2c998eaedee5343f49c02ca93b21769c5fdfa4f9d799e84200"},
+    {file = "coverage-7.2.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6599bf92f33ab041e36e06d25890afbdf12078aacfe1f1d08c713906e49a3fe5"},
+    {file = "coverage-7.2.5-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:857abe2fa6a4973f8663e039ead8d22215d31db613ace76e4a98f52ec919068e"},
+    {file = "coverage-7.2.5-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6f5cab2d7f0c12f8187a376cc6582c477d2df91d63f75341307fcdcb5d60303"},
+    {file = "coverage-7.2.5-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:aa387bd7489f3e1787ff82068b295bcaafbf6f79c3dad3cbc82ef88ce3f48ad3"},
+    {file = "coverage-7.2.5-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:156192e5fd3dbbcb11cd777cc469cf010a294f4c736a2b2c891c77618cb1379a"},
+    {file = "coverage-7.2.5-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:bd3b4b8175c1db502adf209d06136c000df4d245105c8839e9d0be71c94aefe1"},
+    {file = "coverage-7.2.5-cp39-cp39-win32.whl", hash = "sha256:ddc5a54edb653e9e215f75de377354e2455376f416c4378e1d43b08ec50acc31"},
+    {file = "coverage-7.2.5-cp39-cp39-win_amd64.whl", hash = "sha256:338aa9d9883aaaad53695cb14ccdeb36d4060485bb9388446330bef9c361c252"},
+    {file = "coverage-7.2.5-pp37.pp38.pp39-none-any.whl", hash = "sha256:8877d9b437b35a85c18e3c6499b23674684bf690f5d96c1006a1ef61f9fdf0f3"},
+    {file = "coverage-7.2.5.tar.gz", hash = "sha256:f99ef080288f09ffc687423b8d60978cf3a465d3f404a18d1a05474bd8575a47"},
+]
 
 [package.dependencies]
 tomli = {version = "*", optional = true, markers = "python_full_version <= \"3.11.0a6\" and extra == \"toml\""}
@@ -127,158 +335,267 @@ toml = ["tomli"]
 
 [[package]]
 name = "cryptography"
-version = "37.0.4"
+version = "40.0.2"
 description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "cryptography-40.0.2-cp36-abi3-macosx_10_12_universal2.whl", hash = "sha256:8f79b5ff5ad9d3218afb1e7e20ea74da5f76943ee5edb7f76e56ec5161ec782b"},
+    {file = "cryptography-40.0.2-cp36-abi3-macosx_10_12_x86_64.whl", hash = "sha256:05dc219433b14046c476f6f09d7636b92a1c3e5808b9a6536adf4932b3b2c440"},
+    {file = "cryptography-40.0.2-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4df2af28d7bedc84fe45bd49bc35d710aede676e2a4cb7fc6d103a2adc8afe4d"},
+    {file = "cryptography-40.0.2-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0dcca15d3a19a66e63662dc8d30f8036b07be851a8680eda92d079868f106288"},
+    {file = "cryptography-40.0.2-cp36-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:a04386fb7bc85fab9cd51b6308633a3c271e3d0d3eae917eebab2fac6219b6d2"},
+    {file = "cryptography-40.0.2-cp36-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:adc0d980fd2760c9e5de537c28935cc32b9353baaf28e0814df417619c6c8c3b"},
+    {file = "cryptography-40.0.2-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:d5a1bd0e9e2031465761dfa920c16b0065ad77321d8a8c1f5ee331021fda65e9"},
+    {file = "cryptography-40.0.2-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:a95f4802d49faa6a674242e25bfeea6fc2acd915b5e5e29ac90a32b1139cae1c"},
+    {file = "cryptography-40.0.2-cp36-abi3-win32.whl", hash = "sha256:aecbb1592b0188e030cb01f82d12556cf72e218280f621deed7d806afd2113f9"},
+    {file = "cryptography-40.0.2-cp36-abi3-win_amd64.whl", hash = "sha256:b12794f01d4cacfbd3177b9042198f3af1c856eedd0a98f10f141385c809a14b"},
+    {file = "cryptography-40.0.2-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:142bae539ef28a1c76794cca7f49729e7c54423f615cfd9b0b1fa90ebe53244b"},
+    {file = "cryptography-40.0.2-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:956ba8701b4ffe91ba59665ed170a2ebbdc6fc0e40de5f6059195d9f2b33ca0e"},
+    {file = "cryptography-40.0.2-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:4f01c9863da784558165f5d4d916093737a75203a5c5286fde60e503e4276c7a"},
+    {file = "cryptography-40.0.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:3daf9b114213f8ba460b829a02896789751626a2a4e7a43a28ee77c04b5e4958"},
+    {file = "cryptography-40.0.2-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:48f388d0d153350f378c7f7b41497a54ff1513c816bcbbcafe5b829e59b9ce5b"},
+    {file = "cryptography-40.0.2-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:c0764e72b36a3dc065c155e5b22f93df465da9c39af65516fe04ed3c68c92636"},
+    {file = "cryptography-40.0.2-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:cbaba590180cba88cb99a5f76f90808a624f18b169b90a4abb40c1fd8c19420e"},
+    {file = "cryptography-40.0.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:7a38250f433cd41df7fcb763caa3ee9362777fdb4dc642b9a349721d2bf47404"},
+    {file = "cryptography-40.0.2.tar.gz", hash = "sha256:c33c0d32b8594fa647d2e01dbccc303478e16fdd7cf98652d5b3ed11aa5e5c99"},
+]
 
 [package.dependencies]
 cffi = ">=1.12"
 
 [package.extras]
-docs = ["sphinx (>=1.6.5,!=1.8.0,!=3.1.0,!=3.1.1)", "sphinx-rtd-theme"]
-docstest = ["pyenchant (>=1.6.11)", "twine (>=1.12.0)", "sphinxcontrib-spelling (>=4.0.1)"]
-pep8test = ["black", "flake8", "flake8-import-order", "pep8-naming"]
-sdist = ["setuptools_rust (>=0.11.4)"]
+docs = ["sphinx (>=5.3.0)", "sphinx-rtd-theme (>=1.1.1)"]
+docstest = ["pyenchant (>=1.6.11)", "sphinxcontrib-spelling (>=4.0.1)", "twine (>=1.12.0)"]
+pep8test = ["black", "check-manifest", "mypy", "ruff"]
+sdist = ["setuptools-rust (>=0.11.4)"]
 ssh = ["bcrypt (>=3.1.5)"]
-test = ["pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-subtests", "pytest-xdist", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,!=3.79.2)"]
+test = ["iso8601", "pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-shard (>=0.1.2)", "pytest-subtests", "pytest-xdist"]
+test-randomorder = ["pytest-randomly"]
+tox = ["tox"]
 
 [[package]]
-name = "deprecated"
-version = "1.2.13"
-description = "Python @deprecated decorator to deprecate old python classes, functions or methods."
+name = "decorator"
+version = "5.1.1"
+description = "Decorators for Humans"
 category = "main"
-optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
-
-[package.dependencies]
-wrapt = ">=1.10,<2"
-
-[package.extras]
-dev = ["tox", "bump2version (<1)", "sphinx (<2)", "importlib-metadata (<3)", "importlib-resources (<4)", "configparser (<5)", "sphinxcontrib-websupport (<2)", "zipp (<2)", "PyTest (<5)", "PyTest-Cov (<2.6)", "pytest", "pytest-cov"]
+optional = true
+python-versions = ">=3.5"
+files = [
+    {file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"},
+    {file = "decorator-5.1.1.tar.gz", hash = "sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"},
+]
 
 [[package]]
 name = "distlib"
-version = "0.3.5"
+version = "0.3.6"
 description = "Distribution utilities"
 category = "dev"
 optional = false
 python-versions = "*"
+files = [
+    {file = "distlib-0.3.6-py2.py3-none-any.whl", hash = "sha256:f35c4b692542ca110de7ef0bea44d73981caeb34ca0b9b6b2e6d7790dda8f80e"},
+    {file = "distlib-0.3.6.tar.gz", hash = "sha256:14bad2d9b04d3a36127ac97f30b12a19268f211063d8f8ee4f47108896e11b46"},
+]
+
+[[package]]
+name = "docker"
+version = "6.1.2"
+description = "A Python library for the Docker Engine API."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "docker-6.1.2-py3-none-any.whl", hash = "sha256:134cd828f84543cbf8e594ff81ca90c38288df3c0a559794c12f2e4b634ea19e"},
+    {file = "docker-6.1.2.tar.gz", hash = "sha256:dcc088adc2ec4e7cfc594e275d8bd2c9738c56c808de97476939ef67db5af8c2"},
+]
+
+[package.dependencies]
+packaging = ">=14.0"
+pywin32 = {version = ">=304", markers = "sys_platform == \"win32\""}
+requests = ">=2.26.0"
+urllib3 = ">=1.26.0"
+websocket-client = ">=0.32.0"
+
+[package.extras]
+ssh = ["paramiko (>=2.4.3)"]
 
 [[package]]
 name = "docutils"
-version = "0.19"
+version = "0.20.1"
 description = "Docutils -- Python Documentation Utilities"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "docutils-0.20.1-py3-none-any.whl", hash = "sha256:96f387a2c5562db4476f09f13bbab2192e764cac08ebbf3a34a95d9b1e4a59d6"},
+    {file = "docutils-0.20.1.tar.gz", hash = "sha256:f08a4e276c3a1583a86dce3e34aba3fe04d02bba2dd51ed16106244e8a923e3b"},
+]
 
 [[package]]
 name = "exceptiongroup"
-version = "1.0.0rc8"
+version = "1.1.1"
 description = "Backport of PEP 654 (exception groups)"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "exceptiongroup-1.1.1-py3-none-any.whl", hash = "sha256:232c37c63e4f682982c8b6459f33a8981039e5fb8756b2074364e5055c498c9e"},
+    {file = "exceptiongroup-1.1.1.tar.gz", hash = "sha256:d484c3090ba2889ae2928419117447a14daf3c1231d5e30d0aae34f354f01785"},
+]
 
 [package.extras]
 test = ["pytest (>=6)"]
 
 [[package]]
 name = "filelock"
-version = "3.7.1"
+version = "3.12.0"
 description = "A platform independent file lock."
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "filelock-3.12.0-py3-none-any.whl", hash = "sha256:ad98852315c2ab702aeb628412cbf7e95b7ce8c3bf9565670b4eaecf1db370a9"},
+    {file = "filelock-3.12.0.tar.gz", hash = "sha256:fc03ae43288c013d2ea83c8597001b1129db351aad9c57fe2409327916b8e718"},
+]
 
 [package.extras]
-docs = ["furo (>=2021.8.17b43)", "sphinx (>=4.1)", "sphinx-autodoc-typehints (>=1.12)"]
-testing = ["covdefaults (>=1.2.0)", "coverage (>=4)", "pytest (>=4)", "pytest-cov", "pytest-timeout (>=1.4.2)"]
+docs = ["furo (>=2023.3.27)", "sphinx (>=6.1.3)", "sphinx-autodoc-typehints (>=1.23,!=1.23.4)"]
+testing = ["covdefaults (>=2.3)", "coverage (>=7.2.3)", "diff-cover (>=7.5)", "pytest (>=7.3.1)", "pytest-cov (>=4)", "pytest-mock (>=3.10)", "pytest-timeout (>=2.1)"]
 
 [[package]]
 name = "flake8"
-version = "4.0.1"
+version = "6.0.0"
 description = "the modular source code checker: pep8 pyflakes and co"
 category = "dev"
 optional = false
-python-versions = ">=3.6"
+python-versions = ">=3.8.1"
+files = [
+    {file = "flake8-6.0.0-py2.py3-none-any.whl", hash = "sha256:3833794e27ff64ea4e9cf5d410082a8b97ff1a06c16aa3d2027339cd0f1195c7"},
+    {file = "flake8-6.0.0.tar.gz", hash = "sha256:c61007e76655af75e6785a931f452915b371dc48f56efd765247c8fe68f2b181"},
+]
 
 [package.dependencies]
-importlib-metadata = {version = "<4.3", markers = "python_version < \"3.8\""}
-mccabe = ">=0.6.0,<0.7.0"
-pycodestyle = ">=2.8.0,<2.9.0"
-pyflakes = ">=2.4.0,<2.5.0"
+mccabe = ">=0.7.0,<0.8.0"
+pycodestyle = ">=2.10.0,<2.11.0"
+pyflakes = ">=3.0.0,<3.1.0"
 
 [[package]]
 name = "hypothesis"
-version = "6.53.0"
+version = "6.75.3"
 description = "A library for property-based testing"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "hypothesis-6.75.3-py3-none-any.whl", hash = "sha256:a12bf34c29bd22757d20edf93f95805978ed0ffb8d0b22dbadc890a79dc9baa8"},
+    {file = "hypothesis-6.75.3.tar.gz", hash = "sha256:15cdadb80a7ac59087581624d266a4fb585b5cce9b7f88f506c481a9f0e583f6"},
+]
 
 [package.dependencies]
 attrs = ">=19.2.0"
-exceptiongroup = {version = ">=1.0.0rc8", markers = "python_version < \"3.11\""}
+exceptiongroup = {version = ">=1.0.0", markers = "python_version < \"3.11\""}
 sortedcontainers = ">=2.1.0,<3.0.0"
 
 [package.extras]
-all = ["black (>=19.10b0)", "click (>=7.0)", "django (>=2.2)", "dpcontracts (>=0.4)", "lark-parser (>=0.6.5)", "libcst (>=0.3.16)", "numpy (>=1.9.0)", "pandas (>=1.0)", "pytest (>=4.6)", "python-dateutil (>=1.4)", "pytz (>=2014.1)", "redis (>=3.0.0)", "rich (>=9.0.0)", "importlib-metadata (>=3.6)", "backports.zoneinfo (>=0.2.1)", "tzdata (>=2022.1)"]
-cli = ["click (>=7.0)", "black (>=19.10b0)", "rich (>=9.0.0)"]
+all = ["backports.zoneinfo (>=0.2.1)", "black (>=19.10b0)", "click (>=7.0)", "django (>=3.2)", "dpcontracts (>=0.4)", "importlib-metadata (>=3.6)", "lark (>=0.10.1)", "libcst (>=0.3.16)", "numpy (>=1.16.0)", "pandas (>=1.1)", "pytest (>=4.6)", "python-dateutil (>=1.4)", "pytz (>=2014.1)", "redis (>=3.0.0)", "rich (>=9.0.0)", "tzdata (>=2023.3)"]
+cli = ["black (>=19.10b0)", "click (>=7.0)", "rich (>=9.0.0)"]
 codemods = ["libcst (>=0.3.16)"]
 dateutil = ["python-dateutil (>=1.4)"]
-django = ["django (>=2.2)"]
+django = ["django (>=3.2)"]
 dpcontracts = ["dpcontracts (>=0.4)"]
 ghostwriter = ["black (>=19.10b0)"]
-lark = ["lark-parser (>=0.6.5)"]
-numpy = ["numpy (>=1.9.0)"]
-pandas = ["pandas (>=1.0)"]
+lark = ["lark (>=0.10.1)"]
+numpy = ["numpy (>=1.16.0)"]
+pandas = ["pandas (>=1.1)"]
 pytest = ["pytest (>=4.6)"]
 pytz = ["pytz (>=2014.1)"]
 redis = ["redis (>=3.0.0)"]
-zoneinfo = ["backports.zoneinfo (>=0.2.1)", "tzdata (>=2022.1)"]
+zoneinfo = ["backports.zoneinfo (>=0.2.1)", "tzdata (>=2023.3)"]
 
 [[package]]
 name = "idna"
-version = "3.3"
+version = "3.4"
 description = "Internationalized Domain Names in Applications (IDNA)"
 category = "dev"
 optional = false
 python-versions = ">=3.5"
+files = [
+    {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
+    {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
+]
 
 [[package]]
 name = "importlib-metadata"
-version = "4.2.0"
+version = "6.6.0"
 description = "Read metadata from Python packages"
 category = "main"
 optional = false
-python-versions = ">=3.6"
+python-versions = ">=3.7"
+files = [
+    {file = "importlib_metadata-6.6.0-py3-none-any.whl", hash = "sha256:43dd286a2cd8995d5eaef7fee2066340423b818ed3fd70adf0bad5f1fac53fed"},
+    {file = "importlib_metadata-6.6.0.tar.gz", hash = "sha256:92501cdf9cc66ebd3e612f1b4f0c0765dfa42f0fa38ffb319b6bd84dd675d705"},
+]
 
 [package.dependencies]
 typing-extensions = {version = ">=3.6.4", markers = "python_version < \"3.8\""}
 zipp = ">=0.5"
 
 [package.extras]
-docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"]
-testing = ["pytest (>=4.6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.0.1)", "packaging", "pep517", "pyfakefs", "flufl.flake8", "pytest-black (>=0.3.7)", "pytest-mypy", "importlib-resources (>=1.3)"]
+docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"]
+perf = ["ipython"]
+testing = ["flake8 (<5)", "flufl.flake8", "importlib-resources (>=1.3)", "packaging", "pyfakefs", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)", "pytest-perf (>=0.9.2)"]
+
+[[package]]
+name = "importlib-resources"
+version = "5.12.0"
+description = "Read resources from Python packages"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "importlib_resources-5.12.0-py3-none-any.whl", hash = "sha256:7b1deeebbf351c7578e09bf2f63fa2ce8b5ffec296e0d349139d43cca061a81a"},
+    {file = "importlib_resources-5.12.0.tar.gz", hash = "sha256:4be82589bf5c1d7999aedf2a45159d10cb3ca4f19b2271f8792bc8e6da7b22f6"},
+]
+
+[package.dependencies]
+zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""}
+
+[package.extras]
+docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"]
+testing = ["flake8 (<5)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
 
 [[package]]
 name = "iniconfig"
-version = "1.1.1"
-description = "iniconfig: brain-dead simple config-ini parsing"
+version = "2.0.0"
+description = "brain-dead simple config-ini parsing"
 category = "dev"
 optional = false
-python-versions = "*"
+python-versions = ">=3.7"
+files = [
+    {file = "iniconfig-2.0.0-py3-none-any.whl", hash = "sha256:b6a85871a79d2e3b22d2d1b94ac2824226a63c6b741c88f7ae975f18b6778374"},
+    {file = "iniconfig-2.0.0.tar.gz", hash = "sha256:2d91e135bf72d31a410b17c16da610a82cb55f6b0477d1a902134b24a455b8b3"},
+]
 
 [[package]]
-name = "invoke"
-version = "1.7.1"
-description = "Pythonic task execution"
+name = "jaraco-classes"
+version = "3.2.3"
+description = "Utility functions for Python class constructs"
 category = "dev"
 optional = false
-python-versions = "*"
+python-versions = ">=3.7"
+files = [
+    {file = "jaraco.classes-3.2.3-py3-none-any.whl", hash = "sha256:2353de3288bc6b82120752201c6b1c1a14b058267fa424ed5ce5984e3b922158"},
+    {file = "jaraco.classes-3.2.3.tar.gz", hash = "sha256:89559fa5c1d3c34eff6f631ad80bb21f378dbcbb35dd161fd2c6b93f5be2f98a"},
+]
+
+[package.dependencies]
+more-itertools = "*"
+
+[package.extras]
+docs = ["jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"]
+testing = ["flake8 (<5)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
 
 [[package]]
 name = "jeepney"
@@ -287,78 +604,310 @@ description = "Low-level, pure Python DBus protocol wrapper."
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "jeepney-0.8.0-py3-none-any.whl", hash = "sha256:c0a454ad016ca575060802ee4d590dd912e35c122fa04e70306de3d076cce755"},
+    {file = "jeepney-0.8.0.tar.gz", hash = "sha256:5efe48d255973902f6badc3ce55e2aa6c5c3b3bc642059ef3a91247bcfcc5806"},
+]
 
 [package.extras]
-trio = ["async-generator", "trio"]
-test = ["async-timeout", "trio", "testpath", "pytest-asyncio (>=0.17)", "pytest-trio", "pytest"]
+test = ["async-timeout", "pytest", "pytest-asyncio (>=0.17)", "pytest-trio", "testpath", "trio"]
+trio = ["async_generator", "trio"]
+
+[[package]]
+name = "jsonpath-ng"
+version = "1.5.3"
+description = "A final implementation of JSONPath for Python that aims to be standard compliant, including arithmetic and binary comparison operators and providing clear AST for metaprogramming."
+category = "main"
+optional = true
+python-versions = "*"
+files = [
+    {file = "jsonpath-ng-1.5.3.tar.gz", hash = "sha256:a273b182a82c1256daab86a313b937059261b5c5f8c4fa3fc38b882b344dd567"},
+    {file = "jsonpath_ng-1.5.3-py2-none-any.whl", hash = "sha256:f75b95dbecb8a0f3b86fd2ead21c2b022c3f5770957492b9b6196ecccfeb10aa"},
+    {file = "jsonpath_ng-1.5.3-py3-none-any.whl", hash = "sha256:292a93569d74029ba75ac2dc3d3630fc0e17b2df26119a165fa1d498ca47bf65"},
+]
+
+[package.dependencies]
+decorator = "*"
+ply = "*"
+six = "*"
 
 [[package]]
 name = "keyring"
-version = "23.7.0"
+version = "23.13.1"
 description = "Store and access your passwords safely."
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "keyring-23.13.1-py3-none-any.whl", hash = "sha256:771ed2a91909389ed6148631de678f82ddc73737d85a927f382a8a1b157898cd"},
+    {file = "keyring-23.13.1.tar.gz", hash = "sha256:ba2e15a9b35e21908d0aaf4e0a47acc52d6ae33444df0da2b49d41a46ef6d678"},
+]
 
 [package.dependencies]
-importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
+importlib-metadata = {version = ">=4.11.4", markers = "python_version < \"3.12\""}
+importlib-resources = {version = "*", markers = "python_version < \"3.9\""}
+"jaraco.classes" = "*"
 jeepney = {version = ">=0.4.2", markers = "sys_platform == \"linux\""}
-pywin32-ctypes = {version = "<0.1.0 || >0.1.0,<0.1.1 || >0.1.1", markers = "sys_platform == \"win32\""}
+pywin32-ctypes = {version = ">=0.2.0", markers = "sys_platform == \"win32\""}
 SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""}
 
 [package.extras]
-docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
-testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
+completion = ["shtab"]
+docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"]
+testing = ["flake8 (<5)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
 
 [[package]]
 name = "lupa"
-version = "1.13"
+version = "1.14.1"
 description = "Python wrapper around Lua and LuaJIT"
 category = "main"
 optional = true
 python-versions = "*"
+files = [
+    {file = "lupa-1.14.1-cp27-cp27m-macosx_10_15_x86_64.whl", hash = "sha256:20b486cda76ff141cfb5f28df9c757224c9ed91e78c5242d402d2e9cb699d464"},
+    {file = "lupa-1.14.1-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c685143b18c79a3a1fa25a4cc774a87b5a61c606f249bcf824d125d8accb6b2c"},
+    {file = "lupa-1.14.1-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:3865f9dbe9a84bd6a471250e52068aaf1147f206a51905fb6d93e1db9efb00ee"},
+    {file = "lupa-1.14.1-cp27-cp27m-win32.whl", hash = "sha256:2dacdddd5e28c6f5fd96a46c868ec5c34b0fad1ec7235b5bbb56f06183a37f20"},
+    {file = "lupa-1.14.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e754cbc6cacc9bca6ff2b39025e9659a2098420639d214054b06b466825f4470"},
+    {file = "lupa-1.14.1-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:9e36f3eb70705841bce9c15e12bc6fc3b2f4f68a41ba0e4af303b22fc4d8667c"},
+    {file = "lupa-1.14.1-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0aac06098d46729edd2d04e80b55d9d310e902f042f27521308df77cb1ba0191"},
+    {file = "lupa-1.14.1-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:9706a192339efa1a6b7d806389572a669dd9ae2250469ff1ce13f684085af0b4"},
+    {file = "lupa-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d688a35f7fe614720ed7b820cbb739b37eff577a764c2003e229c2a752201cea"},
+    {file = "lupa-1.14.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:36d888bd42589ecad21a5fb957b46bc799640d18eff2fd0c47a79ffb4a1b286c"},
+    {file = "lupa-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:0423acd739cf25dbdbf1e33a0aa8026f35e1edea0573db63d156f14a082d77c8"},
+    {file = "lupa-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:7068ae0d6a1a35ea8718ef6e103955c1ee143181bf0684604a76acc67f69de55"},
+    {file = "lupa-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5fef8b755591f0466438ad0a3e92ecb21dd6bb1f05d0215139b6ff8c87b2ce65"},
+    {file = "lupa-1.14.1-cp310-cp310-win32.whl", hash = "sha256:4a44e1fd0e9f4a546fbddd2e0fd913c823c9ac58a5f3160fb4f9109f633cb027"},
+    {file = "lupa-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:b83100cd7b48a7ca85dda4e9a6a5e7bc3312691e7f94c6a78d1f9a48a86a7fec"},
+    {file = "lupa-1.14.1-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:1b8bda50c61c98ff9bb41d1f4934640c323e9f1539021810016a2eae25a66c3d"},
+    {file = "lupa-1.14.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:aa1449aa1ab46c557344867496dee324b47ede0c41643df8f392b00262d21b12"},
+    {file = "lupa-1.14.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:a17ebf91b3aa1c5c36661e34c9cf10e04bb4cc00076e8b966f86749647162050"},
+    {file = "lupa-1.14.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:b1d9cfa469e7a2ad7e9a00fea7196b0022aa52f43a2043c2e0be92122e7bcfe8"},
+    {file = "lupa-1.14.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:bc4f5e84aee0d567aa2e116ff6844d06086ef7404d5102807e59af5ce9daf3c0"},
+    {file = "lupa-1.14.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:40cf2eb90087dfe8ee002740469f2c4c5230d5e7d10ffb676602066d2f9b1ac9"},
+    {file = "lupa-1.14.1-cp311-cp311-win_amd64.whl", hash = "sha256:63a27c38295aa971730795941270fff2ce65576f68ec63cb3ecb90d7a4526d03"},
+    {file = "lupa-1.14.1-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:457330e7a5456c4415fc6d38822036bd4cff214f9d8f7906200f6b588f1b2932"},
+    {file = "lupa-1.14.1-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:d61fb507a36e18dc68f2d9e9e2ea19e1114b1a5e578a36f18e9be7a17d2931d1"},
+    {file = "lupa-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:f26b73d10130ad73e07d45dfe9b7c3833e3a2aa1871a4ecf5ce2dc1abeeae74d"},
+    {file = "lupa-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:297d801ba8e4e882b295c25d92f1634dde5e76d07ec6c35b13882401248c485d"},
+    {file = "lupa-1.14.1-cp36-cp36m-macosx_10_15_x86_64.whl", hash = "sha256:c8bddd22eaeea0ce9d302b390d8bc606f003bf6c51be68e8b007504433b91280"},
+    {file = "lupa-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1661c890861cf0f7002d7a7e00f50c885577954c2d85a7173b218d3228fa3869"},
+    {file = "lupa-1.14.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:2ee480d31555f00f8bf97dd949c596508bd60264cff1921a3797a03dd369e8cd"},
+    {file = "lupa-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:1ff93560c2546d7627ab2f95b5e88f000705db70a3d6041ac29d050f094f2a35"},
+    {file = "lupa-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:47f1459e2c98480c291ae3b70688d762f82dbb197ef121d529aa2c4e8bab1ba3"},
+    {file = "lupa-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:8986dba002346505ee44c78303339c97a346b883015d5cf3aaa0d76d3b952744"},
+    {file = "lupa-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:8912459fddf691e70f2add799a128822bae725826cfb86f69720a38bdfa42410"},
+    {file = "lupa-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:9b9d1b98391959ae531bbb8df7559ac2c408fcbd33721921b6a05fd6414161e0"},
+    {file = "lupa-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:61ff409040fa3a6c358b7274c10e556ba22afeb3470f8d23cd0a6bf418fb30c9"},
+    {file = "lupa-1.14.1-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:350ba2218eea800898854b02753dc0c9cfe83db315b30c0dc10ab17493f0321a"},
+    {file = "lupa-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:46dcbc0eae63899468686bb1dfc2fe4ed21fe06f69416113f039d88aab18f5dc"},
+    {file = "lupa-1.14.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:7ad96923e2092d8edbf0c1b274f9b522690b932ed47a70d9a0c1c329f169f107"},
+    {file = "lupa-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:364b291bf2b55555c87b4bffb4db5a9619bcdb3c02e58aebde5319c3c59ec9b2"},
+    {file = "lupa-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0ed071efc8ee231fac1fcd6b6fce44dc6da75a352b9b78403af89a48d759743c"},
+    {file = "lupa-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:bce60847bebb4aa9ed3436fab3e84585e9094e15e1cb8d32e16e041c4ef65331"},
+    {file = "lupa-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:5fbe7f83b0007cda3b158a93726c80dfd39003a8c5c5d608f6fdf8c60c42117f"},
+    {file = "lupa-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:4bd789967cbb5c84470f358c7fa8fcbf7464185adbd872a6c3de9b42d29a6d26"},
+    {file = "lupa-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:ca58da94a6495dda0063ba975fe2e6f722c5e84c94f09955671b279c41cfde96"},
+    {file = "lupa-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:51d6965663b2be1a593beabfa10803fdbbcf0b293aa4a53ea09a23db89787d0d"},
+    {file = "lupa-1.14.1-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:d251ba009996a47231615ea6b78123c88446979ae99b5585269ec46f7a9197aa"},
+    {file = "lupa-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:abe3fc103d7bd34e7028d06db557304979f13ebf9050ad0ea6c1cc3a1caea017"},
+    {file = "lupa-1.14.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:4ea185c394bf7d07e9643d868e50cc94a530bb298d4bdae4915672b3809cc72b"},
+    {file = "lupa-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:6aff7257b5953de620db489899406cddb22093d1124fc5b31f8900e44a9dbc2a"},
+    {file = "lupa-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d6f5bfbd8fc48c27786aef8f30c84fd9197747fa0b53761e69eb968d81156cbf"},
+    {file = "lupa-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:dec7580b86975bc5bdf4cc54638c93daaec10143b4acc4a6c674c0f7e27dd363"},
+    {file = "lupa-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:96a201537930813b34145daf337dcd934ddfaebeba6452caf8a32a418e145e82"},
+    {file = "lupa-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:c0efaae8e7276f4feb82cba43c3cd45c82db820c9dab3965a8f2e0cb8b0bc30b"},
+    {file = "lupa-1.14.1-cp38-cp38-win32.whl", hash = "sha256:b6953854a343abdfe11aa52a2d021fadf3d77d0cd2b288b650f149b597e0d02d"},
+    {file = "lupa-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:c79ced2aaf7577e3d06933cf0d323fa968e6864c498c376b0bd475ded86f01f3"},
+    {file = "lupa-1.14.1-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:72589a21a3776c7dd4b05374780e7ecf1b49c490056077fc91486461935eaaa3"},
+    {file = "lupa-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:30d356a433653b53f1fe29477faaf5e547b61953b971b010d2185a561f4ce82a"},
+    {file = "lupa-1.14.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:2116eb467797d5a134b2c997dfc7974b9a84b3aa5776c17ba8578ed4f5f41a9b"},
+    {file = "lupa-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:24d6c3435d38614083d197f3e7bcfe6d3d9eb02ee393d60a4ab9c719bc000162"},
+    {file = "lupa-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:9144ecfa5e363f03e4d1c1e678b081cd223438be08f96604fca478591c3e3b53"},
+    {file = "lupa-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:69be1d6c3f3ab9fc988c9a0e5801f23f68e2c8b5900a8fd3ae57d1d0e9c5539c"},
+    {file = "lupa-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:77b587043d0bee9cc738e00c12718095cf808dd269b171f852bd82026c664c69"},
+    {file = "lupa-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:62530cf0a9c749a3cd13ad92b31eaf178939d642b6176b46cfcd98f6c5006383"},
+    {file = "lupa-1.14.1-cp39-cp39-win32.whl", hash = "sha256:d891b43b8810191eb4c42a0bc57c32f481098029aac42b176108e09ffe118cdc"},
+    {file = "lupa-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:cf643bc48a152e2c572d8be7fc1de1c417a6a9648d337ffedebf00f57016b786"},
+    {file = "lupa-1.14.1-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:0ac862c6d2eb542ac70d294a8e960b9ae7f46297559733b4c25f9e3c945e522a"},
+    {file = "lupa-1.14.1-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:0a15680f425b91ec220eb84b0ab59d24c4bee69d15b88245a6998a7d38c78ba6"},
+    {file = "lupa-1.14.1-pp37-pypy37_pp73-win32.whl", hash = "sha256:8a064d72991ba53aeea9720d95f2055f7f8a1e2f35b32a35d92248b63a94bcd1"},
+    {file = "lupa-1.14.1-pp38-pypy38_pp73-macosx_10_15_x86_64.whl", hash = "sha256:6d87d6c51e6c3b6326d18af83e81f4860ba0b287cda1101b1ab8562389d598f5"},
+    {file = "lupa-1.14.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:b3efe9d887cfdf459054308ecb716e0eb11acb9a96c3022ee4e677c1f510d244"},
+    {file = "lupa-1.14.1-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:723fff6fcab5e7045e0fa79014729577f98082bd1fd1050f907f83a41e4c9865"},
+    {file = "lupa-1.14.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:930092a27157241d07d6d09ff01d5530a9e4c0dd515228211f2902b7e88ec1f0"},
+    {file = "lupa-1.14.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:7f6bc9852bdf7b16840c984a1e9f952815f7d4b3764585d20d2e062bd1128074"},
+    {file = "lupa-1.14.1-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:8f65d2007092a04616c215fea5ad05ba8f661bd0f45cde5265d27150f64d3dd8"},
+    {file = "lupa-1.14.1.tar.gz", hash = "sha256:d0fd4e60ad149fe25c90530e2a0e032a42a6f0455f29ca0edb8170d6ec751c6e"},
+]
+
+[[package]]
+name = "markdown-it-py"
+version = "2.2.0"
+description = "Python port of markdown-it. Markdown parsing, done right!"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "markdown-it-py-2.2.0.tar.gz", hash = "sha256:7c9a5e412688bc771c67432cbfebcdd686c93ce6484913dccf06cb5a0bea35a1"},
+    {file = "markdown_it_py-2.2.0-py3-none-any.whl", hash = "sha256:5a35f8d1870171d9acc47b99612dc146129b631baf04970128b568f190d0cc30"},
+]
+
+[package.dependencies]
+mdurl = ">=0.1,<1.0"
+typing_extensions = {version = ">=3.7.4", markers = "python_version < \"3.8\""}
+
+[package.extras]
+benchmarking = ["psutil", "pytest", "pytest-benchmark"]
+code-style = ["pre-commit (>=3.0,<4.0)"]
+compare = ["commonmark (>=0.9,<1.0)", "markdown (>=3.4,<4.0)", "mistletoe (>=1.0,<2.0)", "mistune (>=2.0,<3.0)", "panflute (>=2.3,<3.0)"]
+linkify = ["linkify-it-py (>=1,<3)"]
+plugins = ["mdit-py-plugins"]
+profiling = ["gprof2dot"]
+rtd = ["attrs", "myst-parser", "pyyaml", "sphinx", "sphinx-copybutton", "sphinx-design", "sphinx_book_theme"]
+testing = ["coverage", "pytest", "pytest-cov", "pytest-regressions"]
 
 [[package]]
 name = "mccabe"
-version = "0.6.1"
+version = "0.7.0"
 description = "McCabe checker, plugin for flake8"
 category = "dev"
 optional = false
-python-versions = "*"
+python-versions = ">=3.6"
+files = [
+    {file = "mccabe-0.7.0-py2.py3-none-any.whl", hash = "sha256:6c2d30ab6be0e4a46919781807b4f0d834ebdd6c6e3dca0bda5a15f863427b6e"},
+    {file = "mccabe-0.7.0.tar.gz", hash = "sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325"},
+]
 
 [[package]]
-name = "packaging"
-version = "21.3"
-description = "Core utilities for Python packages"
-category = "main"
+name = "mdurl"
+version = "0.1.2"
+description = "Markdown URL utilities"
+category = "dev"
 optional = false
-python-versions = ">=3.6"
+python-versions = ">=3.7"
+files = [
+    {file = "mdurl-0.1.2-py3-none-any.whl", hash = "sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8"},
+    {file = "mdurl-0.1.2.tar.gz", hash = "sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba"},
+]
+
+[[package]]
+name = "more-itertools"
+version = "9.1.0"
+description = "More routines for operating on iterables, beyond itertools"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "more-itertools-9.1.0.tar.gz", hash = "sha256:cabaa341ad0389ea83c17a94566a53ae4c9d07349861ecb14dc6d0345cf9ac5d"},
+    {file = "more_itertools-9.1.0-py3-none-any.whl", hash = "sha256:d2bc7f02446e86a68911e58ded76d6561eea00cddfb2a91e7019bbb586c799f3"},
+]
+
+[[package]]
+name = "mypy"
+version = "1.3.0"
+description = "Optional static typing for Python"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "mypy-1.3.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c1eb485cea53f4f5284e5baf92902cd0088b24984f4209e25981cc359d64448d"},
+    {file = "mypy-1.3.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:4c99c3ecf223cf2952638da9cd82793d8f3c0c5fa8b6ae2b2d9ed1e1ff51ba85"},
+    {file = "mypy-1.3.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:550a8b3a19bb6589679a7c3c31f64312e7ff482a816c96e0cecec9ad3a7564dd"},
+    {file = "mypy-1.3.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cbc07246253b9e3d7d74c9ff948cd0fd7a71afcc2b77c7f0a59c26e9395cb152"},
+    {file = "mypy-1.3.0-cp310-cp310-win_amd64.whl", hash = "sha256:a22435632710a4fcf8acf86cbd0d69f68ac389a3892cb23fbad176d1cddaf228"},
+    {file = "mypy-1.3.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6e33bb8b2613614a33dff70565f4c803f889ebd2f859466e42b46e1df76018dd"},
+    {file = "mypy-1.3.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7d23370d2a6b7a71dc65d1266f9a34e4cde9e8e21511322415db4b26f46f6b8c"},
+    {file = "mypy-1.3.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:658fe7b674769a0770d4b26cb4d6f005e88a442fe82446f020be8e5f5efb2fae"},
+    {file = "mypy-1.3.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:6e42d29e324cdda61daaec2336c42512e59c7c375340bd202efa1fe0f7b8f8ca"},
+    {file = "mypy-1.3.0-cp311-cp311-win_amd64.whl", hash = "sha256:d0b6c62206e04061e27009481cb0ec966f7d6172b5b936f3ead3d74f29fe3dcf"},
+    {file = "mypy-1.3.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:76ec771e2342f1b558c36d49900dfe81d140361dd0d2df6cd71b3db1be155409"},
+    {file = "mypy-1.3.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ebc95f8386314272bbc817026f8ce8f4f0d2ef7ae44f947c4664efac9adec929"},
+    {file = "mypy-1.3.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:faff86aa10c1aa4a10e1a301de160f3d8fc8703b88c7e98de46b531ff1276a9a"},
+    {file = "mypy-1.3.0-cp37-cp37m-win_amd64.whl", hash = "sha256:8c5979d0deb27e0f4479bee18ea0f83732a893e81b78e62e2dda3e7e518c92ee"},
+    {file = "mypy-1.3.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:c5d2cc54175bab47011b09688b418db71403aefad07cbcd62d44010543fc143f"},
+    {file = "mypy-1.3.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:87df44954c31d86df96c8bd6e80dfcd773473e877ac6176a8e29898bfb3501cb"},
+    {file = "mypy-1.3.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:473117e310febe632ddf10e745a355714e771ffe534f06db40702775056614c4"},
+    {file = "mypy-1.3.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:74bc9b6e0e79808bf8678d7678b2ae3736ea72d56eede3820bd3849823e7f305"},
+    {file = "mypy-1.3.0-cp38-cp38-win_amd64.whl", hash = "sha256:44797d031a41516fcf5cbfa652265bb994e53e51994c1bd649ffcd0c3a7eccbf"},
+    {file = "mypy-1.3.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ddae0f39ca146972ff6bb4399f3b2943884a774b8771ea0a8f50e971f5ea5ba8"},
+    {file = "mypy-1.3.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1c4c42c60a8103ead4c1c060ac3cdd3ff01e18fddce6f1016e08939647a0e703"},
+    {file = "mypy-1.3.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e86c2c6852f62f8f2b24cb7a613ebe8e0c7dc1402c61d36a609174f63e0ff017"},
+    {file = "mypy-1.3.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:f9dca1e257d4cc129517779226753dbefb4f2266c4eaad610fc15c6a7e14283e"},
+    {file = "mypy-1.3.0-cp39-cp39-win_amd64.whl", hash = "sha256:95d8d31a7713510685b05fbb18d6ac287a56c8f6554d88c19e73f724a445448a"},
+    {file = "mypy-1.3.0-py3-none-any.whl", hash = "sha256:a8763e72d5d9574d45ce5881962bc8e9046bf7b375b0abf031f3e6811732a897"},
+    {file = "mypy-1.3.0.tar.gz", hash = "sha256:e1f4d16e296f5135624b34e8fb741eb0eadedca90862405b1f1fde2040b9bd11"},
+]
 
 [package.dependencies]
-pyparsing = ">=2.0.2,<3.0.5 || >3.0.5"
+mypy-extensions = ">=1.0.0"
+tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
+typed-ast = {version = ">=1.4.0,<2", markers = "python_version < \"3.8\""}
+typing-extensions = ">=3.10"
+
+[package.extras]
+dmypy = ["psutil (>=4.0)"]
+install-types = ["pip"]
+python2 = ["typed-ast (>=1.4.0,<2)"]
+reports = ["lxml"]
+
+[[package]]
+name = "mypy-extensions"
+version = "1.0.0"
+description = "Type system extensions for programs checked with the mypy type checker."
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "mypy_extensions-1.0.0-py3-none-any.whl", hash = "sha256:4392f6c0eb8a5668a69e23d168ffa70f0be9ccfd32b5cc2d26a34ae5b844552d"},
+    {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
+]
+
+[[package]]
+name = "packaging"
+version = "23.1"
+description = "Core utilities for Python packages"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "packaging-23.1-py3-none-any.whl", hash = "sha256:994793af429502c4ea2ebf6bf664629d07c1a9fe974af92966e4b8d2df7edc61"},
+    {file = "packaging-23.1.tar.gz", hash = "sha256:a392980d2b6cffa644431898be54b0045151319d1e7ec34f0cfed48767dd334f"},
+]
 
 [[package]]
 name = "pkginfo"
-version = "1.8.3"
-description = "Query metadatdata from sdists / bdists / installed packages."
+version = "1.9.6"
+description = "Query metadata from sdists / bdists / installed packages."
 category = "dev"
 optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
+python-versions = ">=3.6"
+files = [
+    {file = "pkginfo-1.9.6-py3-none-any.whl", hash = "sha256:4b7a555a6d5a22169fcc9cf7bfd78d296b0361adad412a346c1226849af5e546"},
+    {file = "pkginfo-1.9.6.tar.gz", hash = "sha256:8fd5896e8718a4372f0ea9cc9d96f6417c9b986e23a4d116dda26b62cc29d046"},
+]
 
 [package.extras]
-testing = ["nose", "coverage"]
+testing = ["pytest", "pytest-cov"]
 
 [[package]]
 name = "platformdirs"
-version = "2.5.2"
-description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
+version = "3.5.1"
+description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "platformdirs-3.5.1-py3-none-any.whl", hash = "sha256:e2378146f1964972c03c085bb5662ae80b2b8c06226c54b2ff4aa9483e8a13a5"},
+    {file = "platformdirs-3.5.1.tar.gz", hash = "sha256:412dae91f52a6f84830f39a8078cecd0e866cb72294a5c66808e74d5e88d251f"},
+]
+
+[package.dependencies]
+typing-extensions = {version = ">=4.5", markers = "python_version < \"3.8\""}
 
 [package.extras]
-docs = ["furo (>=2021.7.5b38)", "proselint (>=0.10.2)", "sphinx-autodoc-typehints (>=1.12)", "sphinx (>=4)"]
-test = ["appdirs (==1.4.4)", "pytest-cov (>=2.7)", "pytest-mock (>=3.6)", "pytest (>=6)"]
+docs = ["furo (>=2023.3.27)", "proselint (>=0.13)", "sphinx (>=6.2.1)", "sphinx-autodoc-typehints (>=1.23,!=1.23.4)"]
+test = ["appdirs (==1.4.4)", "covdefaults (>=2.3)", "pytest (>=7.3.1)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"]
 
 [[package]]
 name = "pluggy"
@@ -367,29 +916,41 @@ description = "plugin and hook calling mechanisms for python"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"},
+    {file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"},
+]
 
 [package.dependencies]
 importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""}
 
 [package.extras]
-testing = ["pytest-benchmark", "pytest"]
-dev = ["tox", "pre-commit"]
+dev = ["pre-commit", "tox"]
+testing = ["pytest", "pytest-benchmark"]
 
 [[package]]
-name = "py"
-version = "1.11.0"
-description = "library with cross-python path, ini-parsing, io, code, log facilities"
-category = "dev"
-optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+name = "ply"
+version = "3.11"
+description = "Python Lex & Yacc"
+category = "main"
+optional = true
+python-versions = "*"
+files = [
+    {file = "ply-3.11-py2.py3-none-any.whl", hash = "sha256:096f9b8350b65ebd2fd1346b12452efe5b9607f7482813ffca50c22722a807ce"},
+    {file = "ply-3.11.tar.gz", hash = "sha256:00c7c1aaa88358b9c765b6d3000c6eec0ba42abca5351b095321aef446081da3"},
+]
 
 [[package]]
 name = "pycodestyle"
-version = "2.8.0"
+version = "2.10.0"
 description = "Python style guide checker"
 category = "dev"
 optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+python-versions = ">=3.6"
+files = [
+    {file = "pycodestyle-2.10.0-py2.py3-none-any.whl", hash = "sha256:8a4eaf0d0495c7395bdab3589ac2db602797d76207242c17d470186815706610"},
+    {file = "pycodestyle-2.10.0.tar.gz", hash = "sha256:347187bdb476329d98f695c213d7295a846d1152ff4fe9bacb8a9590b8ee7053"},
+]
 
 [[package]]
 name = "pycparser"
@@ -398,99 +959,162 @@ description = "C parser in Python"
 category = "dev"
 optional = false
 python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+    {file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
+    {file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
+]
 
 [[package]]
 name = "pyflakes"
-version = "2.4.0"
+version = "3.0.1"
 description = "passive checker of Python programs"
 category = "dev"
 optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+python-versions = ">=3.6"
+files = [
+    {file = "pyflakes-3.0.1-py2.py3-none-any.whl", hash = "sha256:ec55bf7fe21fff7f1ad2f7da62363d749e2a470500eab1b555334b67aa1ef8cf"},
+    {file = "pyflakes-3.0.1.tar.gz", hash = "sha256:ec8b276a6b60bd80defed25add7e439881c19e64850afd9b346283d4165fd0fd"},
+]
 
 [[package]]
 name = "pygments"
-version = "2.12.0"
+version = "2.15.1"
 description = "Pygments is a syntax highlighting package written in Python."
 category = "dev"
 optional = false
-python-versions = ">=3.6"
+python-versions = ">=3.7"
+files = [
+    {file = "Pygments-2.15.1-py3-none-any.whl", hash = "sha256:db2db3deb4b4179f399a09054b023b6a586b76499d36965813c71aa8ed7b5fd1"},
+    {file = "Pygments-2.15.1.tar.gz", hash = "sha256:8ace4d3c1dd481894b2005f560ead0f9f19ee64fe983366be1a21e171d12775c"},
+]
+
+[package.extras]
+plugins = ["importlib-metadata"]
 
 [[package]]
-name = "pyparsing"
-version = "3.0.9"
-description = "pyparsing module - Classes and methods to define and execute parsing grammars"
-category = "main"
+name = "pyproject-api"
+version = "1.5.1"
+description = "API to interact with the python pyproject.toml based projects"
+category = "dev"
 optional = false
-python-versions = ">=3.6.8"
+python-versions = ">=3.7"
+files = [
+    {file = "pyproject_api-1.5.1-py3-none-any.whl", hash = "sha256:4698a3777c2e0f6b624f8a4599131e2a25376d90fe8d146d7ac74c67c6f97c43"},
+    {file = "pyproject_api-1.5.1.tar.gz", hash = "sha256:435f46547a9ff22cf4208ee274fca3e2869aeb062a4834adfc99a4dd64af3cf9"},
+]
+
+[package.dependencies]
+packaging = ">=23"
+tomli = {version = ">=2.0.1", markers = "python_version < \"3.11\""}
 
 [package.extras]
-diagrams = ["railroad-diagrams", "jinja2"]
+docs = ["furo (>=2022.12.7)", "sphinx (>=6.1.3)", "sphinx-autodoc-typehints (>=1.22,!=1.23.4)"]
+testing = ["covdefaults (>=2.2.2)", "importlib-metadata (>=6)", "pytest (>=7.2.1)", "pytest-cov (>=4)", "pytest-mock (>=3.10)", "virtualenv (>=20.17.1)", "wheel (>=0.38.4)"]
 
 [[package]]
 name = "pytest"
-version = "7.1.2"
+version = "7.3.1"
 description = "pytest: simple powerful testing with Python"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "pytest-7.3.1-py3-none-any.whl", hash = "sha256:3799fa815351fea3a5e96ac7e503a96fa51cc9942c3753cda7651b93c1cfa362"},
+    {file = "pytest-7.3.1.tar.gz", hash = "sha256:434afafd78b1d78ed0addf160ad2b77a30d35d4bdf8af234fe621919d9ed15e3"},
+]
 
 [package.dependencies]
-atomicwrites = {version = ">=1.0", markers = "sys_platform == \"win32\""}
-attrs = ">=19.2.0"
 colorama = {version = "*", markers = "sys_platform == \"win32\""}
+exceptiongroup = {version = ">=1.0.0rc8", markers = "python_version < \"3.11\""}
 importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""}
 iniconfig = "*"
 packaging = "*"
 pluggy = ">=0.12,<2.0"
-py = ">=1.8.2"
-tomli = ">=1.0.0"
+tomli = {version = ">=1.0.0", markers = "python_version < \"3.11\""}
 
 [package.extras]
-testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
+testing = ["argcomplete", "attrs (>=19.2.0)", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
 
 [[package]]
 name = "pytest-asyncio"
-version = "0.19.0"
+version = "0.21.0"
 description = "Pytest support for asyncio"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "pytest-asyncio-0.21.0.tar.gz", hash = "sha256:2b38a496aef56f56b0e87557ec313e11e1ab9276fc3863f6a7be0f1d0e415e1b"},
+    {file = "pytest_asyncio-0.21.0-py3-none-any.whl", hash = "sha256:f2b3366b7cd501a4056858bd39349d5af19742aed2d81660b7998b6341c7eb9c"},
+]
 
 [package.dependencies]
-pytest = ">=6.1.0"
+pytest = ">=7.0.0"
 typing-extensions = {version = ">=3.7.2", markers = "python_version < \"3.8\""}
 
 [package.extras]
-testing = ["coverage (>=6.2)", "hypothesis (>=5.7.1)", "flaky (>=3.5.0)", "mypy (>=0.931)", "pytest-trio (>=0.7.0)"]
+docs = ["sphinx (>=5.3)", "sphinx-rtd-theme (>=1.0)"]
+testing = ["coverage (>=6.2)", "flaky (>=3.5.0)", "hypothesis (>=5.7.1)", "mypy (>=0.931)", "pytest-trio (>=0.7.0)"]
 
 [[package]]
 name = "pytest-cov"
-version = "3.0.0"
+version = "4.0.0"
 description = "Pytest plugin for measuring coverage."
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "pytest-cov-4.0.0.tar.gz", hash = "sha256:996b79efde6433cdbd0088872dbc5fb3ed7fe1578b68cdbba634f14bb8dd0470"},
+    {file = "pytest_cov-4.0.0-py3-none-any.whl", hash = "sha256:2feb1b751d66a8bd934e5edfa2e961d11309dc37b73b0eabe73b5945fee20f6b"},
+]
 
 [package.dependencies]
 coverage = {version = ">=5.2.1", extras = ["toml"]}
 pytest = ">=4.6"
 
 [package.extras]
-testing = ["virtualenv", "pytest-xdist", "six", "process-tests", "hunter", "fields"]
+testing = ["fields", "hunter", "process-tests", "pytest-xdist", "six", "virtualenv"]
 
 [[package]]
 name = "pytest-mock"
-version = "3.8.2"
+version = "3.10.0"
 description = "Thin-wrapper around the mock package for easier use with pytest"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "pytest-mock-3.10.0.tar.gz", hash = "sha256:fbbdb085ef7c252a326fd8cdcac0aa3b1333d8811f131bdcc701002e1be7ed4f"},
+    {file = "pytest_mock-3.10.0-py3-none-any.whl", hash = "sha256:f4c973eeae0282963eb293eb173ce91b091a79c1334455acfac9ddee8a1c784b"},
+]
 
 [package.dependencies]
 pytest = ">=5.0"
 
 [package.extras]
-dev = ["pre-commit", "tox", "pytest-asyncio"]
+dev = ["pre-commit", "pytest-asyncio", "tox"]
+
+[[package]]
+name = "pywin32"
+version = "306"
+description = "Python for Window Extensions"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+    {file = "pywin32-306-cp310-cp310-win32.whl", hash = "sha256:06d3420a5155ba65f0b72f2699b5bacf3109f36acbe8923765c22938a69dfc8d"},
+    {file = "pywin32-306-cp310-cp310-win_amd64.whl", hash = "sha256:84f4471dbca1887ea3803d8848a1616429ac94a4a8d05f4bc9c5dcfd42ca99c8"},
+    {file = "pywin32-306-cp311-cp311-win32.whl", hash = "sha256:e65028133d15b64d2ed8f06dd9fbc268352478d4f9289e69c190ecd6818b6407"},
+    {file = "pywin32-306-cp311-cp311-win_amd64.whl", hash = "sha256:a7639f51c184c0272e93f244eb24dafca9b1855707d94c192d4a0b4c01e1100e"},
+    {file = "pywin32-306-cp311-cp311-win_arm64.whl", hash = "sha256:70dba0c913d19f942a2db25217d9a1b726c278f483a919f1abfed79c9cf64d3a"},
+    {file = "pywin32-306-cp312-cp312-win32.whl", hash = "sha256:383229d515657f4e3ed1343da8be101000562bf514591ff383ae940cad65458b"},
+    {file = "pywin32-306-cp312-cp312-win_amd64.whl", hash = "sha256:37257794c1ad39ee9be652da0462dc2e394c8159dfd913a8a4e8eb6fd346da0e"},
+    {file = "pywin32-306-cp312-cp312-win_arm64.whl", hash = "sha256:5821ec52f6d321aa59e2db7e0a35b997de60c201943557d108af9d4ae1ec7040"},
+    {file = "pywin32-306-cp37-cp37m-win32.whl", hash = "sha256:1c73ea9a0d2283d889001998059f5eaaba3b6238f767c9cf2833b13e6a685f65"},
+    {file = "pywin32-306-cp37-cp37m-win_amd64.whl", hash = "sha256:72c5f621542d7bdd4fdb716227be0dd3f8565c11b280be6315b06ace35487d36"},
+    {file = "pywin32-306-cp38-cp38-win32.whl", hash = "sha256:e4c092e2589b5cf0d365849e73e02c391c1349958c5ac3e9d5ccb9a28e017b3a"},
+    {file = "pywin32-306-cp38-cp38-win_amd64.whl", hash = "sha256:e8ac1ae3601bee6ca9f7cb4b5363bf1c0badb935ef243c4733ff9a393b1690c0"},
+    {file = "pywin32-306-cp39-cp39-win32.whl", hash = "sha256:e25fd5b485b55ac9c057f67d94bc203f3f6595078d1fb3b458c9c28b7153a802"},
+    {file = "pywin32-306-cp39-cp39-win_amd64.whl", hash = "sha256:39b61c15272833b5c329a2989999dcae836b1eed650252ab1b7bfbe1d59f30f4"},
+]
 
 [[package]]
 name = "pywin32-ctypes"
@@ -499,14 +1123,22 @@ description = ""
 category = "dev"
 optional = false
 python-versions = "*"
+files = [
+    {file = "pywin32-ctypes-0.2.0.tar.gz", hash = "sha256:24ffc3b341d457d48e8922352130cf2644024a4ff09762a2261fd34c36ee5942"},
+    {file = "pywin32_ctypes-0.2.0-py2.py3-none-any.whl", hash = "sha256:9dc2d991b3479cc2df15930958b674a48a227d5361d413827a4cfd0b5876fc98"},
+]
 
 [[package]]
 name = "readme-renderer"
-version = "35.0"
+version = "37.3"
 description = "readme_renderer is a library for rendering \"readme\" descriptions for Warehouse"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "readme_renderer-37.3-py3-none-any.whl", hash = "sha256:f67a16caedfa71eef48a31b39708637a6f4664c4394801a7b0d6432d13907343"},
+    {file = "readme_renderer-37.3.tar.gz", hash = "sha256:cd653186dfc73055656f090f227f5cb22a046d7f71a841dfa305f55c9a513273"},
+]
 
 [package.dependencies]
 bleach = ">=2.1.0"
@@ -518,17 +1150,19 @@ md = ["cmarkgfm (>=0.8.0)"]
 
 [[package]]
 name = "redis"
-version = "4.3.4"
+version = "4.5.5"
 description = "Python client for Redis database and key-value store"
 category = "main"
 optional = false
-python-versions = ">=3.6"
+python-versions = ">=3.7"
+files = [
+    {file = "redis-4.5.5-py3-none-any.whl", hash = "sha256:77929bc7f5dab9adf3acba2d3bb7d7658f1e0c2f1cafe7eb36434e751c471119"},
+    {file = "redis-4.5.5.tar.gz", hash = "sha256:dc87a0bdef6c8bfe1ef1e1c40be7034390c2ae02d92dcd0c7ca1729443899880"},
+]
 
 [package.dependencies]
-async-timeout = ">=4.0.2"
-deprecated = ">=1.2.3"
+async-timeout = {version = ">=4.0.2", markers = "python_full_version <= \"3.11.2\""}
 importlib-metadata = {version = ">=1.0", markers = "python_version < \"3.8\""}
-packaging = ">=20.4"
 typing-extensions = {version = "*", markers = "python_version < \"3.8\""}
 
 [package.extras]
@@ -537,29 +1171,37 @@ ocsp = ["cryptography (>=36.0.1)", "pyopenssl (==20.0.1)", "requests (>=2.26.0)"
 
 [[package]]
 name = "requests"
-version = "2.28.1"
+version = "2.30.0"
 description = "Python HTTP for Humans."
 category = "dev"
 optional = false
-python-versions = ">=3.7, <4"
+python-versions = ">=3.7"
+files = [
+    {file = "requests-2.30.0-py3-none-any.whl", hash = "sha256:10e94cc4f3121ee6da529d358cdaeaff2f1c409cd377dbc72b825852f2f7e294"},
+    {file = "requests-2.30.0.tar.gz", hash = "sha256:239d7d4458afcb28a692cdd298d87542235f4ca8d36d03a15bfc128a6559a2f4"},
+]
 
 [package.dependencies]
 certifi = ">=2017.4.17"
-charset-normalizer = ">=2,<3"
+charset-normalizer = ">=2,<4"
 idna = ">=2.5,<4"
-urllib3 = ">=1.21.1,<1.27"
+urllib3 = ">=1.21.1,<3"
 
 [package.extras]
 socks = ["PySocks (>=1.5.6,!=1.5.7)"]
-use_chardet_on_py3 = ["chardet (>=3.0.2,<6)"]
+use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
 
 [[package]]
 name = "requests-toolbelt"
-version = "0.9.1"
+version = "1.0.0"
 description = "A utility belt for advanced users of python-requests"
 category = "dev"
 optional = false
-python-versions = "*"
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+    {file = "requests-toolbelt-1.0.0.tar.gz", hash = "sha256:7681a0a3d047012b5bdc0ee37d7f8f07ebe76ab08caeccfc3921ce23c88d5bc6"},
+    {file = "requests_toolbelt-1.0.0-py2.py3-none-any.whl", hash = "sha256:cccfdd665f0a24fcf4726e690f65639d272bb0637b9b92dfd91a5568ccf6bd06"},
+]
 
 [package.dependencies]
 requests = ">=2.0.1,<3.0.0"
@@ -571,33 +1213,45 @@ description = "Validating URI References per RFC 3986"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "rfc3986-2.0.0-py2.py3-none-any.whl", hash = "sha256:50b1502b60e289cb37883f3dfd34532b8873c7de9f49bb546641ce9cbd256ebd"},
+    {file = "rfc3986-2.0.0.tar.gz", hash = "sha256:97aacf9dbd4bfd829baad6e6309fa6573aaf1be3f6fa735c8ab05e46cecb261c"},
+]
 
 [package.extras]
 idna2008 = ["idna"]
 
 [[package]]
 name = "rich"
-version = "12.5.1"
+version = "13.3.5"
 description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
 category = "dev"
 optional = false
-python-versions = ">=3.6.3,<4.0.0"
+python-versions = ">=3.7.0"
+files = [
+    {file = "rich-13.3.5-py3-none-any.whl", hash = "sha256:69cdf53799e63f38b95b9bf9c875f8c90e78dd62b2f00c13a911c7a3b9fa4704"},
+    {file = "rich-13.3.5.tar.gz", hash = "sha256:2d11b9b8dd03868f09b4fffadc84a6a8cda574e40dc90821bd845720ebb8e89c"},
+]
 
 [package.dependencies]
-commonmark = ">=0.9.0,<0.10.0"
-pygments = ">=2.6.0,<3.0.0"
+markdown-it-py = ">=2.2.0,<3.0.0"
+pygments = ">=2.13.0,<3.0.0"
 typing-extensions = {version = ">=4.0.0,<5.0", markers = "python_version < \"3.9\""}
 
 [package.extras]
-jupyter = ["ipywidgets (>=7.5.1,<8.0.0)"]
+jupyter = ["ipywidgets (>=7.5.1,<9)"]
 
 [[package]]
 name = "secretstorage"
-version = "3.3.2"
+version = "3.3.3"
 description = "Python bindings to FreeDesktop.org Secret Service API"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "SecretStorage-3.3.3-py3-none-any.whl", hash = "sha256:f356e6628222568e3af06f2eba8df495efa13b3b63081dafd4f7d9a7b7bc9f99"},
+    {file = "SecretStorage-3.3.3.tar.gz", hash = "sha256:2403533ef369eca6d2ba81718576c5e0f564d5cca1b58f73a8b23e7d4eeebd77"},
+]
 
 [package.dependencies]
 cryptography = ">=2.0"
@@ -610,6 +1264,10 @@ description = "Python 2 and 3 compatibility utilities"
 category = "main"
 optional = false
 python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
+files = [
+    {file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"},
+    {file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"},
+]
 
 [[package]]
 name = "sortedcontainers"
@@ -618,14 +1276,10 @@ description = "Sorted Containers -- Sorted List, Sorted Dict, Sorted Set"
 category = "main"
 optional = false
 python-versions = "*"
-
-[[package]]
-name = "toml"
-version = "0.10.2"
-description = "Python Library for Tom's Obvious, Minimal Language"
-category = "dev"
-optional = false
-python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
+files = [
+    {file = "sortedcontainers-2.4.0-py2.py3-none-any.whl", hash = "sha256:a163dcaede0f1c021485e957a39245190e74249897e2ae4b2aa38595db237ee0"},
+    {file = "sortedcontainers-2.4.0.tar.gz", hash = "sha256:25caa5a06cc30b6b83d11423433f65d1f9d76c4c6a0c90e3379eaa43b9bfdb88"},
+]
 
 [[package]]
 name = "tomli"
@@ -634,37 +1288,69 @@ description = "A lil' TOML parser"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
+    {file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
+]
 
 [[package]]
 name = "tox"
-version = "3.25.1"
+version = "4.5.1"
 description = "tox is a generic virtualenv management and test command line tool"
 category = "dev"
 optional = false
-python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
+python-versions = ">=3.7"
+files = [
+    {file = "tox-4.5.1-py3-none-any.whl", hash = "sha256:d25a2e6cb261adc489604fafd76cd689efeadfa79709965e965668d6d3f63046"},
+    {file = "tox-4.5.1.tar.gz", hash = "sha256:5a2eac5fb816779dfdf5cb00fecbc27eb0524e4626626bb1de84747b24cacc56"},
+]
 
 [package.dependencies]
-colorama = {version = ">=0.4.1", markers = "platform_system == \"Windows\""}
-filelock = ">=3.0.0"
-importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""}
-packaging = ">=14"
-pluggy = ">=0.12.0"
-py = ">=1.4.17"
-six = ">=1.14.0"
-toml = ">=0.9.4"
-virtualenv = ">=16.0.0,<20.0.0 || >20.0.0,<20.0.1 || >20.0.1,<20.0.2 || >20.0.2,<20.0.3 || >20.0.3,<20.0.4 || >20.0.4,<20.0.5 || >20.0.5,<20.0.6 || >20.0.6,<20.0.7 || >20.0.7"
+cachetools = ">=5.3"
+chardet = ">=5.1"
+colorama = ">=0.4.6"
+filelock = ">=3.11"
+importlib-metadata = {version = ">=6.4.1", markers = "python_version < \"3.8\""}
+packaging = ">=23.1"
+platformdirs = ">=3.2"
+pluggy = ">=1"
+pyproject-api = ">=1.5.1"
+tomli = {version = ">=2.0.1", markers = "python_version < \"3.11\""}
+typing-extensions = {version = ">=4.5", markers = "python_version < \"3.8\""}
+virtualenv = ">=20.21"
 
 [package.extras]
-docs = ["pygments-github-lexers (>=0.0.5)", "sphinx (>=2.0.0)", "sphinxcontrib-autoprogram (>=0.1.5)", "towncrier (>=18.5.0)"]
-testing = ["flaky (>=3.4.0)", "freezegun (>=0.3.11)", "pytest (>=4.0.0)", "pytest-cov (>=2.5.1)", "pytest-mock (>=1.10.0)", "pytest-randomly (>=1.0.0)", "psutil (>=5.6.1)", "pathlib2 (>=2.3.3)"]
+docs = ["furo (>=2023.3.27)", "sphinx (>=6.1.3)", "sphinx-argparse-cli (>=1.11)", "sphinx-autodoc-typehints (>=1.23,!=1.23.4)", "sphinx-copybutton (>=0.5.2)", "sphinx-inline-tabs (>=2022.1.2b11)", "sphinxcontrib-towncrier (>=0.2.1a0)", "towncrier (>=22.12)"]
+testing = ["build[virtualenv] (>=0.10)", "covdefaults (>=2.3)", "devpi-process (>=0.3)", "diff-cover (>=7.5)", "distlib (>=0.3.6)", "flaky (>=3.7)", "hatch-vcs (>=0.3)", "hatchling (>=1.14)", "psutil (>=5.9.4)", "pytest (>=7.3.1)", "pytest-cov (>=4)", "pytest-mock (>=3.10)", "pytest-xdist (>=3.2.1)", "re-assert (>=1.1)", "time-machine (>=2.9)", "wheel (>=0.40)"]
+
+[[package]]
+name = "tox-docker"
+version = "4.1.0"
+description = "Launch a docker instance around test runs"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+    {file = "tox-docker-4.1.0.tar.gz", hash = "sha256:0317e692dc80f2197eaf9c905dcb8d1d1f9d5bf2686ecfd83c22a1da9d23fb24"},
+    {file = "tox_docker-4.1.0-py2.py3-none-any.whl", hash = "sha256:444c72192a2443d2b4db5766545d4413ea683cc488523d770e2e216f15fa3086"},
+]
+
+[package.dependencies]
+docker = ">=4.0,<7.0"
+packaging = "*"
+tox = ">=3.0.0,<5.0"
 
 [[package]]
 name = "twine"
-version = "4.0.1"
+version = "4.0.2"
 description = "Collection of utilities for publishing packages on PyPI"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "twine-4.0.2-py3-none-any.whl", hash = "sha256:929bc3c280033347a00f847236564d1c52a3e61b1ac2516c97c48f3ceab756d8"},
+    {file = "twine-4.0.2.tar.gz", hash = "sha256:9e102ef5fdd5a20661eb88fad46338806c3bd32cf1db729603fe3697b1bc83c8"},
+]
 
 [package.dependencies]
 importlib-metadata = ">=3.6"
@@ -677,44 +1363,122 @@ rfc3986 = ">=1.4.0"
 rich = ">=12.0.0"
 urllib3 = ">=1.26.0"
 
+[[package]]
+name = "typed-ast"
+version = "1.5.4"
+description = "a fork of Python 2 and 3 ast modules with type comment support"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "typed_ast-1.5.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:669dd0c4167f6f2cd9f57041e03c3c2ebf9063d0757dc89f79ba1daa2bfca9d4"},
+    {file = "typed_ast-1.5.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:211260621ab1cd7324e0798d6be953d00b74e0428382991adfddb352252f1d62"},
+    {file = "typed_ast-1.5.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:267e3f78697a6c00c689c03db4876dd1efdfea2f251a5ad6555e82a26847b4ac"},
+    {file = "typed_ast-1.5.4-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c542eeda69212fa10a7ada75e668876fdec5f856cd3d06829e6aa64ad17c8dfe"},
+    {file = "typed_ast-1.5.4-cp310-cp310-win_amd64.whl", hash = "sha256:a9916d2bb8865f973824fb47436fa45e1ebf2efd920f2b9f99342cb7fab93f72"},
+    {file = "typed_ast-1.5.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:79b1e0869db7c830ba6a981d58711c88b6677506e648496b1f64ac7d15633aec"},
+    {file = "typed_ast-1.5.4-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a94d55d142c9265f4ea46fab70977a1944ecae359ae867397757d836ea5a3f47"},
+    {file = "typed_ast-1.5.4-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:183afdf0ec5b1b211724dfef3d2cad2d767cbefac291f24d69b00546c1837fb6"},
+    {file = "typed_ast-1.5.4-cp36-cp36m-win_amd64.whl", hash = "sha256:639c5f0b21776605dd6c9dbe592d5228f021404dafd377e2b7ac046b0349b1a1"},
+    {file = "typed_ast-1.5.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:cf4afcfac006ece570e32d6fa90ab74a17245b83dfd6655a6f68568098345ff6"},
+    {file = "typed_ast-1.5.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ed855bbe3eb3715fca349c80174cfcfd699c2f9de574d40527b8429acae23a66"},
+    {file = "typed_ast-1.5.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:6778e1b2f81dfc7bc58e4b259363b83d2e509a65198e85d5700dfae4c6c8ff1c"},
+    {file = "typed_ast-1.5.4-cp37-cp37m-win_amd64.whl", hash = "sha256:0261195c2062caf107831e92a76764c81227dae162c4f75192c0d489faf751a2"},
+    {file = "typed_ast-1.5.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2efae9db7a8c05ad5547d522e7dbe62c83d838d3906a3716d1478b6c1d61388d"},
+    {file = "typed_ast-1.5.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:7d5d014b7daa8b0bf2eaef684295acae12b036d79f54178b92a2b6a56f92278f"},
+    {file = "typed_ast-1.5.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:370788a63915e82fd6f212865a596a0fefcbb7d408bbbb13dea723d971ed8bdc"},
+    {file = "typed_ast-1.5.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:4e964b4ff86550a7a7d56345c7864b18f403f5bd7380edf44a3c1fb4ee7ac6c6"},
+    {file = "typed_ast-1.5.4-cp38-cp38-win_amd64.whl", hash = "sha256:683407d92dc953c8a7347119596f0b0e6c55eb98ebebd9b23437501b28dcbb8e"},
+    {file = "typed_ast-1.5.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:4879da6c9b73443f97e731b617184a596ac1235fe91f98d279a7af36c796da35"},
+    {file = "typed_ast-1.5.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3e123d878ba170397916557d31c8f589951e353cc95fb7f24f6bb69adc1a8a97"},
+    {file = "typed_ast-1.5.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ebd9d7f80ccf7a82ac5f88c521115cc55d84e35bf8b446fcd7836eb6b98929a3"},
+    {file = "typed_ast-1.5.4-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:98f80dee3c03455e92796b58b98ff6ca0b2a6f652120c263efdba4d6c5e58f72"},
+    {file = "typed_ast-1.5.4-cp39-cp39-win_amd64.whl", hash = "sha256:0fdbcf2fef0ca421a3f5912555804296f0b0960f0418c440f5d6d3abb549f3e1"},
+    {file = "typed_ast-1.5.4.tar.gz", hash = "sha256:39e21ceb7388e4bb37f4c679d72707ed46c2fbf2a5609b8b8ebc4b067d977df2"},
+]
+
+[[package]]
+name = "types-pyopenssl"
+version = "23.1.0.3"
+description = "Typing stubs for pyOpenSSL"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+    {file = "types-pyOpenSSL-23.1.0.3.tar.gz", hash = "sha256:e7211088eff3e20d359888dedecb0994f7181d5cce0f26354dd47ca0484dc8a6"},
+    {file = "types_pyOpenSSL-23.1.0.3-py3-none-any.whl", hash = "sha256:ad024b07a1f4bffbca44699543c71efd04733a6c22781fa9673a971e410a3086"},
+]
+
+[package.dependencies]
+cryptography = ">=35.0.0"
+
+[[package]]
+name = "types-redis"
+version = "4.5.5.2"
+description = "Typing stubs for redis"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+    {file = "types-redis-4.5.5.2.tar.gz", hash = "sha256:2fe82f374d9dddf007deaf23d81fddcfd9523d9522bf11523c5c43bc5b27099e"},
+    {file = "types_redis-4.5.5.2-py3-none-any.whl", hash = "sha256:bf8692252038dbe03b007ca4fde87d3ae8e10610854a6858e3bf5d01721a7c4b"},
+]
+
+[package.dependencies]
+cryptography = ">=35.0.0"
+types-pyOpenSSL = "*"
+
 [[package]]
 name = "typing-extensions"
-version = "4.3.0"
+version = "4.5.0"
 description = "Backported and Experimental Type Hints for Python 3.7+"
 category = "main"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "typing_extensions-4.5.0-py3-none-any.whl", hash = "sha256:fb33085c39dd998ac16d1431ebc293a8b3eedd00fd4a32de0ff79002c19511b4"},
+    {file = "typing_extensions-4.5.0.tar.gz", hash = "sha256:5cb5f4a79139d699607b3ef622a1dedafa84e115ab0024e0d9c044a9479ca7cb"},
+]
 
 [[package]]
 name = "urllib3"
-version = "1.26.11"
+version = "2.0.2"
 description = "HTTP library with thread-safe connection pooling, file post, and more."
 category = "dev"
 optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, <4"
+python-versions = ">=3.7"
+files = [
+    {file = "urllib3-2.0.2-py3-none-any.whl", hash = "sha256:d055c2f9d38dc53c808f6fdc8eab7360b6fdbbde02340ed25cfbcd817c62469e"},
+    {file = "urllib3-2.0.2.tar.gz", hash = "sha256:61717a1095d7e155cdb737ac7bb2f4324a858a1e2e6466f6d03ff630ca68d3cc"},
+]
 
 [package.extras]
-brotli = ["brotlicffi (>=0.8.0)", "brotli (>=1.0.9)", "brotlipy (>=0.6.0)"]
-secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"]
-socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
+brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"]
+secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"]
+socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
+zstd = ["zstandard (>=0.18.0)"]
 
 [[package]]
 name = "virtualenv"
-version = "20.16.2"
+version = "20.23.0"
 description = "Virtual Python Environment builder"
 category = "dev"
 optional = false
-python-versions = ">=3.6"
+python-versions = ">=3.7"
+files = [
+    {file = "virtualenv-20.23.0-py3-none-any.whl", hash = "sha256:6abec7670e5802a528357fdc75b26b9f57d5d92f29c5462ba0fbe45feacc685e"},
+    {file = "virtualenv-20.23.0.tar.gz", hash = "sha256:a85caa554ced0c0afbd0d638e7e2d7b5f92d23478d05d17a76daeac8f279f924"},
+]
 
 [package.dependencies]
-distlib = ">=0.3.1,<1"
-filelock = ">=3.2,<4"
-importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""}
-platformdirs = ">=2,<3"
+distlib = ">=0.3.6,<1"
+filelock = ">=3.11,<4"
+importlib-metadata = {version = ">=6.4.1", markers = "python_version < \"3.8\""}
+platformdirs = ">=3.2,<4"
 
 [package.extras]
-docs = ["proselint (>=0.10.2)", "sphinx (>=3)", "sphinx-argparse (>=0.2.5)", "sphinx-rtd-theme (>=0.4.3)", "towncrier (>=21.3)"]
-testing = ["coverage (>=4)", "coverage-enable-subprocess (>=1)", "flaky (>=3)", "packaging (>=20.0)", "pytest (>=4)", "pytest-env (>=0.6.2)", "pytest-freezegun (>=0.4.1)", "pytest-mock (>=2)", "pytest-randomly (>=1)", "pytest-timeout (>=1)"]
+docs = ["furo (>=2023.3.27)", "proselint (>=0.13)", "sphinx (>=6.1.3)", "sphinx-argparse (>=0.4)", "sphinxcontrib-towncrier (>=0.2.1a0)", "towncrier (>=22.12)"]
+test = ["covdefaults (>=2.3)", "coverage (>=7.2.3)", "coverage-enable-subprocess (>=1)", "flaky (>=3.7)", "packaging (>=23.1)", "pytest (>=7.3.1)", "pytest-env (>=0.8.1)", "pytest-freezegun (>=0.4.2)", "pytest-mock (>=3.10)", "pytest-randomly (>=3.12)", "pytest-timeout (>=2.1)", "setuptools (>=67.7.1)", "time-machine (>=2.9)"]
 
 [[package]]
 name = "webencodings"
@@ -723,95 +1487,49 @@ description = "Character encoding aliases for legacy web content"
 category = "dev"
 optional = false
 python-versions = "*"
+files = [
+    {file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
+    {file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
+]
 
 [[package]]
-name = "wrapt"
-version = "1.14.1"
-description = "Module for decorators, wrappers and monkey patching."
-category = "main"
+name = "websocket-client"
+version = "1.5.1"
+description = "WebSocket client for Python with low level API options"
+category = "dev"
 optional = false
-python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
+python-versions = ">=3.7"
+files = [
+    {file = "websocket-client-1.5.1.tar.gz", hash = "sha256:3f09e6d8230892547132177f575a4e3e73cfdf06526e20cc02aa1c3b47184d40"},
+    {file = "websocket_client-1.5.1-py3-none-any.whl", hash = "sha256:cdf5877568b7e83aa7cf2244ab56a3213de587bbe0ce9d8b9600fc77b455d89e"},
+]
+
+[package.extras]
+docs = ["Sphinx (>=3.4)", "sphinx-rtd-theme (>=0.5)"]
+optional = ["python-socks", "wsaccel"]
+test = ["websockets"]
 
 [[package]]
 name = "zipp"
-version = "3.8.1"
+version = "3.15.0"
 description = "Backport of pathlib-compatible object wrapper for zip files"
 category = "main"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "zipp-3.15.0-py3-none-any.whl", hash = "sha256:48904fc76a60e542af151aded95726c1a5c34ed43ab4134b597665c86d7ad556"},
+    {file = "zipp-3.15.0.tar.gz", hash = "sha256:112929ad649da941c23de50f356a2b5570c954b65150642bccdd66bf194d224b"},
+]
 
 [package.extras]
-docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
-testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.3)", "jaraco.itertools", "func-timeout", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
+docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"]
+testing = ["big-O", "flake8 (<5)", "jaraco.functools", "jaraco.itertools", "more-itertools", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
 
 [extras]
-aioredis = ["aioredis"]
+json = ["jsonpath-ng"]
 lua = ["lupa"]
 
 [metadata]
-lock-version = "1.1"
+lock-version = "2.0"
 python-versions = "^3.7"
-content-hash = "7d7877983e79eff5e022268e408a2e7188ad5957e82aa4734e25134e75e9999c"
-
-[metadata.files]
-aioredis = []
-async-timeout = []
-atomicwrites = []
-attrs = []
-bleach = []
-certifi = []
-cffi = []
-charset-normalizer = []
-colorama = []
-commonmark = []
-coverage = []
-cryptography = []
-deprecated = []
-distlib = []
-docutils = []
-exceptiongroup = []
-filelock = []
-flake8 = []
-hypothesis = []
-idna = []
-importlib-metadata = []
-iniconfig = []
-invoke = []
-jeepney = []
-keyring = []
-lupa = []
-mccabe = []
-packaging = []
-pkginfo = []
-platformdirs = []
-pluggy = []
-py = []
-pycodestyle = []
-pycparser = []
-pyflakes = []
-pygments = []
-pyparsing = []
-pytest = []
-pytest-asyncio = []
-pytest-cov = []
-pytest-mock = []
-pywin32-ctypes = []
-readme-renderer = []
-redis = []
-requests = []
-requests-toolbelt = []
-rfc3986 = []
-rich = []
-secretstorage = []
-six = []
-sortedcontainers = []
-toml = []
-tomli = []
-tox = []
-twine = []
-typing-extensions = []
-urllib3 = []
-virtualenv = []
-webencodings = []
-wrapt = []
-zipp = []
+content-hash = "83c8ae193beb12e757db8b2f8dee9640e97699592add305b4cd369e27ffac663"
diff --git a/pyproject.toml b/pyproject.toml
index 870594c..cbef752 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,5 +1,5 @@
 [build-system]
-requires = ["poetry_core>=1.0.0"]
+requires = ["poetry_core"]
 build-backend = "poetry.core.masonry.api"
 
 
@@ -8,16 +8,17 @@ name = "fakeredis"
 packages = [
     { include = "fakeredis" },
 ]
-version = "1.9.0"
+version = "2.13.0"
 description = "Fake implementation of redis API for testing purposes."
 readme = "README.md"
-keywords = ["redis", "rq", "django-rq", "rq-scheduler"]
+keywords = ["redis", "RedisJson", ]
 authors = [
+    "Daniel Moran <daniel.maruani@gmail.com>",
+    "Bruce Merry <bmerry@ska.ac.za>",
     "James Saryerwinnie <js@jamesls.com>",
-    "Bruce Merry <bmerry@ska.ac.za>"
 ]
 maintainers = [
-    "Daniel Moran <daniel.maruani@gmail.com>"
+    "Daniel Moran <daniel.maruani@gmail.com>",
 ]
 license = "BSD-3-Clause"
 classifiers = [
@@ -27,42 +28,48 @@ classifiers = [
     'License :: OSI Approved :: BSD License',
     'Operating System :: OS Independent',
     'Programming Language :: Python',
-    'Programming Language :: Python :: 3.7',
     'Programming Language :: Python :: 3.8',
     'Programming Language :: Python :: 3.9',
     'Programming Language :: Python :: 3.10',
+    'Programming Language :: Python :: 3.11',
     'Topic :: Software Development :: Libraries :: Python Modules',
 ]
 homepage = "https://github.com/cunla/fakeredis-py"
 repository = "https://github.com/cunla/fakeredis-py"
+documentation = "https://fakeredis.readthedocs.io/"
+include = [
+     { path = "test", format = "sdist" },
+]
 
 [tool.poetry.dependencies]
 python = "^3.7"
-redis = "<4.4"
-six = "^1.16.0"
-sortedcontainers = "^2.4.0"
-lupa = { version = "^1.13", optional = true }
-aioredis = { version = "^2.0.1", optional = true }
+redis = ">=4"
+sortedcontainers = "^2.4"
+lupa = { version = "^1.14", optional = true }
+jsonpath-ng = { version = "^1.5", optional = true }
 
 [tool.poetry.extras]
 lua = ["lupa"]
-aioredis = ["aioredis"]
+json = ["jsonpath-ng"]
 
 [tool.poetry.dev-dependencies]
-invoke = "^1.7.1"
-wheel = "^0.37.1"
-hypothesis = "^6.47.4"
-tox = "^3.25.0"
-twine = "4.0.1"
-coverage = "^6.3"
-pytest = "^7.1.2"
-pytest-asyncio = "0.19.0"
-pytest-cov = "^3.0.0"
-pytest-mock = "^3.7.0"
-flake8 = "^4.0.1"
+hypothesis = "^6.70"
+coverage = "^7"
+pytest = "^7.2"
+pytest-asyncio = "^0.21"
+pytest-cov = "^4.0"
+pytest-mock = "^3.10"
+flake8 = { version = "^6.0", python = ">=3.8.1" }
+mypy = "^1"
+types-redis = ">=4.0"
+twine = "^4.0" # Upload to pypi
+tox = "^4.4"
+tox-docker = "^4"
 
 [tool.poetry.urls]
 "Bug Tracker" = "https://github.com/cunla/fakeredis-py/issues"
+"Funding" = "https://github.com/sponsors/cunla"
+
 
 [tool.pytest.ini_options]
 markers = [
@@ -74,4 +81,14 @@ markers = [
     "max_server",
     "decode_responses",
 ]
-asyncio_mode="strict"
\ No newline at end of file
+asyncio_mode = "strict"
+
+[tool.mypy]
+files = [
+    "fakeredis/**/*.py",
+    "test/**/*.py",
+]
+packages = ['fakeredis', ]
+follow_imports = "silent"
+ignore_missing_imports = true
+scripts_are_modules = true
diff --git a/scripts/create_issues.py b/scripts/create_issues.py
new file mode 100644
index 0000000..5a57591
--- /dev/null
+++ b/scripts/create_issues.py
@@ -0,0 +1,124 @@
+"""
+Script to create issue for every unsupported command.
+"""
+import os
+
+from dotenv import load_dotenv
+from github import Github
+
+from supported import download_redis_commands, implemented_commands
+
+load_dotenv()  # take environment variables from .env.
+
+IGNORE_GROUPS = {
+    'server', 'cluster', 'connection', 'list', 'bitmap',
+    'sorted-set', 'generic', 'scripting', 'hyperloglog', 'pubsub',
+    'stream', 'graph', 'timeseries', 'search', 'suggestion',
+    'bf', 'cf', 'cms', 'topk',
+    'tdigest',
+}
+IGNORE_COMMANDS = {
+    'PUBSUB HELP',
+    'OBJECT HELP',
+    'FUNCTION HELP',
+    'SCRIPT HELP',
+    'XGROUP HELP',
+    'XINFO HELP',
+    'JSON.DEBUG HELP',
+    'JSON.DEBUG MEMORY',
+    'JSON.DEBUG',
+    'JSON.TYPE',
+    'JSON.RESP',
+}
+
+
+def commands_groups(
+        all_commands: dict, implemented_set: set
+) -> tuple[dict[str, list[str]], dict[str, list[str]]]:
+    implemented, unimplemented = dict(), dict()
+    for cmd in all_commands:
+        if cmd.upper() in IGNORE_COMMANDS:
+            continue
+        group = all_commands[cmd]['group']
+        unimplemented.setdefault(group, [])
+        implemented.setdefault(group, [])
+        if cmd in implemented_set:
+            implemented[group].append(cmd)
+        else:
+            unimplemented[group].append(cmd)
+    return implemented, unimplemented
+
+
+def get_unimplemented_and_implemented_commands() -> tuple[dict[str, list[str]], dict[str, list[str]]]:
+    """Returns 2 dictionaries, one of unimplemented commands and another of implemented commands
+
+    """
+    commands = download_redis_commands()
+    implemented_commands_set = implemented_commands()
+    implemented_dict, unimplemented_dict = commands_groups(commands, implemented_commands_set)
+    groups = sorted(implemented_dict.keys(), key=lambda x: len(unimplemented_dict[x]))
+    for group in groups:
+        unimplemented_count = len(unimplemented_dict[group])
+        total_count = len(implemented_dict.get(group)) + unimplemented_count
+        print(f'{group} has {unimplemented_count}/{total_count} unimplemented commands')
+    return unimplemented_dict, implemented_dict
+
+
+class GithubData:
+    def __init__(self, dry=True):
+        token = os.getenv('GITHUB_TOKEN', None)
+        g = Github(token)
+        self.dry = dry or (token is None)
+        self.gh_repo = g.get_repo('cunla/fakeredis')
+        open_issues = self.gh_repo.get_issues(state='open')
+        self.issues = {i.title: i.number for i in open_issues}
+        gh_labels = self.gh_repo.get_labels()
+        self.labels = {label.name for label in gh_labels}
+
+    def create_label(self, name):
+        if self.dry:
+            print(f'Creating label "{name}"')
+        else:
+            self.gh_repo.create_label(name, "f29513")
+        self.labels.add(name)
+
+    def create_issue(self, group: str, cmd: str, summary: str):
+        link = f"https://redis.io/commands/{cmd.replace(' ', '-')}/"
+        title = f"Implement support for `{cmd.upper()}` ({group} command)"
+        filename = f'{group}_mixin.py'
+        body = f"""
+Implement support for command `{cmd.upper()}` in (unknown).
+
+{summary}.
+
+Here is the [Official documentation]({link})"""
+        labels = [f'{group}-commands', 'enhancement', 'help wanted']
+        for label in labels:
+            if label not in self.labels:
+                self.create_label(label)
+        if title in self.issues:
+            return
+        if self.dry:
+            print(f'Creating issue with title "{title}" and labels {labels}')
+        else:
+            self.gh_repo.create_issue(title, body, labels=labels)
+
+
+def print_gh_commands(commands: dict, unimplemented: dict):
+    gh = GithubData()
+    for group in unimplemented:
+        if group in IGNORE_GROUPS:
+            continue
+        print(f'### Creating issues for {group} commands')
+        for cmd in unimplemented[group]:
+            if cmd.upper() in IGNORE_COMMANDS:
+                continue
+            summary = commands[cmd]['summary']
+            gh.create_issue(group, cmd, summary)
+
+
+if __name__ == '__main__':
+    commands = download_redis_commands()
+    unimplemented_dict, _ = get_unimplemented_and_implemented_commands()
+    print_gh_commands(commands, unimplemented_dict)
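The bucketing performed by `commands_groups` in the script above can be sketched standalone. This is a simplified sketch with toy data, not the real commands.json; the real function also seeds empty lists for every group in both dicts:

```python
from collections import defaultdict

def group_commands(all_commands: dict, implemented_set: set):
    """Split commands into per-group implemented/unimplemented buckets."""
    implemented, unimplemented = defaultdict(list), defaultdict(list)
    for cmd, meta in all_commands.items():
        bucket = implemented if cmd in implemented_set else unimplemented
        bucket[meta['group']].append(cmd)
    return dict(implemented), dict(unimplemented)

# Toy data for illustration only
commands = {
    'get': {'group': 'string'},
    'set': {'group': 'string'},
    'json.get': {'group': 'json'},
}
impl, unimpl = group_commands(commands, {'get', 'set'})
# impl == {'string': ['get', 'set']}, unimpl == {'json': ['json.get']}
```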
diff --git a/scripts/supported.py b/scripts/supported.py
index 97098da..ed10d20 100755
--- a/scripts/supported.py
+++ b/scripts/supported.py
@@ -1,39 +1,42 @@
-# Script will import fakeredis and list what
-# commands it supports and what commands
-# it does not have support for, based on the
-# command list from:
-# https://raw.github.com/antirez/redis-doc/master/commands.json
-# Because, who wants to do this by hand...
-
-import inspect
 import json
 import os
 
 import requests
 
-import fakeredis
+from fakeredis._commands import SUPPORTED_COMMANDS
 
 THIS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)))
-COMMANDS_FILE = os.path.join(THIS_DIR, '.commands.json')
-COMMANDS_URL = 'https://raw.github.com/antirez/redis-doc/master/commands.json'
+COMMAND_FILES = [
+    ('.commands.json', 'https://raw.githubusercontent.com/redis/redis-doc/master/commands.json'),
+    ('.json.commands.json', 'https://raw.githubusercontent.com/RedisJSON/RedisJSON/master/commands.json'),
+    ('.graph.commands.json', 'https://raw.githubusercontent.com/RedisGraph/RedisGraph/master/commands.json'),
+    ('.ts.commands.json', 'https://raw.githubusercontent.com/RedisTimeSeries/RedisTimeSeries/master/commands.json'),
+    ('.ft.commands.json', 'https://raw.githubusercontent.com/RediSearch/RediSearch/master/commands.json'),
+    ('.bloom.commands.json', 'https://raw.githubusercontent.com/RedisBloom/RedisBloom/master/commands.json'),
+]
+
+TARGET_FILES = {
+    'unimplemented': 'docs/redis-commands/unimplemented_commands.md',
+    'implemented': 'docs/redis-commands/implemented_commands.md',
+}
 
 
 def download_redis_commands() -> dict:
-    if not os.path.exists(COMMANDS_FILE):
-        contents = requests.get(COMMANDS_URL).content
-        open(COMMANDS_FILE, 'wb').write(contents)
-    cmds = json.load(open(COMMANDS_FILE))
-    cmds = {k.lower(): v for k, v in cmds.items()}
+    cmds = {}
+    for filename, url in COMMAND_FILES:
+        full_filename = os.path.join(THIS_DIR, filename)
+        if not os.path.exists(full_filename):
+            contents = requests.get(url).content
+            open(full_filename, 'wb').write(contents)
+        curr_cmds = json.load(open(full_filename))
+        cmds = cmds | {k.lower(): v for k, v in curr_cmds.items()}
     return cmds
 
 
 def implemented_commands() -> set:
-    res = {name
-           for name, method in inspect.getmembers(fakeredis._server.FakeSocket)
-           if hasattr(method, '_fakeredis_sig')
-           }
-    # Currently no programmatic way to discover implemented subcommands
-    res.add('script load')
+    res = set(SUPPORTED_COMMANDS.keys())
+    if 'json.type' not in res:
+        raise ValueError('Make sure jsonpath_ng is installed to get accurate documentation')
     return res
 
 
@@ -43,36 +46,59 @@ def commands_groups(
     implemented, unimplemented = dict(), dict()
     for cmd in all_commands:
         group = all_commands[cmd]['group']
+        unimplemented.setdefault(group, [])
+        implemented.setdefault(group, [])
         if cmd in implemented_set:
-            implemented.setdefault(group, []).append(cmd)
+            implemented[group].append(cmd)
         else:
-            unimplemented.setdefault(group, []).append(cmd)
+            unimplemented[group].append(cmd)
     return implemented, unimplemented
 
 
-def print_unimplemented_commands(implemented: dict, unimplemented: dict) -> None:
-    def print_groups(dictionary: dict):
+def generate_markdown_files(
+        all_commands: dict,
+        unimplemented: dict,
+        implemented: dict) -> None:
+    def print_groups(dictionary: dict, f):
         for group in dictionary:
-            print(f'### {group}')
+            f.write(f'## {group} commands\n\n')
             for cmd in dictionary[group]:
-                print(f" * {cmd}")
-            print()
+                f.write(f"### [{cmd.upper()}](https://redis.io/commands/{cmd.replace(' ', '-')}/)\n\n")
+                f.write(f"{all_commands[cmd]['summary']}\n\n")
+            f.write("\n")
+
+    supported_commands_file = open(TARGET_FILES['implemented'], 'w')
+    supported_commands_file.write("""# Supported commands
 
-    print("""-----
 Here is a list of all redis [implemented commands](#implemented-commands) and a
 list of [unimplemented commands](#unimplemented-commands).
+
+------\n\n
 """)
-    print("""# Implemented Commands""")
-    print_groups(implemented)
+    print_groups(implemented, supported_commands_file)
 
-    print("""# Unimplemented Commands
-All of the redis commands are implemented in fakeredis with these exceptions:
-    """)
-    print_groups(unimplemented)
+    unimplemented_cmds_file = open(TARGET_FILES['unimplemented'], 'w')
+    unimplemented_cmds_file.write("""# Unimplemented Commands
+All the redis commands are implemented in fakeredis with these exceptions:\n\n""")
+    print_groups(unimplemented, unimplemented_cmds_file)
 
 
-if __name__ == '__main__':
+def get_unimplemented_and_implemented_commands() -> tuple[dict[str, list[str]], dict[str, list[str]]]:
+    """Returns 2 dictionaries, one of unimplemented commands and another of implemented commands
+
+    """
     commands = download_redis_commands()
     implemented_commands_set = implemented_commands()
-    unimplemented_dict, implemented_dict = commands_groups(commands, implemented_commands_set)
-    print_unimplemented_commands(unimplemented_dict, implemented_dict)
+    implemented_dict, unimplemented_dict = commands_groups(commands, implemented_commands_set)
+    groups = sorted(implemented_dict.keys(), key=lambda x: len(unimplemented_dict[x]))
+    for group in groups:
+        unimplemented_count = len(unimplemented_dict[group])
+        total_count = len(implemented_dict.get(group)) + unimplemented_count
+        print(f'{group} has {unimplemented_count}/{total_count} unimplemented commands')
+    return unimplemented_dict, implemented_dict
+
+
+if __name__ == '__main__':
+    commands = download_redis_commands()
+    unimplemented_dict, implemented_dict = get_unimplemented_and_implemented_commands()
+    generate_markdown_files(commands, unimplemented_dict, implemented_dict)
diff --git a/scripts/supported2.py b/scripts/supported2.py
new file mode 100644
index 0000000..036be0a
--- /dev/null
+++ b/scripts/supported2.py
@@ -0,0 +1,87 @@
+import json
+import os
+from collections import namedtuple
+
+import requests
+
+from fakeredis._commands import SUPPORTED_COMMANDS
+
+THIS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)))
+CommandsMeta = namedtuple('CommandsMeta', ['local_filename', 'stack', 'title', 'url', ])
+METADATA = [
+    CommandsMeta('.commands.json', 'Redis', 'Redis',
+                 'https://raw.githubusercontent.com/redis/redis-doc/master/commands.json', ),
+    CommandsMeta('.json.commands.json', 'RedisJson', 'JSON',
+                 'https://raw.githubusercontent.com/RedisJSON/RedisJSON/master/commands.json', ),
+    CommandsMeta('.graph.commands.json', 'RedisGraph', 'Graph',
+                 'https://raw.githubusercontent.com/RedisGraph/RedisGraph/master/commands.json', ),
+    CommandsMeta('.ts.commands.json', 'RedisTimeSeries', 'Time Series',
+                 'https://raw.githubusercontent.com/RedisTimeSeries/RedisTimeSeries/master/commands.json', ),
+    CommandsMeta('.ft.commands.json', 'RedisSearch', 'Search',
+                 'https://raw.githubusercontent.com/RediSearch/RediSearch/master/commands.json', ),
+    CommandsMeta('.bloom.commands.json', 'RedisBloom', 'Probabilistic',
+                 'https://raw.githubusercontent.com/RedisBloom/RedisBloom/master/commands.json', ),
+]
+
+
+def download_single_stack_commands(filename, url) -> dict:
+    full_filename = os.path.join(THIS_DIR, filename)
+    if not os.path.exists(full_filename):
+        contents = requests.get(url).content
+        open(full_filename, 'wb').write(contents)
+    curr_cmds = json.load(open(full_filename))
+    cmds = {k.lower(): v for k, v in curr_cmds.items()}
+    return cmds
+
+
+def implemented_commands() -> set:
+    res = set(SUPPORTED_COMMANDS.keys())
+    if 'json.type' not in res:
+        raise ValueError('Make sure jsonpath_ng is installed to get accurate documentation')
+    return res
+
+
+def _commands_groups(commands: dict) -> dict[str, list[str]]:
+    groups = dict()
+    for cmd in commands:
+        group = commands[cmd]['group']
+        groups.setdefault(group, []).append(cmd)
+    return groups
+
+
+def generate_markdown_files(commands: dict, implemented_commands: set[str], stack: str, filename: str) -> None:
+    groups = _commands_groups(commands)
+    f = open(filename, 'w')
+    f.write(f'# {stack} commands\n\n')
+    implemented_count = 0
+    for group in groups:
+        implemented_count = sum(1 for cmd in groups[group] if cmd in implemented_commands)
+        if implemented_count > 0:
+            break
+    if implemented_count == 0:
+        f.write('Module currently not implemented in fakeredis.\n\n')
+    for group in groups:
+        implemented_in_group = list(filter(lambda cmd: cmd in implemented_commands, groups[group]))
+        if len(implemented_in_group) > 0:
+            f.write(f'## {group} commands\n\n')
+        for cmd in implemented_in_group:
+            f.write(f"### [{cmd.upper()}](https://redis.io/commands/{cmd.replace(' ', '-')}/)\n\n")
+            f.write(f"{commands[cmd]['summary']}\n\n")
+        f.write("\n")
+        unimplemented_in_group = list(filter(lambda cmd: cmd not in implemented_commands, groups[group]))
+        if len(unimplemented_in_group) > 0:
+            f.write(f'### Unsupported {group} commands \n')
+            f.write(f'> To implement support for a command, see [here](/guides/implement-command/) \n\n')
+            for cmd in unimplemented_in_group:
+                f.write(f"#### [{cmd.upper()}](https://redis.io/commands/{cmd.replace(' ', '-')}/)"
+                        f" <small>(not implemented)</small>\n\n")
+                f.write(f"{commands[cmd]['summary']}\n\n")
+        f.write("\n")
+
+
+if __name__ == '__main__':
+    implemented = implemented_commands()
+    for cmd_meta in METADATA:
+        cmds = download_single_stack_commands(cmd_meta.local_filename, cmd_meta.url)
+        markdown_filename = f'docs/redis-commands/{cmd_meta.stack}.md'
+        generate_markdown_files(cmds, implemented, cmd_meta.title, markdown_filename)
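The download-once caching in `download_single_stack_commands` above can be sketched with a stand-in `fetch` callable; the names `load_cached` and `fetch` are illustrative, not part of the script:

```python
import json
import os
import tempfile

def load_cached(path: str, fetch):
    """Fetch once, then reuse the on-disk copy (mirrors the script's
    check of os.path.exists before downloading commands.json)."""
    if not os.path.exists(path):
        with open(path, 'w') as f:
            json.dump(fetch(), f)
    with open(path) as f:
        return {k.lower(): v for k, v in json.load(f).items()}

calls = {'n': 0}

def fetch():
    calls['n'] += 1
    return {'GET': {'group': 'string'}}

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, '.commands.json')
    first = load_cached(path, fetch)
    second = load_cached(path, fetch)  # served from disk; fetch not called again
# calls['n'] == 1; command names are lower-cased on load
```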
diff --git a/test/__init__.py b/test/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/test/conftest.py b/test/conftest.py
index 71b5b11..ad09d69 100644
--- a/test/conftest.py
+++ b/test/conftest.py
@@ -1,3 +1,5 @@
+from typing import Callable, Union
+
 import pytest
 import pytest_asyncio
 import redis
@@ -7,20 +9,22 @@ import fakeredis
 
 
 @pytest_asyncio.fixture(scope="session")
-def is_redis_running():
+def real_redis_version() -> Union[None, str]:
+    """Returns server version or None if server is not running"""
+    client = None
     try:
-        r = redis.StrictRedis('localhost', port=6379)
-        r.ping()
-        return True
+        client = redis.StrictRedis('localhost', port=6379)
+        server_version = client.info()['redis_version']
+        return server_version
     except redis.ConnectionError:
-        return False
+        return None
     finally:
-        if hasattr(r, 'close'):
-            r.close()  # Absent in older versions of redis-py
+        if hasattr(client, 'close'):
+            client.close()  # Absent in older versions of redis-py
 
 
-@pytest_asyncio.fixture
-def fake_server(request):
+@pytest_asyncio.fixture(name='fake_server')
+def _fake_server(request):
     min_server_marker = request.node.get_closest_marker('min_server')
     server_version = 6
     if min_server_marker and min_server_marker.args[0].startswith('7'):
@@ -31,7 +35,7 @@ def fake_server(request):
 
 
 @pytest_asyncio.fixture
-def r(request, create_redis):
+def r(request, create_redis) -> redis.Redis:
     rconn = create_redis(db=0)
     connected = request.node.get_closest_marker('disconnected') is None
     if connected:
@@ -43,41 +47,41 @@ def r(request, create_redis):
         rconn.close()  # Older versions of redis-py don't have this method
 
 
+def _marker_version_value(request, marker_name: str):
+    marker_value = request.node.get_closest_marker(marker_name)
+    if marker_value is None:
+        return Version(str(0 if marker_name == 'min_server' else 100))
+    return Version(marker_value.args[0])
+
+
 @pytest_asyncio.fixture(
+    name='create_redis',
     params=[
         pytest.param('StrictRedis', marks=pytest.mark.real),
         pytest.param('FakeStrictRedis', marks=pytest.mark.fake),
     ]
 )
-def create_redis(request):
-    name = request.param
-    if not name.startswith('Fake') and not request.getfixturevalue('is_redis_running'):
+def _create_redis(request) -> Callable[[int], redis.Redis]:
+    cls_name = request.param
+    server_version = request.getfixturevalue('real_redis_version')
+    if not cls_name.startswith('Fake') and not server_version:
         pytest.skip('Redis is not running')
+    server_version = server_version or '6'
+    min_server = _marker_version_value(request, 'min_server')
+    max_server = _marker_version_value(request, 'max_server')
+    if Version(server_version) < min_server:
+        pytest.skip(f'Redis server {min_server.base_version} or more required but {server_version} found')
+    if Version(server_version) > max_server:
+        pytest.skip(f'Redis server {max_server.base_version} or less required but {server_version} found')
     decode_responses = request.node.get_closest_marker('decode_responses') is not None
 
     def factory(db=0):
-        if name.startswith('Fake'):
+        if cls_name.startswith('Fake'):
             fake_server = request.getfixturevalue('fake_server')
-            cls = getattr(fakeredis, name)
+            cls = getattr(fakeredis, cls_name)
             return cls(db=db, decode_responses=decode_responses, server=fake_server)
-        else:
-            cls = getattr(redis, name)
-            conn = cls('localhost', port=6379, db=db, decode_responses=decode_responses)
-            server_version = conn.info()['redis_version']
-            min_server_marker = request.node.get_closest_marker('min_server')
-            if min_server_marker is not None:
-                min_version = Version(min_server_marker.args[0])
-                if Version(server_version) < min_version:
-                    pytest.skip(
-                        'Redis server {} or more required but {} found'.format(min_version, server_version)
-                    )
-            max_server_marker = request.node.get_closest_marker('max_server')
-            if max_server_marker is not None:
-                max_server = Version(max_server_marker.args[0])
-                if Version(server_version) > max_server:
-                    pytest.skip(
-                        'Redis server {} or less required but {} found'.format(max_server, server_version)
-                    )
-            return conn
+        # Real
+        cls = getattr(redis, cls_name)
+        return cls('localhost', port=6379, db=db, decode_responses=decode_responses)
 
     return factory
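The min_server/max_server gating the new conftest.py performs can be sketched with plain tuple comparison; the real fixture uses `packaging.version.Version` and calls `pytest.skip`, so this is only an assumed simplification of the comparison itself:

```python
def _ver(s: str) -> tuple:
    # Simplified parse; the real conftest uses packaging.version.Version
    return tuple(int(p) for p in s.split('.'))

def should_skip(server_version: str, min_server: str = '0', max_server: str = '999') -> bool:
    """True when the running server falls outside [min_server, max_server]."""
    v = _ver(server_version)
    return v < _ver(min_server) or v > _ver(max_server)

# e.g. a test marked @pytest.mark.min_server('7') skips on Redis 6.2.7
should_skip('6.2.7', min_server='7')   # True
should_skip('7.0.5', min_server='7')   # False
```

The two-sided check replaces the duplicated min/max marker handling that the diff removes from the old `create_redis` fixture.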
diff --git a/test/test_aioredis1.py b/test/test_aioredis1.py
deleted file mode 100644
index f95276e..0000000
--- a/test/test_aioredis1.py
+++ /dev/null
@@ -1,167 +0,0 @@
-import asyncio
-
-import pytest
-import pytest_asyncio
-from packaging.version import Version
-
-import testtools
-
-pytestmark = [
-    testtools.run_test_if_redis_ver('below', '4.2'),
-]
-
-aioredis = pytest.importorskip("aioredis")
-
-import fakeredis.aioredis
-
-aioredis2 = Version(aioredis.__version__) >= Version('2.0.0a1')
-pytestmark.extend([
-    pytest.mark.asyncio,
-    pytest.mark.skipif(aioredis2, reason="Test is only applicable to aioredis 1.x"),
-])
-
-
-@pytest_asyncio.fixture(
-    params=[
-        pytest.param('fake', marks=pytest.mark.fake),
-        pytest.param('real', marks=pytest.mark.real)
-    ]
-)
-async def req_aioredis1(request):
-    if request.param == 'fake':
-        ret = await fakeredis.aioredis.create_redis_pool()
-    else:
-        if not request.getfixturevalue('is_redis_running'):
-            pytest.skip('Redis is not running')
-        ret = await aioredis.create_redis_pool('redis://localhost')
-    await ret.flushall()
-
-    yield ret
-
-    await ret.flushall()
-    ret.close()
-    await ret.wait_closed()
-
-
-@pytest_asyncio.fixture
-async def conn(req_aioredis1):
-    """A single connection, rather than a pool."""
-    with await req_aioredis1 as conn:
-        yield conn
-
-
-async def test_ping(req_aioredis1):
-    pong = await req_aioredis1.ping()
-    assert pong == b'PONG'
-
-
-async def test_types(req_aioredis1):
-    await req_aioredis1.hmset_dict('hash', key1='value1', key2='value2', key3=123)
-    result = await req_aioredis1.hgetall('hash', encoding='utf-8')
-    assert result == {
-        'key1': 'value1',
-        'key2': 'value2',
-        'key3': '123'
-    }
-
-
-async def test_transaction(req_aioredis1):
-    tr = req_aioredis1.multi_exec()
-    tr.set('key1', 'value1')
-    tr.set('key2', 'value2')
-    ok1, ok2 = await tr.execute()
-    assert ok1
-    assert ok2
-    result = await req_aioredis1.get('key1')
-    assert result == b'value1'
-
-
-async def test_transaction_fail(req_aioredis1, conn):
-    # ensure that the WATCH applies to the same connection as the MULTI/EXEC.
-    await req_aioredis1.set('foo', '1')
-    await conn.watch('foo')
-    await conn.set('foo', '2')  # Different connection
-    tr = conn.multi_exec()
-    tr.get('foo')
-    with pytest.raises(aioredis.MultiExecError):
-        await tr.execute()
-
-
-async def test_pubsub(req_aioredis1, event_loop):
-    ch, = await req_aioredis1.subscribe('channel')
-    queue = asyncio.Queue()
-
-    async def reader(channel):
-        async for message in ch.iter():
-            queue.put_nowait(message)
-
-    task = event_loop.create_task(reader(ch))
-    await req_aioredis1.publish('channel', 'message1')
-    await req_aioredis1.publish('channel', 'message2')
-    result1 = await queue.get()
-    result2 = await queue.get()
-    assert result1 == b'message1'
-    assert result2 == b'message2'
-    ch.close()
-    await task
-
-
-async def test_blocking_ready(req_aioredis1, conn):
-    """Blocking command which does not need to block."""
-    await req_aioredis1.rpush('list', 'x')
-    result = await conn.blpop('list', timeout=1)
-    assert result == [b'list', b'x']
-
-
-@pytest.mark.slow
-async def test_blocking_timeout(conn):
-    """Blocking command that times out without completing."""
-    result = await conn.blpop('missing', timeout=1)
-    assert result is None
-
-
-@pytest.mark.slow
-async def test_blocking_unblock(req_aioredis1, conn, event_loop):
-    """Blocking command that gets unblocked after some time."""
-
-    async def unblock():
-        await asyncio.sleep(0.1)
-        await req_aioredis1.rpush('list', 'y')
-
-    task = event_loop.create_task(unblock())
-    result = await conn.blpop('list', timeout=1)
-    assert result == [b'list', b'y']
-    await task
-
-
-@pytest.mark.slow
-async def test_blocking_pipeline(conn):
-    """Blocking command with another command issued behind it."""
-    await conn.set('foo', 'bar')
-    fut = asyncio.ensure_future(conn.blpop('list', timeout=1))
-    assert (await conn.get('foo')) == b'bar'
-    assert (await fut) is None
-
-
-async def test_wrongtype_error(req_aioredis1):
-    await req_aioredis1.set('foo', 'bar')
-    with pytest.raises(aioredis.ReplyError, match='^WRONGTYPE'):
-        await req_aioredis1.rpush('foo', 'baz')
-
-
-async def test_syntax_error(req_aioredis1):
-    with pytest.raises(aioredis.ReplyError,
-                       match="^ERR wrong number of arguments for 'get' command$"):
-        await req_aioredis1.execute('get')
-
-
-async def test_no_script_error(req_aioredis1):
-    with pytest.raises(aioredis.ReplyError, match='^NOSCRIPT '):
-        await req_aioredis1.evalsha('0123456789abcdef0123456789abcdef')
-
-
-@testtools.run_test_if_lupa
-async def test_failed_script_error(req_aioredis1):
-    await req_aioredis1.set('foo', 'bar')
-    with pytest.raises(aioredis.ReplyError, match='^ERR Error running script'):
-        await req_aioredis1.eval('return redis.call("ZCOUNT", KEYS[1])', ['foo'])
diff --git a/test/test_aioredis2.py b/test/test_aioredis2.py
deleted file mode 100644
index 3ac1c8f..0000000
--- a/test/test_aioredis2.py
+++ /dev/null
@@ -1,267 +0,0 @@
-import asyncio
-import re
-
-import pytest
-import pytest_asyncio
-import redis
-
-import testtools
-
-pytestmark = [
-    testtools.run_test_if_redis_ver('below', '4.2'),
-]
-
-aioredis = pytest.importorskip("aioredis", minversion='2.0.0a1')
-import async_timeout
-
-import fakeredis.aioredis
-from packaging.version import Version
-
-REDIS_VERSION = Version(redis.__version__)
-fake_only = pytest.mark.parametrize(
-    'req_aioredis2',
-    [pytest.param('fake', marks=pytest.mark.fake)],
-    indirect=True
-)
-pytestmark.extend([
-    pytest.mark.asyncio,
-])
-
-
-@pytest_asyncio.fixture(
-    params=[
-        pytest.param('fake', marks=pytest.mark.fake),
-        pytest.param('real', marks=pytest.mark.real)
-    ]
-)
-async def req_aioredis2(request):
-    if request.param == 'fake':
-        fake_server = request.getfixturevalue('fake_server')
-        ret = fakeredis.aioredis.FakeRedis(server=fake_server)
-    else:
-        if not request.getfixturevalue('is_redis_running'):
-            pytest.skip('Redis is not running')
-        ret = aioredis.Redis()
-        fake_server = None
-    if not fake_server or fake_server.connected:
-        await ret.flushall()
-
-    yield ret
-
-    if not fake_server or fake_server.connected:
-        await ret.flushall()
-    await ret.connection_pool.disconnect()
-
-
-@pytest_asyncio.fixture
-async def conn(req_aioredis2):
-    """A single connection, rather than a pool."""
-    async with req_aioredis2.client() as conn:
-        yield conn
-
-
-@testtools.run_test_if_redis_ver('above', '4.2')
-def test_redis_asyncio_is_used():
-    """Redis 4.2+ has support for asyncio and should be preferred over aioredis"""
-    assert not hasattr(fakeredis.aioredis, "__version__")
-
-
-async def test_ping(req_aioredis2):
-    pong = await req_aioredis2.ping()
-    assert pong is True
-
-
-async def test_types(req_aioredis2):
-    await req_aioredis2.hset('hash', mapping={'key1': 'value1', 'key2': 'value2', 'key3': 123})
-    result = await req_aioredis2.hgetall('hash')
-    assert result == {
-        b'key1': b'value1',
-        b'key2': b'value2',
-        b'key3': b'123'
-    }
-
-
-async def test_transaction(req_aioredis2):
-    async with req_aioredis2.pipeline(transaction=True) as tr:
-        tr.set('key1', 'value1')
-        tr.set('key2', 'value2')
-        ok1, ok2 = await tr.execute()
-    assert ok1
-    assert ok2
-    result = await req_aioredis2.get('key1')
-    assert result == b'value1'
-
-
-async def test_transaction_fail(req_aioredis2):
-    await req_aioredis2.set('foo', '1')
-    async with req_aioredis2.pipeline(transaction=True) as tr:
-        await tr.watch('foo')
-        await req_aioredis2.set('foo', '2')  # Different connection
-        tr.multi()
-        tr.get('foo')
-        with pytest.raises(aioredis.exceptions.WatchError):
-            await tr.execute()
-
-
-async def test_pubsub(req_aioredis2, event_loop):
-    queue = asyncio.Queue()
-
-    async def reader(ps):
-        while True:
-            message = await ps.get_message(ignore_subscribe_messages=True, timeout=5)
-            if message is not None:
-                if message.get('data') == b'stop':
-                    break
-                queue.put_nowait(message)
-
-    async with async_timeout.timeout(5), req_aioredis2.pubsub() as ps:
-        await ps.subscribe('channel')
-        task = event_loop.create_task(reader(ps))
-        await req_aioredis2.publish('channel', 'message1')
-        await req_aioredis2.publish('channel', 'message2')
-        result1 = await queue.get()
-        result2 = await queue.get()
-        assert result1 == {
-            'channel': b'channel',
-            'pattern': None,
-            'type': 'message',
-            'data': b'message1'
-        }
-        assert result2 == {
-            'channel': b'channel',
-            'pattern': None,
-            'type': 'message',
-            'data': b'message2'
-        }
-        await req_aioredis2.publish('channel', 'stop')
-        await task
-
-
-@pytest.mark.slow
-async def test_pubsub_timeout(req_aioredis2):
-    async with req_aioredis2.pubsub() as ps:
-        await ps.subscribe('channel')
-        await ps.get_message(timeout=0.5)  # Subscription message
-        message = await ps.get_message(timeout=0.5)
-        assert message is None
-
-
-@pytest.mark.slow
-async def test_pubsub_disconnect(req_aioredis2):
-    async with req_aioredis2.pubsub() as ps:
-        await ps.subscribe('channel')
-        await ps.connection.disconnect()
-        message = await ps.get_message(timeout=0.5)  # Subscription message
-        assert message is not None
-        message = await ps.get_message(timeout=0.5)
-        assert message is None
-
-
-async def test_blocking_ready(req_aioredis2, conn):
-    """Blocking command which does not need to block."""
-    await req_aioredis2.rpush('list', 'x')
-    result = await conn.blpop('list', timeout=1)
-    assert result == (b'list', b'x')
-
-
-@pytest.mark.slow
-async def test_blocking_timeout(conn):
-    """Blocking command that times out without completing."""
-    result = await conn.blpop('missing', timeout=1)
-    assert result is None
-
-
-@pytest.mark.slow
-async def test_blocking_unblock(req_aioredis2, conn, event_loop):
-    """Blocking command that gets unblocked after some time."""
-
-    async def unblock():
-        await asyncio.sleep(0.1)
-        await req_aioredis2.rpush('list', 'y')
-
-    task = event_loop.create_task(unblock())
-    result = await conn.blpop('list', timeout=1)
-    assert result == (b'list', b'y')
-    await task
-
-
-async def test_wrongtype_error(req_aioredis2):
-    await req_aioredis2.set('foo', 'bar')
-    with pytest.raises(aioredis.ResponseError, match='^WRONGTYPE'):
-        await req_aioredis2.rpush('foo', 'baz')
-
-
-async def test_syntax_error(req_aioredis2):
-    with pytest.raises(aioredis.ResponseError,
-                       match="^wrong number of arguments for 'get' command$"):
-        await req_aioredis2.execute_command('get')
-
-
-async def test_no_script_error(req_aioredis2):
-    with pytest.raises(aioredis.exceptions.NoScriptError):
-        await req_aioredis2.evalsha('0123456789abcdef0123456789abcdef', 0)
-
-
-@testtools.run_test_if_lupa
-async def test_failed_script_error(req_aioredis2):
-    await req_aioredis2.set('foo', 'bar')
-    with pytest.raises(aioredis.ResponseError, match='^Error running script'):
-        await req_aioredis2.eval('return redis.call("ZCOUNT", KEYS[1])', 1, 'foo')
-
-
-@fake_only
-async def test_repr(req_aioredis2):
-    assert re.fullmatch(
-        r'ConnectionPool<FakeConnection<server=<fakeredis._server.FakeServer object at .*>,db=0>>',
-        repr(req_aioredis2.connection_pool)
-    )
-
-
-@fake_only
-@pytest.mark.disconnected
-async def test_not_connected(req_aioredis2):
-    with pytest.raises(aioredis.ConnectionError):
-        await req_aioredis2.ping()
-
-
-@fake_only
-async def test_disconnect_server(req_aioredis2, fake_server):
-    await req_aioredis2.ping()
-    fake_server.connected = False
-    with pytest.raises(aioredis.ConnectionError):
-        await req_aioredis2.ping()
-    fake_server.connected = True
-
-
-@pytest.mark.fake
-async def test_from_url():
-    r0 = fakeredis.aioredis.FakeRedis.from_url('redis://localhost?db=0')
-    r1 = fakeredis.aioredis.FakeRedis.from_url('redis://localhost?db=1')
-    # Check that they are indeed different databases
-    await r0.set('foo', 'a')
-    await r1.set('foo', 'b')
-    assert await r0.get('foo') == b'a'
-    assert await r1.get('foo') == b'b'
-    await r0.connection_pool.disconnect()
-    await r1.connection_pool.disconnect()
-
-
-@fake_only
-async def test_from_url_with_server(req_aioredis2, fake_server):
-    r2 = fakeredis.aioredis.FakeRedis.from_url('redis://localhost', server=fake_server)
-    await req_aioredis2.set('foo', 'bar')
-    assert await r2.get('foo') == b'bar'
-    await r2.connection_pool.disconnect()
-
-
-@pytest.mark.fake
-async def test_without_server():
-    r = fakeredis.aioredis.FakeRedis()
-    assert await r.ping()
-
-
-@pytest.mark.fake
-async def test_without_server_disconnected():
-    r = fakeredis.aioredis.FakeRedis(connected=False)
-    with pytest.raises(aioredis.ConnectionError):
-        await r.ping()
diff --git a/test/test_connection.py b/test/test_connection.py
new file mode 100644
index 0000000..37c1423
--- /dev/null
+++ b/test/test_connection.py
@@ -0,0 +1,486 @@
+import pytest
+import redis
+import redis.client
+from redis.exceptions import ResponseError
+
+from fakeredis import _msgs as msgs
+from test import testtools
+from test.testtools import raw_command
+
+
+def test_ping(r: redis.Redis):
+    assert r.ping()
+    assert testtools.raw_command(r, 'ping', 'test') == b'test'
+    with pytest.raises(redis.ResponseError, match=msgs.WRONG_ARGS_MSG6.format('ping')[4:]):
+        raw_command(r, 'ping', 'arg1', 'arg2')
+
+
+def test_echo(r: redis.Redis):
+    assert r.echo(b'hello') == b'hello'
+    assert r.echo('hello') == b'hello'
+
+
+@testtools.fake_only
+def test_time(r, mocker):
+    fake_time = mocker.patch('time.time')
+    fake_time.return_value = 1234567890.1234567
+    assert r.time() == (1234567890, 123457)
+    fake_time.return_value = 1234567890.000001
+    assert r.time() == (1234567890, 1)
+    fake_time.return_value = 1234567890.9999999
+    assert r.time() == (1234567891, 0)
+
+
+@pytest.mark.decode_responses
+class TestDecodeResponses:
+    def test_decode_str(self, r):
+        r.set('foo', 'bar')
+        assert r.get('foo') == 'bar'
+
+    def test_decode_set(self, r):
+        r.sadd('foo', 'member1')
+        assert r.smembers('foo') == {'member1'}
+
+    def test_decode_list(self, r):
+        r.rpush('foo', 'a', 'b')
+        assert r.lrange('foo', 0, -1) == ['a', 'b']
+
+    def test_decode_dict(self, r):
+        r.hset('foo', 'key', 'value')
+        assert r.hgetall('foo') == {'key': 'value'}
+
+    def test_decode_error(self, r):
+        r.set('foo', 'bar')
+        with pytest.raises(ResponseError) as exc_info:
+            r.hset('foo', 'bar', 'baz')
+        assert isinstance(exc_info.value.args[0], str)
+
+
+@pytest.mark.disconnected
+@testtools.fake_only
+class TestFakeStrictRedisConnectionErrors:
+    def test_flushdb(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.flushdb()
+
+    def test_flushall(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.flushall()
+
+    def test_append(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.append('key', 'value')
+
+    def test_bitcount(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.bitcount('key', 0, 20)
+
+    def test_decr(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.decr('key', 2)
+
+    def test_exists(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.exists('key')
+
+    def test_expire(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.expire('key', 20)
+
+    def test_pexpire(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.pexpire('key', 20)
+
+    def test_echo(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.echo('value')
+
+    def test_get(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.get('key')
+
+    def test_getbit(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.getbit('key', 2)
+
+    def test_getset(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.getset('key', 'value')
+
+    def test_incr(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.incr('key')
+
+    def test_incrby(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.incrby('key')
+
+    def test_incrbyfloat(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.incrbyfloat('key')
+
+    def test_keys(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.keys()
+
+    def test_mget(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.mget(['key1', 'key2'])
+
+    def test_mset(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.mset({'key': 'value'})
+
+    def test_msetnx(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.msetnx({'key': 'value'})
+
+    def test_persist(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.persist('key')
+
+    def test_rename(self, r):
+        server = r.connection_pool.connection_kwargs['server']
+        server.connected = True
+        r.set('key1', 'value')
+        server.connected = False
+        with pytest.raises(redis.ConnectionError):
+            r.rename('key1', 'key2')
+        server.connected = True
+        assert r.exists('key1')
+
+    def test_eval(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.eval('', 0)
+
+    def test_lpush(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.lpush('name', 1, 2)
+
+    def test_lrange(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.lrange('name', 1, 5)
+
+    def test_llen(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.llen('name')
+
+    def test_lrem(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.lrem('name', 2, 2)
+
+    def test_rpush(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.rpush('name', 1)
+
+    def test_lpop(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.lpop('name')
+
+    def test_lset(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.lset('name', 1, 4)
+
+    def test_rpushx(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.rpushx('name', 1)
+
+    def test_ltrim(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.ltrim('name', 1, 4)
+
+    def test_lindex(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.lindex('name', 1)
+
+    def test_lpushx(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.lpushx('name', 1)
+
+    def test_rpop(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.rpop('name')
+
+    def test_linsert(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.linsert('name', 'where', 'refvalue', 'value')
+
+    def test_rpoplpush(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.rpoplpush('src', 'dst')
+
+    def test_blpop(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.blpop('keys')
+
+    def test_brpop(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.brpop('keys')
+
+    def test_brpoplpush(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.brpoplpush('src', 'dst')
+
+    def test_hdel(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.hdel('name')
+
+    def test_hexists(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.hexists('name', 'key')
+
+    def test_hget(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.hget('name', 'key')
+
+    def test_hgetall(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.hgetall('name')
+
+    def test_hincrby(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.hincrby('name', 'key')
+
+    def test_hincrbyfloat(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.hincrbyfloat('name', 'key')
+
+    def test_hkeys(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.hkeys('name')
+
+    def test_hlen(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.hlen('name')
+
+    def test_hset(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.hset('name', 'key', 1)
+
+    def test_hsetnx(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.hsetnx('name', 'key', 2)
+
+    def test_hmset(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.hmset('name', {'key': 1})
+
+    def test_hmget(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.hmget('name', ['a', 'b'])
+
+    def test_hvals(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.hvals('name')
+
+    def test_sadd(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.sadd('name', 1, 2)
+
+    def test_scard(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.scard('name')
+
+    def test_sdiff(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.sdiff(['a', 'b'])
+
+    def test_sdiffstore(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.sdiffstore('dest', ['a', 'b'])
+
+    def test_sinter(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.sinter(['a', 'b'])
+
+    def test_sinterstore(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.sinterstore('dest', ['a', 'b'])
+
+    def test_sismember(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.sismember('name', 20)
+
+    def test_smembers(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.smembers('name')
+
+    def test_smove(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.smove('src', 'dest', 20)
+
+    def test_spop(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.spop('name')
+
+    def test_srandmember(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.srandmember('name')
+
+    def test_srem(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.srem('name')
+
+    def test_sunion(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.sunion(['a', 'b'])
+
+    def test_sunionstore(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.sunionstore('dest', ['a', 'b'])
+
+    def test_zadd(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zadd('name', {'key': 'value'})
+
+    def test_zcard(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zcard('name')
+
+    def test_zcount(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zcount('name', 1, 5)
+
+    def test_zincrby(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zincrby('name', 1, 1)
+
+    def test_zinterstore(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zinterstore('dest', ['a', 'b'])
+
+    def test_zrange(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zrange('name', 1, 5)
+
+    def test_zrangebyscore(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zrangebyscore('name', 1, 5)
+
+    def test_zrangebylex(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zrangebylex('name', 1, 4)
+
+    def test_zrem(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zrem('name', 'value')
+
+    def test_zremrangebyrank(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zremrangebyrank('name', 1, 5)
+
+    def test_zremrangebyscore(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zremrangebyscore('name', 1, 5)
+
+    def test_zremrangebylex(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zremrangebylex('name', 1, 5)
+
+    def test_zlexcount(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zlexcount('name', 1, 5)
+
+    def test_zrevrange(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zrevrange('name', 1, 5, 1)
+
+    def test_zrevrangebyscore(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zrevrangebyscore('name', 5, 1)
+
+    def test_zrevrangebylex(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zrevrangebylex('name', 5, 1)
+
+    def test_zrevrank(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zrevrank('name', 2)
+
+    def test_zscore(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zscore('name', 2)
+
+    def test_zunionstore(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.zunionstore('dest', ['1', '2'])
+
+    def test_pipeline(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.pipeline().watch('key')
+
+    def test_transaction(self, r):
+        with pytest.raises(redis.ConnectionError):
+            def func(a):
+                return a * a
+
+            r.transaction(func, 3)
+
+    def test_lock(self, r):
+        with pytest.raises(redis.ConnectionError):
+            with r.lock('name'):
+                pass
+
+    def test_pubsub(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.pubsub().subscribe('channel')
+
+    def test_pfadd(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.pfadd('name', 1)
+
+    def test_pfmerge(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.pfmerge('dest', 'a', 'b')
+
+    def test_scan(self, r):
+        with pytest.raises(redis.ConnectionError):
+            list(r.scan())
+
+    def test_sscan(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.sscan('name')
+
+    def test_hscan(self, r):
+        with pytest.raises(redis.ConnectionError):
+            r.hscan('name')
+
+    def test_scan_iter(self, r):
+        with pytest.raises(redis.ConnectionError):
+            list(r.scan_iter())
+
+    def test_sscan_iter(self, r):
+        with pytest.raises(redis.ConnectionError):
+            list(r.sscan_iter('name'))
+
+    def test_hscan_iter(self, r):
+        with pytest.raises(redis.ConnectionError):
+            list(r.hscan_iter('name'))
+
+
+@pytest.mark.disconnected
+@testtools.fake_only
+class TestPubSubConnected:
+    @pytest.fixture
+    def pubsub(self, r):
+        return r.pubsub()
+
+    def test_basic_subscribe(self, pubsub):
+        with pytest.raises(redis.ConnectionError):
+            pubsub.subscribe('logs')
+
+    def test_subscription_conn_lost(self, fake_server, pubsub):
+        fake_server.connected = True
+        pubsub.subscribe('logs')
+        fake_server.connected = False
+        # The initial message is already in the pipe
+        msg = pubsub.get_message()
+        check = {
+            'type': 'subscribe',
+            'pattern': None,
+            'channel': b'logs',
+            'data': 1
+        }
+        assert msg == check, 'Message was not published to channel'
+        with pytest.raises(redis.ConnectionError):
+            pubsub.get_message()
diff --git a/test/test_extract_args.py b/test/test_extract_args.py
new file mode 100644
index 0000000..c1fdb9a
--- /dev/null
+++ b/test/test_extract_args.py
@@ -0,0 +1,103 @@
+import pytest
+
+from fakeredis._command_args_parsing import extract_args
+from fakeredis._helpers import SimpleError
+
+
+def test_extract_args():
+    args = (b'nx', b'ex', b'324', b'xx',)
+    (xx, nx, ex, keepttl), _ = extract_args(args, ('nx', 'xx', '+ex', 'keepttl'))
+    assert xx
+    assert nx
+    assert ex == 324
+    assert not keepttl
+
+
+def test_extract_args__should_raise_error():
+    args = (b'nx', b'ex', b'324', b'xx', b'something')
+    with pytest.raises(SimpleError):
+        (xx, nx, ex, keepttl), _ = extract_args(args, ('nx', 'xx', '+ex', 'keepttl'))
+
+
+def test_extract_args__should_return_something():
+    args = (b'nx', b'ex', b'324', b'xx', b'something')
+
+    (xx, nx, ex, keepttl), left = extract_args(
+        args, ('nx', 'xx', '+ex', 'keepttl'), error_on_unexpected=False)
+    assert xx
+    assert nx
+    assert ex == 324
+    assert not keepttl
+    assert left == (b'something',)
+
+    args = (b'nx', b'something', b'ex', b'324', b'xx',)
+
+    (xx, nx, ex, keepttl), left = extract_args(
+        args, ('nx', 'xx', '+ex', 'keepttl'),
+        error_on_unexpected=False,
+        left_from_first_unexpected=False
+    )
+    assert xx
+    assert nx
+    assert ex == 324
+    assert not keepttl
+    assert left == [b'something', ]
+
+
+def test_extract_args__multiple_numbers():
+    args = (b'nx', b'limit', b'324', b'123', b'xx',)
+
+    (xx, nx, limit, keepttl), _ = extract_args(
+        args, ('nx', 'xx', '++limit', 'keepttl'))
+    assert xx
+    assert nx
+    assert limit == [324, 123]
+    assert not keepttl
+
+    (xx, nx, limit, keepttl), _ = extract_args(
+        (b'nx', b'xx',),
+        ('nx', 'xx', '++limit', 'keepttl'))
+    assert xx
+    assert nx
+    assert not keepttl
+    assert limit == [None, None]
+
+
+def test_extract_args__extract_non_numbers():
+    args = (b'by', b'dd', b'nx', b'limit', b'324', b'123', b'xx',)
+
+    (xx, nx, limit, sortby), _ = extract_args(
+        args, ('nx', 'xx', '++limit', '*by'))
+    assert xx
+    assert nx
+    assert limit == [324, 123]
+    assert sortby == b'dd'
+
+
+def test_extract_args__extract_maxlen():
+    args = (b'MAXLEN', b'5')
+    (nomkstream, limit, maxlen, maxid), left_args = extract_args(
+        args, ('nomkstream', '+limit', '~+maxlen', '~maxid'), error_on_unexpected=False)
+    assert not nomkstream
+    assert limit is None
+    assert maxlen == 5
+    assert maxid is None
+
+    args = (b'MAXLEN', b'~', b'5', b'maxid', b'~', b'1')
+    (nomkstream, limit, maxlen, maxid), left_args = extract_args(
+        args, ('nomkstream', '+limit', '~+maxlen', '~maxid'), error_on_unexpected=False)
+    assert not nomkstream
+    assert limit is None
+    assert maxlen == 5
+    assert maxid == b"1"
+
+    args = (b'by', b'dd', b'nx', b'maxlen', b'~', b'10',
+            b'limit', b'324', b'123', b'xx',)
+
+    (nx, maxlen, xx, limit, sortby), _ = extract_args(
+        args, ('nx', '~+maxlen', 'xx', '++limit', '*by'))
+    assert xx
+    assert nx
+    assert maxlen == 10
+    assert limit == [324, 123]
+    assert sortby == b'dd'
diff --git a/test/test_fakeredis6.py b/test/test_fakeredis6.py
deleted file mode 100644
index 12e8dec..0000000
--- a/test/test_fakeredis6.py
+++ /dev/null
@@ -1,4673 +0,0 @@
-import os
-import threading
-from collections import OrderedDict
-from datetime import datetime, timedelta
-from queue import Queue
-from time import sleep, time
-
-import math
-import pytest
-import redis
-import redis.client
-import six
-from packaging.version import Version
-from redis.exceptions import ResponseError
-
-import fakeredis
-import testtools
-from testtools import raw_command
-
-REDIS_VERSION = Version(redis.__version__)
-
-
-def key_val_dict(size=100):
-    return {b'key:' + bytes([i]): b'val:' + bytes([i])
-            for i in range(size)}
-
-
-def round_str(x):
-    assert isinstance(x, bytes)
-    return round(float(x))
-
-
-def zincrby(r, key, amount, value):
-    if REDIS_VERSION >= Version('3'):
-        return r.zincrby(key, amount, value)
-    else:
-        return r.zincrby(key, value, amount)
-
-
-fake_only = pytest.mark.parametrize(
-    'create_redis',
-    [pytest.param('FakeStrictRedis', marks=pytest.mark.fake)],
-    indirect=True
-)
-
-
-def test_large_command(r):
-    r.set('foo', 'bar' * 10000)
-    assert r.get('foo') == b'bar' * 10000
-
-
-def test_dbsize(r):
-    assert r.dbsize() == 0
-    r.set('foo', 'bar')
-    r.set('bar', 'foo')
-    assert r.dbsize() == 2
-
-
-def test_flushdb(r):
-    r.set('foo', 'bar')
-    assert r.keys() == [b'foo']
-    assert r.flushdb() is True
-    assert r.keys() == []
-
-
-def test_dump_missing(r):
-    assert r.dump('foo') is None
-
-
-def test_dump_restore(r):
-    r.set('foo', 'bar')
-    dump = r.dump('foo')
-    r.restore('baz', 0, dump)
-    assert r.get('baz') == b'bar'
-    assert r.ttl('baz') == -1
-
-
-def test_dump_restore_ttl(r):
-    r.set('foo', 'bar')
-    dump = r.dump('foo')
-    r.restore('baz', 2000, dump)
-    assert r.get('baz') == b'bar'
-    assert 1000 <= r.pttl('baz') <= 2000
-
-
-def test_dump_restore_replace(r):
-    r.set('foo', 'bar')
-    dump = r.dump('foo')
-    r.set('foo', 'baz')
-    r.restore('foo', 0, dump, replace=True)
-    assert r.get('foo') == b'bar'
-
-
-def test_restore_exists(r):
-    r.set('foo', 'bar')
-    dump = r.dump('foo')
-    with pytest.raises(ResponseError):
-        r.restore('foo', 0, dump)
-
-
-def test_restore_invalid_dump(r):
-    r.set('foo', 'bar')
-    dump = r.dump('foo')
-    with pytest.raises(ResponseError):
-        r.restore('baz', 0, dump[:-1])
-
-
-def test_restore_invalid_ttl(r):
-    r.set('foo', 'bar')
-    dump = r.dump('foo')
-    with pytest.raises(ResponseError):
-        r.restore('baz', -1, dump)
-
-
-def test_set_then_get(r):
-    assert r.set('foo', 'bar') is True
-    assert r.get('foo') == b'bar'
-
-
-def test_set_float_value(r):
-    x = 1.23456789123456789
-    r.set('foo', x)
-    assert float(r.get('foo')) == x
-
-
-def test_saving_non_ascii_chars_as_value(r):
-    assert r.set('foo', 'Ñandu') is True
-    assert r.get('foo') == 'Ñandu'.encode()
-
-
-def test_saving_unicode_type_as_value(r):
-    assert r.set('foo', 'Ñandu') is True
-    assert r.get('foo') == 'Ñandu'.encode()
-
-
-def test_saving_non_ascii_chars_as_key(r):
-    assert r.set('Ñandu', 'foo') is True
-    assert r.get('Ñandu') == b'foo'
-
-
-def test_saving_unicode_type_as_key(r):
-    assert r.set('Ñandu', 'foo') is True
-    assert r.get('Ñandu') == b'foo'
-
-
-def test_future_newbytes(r):
-    # bytes = pytest.importorskip('builtins', reason='future.types not available').bytes
-    r.set(bytes(b'\xc3\x91andu'), 'foo')
-    assert r.get('Ñandu') == b'foo'
-
-
-def test_future_newstr(r):
-    # str = pytest.importorskip('builtins', reason='future.types not available').str
-    r.set(str('Ñandu'), 'foo')
-    assert r.get('Ñandu') == b'foo'
-
-
-def test_get_does_not_exist(r):
-    assert r.get('foo') is None
-
-
-def test_get_with_non_str_keys(r):
-    assert r.set('2', 'bar') is True
-    assert r.get(2) == b'bar'
-
-
-def test_get_invalid_type(r):
-    assert r.hset('foo', 'key', 'value') == 1
-    with pytest.raises(redis.ResponseError):
-        r.get('foo')
-
-
-def test_set_non_str_keys(r):
-    assert r.set(2, 'bar') is True
-    assert r.get(2) == b'bar'
-    assert r.get('2') == b'bar'
-
-
-def test_getbit(r):
-    r.setbit('foo', 3, 1)
-    assert r.getbit('foo', 0) == 0
-    assert r.getbit('foo', 1) == 0
-    assert r.getbit('foo', 2) == 0
-    assert r.getbit('foo', 3) == 1
-    assert r.getbit('foo', 4) == 0
-    assert r.getbit('foo', 100) == 0
-
-
-def test_getbit_wrong_type(r):
-    r.rpush('foo', b'x')
-    with pytest.raises(redis.ResponseError):
-        r.getbit('foo', 1)
-
-
-def test_multiple_bits_set(r):
-    r.setbit('foo', 1, 1)
-    r.setbit('foo', 3, 1)
-    r.setbit('foo', 5, 1)
-
-    assert r.getbit('foo', 0) == 0
-    assert r.getbit('foo', 1) == 1
-    assert r.getbit('foo', 2) == 0
-    assert r.getbit('foo', 3) == 1
-    assert r.getbit('foo', 4) == 0
-    assert r.getbit('foo', 5) == 1
-    assert r.getbit('foo', 6) == 0
-
-
-def test_unset_bits(r):
-    r.setbit('foo', 1, 1)
-    r.setbit('foo', 2, 0)
-    r.setbit('foo', 3, 1)
-    assert r.getbit('foo', 1) == 1
-    r.setbit('foo', 1, 0)
-    assert r.getbit('foo', 1) == 0
-    r.setbit('foo', 3, 0)
-    assert r.getbit('foo', 3) == 0
-
-
-def test_get_set_bits(r):
-    # set bit 5
-    assert not r.setbit('a', 5, True)
-    assert r.getbit('a', 5)
-    # unset bit 4
-    assert not r.setbit('a', 4, False)
-    assert not r.getbit('a', 4)
-    # set bit 4
-    assert not r.setbit('a', 4, True)
-    assert r.getbit('a', 4)
-    # set bit 5 again
-    assert r.setbit('a', 5, True)
-    assert r.getbit('a', 5)
-
-
-def test_setbits_and_getkeys(r):
-    # The bit operations and the get commands
-    # should play nicely with each other.
-    r.setbit('foo', 1, 1)
-    assert r.get('foo') == b'@'
-    r.setbit('foo', 2, 1)
-    assert r.get('foo') == b'`'
-    r.setbit('foo', 3, 1)
-    assert r.get('foo') == b'p'
-    r.setbit('foo', 9, 1)
-    assert r.get('foo') == b'p@'
-    r.setbit('foo', 54, 1)
-    assert r.get('foo') == b'p@\x00\x00\x00\x00\x02'
-
-
-def test_setbit_wrong_type(r):
-    r.rpush('foo', b'x')
-    with pytest.raises(redis.ResponseError):
-        r.setbit('foo', 0, 1)
-
-
-def test_setbit_expiry(r):
-    r.set('foo', b'0x00', ex=10)
-    r.setbit('foo', 1, 1)
-    assert r.ttl('foo') > 0
-
-
-def test_bitcount(r):
-    r.delete('foo')
-    assert r.bitcount('foo') == 0
-    r.setbit('foo', 1, 1)
-    assert r.bitcount('foo') == 1
-    r.setbit('foo', 8, 1)
-    assert r.bitcount('foo') == 2
-    assert r.bitcount('foo', 1, 1) == 1
-    r.setbit('foo', 57, 1)
-    assert r.bitcount('foo') == 3
-    r.set('foo', ' ')
-    assert r.bitcount('foo') == 1
-
-
-def test_bitcount_wrong_type(r):
-    r.rpush('foo', b'x')
-    with pytest.raises(redis.ResponseError):
-        r.bitcount('foo')
-
-
-def test_getset_not_exist(r):
-    val = r.getset('foo', 'bar')
-    assert val is None
-    assert r.get('foo') == b'bar'
-
-
-def test_getset_exists(r):
-    r.set('foo', 'bar')
-    val = r.getset('foo', b'baz')
-    assert val == b'bar'
-    val = r.getset('foo', b'baz2')
-    assert val == b'baz'
-
-
-def test_getset_wrong_type(r):
-    r.rpush('foo', b'x')
-    with pytest.raises(redis.ResponseError):
-        r.getset('foo', 'bar')
-
-
-def test_setitem_getitem(r):
-    assert r.keys() == []
-    r['foo'] = 'bar'
-    assert r['foo'] == b'bar'
-
-
-def test_getitem_non_existent_key(r):
-    assert r.keys() == []
-    assert 'noexists' not in r.keys()
-
-
-def test_strlen(r):
-    r['foo'] = 'bar'
-
-    assert r.strlen('foo') == 3
-    assert r.strlen('noexists') == 0
-
-
-def test_strlen_wrong_type(r):
-    r.rpush('foo', b'x')
-    with pytest.raises(redis.ResponseError):
-        r.strlen('foo')
-
-
-def test_substr(r):
-    r['foo'] = 'one_two_three'
-    assert r.substr('foo', 0) == b'one_two_three'
-    assert r.substr('foo', 0, 2) == b'one'
-    assert r.substr('foo', 4, 6) == b'two'
-    assert r.substr('foo', -5) == b'three'
-    assert r.substr('foo', -4, -5) == b''
-    assert r.substr('foo', -5, -3) == b'thr'
-
-
-def test_substr_noexist_key(r):
-    assert r.substr('foo', 0) == b''
-    assert r.substr('foo', 10) == b''
-    assert r.substr('foo', -5, -1) == b''
-
-
-def test_substr_wrong_type(r):
-    r.rpush('foo', b'x')
-    with pytest.raises(redis.ResponseError):
-        r.substr('foo', 0)
-
-
-def test_append(r):
-    assert r.set('foo', 'bar')
-    assert r.append('foo', 'baz') == 6
-    assert r.get('foo') == b'barbaz'
-
-
-def test_append_with_no_preexisting_key(r):
-    assert r.append('foo', 'bar') == 3
-    assert r.get('foo') == b'bar'
-
-
-def test_append_wrong_type(r):
-    r.rpush('foo', b'x')
-    with pytest.raises(redis.ResponseError):
-        r.append('foo', b'x')
-
-
-def test_incr_with_no_preexisting_key(r):
-    assert r.incr('foo') == 1
-    assert r.incr('bar', 2) == 2
-
-
-def test_incr_by(r):
-    assert r.incrby('foo') == 1
-    assert r.incrby('bar', 2) == 2
-
-
-def test_incr_preexisting_key(r):
-    r.set('foo', 15)
-    assert r.incr('foo', 5) == 20
-    assert r.get('foo') == b'20'
-
-
-def test_incr_expiry(r):
-    r.set('foo', 15, ex=10)
-    r.incr('foo', 5)
-    assert r.ttl('foo') > 0
-
-
-def test_incr_bad_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.incr('foo', 15)
-    r.rpush('foo2', 1)
-    with pytest.raises(redis.ResponseError):
-        r.incr('foo2', 15)
-
-
-def test_incr_with_float(r):
-    with pytest.raises(redis.ResponseError):
-        r.incr('foo', 2.0)
-
-
-def test_incr_followed_by_mget(r):
-    r.set('foo', 15)
-    assert r.incr('foo', 5) == 20
-    assert r.get('foo') == b'20'
-
-
-def test_incr_followed_by_mget_returns_strings(r):
-    r.incr('foo', 1)
-    assert r.mget(['foo']) == [b'1']
-
-
-def test_incrbyfloat(r):
-    r.set('foo', 0)
-    assert r.incrbyfloat('foo', 1.0) == 1.0
-    assert r.incrbyfloat('foo', 1.0) == 2.0
-
-
-def test_incrbyfloat_with_noexist(r):
-    assert r.incrbyfloat('foo', 1.0) == 1.0
-    assert r.incrbyfloat('foo', 1.0) == 2.0
-
-
-def test_incrbyfloat_expiry(r):
-    r.set('foo', 1.5, ex=10)
-    r.incrbyfloat('foo', 2.5)
-    assert r.ttl('foo') > 0
-
-
-def test_incrbyfloat_bad_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError, match='not a valid float'):
-        r.incrbyfloat('foo', 1.0)
-    r.rpush('foo2', 1)
-    with pytest.raises(redis.ResponseError):
-        r.incrbyfloat('foo2', 1.0)
-
-
-def test_incrbyfloat_precision(r):
-    x = 1.23456789123456789
-    assert r.incrbyfloat('foo', x) == x
-    assert float(r.get('foo')) == x
-
-
-def test_decr(r):
-    r.set('foo', 10)
-    assert r.decr('foo') == 9
-    assert r.get('foo') == b'9'
-
-
-def test_decr_newkey(r):
-    r.decr('foo')
-    assert r.get('foo') == b'-1'
-
-
-def test_decr_expiry(r):
-    r.set('foo', 10, ex=10)
-    r.decr('foo', 5)
-    assert r.ttl('foo') > 0
-
-
-def test_decr_badtype(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.decr('foo', 15)
-    r.rpush('foo2', 1)
-    with pytest.raises(redis.ResponseError):
-        r.decr('foo2', 15)
-
-
-def test_keys(r):
-    r.set('', 'empty')
-    r.set('abc\n', '')
-    r.set('abc\\', '')
-    r.set('abcde', '')
-    r.set(b'\xfe\xcd', '')
-    assert sorted(r.keys()) == [b'', b'abc\n', b'abc\\', b'abcde', b'\xfe\xcd']
-    assert r.keys('??') == [b'\xfe\xcd']
-    # empty pattern not the same as no pattern
-    assert r.keys('') == [b'']
-    # ? must match \n
-    assert sorted(r.keys('abc?')) == [b'abc\n', b'abc\\']
-    # must be anchored at both ends
-    assert r.keys('abc') == []
-    assert r.keys('bcd') == []
-    # wildcard test
-    assert r.keys('a*de') == [b'abcde']
-    # positive groups
-    assert sorted(r.keys('abc[d\n]*')) == [b'abc\n', b'abcde']
-    assert r.keys('abc[c-e]?') == [b'abcde']
-    assert r.keys('abc[e-c]?') == [b'abcde']
-    assert r.keys('abc[e-e]?') == []
-    assert r.keys('abcd[ef') == [b'abcde']
-    assert r.keys('abcd[]') == []
-    # negative groups
-    assert r.keys('abc[^d\\\\]*') == [b'abc\n']
-    assert r.keys('abc[^]e') == [b'abcde']
-    # escaping
-    assert r.keys(r'abc\?e') == []
-    assert r.keys(r'abc\de') == [b'abcde']
-    assert r.keys(r'abc[\d]e') == [b'abcde']
-    # some escaping cases that redis handles strangely
-    assert r.keys('abc\\') == [b'abc\\']
-    assert r.keys(r'abc[\c-e]e') == []
-    assert r.keys(r'abc[c-\e]e') == []
-
-
-def test_exists(r):
-    assert 'foo' not in r
-    r.set('foo', 'bar')
-    assert 'foo' in r
-
-
-def test_contains(r):
-    assert not r.exists('foo')
-    r.set('foo', 'bar')
-    assert r.exists('foo')
-
-
-def test_rename(r):
-    r.set('foo', 'unique value')
-    assert r.rename('foo', 'bar')
-    assert r.get('foo') is None
-    assert r.get('bar') == b'unique value'
-
-
-def test_rename_nonexistent_key(r):
-    with pytest.raises(redis.ResponseError):
-        r.rename('foo', 'bar')
-
-
-def test_renamenx_doesnt_exist(r):
-    r.set('foo', 'unique value')
-    assert r.renamenx('foo', 'bar')
-    assert r.get('foo') is None
-    assert r.get('bar') == b'unique value'
-
-
-def test_rename_does_exist(r):
-    r.set('foo', 'unique value')
-    r.set('bar', 'unique value2')
-    assert not r.renamenx('foo', 'bar')
-    assert r.get('foo') == b'unique value'
-    assert r.get('bar') == b'unique value2'
-
-
-def test_rename_expiry(r):
-    r.set('foo', 'value1', ex=10)
-    r.set('bar', 'value2')
-    r.rename('foo', 'bar')
-    assert r.ttl('bar') > 0
-
-
-def test_mget(r):
-    r.set('foo', 'one')
-    r.set('bar', 'two')
-    assert r.mget(['foo', 'bar']) == [b'one', b'two']
-    assert r.mget(['foo', 'bar', 'baz']) == [b'one', b'two', None]
-    assert r.mget('foo', 'bar') == [b'one', b'two']
-
-
-def test_mget_with_no_keys(r):
-    if REDIS_VERSION >= Version('3'):
-        assert r.mget([]) == []
-    else:
-        with pytest.raises(redis.ResponseError, match='wrong number of arguments'):
-            r.mget([])
-
-
-def test_mget_mixed_types(r):
-    r.hset('hash', 'bar', 'baz')
-    testtools.zadd(r, 'zset', {'bar': 1})
-    r.sadd('set', 'member')
-    r.rpush('list', 'item1')
-    r.set('string', 'value')
-    assert (
-            r.mget(['hash', 'zset', 'set', 'string', 'absent'])
-            == [None, None, None, b'value', None]
-    )
-
-
-def test_mset_with_no_keys(r):
-    with pytest.raises(redis.ResponseError):
-        r.mset({})
-
-
-def test_mset(r):
-    assert r.mset({'foo': 'one', 'bar': 'two'}) is True
-    assert r.mset({'foo': 'one', 'bar': 'two'}) is True
-    assert r.mget('foo', 'bar') == [b'one', b'two']
-
-
-def test_msetnx(r):
-    assert r.msetnx({'foo': 'one', 'bar': 'two'}) is True
-    assert r.msetnx({'bar': 'two', 'baz': 'three'}) is False
-    assert r.mget('foo', 'bar', 'baz') == [b'one', b'two', None]
-
-
-def test_setex(r):
-    assert r.setex('foo', 100, 'bar') is True
-    assert r.get('foo') == b'bar'
-
-
-def test_setex_using_timedelta(r):
-    assert r.setex('foo', timedelta(seconds=100), 'bar') is True
-    assert r.get('foo') == b'bar'
-
-
-def test_setex_using_float(r):
-    with pytest.raises(redis.ResponseError, match='integer'):
-        r.setex('foo', 1.2, 'bar')
-
-
-@pytest.mark.min_server('6.2')
-def test_setex_overflow(r):
-    with pytest.raises(ResponseError):
-        r.setex('foo', 18446744073709561, 'bar')  # Overflows long long in ms
-
-
-def test_set_ex(r):
-    assert r.set('foo', 'bar', ex=100) is True
-    assert r.get('foo') == b'bar'
-
-
-def test_set_ex_using_timedelta(r):
-    assert r.set('foo', 'bar', ex=timedelta(seconds=100)) is True
-    assert r.get('foo') == b'bar'
-
-
-def test_set_ex_overflow(r):
-    with pytest.raises(ResponseError):
-        r.set('foo', 'bar', ex=18446744073709561)  # Overflows long long in ms
-
-
-def test_set_px_overflow(r):
-    with pytest.raises(ResponseError):
-        r.set('foo', 'bar', px=2 ** 63 - 2)  # Overflows after adding current time
-
-
-def test_set_px(r):
-    assert r.set('foo', 'bar', px=100) is True
-    assert r.get('foo') == b'bar'
-
-
-def test_set_px_using_timedelta(r):
-    assert r.set('foo', 'bar', px=timedelta(milliseconds=100)) is True
-    assert r.get('foo') == b'bar'
-
-
-@pytest.mark.skipif(REDIS_VERSION < Version('3.5'), reason="Test is only applicable to redis-py 3.5+")
-@pytest.mark.min_server('6.0')
-def test_set_keepttl(r):
-    r.set('foo', 'bar', ex=100)
-    assert r.set('foo', 'baz', keepttl=True) is True
-    assert r.ttl('foo') == 100
-    assert r.get('foo') == b'baz'
-
-
-def test_set_conflicting_expire_options(r):
-    with pytest.raises(ResponseError):
-        r.set('foo', 'bar', ex=1, px=1)
-
-
-@pytest.mark.skipif(REDIS_VERSION < Version('3.5'), reason="Test is only applicable to redis-py 3.5+")
-def test_set_conflicting_expire_options_w_keepttl(r):
-    with pytest.raises(ResponseError):
-        r.set('foo', 'bar', ex=1, keepttl=True)
-    with pytest.raises(ResponseError):
-        r.set('foo', 'bar', px=1, keepttl=True)
-    with pytest.raises(ResponseError):
-        r.set('foo', 'bar', ex=1, px=1, keepttl=True)
-
-
-def test_set_raises_wrong_ex(r):
-    with pytest.raises(ResponseError):
-        r.set('foo', 'bar', ex=-100)
-    with pytest.raises(ResponseError):
-        r.set('foo', 'bar', ex=0)
-    assert not r.exists('foo')
-
-
-def test_set_using_timedelta_raises_wrong_ex(r):
-    with pytest.raises(ResponseError):
-        r.set('foo', 'bar', ex=timedelta(seconds=-100))
-    with pytest.raises(ResponseError):
-        r.set('foo', 'bar', ex=timedelta(seconds=0))
-    assert not r.exists('foo')
-
-
-def test_set_raises_wrong_px(r):
-    with pytest.raises(ResponseError):
-        r.set('foo', 'bar', px=-100)
-    with pytest.raises(ResponseError):
-        r.set('foo', 'bar', px=0)
-    assert not r.exists('foo')
-
-
-def test_set_using_timedelta_raises_wrong_px(r):
-    with pytest.raises(ResponseError):
-        r.set('foo', 'bar', px=timedelta(milliseconds=-100))
-    with pytest.raises(ResponseError):
-        r.set('foo', 'bar', px=timedelta(milliseconds=0))
-    assert not r.exists('foo')
-
-
-def test_setex_raises_wrong_ex(r):
-    with pytest.raises(ResponseError):
-        r.setex('foo', -100, 'bar')
-    with pytest.raises(ResponseError):
-        r.setex('foo', 0, 'bar')
-    assert not r.exists('foo')
-
-
-def test_setex_using_timedelta_raises_wrong_ex(r):
-    with pytest.raises(ResponseError):
-        r.setex('foo', timedelta(seconds=-100), 'bar')
-    with pytest.raises(ResponseError):
-        r.setex('foo', timedelta(seconds=-100), 'bar')
-    assert not r.exists('foo')
-
-
-def test_setnx(r):
-    assert r.setnx('foo', 'bar') is True
-    assert r.get('foo') == b'bar'
-    assert r.setnx('foo', 'baz') is False
-    assert r.get('foo') == b'bar'
-
-
-def test_set_nx(r):
-    assert r.set('foo', 'bar', nx=True) is True
-    assert r.get('foo') == b'bar'
-    assert r.set('foo', 'bar', nx=True) is None
-    assert r.get('foo') == b'bar'
-
-
-def test_set_xx(r):
-    assert r.set('foo', 'bar', xx=True) is None
-    r.set('foo', 'bar')
-    assert r.set('foo', 'bar', xx=True) is True
-
-
-@pytest.mark.min_server('6.2')
-def test_set_get(r):
-    assert raw_command(r, 'set', 'foo', 'bar', 'GET') is None
-    assert r.get('foo') == b'bar'
-    assert raw_command(r, 'set', 'foo', 'baz', 'GET') == b'bar'
-    assert r.get('foo') == b'baz'
-
-
-@pytest.mark.min_server('6.2')
-def test_set_get_xx(r):
-    assert raw_command(r, 'set', 'foo', 'bar', 'XX', 'GET') is None
-    assert r.get('foo') is None
-    r.set('foo', 'bar')
-    assert raw_command(r, 'set', 'foo', 'baz', 'XX', 'GET') == b'bar'
-    assert r.get('foo') == b'baz'
-    assert raw_command(r, 'set', 'foo', 'baz', 'GET') == b'baz'
-
-
-@pytest.mark.min_server('6.2')
-@pytest.mark.max_server('6.2.7')
-def test_set_get_nx(r):
-    # Note: this will most likely fail on a 7.0 server, based on the docs for SET
-    with pytest.raises(redis.ResponseError):
-        raw_command(r, 'set', 'foo', 'bar', 'NX', 'GET')
-
-
-@pytest.mark.min_server('6.2')
-def set_get_wrongtype(r):
-    r.lpush('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        raw_command(r, 'set', 'foo', 'bar', 'GET')
-
-
-def test_del_operator(r):
-    r['foo'] = 'bar'
-    del r['foo']
-    assert r.get('foo') is None
-
-
-def test_delete(r):
-    r['foo'] = 'bar'
-    assert r.delete('foo') == 1
-    assert r.get('foo') is None
-
-
-def test_echo(r):
-    assert r.echo(b'hello') == b'hello'
-    assert r.echo('hello') == b'hello'
-
-
-@pytest.mark.slow
-def test_delete_expire(r):
-    r.set("foo", "bar", ex=1)
-    r.delete("foo")
-    r.set("foo", "bar")
-    sleep(2)
-    assert r.get("foo") == b'bar'
-
-
-def test_delete_multiple(r):
-    r['one'] = 'one'
-    r['two'] = 'two'
-    r['three'] = 'three'
-    # Since redis>=2.7.6 returns number of deleted items.
-    assert r.delete('one', 'two') == 2
-    assert r.get('one') is None
-    assert r.get('two') is None
-    assert r.get('three') == b'three'
-    assert r.delete('one', 'two') == 0
-    # If any keys are deleted, True is returned.
-    assert r.delete('two', 'three', 'three') == 1
-    assert r.get('three') is None
-
-
-def test_delete_nonexistent_key(r):
-    assert r.delete('foo') == 0
-
-
-def test_lpush_then_lrange_all(r):
-    assert r.lpush('foo', 'bar') == 1
-    assert r.lpush('foo', 'baz') == 2
-    assert r.lpush('foo', 'bam', 'buzz') == 4
-    assert r.lrange('foo', 0, -1) == [b'buzz', b'bam', b'baz', b'bar']
-
-
-def test_lpush_then_lrange_portion(r):
-    r.lpush('foo', 'one')
-    r.lpush('foo', 'two')
-    r.lpush('foo', 'three')
-    r.lpush('foo', 'four')
-    assert r.lrange('foo', 0, 2) == [b'four', b'three', b'two']
-    assert r.lrange('foo', 0, 3) == [b'four', b'three', b'two', b'one']
-
-
-def test_lrange_negative_indices(r):
-    r.rpush('foo', 'a', 'b', 'c')
-    assert r.lrange('foo', -1, -2) == []
-    assert r.lrange('foo', -2, -1) == [b'b', b'c']
-
-
-def test_lpush_key_does_not_exist(r):
-    assert r.lrange('foo', 0, -1) == []
-
-
-def test_lpush_with_nonstr_key(r):
-    r.lpush(1, 'one')
-    r.lpush(1, 'two')
-    r.lpush(1, 'three')
-    assert r.lrange(1, 0, 2) == [b'three', b'two', b'one']
-    assert r.lrange('1', 0, 2) == [b'three', b'two', b'one']
-
-
-def test_lpush_wrong_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.lpush('foo', 'element')
-
-
-def test_llen(r):
-    r.lpush('foo', 'one')
-    r.lpush('foo', 'two')
-    r.lpush('foo', 'three')
-    assert r.llen('foo') == 3
-
-
-def test_llen_no_exist(r):
-    assert r.llen('foo') == 0
-
-
-def test_llen_wrong_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.llen('foo')
-
-
-def test_lrem_positive_count(r):
-    r.lpush('foo', 'same')
-    r.lpush('foo', 'same')
-    r.lpush('foo', 'different')
-    r.lrem('foo', 2, 'same')
-    assert r.lrange('foo', 0, -1) == [b'different']
-
-
-def test_lrem_negative_count(r):
-    r.lpush('foo', 'removeme')
-    r.lpush('foo', 'three')
-    r.lpush('foo', 'two')
-    r.lpush('foo', 'one')
-    r.lpush('foo', 'removeme')
-    r.lrem('foo', -1, 'removeme')
-    # Should remove it from the end of the list,
-    # leaving the 'removeme' from the front of the list alone.
-    assert r.lrange('foo', 0, -1) == [b'removeme', b'one', b'two', b'three']
-
-
-def test_lrem_zero_count(r):
-    r.lpush('foo', 'one')
-    r.lpush('foo', 'one')
-    r.lpush('foo', 'one')
-    r.lrem('foo', 0, 'one')
-    assert r.lrange('foo', 0, -1) == []
-
-
-def test_lrem_default_value(r):
-    r.lpush('foo', 'one')
-    r.lpush('foo', 'one')
-    r.lpush('foo', 'one')
-    r.lrem('foo', 0, 'one')
-    assert r.lrange('foo', 0, -1) == []
-
-
-def test_lrem_does_not_exist(r):
-    r.lpush('foo', 'one')
-    r.lrem('foo', 0, 'one')
-    # These should be noops.
-    r.lrem('foo', -2, 'one')
-    r.lrem('foo', 2, 'one')
-
-
-def test_lrem_return_value(r):
-    r.lpush('foo', 'one')
-    count = r.lrem('foo', 0, 'one')
-    assert count == 1
-    assert r.lrem('foo', 0, 'one') == 0
-
-
-def test_lrem_wrong_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.lrem('foo', 0, 'element')
-
-
-def test_rpush(r):
-    r.rpush('foo', 'one')
-    r.rpush('foo', 'two')
-    r.rpush('foo', 'three')
-    r.rpush('foo', 'four', 'five')
-    assert r.lrange('foo', 0, -1) == [b'one', b'two', b'three', b'four', b'five']
-
-
-def test_rpush_wrong_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.rpush('foo', 'element')
-
-
-def test_lpop(r):
-    assert r.rpush('foo', 'one') == 1
-    assert r.rpush('foo', 'two') == 2
-    assert r.rpush('foo', 'three') == 3
-    assert r.lpop('foo') == b'one'
-    assert r.lpop('foo') == b'two'
-    assert r.lpop('foo') == b'three'
-
-
-def test_lpop_empty_list(r):
-    r.rpush('foo', 'one')
-    r.lpop('foo')
-    assert r.lpop('foo') is None
-    # Verify what happens if we try to pop from a key
-    # we've never seen before.
-    assert r.lpop('noexists') is None
-
-
-def test_lpop_wrong_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.lpop('foo')
-
-
-@pytest.mark.min_server('6.2')
-def test_lpop_count(r):
-    assert r.rpush('foo', 'one') == 1
-    assert r.rpush('foo', 'two') == 2
-    assert r.rpush('foo', 'three') == 3
-    assert raw_command(r, 'lpop', 'foo', 2) == [b'one', b'two']
-    # See https://github.com/redis/redis/issues/9680
-    raw = raw_command(r, 'rpop', 'foo', 0)
-    assert raw is None or raw == []  # https://github.com/redis/redis/pull/10095
-
-
-@pytest.mark.min_server('6.2')
-def test_lpop_count_negative(r):
-    with pytest.raises(redis.ResponseError):
-        raw_command(r, 'lpop', 'foo', -1)
-
-
-def test_lset(r):
-    r.rpush('foo', 'one')
-    r.rpush('foo', 'two')
-    r.rpush('foo', 'three')
-    r.lset('foo', 0, 'four')
-    r.lset('foo', -2, 'five')
-    assert r.lrange('foo', 0, -1) == [b'four', b'five', b'three']
-
-
-def test_lset_index_out_of_range(r):
-    r.rpush('foo', 'one')
-    with pytest.raises(redis.ResponseError):
-        r.lset('foo', 3, 'three')
-
-
-def test_lset_wrong_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.lset('foo', 0, 'element')
-
-
-def test_rpushx(r):
-    r.rpush('foo', 'one')
-    r.rpushx('foo', 'two')
-    r.rpushx('bar', 'three')
-    assert r.lrange('foo', 0, -1) == [b'one', b'two']
-    assert r.lrange('bar', 0, -1) == []
-
-
-def test_rpushx_wrong_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.rpushx('foo', 'element')
-
-
-def test_ltrim(r):
-    r.rpush('foo', 'one')
-    r.rpush('foo', 'two')
-    r.rpush('foo', 'three')
-    r.rpush('foo', 'four')
-
-    assert r.ltrim('foo', 1, 3)
-    assert r.lrange('foo', 0, -1) == [b'two', b'three', b'four']
-    assert r.ltrim('foo', 1, -1)
-    assert r.lrange('foo', 0, -1) == [b'three', b'four']
-
-
-def test_ltrim_with_non_existent_key(r):
-    assert r.ltrim('foo', 0, -1)
-
-
-def test_ltrim_expiry(r):
-    r.rpush('foo', 'one', 'two', 'three')
-    r.expire('foo', 10)
-    r.ltrim('foo', 1, 2)
-    assert r.ttl('foo') > 0
-
-
-def test_ltrim_wrong_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.ltrim('foo', 1, -1)
-
-
-def test_lindex(r):
-    r.rpush('foo', 'one')
-    r.rpush('foo', 'two')
-    assert r.lindex('foo', 0) == b'one'
-    assert r.lindex('foo', 4) is None
-    assert r.lindex('bar', 4) is None
-
-
-def test_lindex_wrong_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.lindex('foo', 0)
-
-
-def test_lpushx(r):
-    r.lpush('foo', 'two')
-    r.lpushx('foo', 'one')
-    r.lpushx('bar', 'one')
-    assert r.lrange('foo', 0, -1) == [b'one', b'two']
-    assert r.lrange('bar', 0, -1) == []
-
-
-def test_lpushx_wrong_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.lpushx('foo', 'element')
-
-
-def test_rpop(r):
-    assert r.rpop('foo') is None
-    r.rpush('foo', 'one')
-    r.rpush('foo', 'two')
-    assert r.rpop('foo') == b'two'
-    assert r.rpop('foo') == b'one'
-    assert r.rpop('foo') is None
-
-
-def test_rpop_wrong_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.rpop('foo')
-
-
-@pytest.mark.min_server('6.2')
-def test_rpop_count(r):
-    assert r.rpush('foo', 'one') == 1
-    assert r.rpush('foo', 'two') == 2
-    assert r.rpush('foo', 'three') == 3
-    assert raw_command(r, 'rpop', 'foo', 2) == [b'three', b'two']
-    # See https://github.com/redis/redis/issues/9680
-    raw = raw_command(r, 'rpop', 'foo', 0)
-    assert raw is None or raw == []  # https://github.com/redis/redis/pull/10095
-
-
-@pytest.mark.min_server('6.2')
-def test_rpop_count_negative(r):
-    with pytest.raises(redis.ResponseError):
-        raw_command(r, 'rpop', 'foo', -1)
-
-
-def test_linsert_before(r):
-    r.rpush('foo', 'hello')
-    r.rpush('foo', 'world')
-    assert r.linsert('foo', 'before', 'world', 'there') == 3
-    assert r.lrange('foo', 0, -1) == [b'hello', b'there', b'world']
-
-
-def test_linsert_after(r):
-    r.rpush('foo', 'hello')
-    r.rpush('foo', 'world')
-    assert r.linsert('foo', 'after', 'hello', 'there') == 3
-    assert r.lrange('foo', 0, -1) == [b'hello', b'there', b'world']
-
-
-def test_linsert_no_pivot(r):
-    r.rpush('foo', 'hello')
-    r.rpush('foo', 'world')
-    assert r.linsert('foo', 'after', 'goodbye', 'bar') == -1
-    assert r.lrange('foo', 0, -1) == [b'hello', b'world']
-
-
-def test_linsert_wrong_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.linsert('foo', 'after', 'bar', 'element')
-
-
-def test_rpoplpush(r):
-    assert r.rpoplpush('foo', 'bar') is None
-    assert r.lpop('bar') is None
-    r.rpush('foo', 'one')
-    r.rpush('foo', 'two')
-    r.rpush('bar', 'one')
-
-    assert r.rpoplpush('foo', 'bar') == b'two'
-    assert r.lrange('foo', 0, -1) == [b'one']
-    assert r.lrange('bar', 0, -1) == [b'two', b'one']
-
-    # Catch instances where we store bytes and strings inconsistently
-    # and thus bar = ['two', b'one']
-    assert r.lrem('bar', -1, 'two') == 1
-
-
-def test_rpoplpush_to_nonexistent_destination(r):
-    r.rpush('foo', 'one')
-    assert r.rpoplpush('foo', 'bar') == b'one'
-    assert r.rpop('bar') == b'one'
-
-
-def test_rpoplpush_expiry(r):
-    r.rpush('foo', 'one')
-    r.rpush('bar', 'two')
-    r.expire('bar', 10)
-    r.rpoplpush('foo', 'bar')
-    assert r.ttl('bar') > 0
-
-
-def test_rpoplpush_one_to_self(r):
-    r.rpush('list', 'element')
-    assert r.brpoplpush('list', 'list') == b'element'
-    assert r.lrange('list', 0, -1) == [b'element']
-
-
-def test_rpoplpush_wrong_type(r):
-    r.set('foo', 'bar')
-    r.rpush('list', 'element')
-    with pytest.raises(redis.ResponseError):
-        r.rpoplpush('foo', 'list')
-    assert r.get('foo') == b'bar'
-    assert r.lrange('list', 0, -1) == [b'element']
-    with pytest.raises(redis.ResponseError):
-        r.rpoplpush('list', 'foo')
-    assert r.get('foo') == b'bar'
-    assert r.lrange('list', 0, -1) == [b'element']
-
-
-def test_blpop_single_list(r):
-    r.rpush('foo', 'one')
-    r.rpush('foo', 'two')
-    r.rpush('foo', 'three')
-    assert r.blpop(['foo'], timeout=1) == (b'foo', b'one')
-
-
-def test_blpop_test_multiple_lists(r):
-    r.rpush('baz', 'zero')
-    assert r.blpop(['foo', 'baz'], timeout=1) == (b'baz', b'zero')
-    assert not r.exists('baz')
-
-    r.rpush('foo', 'one')
-    r.rpush('foo', 'two')
-    # bar has nothing, so the returned value should come
-    # from foo.
-    assert r.blpop(['bar', 'foo'], timeout=1) == (b'foo', b'one')
-    r.rpush('bar', 'three')
-    # bar now has something, so the returned value should come
-    # from bar.
-    assert r.blpop(['bar', 'foo'], timeout=1) == (b'bar', b'three')
-    assert r.blpop(['bar', 'foo'], timeout=1) == (b'foo', b'two')
-
-
-def test_blpop_allow_single_key(r):
-    # blpop converts single key arguments to a one element list.
-    r.rpush('foo', 'one')
-    assert r.blpop('foo', timeout=1) == (b'foo', b'one')
-
-
-@pytest.mark.slow
-def test_blpop_block(r):
-    def push_thread():
-        sleep(0.5)
-        r.rpush('foo', 'value1')
-        sleep(0.5)
-        # Will wake the condition variable
-        r.set('bar', 'go back to sleep some more')
-        r.rpush('foo', 'value2')
-
-    thread = threading.Thread(target=push_thread)
-    thread.start()
-    try:
-        assert r.blpop('foo') == (b'foo', b'value1')
-        assert r.blpop('foo', timeout=5) == (b'foo', b'value2')
-    finally:
-        thread.join()
-
-
-def test_blpop_wrong_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.blpop('foo', timeout=1)
-
-
-def test_blpop_transaction(r):
-    p = r.pipeline()
-    p.multi()
-    p.blpop('missing', timeout=1000)
-    result = p.execute()
-    # Blocking commands behave like non-blocking versions in transactions
-    assert result == [None]
-
-
-def test_brpop_test_multiple_lists(r):
-    r.rpush('baz', 'zero')
-    assert r.brpop(['foo', 'baz'], timeout=1) == (b'baz', b'zero')
-    assert not r.exists('baz')
-
-    r.rpush('foo', 'one')
-    r.rpush('foo', 'two')
-    assert r.brpop(['bar', 'foo'], timeout=1) == (b'foo', b'two')
-
-
-def test_brpop_single_key(r):
-    r.rpush('foo', 'one')
-    r.rpush('foo', 'two')
-    assert r.brpop('foo', timeout=1) == (b'foo', b'two')
-
-
-@pytest.mark.slow
-def test_brpop_block(r):
-    def push_thread():
-        sleep(0.5)
-        r.rpush('foo', 'value1')
-        sleep(0.5)
-        # Will wake the condition variable
-        r.set('bar', 'go back to sleep some more')
-        r.rpush('foo', 'value2')
-
-    thread = threading.Thread(target=push_thread)
-    thread.start()
-    try:
-        assert r.brpop('foo') == (b'foo', b'value1')
-        assert r.brpop('foo', timeout=5) == (b'foo', b'value2')
-    finally:
-        thread.join()
-
-
-def test_brpop_wrong_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.brpop('foo', timeout=1)
-
-
-def test_brpoplpush_multi_keys(r):
-    assert r.lpop('bar') is None
-    r.rpush('foo', 'one')
-    r.rpush('foo', 'two')
-    assert r.brpoplpush('foo', 'bar', timeout=1) == b'two'
-    assert r.lrange('bar', 0, -1) == [b'two']
-
-    # Catch instances where we store bytes and strings inconsistently
-    # and thus bar = ['two']
-    assert r.lrem('bar', -1, 'two') == 1
-
-
-def test_brpoplpush_wrong_type(r):
-    r.set('foo', 'bar')
-    r.rpush('list', 'element')
-    with pytest.raises(redis.ResponseError):
-        r.brpoplpush('foo', 'list')
-    assert r.get('foo') == b'bar'
-    assert r.lrange('list', 0, -1) == [b'element']
-    with pytest.raises(redis.ResponseError):
-        r.brpoplpush('list', 'foo')
-    assert r.get('foo') == b'bar'
-    assert r.lrange('list', 0, -1) == [b'element']
-
-
-@pytest.mark.slow
-def test_blocking_operations_when_empty(r):
-    assert r.blpop(['foo'], timeout=1) is None
-    assert r.blpop(['bar', 'foo'], timeout=1) is None
-    assert r.brpop('foo', timeout=1) is None
-    assert r.brpoplpush('foo', 'bar', timeout=1) is None
-
-
-def test_empty_list(r):
-    r.rpush('foo', 'bar')
-    r.rpop('foo')
-    assert not r.exists('foo')
-
-
-# Tests for the hash type.
-
-def test_hstrlen_missing(r):
-    assert r.hstrlen('foo', 'doesnotexist') == 0
-
-    r.hset('foo', 'key', 'value')
-    assert r.hstrlen('foo', 'doesnotexist') == 0
-
-
-def test_hstrlen(r):
-    r.hset('foo', 'key', 'value')
-    assert r.hstrlen('foo', 'key') == 5
-
-
-def test_hset_then_hget(r):
-    assert r.hset('foo', 'key', 'value') == 1
-    assert r.hget('foo', 'key') == b'value'
-
-
-def test_hset_update(r):
-    assert r.hset('foo', 'key', 'value') == 1
-    assert r.hset('foo', 'key', 'value') == 0
-
-
-def test_hset_wrong_type(r):
-    testtools.zadd(r, 'foo', {'bar': 1})
-    with pytest.raises(redis.ResponseError):
-        r.hset('foo', 'key', 'value')
-
-
-def test_hgetall(r):
-    assert r.hset('foo', 'k1', 'v1') == 1
-    assert r.hset('foo', 'k2', 'v2') == 1
-    assert r.hset('foo', 'k3', 'v3') == 1
-    assert r.hgetall('foo') == {
-        b'k1': b'v1',
-        b'k2': b'v2',
-        b'k3': b'v3'
-    }
-
-
-def test_hgetall_empty_key(r):
-    assert r.hgetall('foo') == {}
-
-
-def test_hgetall_wrong_type(r):
-    testtools.zadd(r, 'foo', {'bar': 1})
-    with pytest.raises(redis.ResponseError):
-        r.hgetall('foo')
-
-
-def test_hexists(r):
-    r.hset('foo', 'bar', 'v1')
-    assert r.hexists('foo', 'bar') == 1
-    assert r.hexists('foo', 'baz') == 0
-    assert r.hexists('bar', 'bar') == 0
-
-
-def test_hexists_wrong_type(r):
-    testtools.zadd(r, 'foo', {'bar': 1})
-    with pytest.raises(redis.ResponseError):
-        r.hexists('foo', 'key')
-
-
-def test_hkeys(r):
-    r.hset('foo', 'k1', 'v1')
-    r.hset('foo', 'k2', 'v2')
-    assert set(r.hkeys('foo')) == {b'k1', b'k2'}
-    assert set(r.hkeys('bar')) == set()
-
-
-def test_hkeys_wrong_type(r):
-    testtools.zadd(r, 'foo', {'bar': 1})
-    with pytest.raises(redis.ResponseError):
-        r.hkeys('foo')
-
-
-def test_hlen(r):
-    r.hset('foo', 'k1', 'v1')
-    r.hset('foo', 'k2', 'v2')
-    assert r.hlen('foo') == 2
-
-
-def test_hlen_wrong_type(r):
-    testtools.zadd(r, 'foo', {'bar': 1})
-    with pytest.raises(redis.ResponseError):
-        r.hlen('foo')
-
-
-def test_hvals(r):
-    r.hset('foo', 'k1', 'v1')
-    r.hset('foo', 'k2', 'v2')
-    assert set(r.hvals('foo')) == {b'v1', b'v2'}
-    assert set(r.hvals('bar')) == set()
-
-
-def test_hvals_wrong_type(r):
-    testtools.zadd(r, 'foo', {'bar': 1})
-    with pytest.raises(redis.ResponseError):
-        r.hvals('foo')
-
-
-def test_hmget(r):
-    r.hset('foo', 'k1', 'v1')
-    r.hset('foo', 'k2', 'v2')
-    r.hset('foo', 'k3', 'v3')
-    # Normal case.
-    assert r.hmget('foo', ['k1', 'k3']) == [b'v1', b'v3']
-    assert r.hmget('foo', 'k1', 'k3') == [b'v1', b'v3']
-    # Key does not exist.
-    assert r.hmget('bar', ['k1', 'k3']) == [None, None]
-    assert r.hmget('bar', 'k1', 'k3') == [None, None]
-    # Some keys in the hash do not exist.
-    assert r.hmget('foo', ['k1', 'k500']) == [b'v1', None]
-    assert r.hmget('foo', 'k1', 'k500') == [b'v1', None]
-
-
-def test_hmget_wrong_type(r):
-    testtools.zadd(r, 'foo', {'bar': 1})
-    with pytest.raises(redis.ResponseError):
-        r.hmget('foo', 'key1', 'key2')
-
-
-def test_hdel(r):
-    r.hset('foo', 'k1', 'v1')
-    r.hset('foo', 'k2', 'v2')
-    r.hset('foo', 'k3', 'v3')
-    assert r.hget('foo', 'k1') == b'v1'
-    assert r.hdel('foo', 'k1') == 1
-    assert r.hget('foo', 'k1') is None
-    assert r.hdel('foo', 'k1') == 0
-    # Since redis>=2.7.6 returns number of deleted items.
-    assert r.hdel('foo', 'k2', 'k3') == 2
-    assert r.hget('foo', 'k2') is None
-    assert r.hget('foo', 'k3') is None
-    assert r.hdel('foo', 'k2', 'k3') == 0
-
-
-def test_hdel_wrong_type(r):
-    testtools.zadd(r, 'foo', {'bar': 1})
-    with pytest.raises(redis.ResponseError):
-        r.hdel('foo', 'key')
-
-
-def test_hincrby(r):
-    r.hset('foo', 'counter', 0)
-    assert r.hincrby('foo', 'counter') == 1
-    assert r.hincrby('foo', 'counter') == 2
-    assert r.hincrby('foo', 'counter') == 3
-
-
-def test_hincrby_with_no_starting_value(r):
-    assert r.hincrby('foo', 'counter') == 1
-    assert r.hincrby('foo', 'counter') == 2
-    assert r.hincrby('foo', 'counter') == 3
-
-
-def test_hincrby_with_range_param(r):
-    assert r.hincrby('foo', 'counter', 2) == 2
-    assert r.hincrby('foo', 'counter', 2) == 4
-    assert r.hincrby('foo', 'counter', 2) == 6
-
-
-def test_hincrby_wrong_type(r):
-    testtools.zadd(r, 'foo', {'bar': 1})
-    with pytest.raises(redis.ResponseError):
-        r.hincrby('foo', 'key', 2)
-
-
-def test_hincrbyfloat(r):
-    r.hset('foo', 'counter', 0.0)
-    assert r.hincrbyfloat('foo', 'counter') == 1.0
-    assert r.hincrbyfloat('foo', 'counter') == 2.0
-    assert r.hincrbyfloat('foo', 'counter') == 3.0
-
-
-def test_hincrbyfloat_with_no_starting_value(r):
-    assert r.hincrbyfloat('foo', 'counter') == 1.0
-    assert r.hincrbyfloat('foo', 'counter') == 2.0
-    assert r.hincrbyfloat('foo', 'counter') == 3.0
-
-
-def test_hincrbyfloat_with_range_param(r):
-    assert r.hincrbyfloat('foo', 'counter', 0.1) == pytest.approx(0.1)
-    assert r.hincrbyfloat('foo', 'counter', 0.1) == pytest.approx(0.2)
-    assert r.hincrbyfloat('foo', 'counter', 0.1) == pytest.approx(0.3)
-
-
-def test_hincrbyfloat_on_non_float_value_raises_error(r):
-    r.hset('foo', 'counter', 'cat')
-    with pytest.raises(redis.ResponseError):
-        r.hincrbyfloat('foo', 'counter')
-
-
-def test_hincrbyfloat_with_non_float_amount_raises_error(r):
-    with pytest.raises(redis.ResponseError):
-        r.hincrbyfloat('foo', 'counter', 'cat')
-
-
-def test_hincrbyfloat_wrong_type(r):
-    testtools.zadd(r, 'foo', {'bar': 1})
-    with pytest.raises(redis.ResponseError):
-        r.hincrbyfloat('foo', 'key', 0.1)
-
-
-def test_hincrbyfloat_precision(r):
-    x = 1.23456789123456789
-    assert r.hincrbyfloat('foo', 'bar', x) == x
-    assert float(r.hget('foo', 'bar')) == x
-
-
-def test_hsetnx(r):
-    assert r.hsetnx('foo', 'newkey', 'v1') == 1
-    assert r.hsetnx('foo', 'newkey', 'v1') == 0
-    assert r.hget('foo', 'newkey') == b'v1'
-
-
-def test_hmset_empty_raises_error(r):
-    with pytest.raises(redis.DataError):
-        r.hmset('foo', {})
-
-
-def test_hmset(r):
-    r.hset('foo', 'k1', 'v1')
-    assert r.hmset('foo', {'k2': 'v2', 'k3': 'v3'}) is True
-
-
-def test_hmset_wrong_type(r):
-    testtools.zadd(r, 'foo', {'bar': 1})
-    with pytest.raises(redis.ResponseError):
-        r.hmset('foo', {'key': 'value'})
-
-
-def test_empty_hash(r):
-    r.hset('foo', 'bar', 'baz')
-    r.hdel('foo', 'bar')
-    assert not r.exists('foo')
-
-
-def test_sadd(r):
-    assert r.sadd('foo', 'member1') == 1
-    assert r.sadd('foo', 'member1') == 0
-    assert r.smembers('foo') == {b'member1'}
-    assert r.sadd('foo', 'member2', 'member3') == 2
-    assert r.smembers('foo') == {b'member1', b'member2', b'member3'}
-    assert r.sadd('foo', 'member3', 'member4') == 1
-    assert r.smembers('foo') == {b'member1', b'member2', b'member3', b'member4'}
-
-
-def test_sadd_as_str_type(r):
-    assert r.sadd('foo', *range(3)) == 3
-    assert r.smembers('foo') == {b'0', b'1', b'2'}
-
-
-def test_sadd_wrong_type(r):
-    testtools.zadd(r, 'foo', {'member': 1})
-    with pytest.raises(redis.ResponseError):
-        r.sadd('foo', 'member2')
-
-
-def test_scan_single(r):
-    r.set('foo1', 'bar1')
-    assert r.scan(match="foo*") == (0, [b'foo1'])
-
-
-def test_scan_iter_single_page(r):
-    r.set('foo1', 'bar1')
-    r.set('foo2', 'bar2')
-    assert set(r.scan_iter(match="foo*")) == {b'foo1', b'foo2'}
-    assert set(r.scan_iter()) == {b'foo1', b'foo2'}
-    assert set(r.scan_iter(match="")) == set()
-
-
-def test_scan_iter_multiple_pages(r):
-    all_keys = key_val_dict(size=100)
-    assert all(r.set(k, v) for k, v in all_keys.items())
-    assert set(r.scan_iter()) == set(all_keys)
-
-
-def test_scan_iter_multiple_pages_with_match(r):
-    all_keys = key_val_dict(size=100)
-    assert all(r.set(k, v) for k, v in all_keys.items())
-    # Now add a few keys that don't match the key:<number> pattern.
-    r.set('otherkey', 'foo')
-    r.set('andanother', 'bar')
-    actual = set(r.scan_iter(match='key:*'))
-    assert actual == set(all_keys)
-
-
-@pytest.mark.skipif(REDIS_VERSION < Version('3.5'), reason="Test is only applicable to redis-py 3.5+")
-@pytest.mark.min_server('6.0')
-def test_scan_iter_multiple_pages_with_type(r):
-    all_keys = key_val_dict(size=100)
-    assert all(r.set(k, v) for k, v in all_keys.items())
-    # Now add a few keys of another type
-    testtools.zadd(r, 'zset1', {'otherkey': 1})
-    testtools.zadd(r, 'zset2', {'andanother': 1})
-    actual = set(r.scan_iter(_type='string'))
-    assert actual == set(all_keys)
-    actual = set(r.scan_iter(_type='ZSET'))
-    assert actual == {b'zset1', b'zset2'}
-
-
-def test_scan_multiple_pages_with_count_arg(r):
-    all_keys = key_val_dict(size=100)
-    assert all(r.set(k, v) for k, v in all_keys.items())
-    assert set(r.scan_iter(count=1000)) == set(all_keys)
-
-
-def test_scan_all_in_single_call(r):
-    all_keys = key_val_dict(size=100)
-    assert all(r.set(k, v) for k, v in all_keys.items())
-    # Specify way more than the 100 keys we've added.
-    actual = r.scan(count=1000)
-    assert set(actual[1]) == set(all_keys)
-    assert actual[0] == 0
-
-
-@pytest.mark.slow
-def test_scan_expired_key(r):
-    r.set('expiringkey', 'value')
-    r.pexpire('expiringkey', 1)
-    sleep(1)
-    assert r.scan()[1] == []
-
-
-def test_scard(r):
-    r.sadd('foo', 'member1')
-    r.sadd('foo', 'member2')
-    r.sadd('foo', 'member2')
-    assert r.scard('foo') == 2
-
-
-def test_scard_wrong_type(r):
-    testtools.zadd(r, 'foo', {'member': 1})
-    with pytest.raises(redis.ResponseError):
-        r.scard('foo')
-
-
-def test_sdiff(r):
-    r.sadd('foo', 'member1')
-    r.sadd('foo', 'member2')
-    r.sadd('bar', 'member2')
-    r.sadd('bar', 'member3')
-    assert r.sdiff('foo', 'bar') == {b'member1'}
-    # Original sets shouldn't be modified.
-    assert r.smembers('foo') == {b'member1', b'member2'}
-    assert r.smembers('bar') == {b'member2', b'member3'}
-
-
-def test_sdiff_one_key(r):
-    r.sadd('foo', 'member1')
-    r.sadd('foo', 'member2')
-    assert r.sdiff('foo') == {b'member1', b'member2'}
-
-
-def test_sdiff_empty(r):
-    assert r.sdiff('foo') == set()
-
-
-def test_sdiff_wrong_type(r):
-    testtools.zadd(r, 'foo', {'member': 1})
-    r.sadd('bar', 'member')
-    with pytest.raises(redis.ResponseError):
-        r.sdiff('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.sdiff('bar', 'foo')
-
-
-def test_sdiffstore(r):
-    r.sadd('foo', 'member1')
-    r.sadd('foo', 'member2')
-    r.sadd('bar', 'member2')
-    r.sadd('bar', 'member3')
-    assert r.sdiffstore('baz', 'foo', 'bar') == 1
-
-    # Catch instances where we store bytes and strings inconsistently
-    # and thus baz = {'member1', b'member1'}
-    r.sadd('baz', 'member1')
-    assert r.scard('baz') == 1
-
-
-def test_setrange(r):
-    r.set('foo', 'test')
-    assert r.setrange('foo', 1, 'aste') == 5
-    assert r.get('foo') == b'taste'
-
-    r.set('foo', 'test')
-    assert r.setrange('foo', 1, 'a') == 4
-    assert r.get('foo') == b'tast'
-
-    assert r.setrange('bar', 2, 'test') == 6
-    assert r.get('bar') == b'\x00\x00test'
-
-
-def test_setrange_expiry(r):
-    r.set('foo', 'test', ex=10)
-    r.setrange('foo', 1, 'aste')
-    assert r.ttl('foo') > 0
-
-
-def test_sinter(r):
-    r.sadd('foo', 'member1')
-    r.sadd('foo', 'member2')
-    r.sadd('bar', 'member2')
-    r.sadd('bar', 'member3')
-    assert r.sinter('foo', 'bar') == {b'member2'}
-    assert r.sinter('foo') == {b'member1', b'member2'}
-
-
-def test_sinter_bytes_keys(r):
-    foo = os.urandom(10)
-    bar = os.urandom(10)
-    r.sadd(foo, 'member1')
-    r.sadd(foo, 'member2')
-    r.sadd(bar, 'member2')
-    r.sadd(bar, 'member3')
-    assert r.sinter(foo, bar) == {b'member2'}
-    assert r.sinter(foo) == {b'member1', b'member2'}
-
-
-def test_sinter_wrong_type(r):
-    testtools.zadd(r, 'foo', {'member': 1})
-    r.sadd('bar', 'member')
-    with pytest.raises(redis.ResponseError):
-        r.sinter('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.sinter('bar', 'foo')
-
-
-def test_sinterstore(r):
-    r.sadd('foo', 'member1')
-    r.sadd('foo', 'member2')
-    r.sadd('bar', 'member2')
-    r.sadd('bar', 'member3')
-    assert r.sinterstore('baz', 'foo', 'bar') == 1
-
-    # Catch instances where we store bytes and strings inconsistently
-    # and thus baz = {'member2', b'member2'}
-    r.sadd('baz', 'member2')
-    assert r.scard('baz') == 1
-
-
-def test_sismember(r):
-    assert r.sismember('foo', 'member1') is False
-    r.sadd('foo', 'member1')
-    assert r.sismember('foo', 'member1') is True
-
-
-def test_sismember_wrong_type(r):
-    testtools.zadd(r, 'foo', {'member': 1})
-    with pytest.raises(redis.ResponseError):
-        r.sismember('foo', 'member')
-
-
-def test_smembers(r):
-    assert r.smembers('foo') == set()
-
-
-def test_smembers_copy(r):
-    r.sadd('foo', 'member1')
-    ret = r.smembers('foo')
-    r.sadd('foo', 'member2')
-    assert r.smembers('foo') != ret
-
-
-def test_smembers_wrong_type(r):
-    testtools.zadd(r, 'foo', {'member': 1})
-    with pytest.raises(redis.ResponseError):
-        r.smembers('foo')
-
-
-def test_smembers_runtime_error(r):
-    r.sadd('foo', 'member1', 'member2')
-    for member in r.smembers('foo'):
-        r.srem('foo', member)
-
-
-def test_smove(r):
-    r.sadd('foo', 'member1')
-    r.sadd('foo', 'member2')
-    assert r.smove('foo', 'bar', 'member1') is True
-    assert r.smembers('bar') == {b'member1'}
-
-
-def test_smove_non_existent_key(r):
-    assert r.smove('foo', 'bar', 'member1') is False
-
-
-def test_smove_wrong_type(r):
-    testtools.zadd(r, 'foo', {'member': 1})
-    r.sadd('bar', 'member')
-    with pytest.raises(redis.ResponseError):
-        r.smove('bar', 'foo', 'member')
-    # Must raise the error before removing member from bar
-    assert r.smembers('bar') == {b'member'}
-    with pytest.raises(redis.ResponseError):
-        r.smove('foo', 'bar', 'member')
-
-
-def test_spop(r):
-    # This is tricky because it pops a random element.
-    r.sadd('foo', 'member1')
-    assert r.spop('foo') == b'member1'
-    assert r.spop('foo') is None
-
-
-def test_spop_wrong_type(r):
-    testtools.zadd(r, 'foo', {'member': 1})
-    with pytest.raises(redis.ResponseError):
-        r.spop('foo')
-
-
-def test_srandmember(r):
-    r.sadd('foo', 'member1')
-    assert r.srandmember('foo') == b'member1'
-    # Shouldn't be removed from the set.
-    assert r.srandmember('foo') == b'member1'
-
-
-def test_srandmember_number(r):
-    """srandmember works with the number argument."""
-    assert r.srandmember('foo', 2) == []
-    r.sadd('foo', b'member1')
-    assert r.srandmember('foo', 2) == [b'member1']
-    r.sadd('foo', b'member2')
-    assert set(r.srandmember('foo', 2)) == {b'member1', b'member2'}
-    r.sadd('foo', b'member3')
-    res = r.srandmember('foo', 2)
-    assert len(res) == 2
-    for e in res:
-        assert e in {b'member1', b'member2', b'member3'}
-
-
-def test_srandmember_wrong_type(r):
-    testtools.zadd(r, 'foo', {'member': 1})
-    with pytest.raises(redis.ResponseError):
-        r.srandmember('foo')
-
-
-def test_srem(r):
-    r.sadd('foo', 'member1', 'member2', 'member3', 'member4')
-    assert r.smembers('foo') == {b'member1', b'member2', b'member3', b'member4'}
-    assert r.srem('foo', 'member1') == 1
-    assert r.smembers('foo') == {b'member2', b'member3', b'member4'}
-    assert r.srem('foo', 'member1') == 0
-    # Since redis>=2.7.6 returns number of deleted items.
-    assert r.srem('foo', 'member2', 'member3') == 2
-    assert r.smembers('foo') == {b'member4'}
-    assert r.srem('foo', 'member3', 'member4') == 1
-    assert r.smembers('foo') == set()
-    assert r.srem('foo', 'member3', 'member4') == 0
-
-
-def test_srem_wrong_type(r):
-    testtools.zadd(r, 'foo', {'member': 1})
-    with pytest.raises(redis.ResponseError):
-        r.srem('foo', 'member')
-
-
-def test_sunion(r):
-    r.sadd('foo', 'member1')
-    r.sadd('foo', 'member2')
-    r.sadd('bar', 'member2')
-    r.sadd('bar', 'member3')
-    assert r.sunion('foo', 'bar') == {b'member1', b'member2', b'member3'}
-
-
-def test_sunion_wrong_type(r):
-    testtools.zadd(r, 'foo', {'member': 1})
-    r.sadd('bar', 'member')
-    with pytest.raises(redis.ResponseError):
-        r.sunion('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.sunion('bar', 'foo')
-
-
-def test_sunionstore(r):
-    r.sadd('foo', 'member1')
-    r.sadd('foo', 'member2')
-    r.sadd('bar', 'member2')
-    r.sadd('bar', 'member3')
-    assert r.sunionstore('baz', 'foo', 'bar') == 3
-    assert r.smembers('baz') == {b'member1', b'member2', b'member3'}
-
-    # Catch instances where we store bytes and strings inconsistently
-    # and thus baz = {b'member1', b'member2', b'member3', 'member3'}
-    r.sadd('baz', 'member3')
-    assert r.scard('baz') == 3
-
-
-def test_empty_set(r):
-    r.sadd('foo', 'bar')
-    r.srem('foo', 'bar')
-    assert not r.exists('foo')
-
-
-def test_zadd(r):
-    testtools.zadd(r, 'foo', {'four': 4})
-    testtools.zadd(r, 'foo', {'three': 3})
-    assert testtools.zadd(r, 'foo', {'two': 2, 'one': 1, 'zero': 0}) == 3
-    assert r.zrange('foo', 0, -1) == [b'zero', b'one', b'two', b'three', b'four']
-    assert testtools.zadd(r, 'foo', {'zero': 7, 'one': 1, 'five': 5}) == 1
-    assert (
-            r.zrange('foo', 0, -1)
-            == [b'one', b'two', b'three', b'four', b'five', b'zero']
-    )
-
-
-def test_zadd_empty(r):
-    # Have to add at least one key/value pair
-    with pytest.raises(redis.RedisError):
-        testtools.zadd(r, 'foo', {})
-
-
-@pytest.mark.max_server('6.2.7')
-def test_zadd_minus_zero(r):
-    # Changing -0 to +0 is ignored
-    testtools.zadd(r, 'foo', {'a': -0.0})
-    testtools.zadd(r, 'foo', {'a': 0.0})
-    assert raw_command(r, 'zscore', 'foo', 'a') == b'-0'
-
-
-
-def test_zadd_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        testtools.zadd(r, 'foo', {'two': 2})
-
-
-def test_zadd_multiple(r):
-    testtools.zadd(r, 'foo', {'one': 1, 'two': 2})
-    assert r.zrange('foo', 0, 0) == [b'one']
-    assert r.zrange('foo', 1, 1) == [b'two']
-
-
-@testtools.run_test_if_redis_ver('above', '3')
-@pytest.mark.parametrize(
-    'param,return_value,state',
-    [
-        ({'four': 2.0, 'three': 1.0}, 0, [(b'three', 3.0), (b'four', 4.0)]),
-        ({'four': 2.0, 'three': 1.0, 'zero': 0.0}, 1, [(b'zero', 0.0), (b'three', 3.0), (b'four', 4.0)]),
-        ({'two': 2.0, 'one': 1.0}, 2, [(b'one', 1.0), (b'two', 2.0), (b'three', 3.0), (b'four', 4.0)])
-    ]
-)
-@pytest.mark.parametrize('ch', [False, True])
-def test_zadd_with_nx(r, param, return_value, state, ch):
-    testtools.zadd(r, 'foo', {'four': 4.0, 'three': 3.0})
-    assert testtools.zadd(r, 'foo', param, nx=True, ch=ch) == return_value
-    assert r.zrange('foo', 0, -1, withscores=True) == state
-
-
-@testtools.run_test_if_redis_ver('above', '3')
-@pytest.mark.parametrize(
-    'param,return_value,state',
-    [
-        ({'four': 4.0, 'three': 1.0}, 1, [(b'three', 1.0), (b'four', 4.0)]),
-        ({'four': 4.0, 'three': 1.0, 'zero': 0.0}, 2, [(b'zero', 0.0), (b'three', 1.0), (b'four', 4.0)]),
-        ({'two': 2.0, 'one': 1.0}, 2, [(b'one', 1.0), (b'two', 2.0), (b'three', 3.0), (b'four', 4.0)])
-    ]
-)
-def test_zadd_with_ch(r, param, return_value, state):
-    testtools.zadd(r, 'foo', {'four': 4.0, 'three': 3.0})
-    assert testtools.zadd(r, 'foo', param, ch=True) == return_value
-    assert r.zrange('foo', 0, -1, withscores=True) == state
-
-
-@testtools.run_test_if_redis_ver('above', '3')
-@pytest.mark.parametrize(
-    'param,changed,state',
-    [
-        ({'four': 2.0, 'three': 1.0}, 2, [(b'three', 1.0), (b'four', 2.0)]),
-        ({'four': 4.0, 'three': 3.0, 'zero': 0.0}, 0, [(b'three', 3.0), (b'four', 4.0)]),
-        ({'two': 2.0, 'one': 1.0}, 0, [(b'three', 3.0), (b'four', 4.0)])
-    ]
-)
-@pytest.mark.parametrize('ch', [False, True])
-def test_zadd_with_xx(r, param, changed, state, ch):
-    testtools.zadd(r, 'foo', {'four': 4.0, 'three': 3.0})
-    assert testtools.zadd(r, 'foo', param, xx=True, ch=ch) == (changed if ch else 0)
-    assert r.zrange('foo', 0, -1, withscores=True) == state
-
-
-@testtools.run_test_if_redis_ver('above', '3')
-@pytest.mark.parametrize('ch', [False, True])
-def test_zadd_with_nx_and_xx(r, ch):
-    testtools.zadd(r, 'foo', {'four': 4.0, 'three': 3.0})
-    with pytest.raises(redis.DataError):
-        testtools.zadd(r, 'foo', {'four': -4.0, 'three': -3.0}, nx=True, xx=True, ch=ch)
-
-
-@pytest.mark.skipif(REDIS_VERSION < Version('3.1'), reason="Test is only applicable to redis-py 3.1+")
-@pytest.mark.parametrize('ch', [False, True])
-def test_zadd_incr(r, ch):
-    testtools.zadd(r, 'foo', {'four': 4.0, 'three': 3.0})
-    assert testtools.zadd(r, 'foo', {'four': 1.0}, incr=True, ch=ch) == 5.0
-    assert testtools.zadd(r, 'foo', {'three': 1.0}, incr=True, nx=True, ch=ch) is None
-    assert r.zscore('foo', 'three') == 3.0
-    assert testtools.zadd(r, 'foo', {'bar': 1.0}, incr=True, xx=True, ch=ch) is None
-    assert testtools.zadd(r, 'foo', {'three': 1.0}, incr=True, xx=True, ch=ch) == 4.0
-
-
-def test_zrange_same_score(r):
-    testtools.zadd(r, 'foo', {'two_a': 2})
-    testtools.zadd(r, 'foo', {'two_b': 2})
-    testtools.zadd(r, 'foo', {'two_c': 2})
-    testtools.zadd(r, 'foo', {'two_d': 2})
-    testtools.zadd(r, 'foo', {'two_e': 2})
-    assert r.zrange('foo', 2, 3) == [b'two_c', b'two_d']
-
-
-def test_zcard(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    assert r.zcard('foo') == 2
-
-
-def test_zcard_non_existent_key(r):
-    assert r.zcard('foo') == 0
-
-
-def test_zcard_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zcard('foo')
-
-
-def test_zcount(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'three': 2})
-    testtools.zadd(r, 'foo', {'five': 5})
-    assert r.zcount('foo', 2, 4) == 1
-    assert r.zcount('foo', 1, 4) == 2
-    assert r.zcount('foo', 0, 5) == 3
-    assert r.zcount('foo', 4, '+inf') == 1
-    assert r.zcount('foo', '-inf', 4) == 2
-    assert r.zcount('foo', '-inf', '+inf') == 3
-
-
-def test_zcount_exclusive(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'three': 2})
-    testtools.zadd(r, 'foo', {'five': 5})
-    assert r.zcount('foo', '-inf', '(2') == 1
-    assert r.zcount('foo', '-inf', 2) == 2
-    assert r.zcount('foo', '(5', '+inf') == 0
-    assert r.zcount('foo', '(1', 5) == 2
-    assert r.zcount('foo', '(2', '(5') == 0
-    assert r.zcount('foo', '(1', '(5') == 1
-    assert r.zcount('foo', 2, '(5') == 1
-
-
-def test_zcount_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zcount('foo', '-inf', '+inf')
-
-
-def test_zincrby(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    assert zincrby(r, 'foo', 10, 'one') == 11
-    assert r.zrange('foo', 0, -1, withscores=True) == [(b'one', 11)]
-
-
-def test_zincrby_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        zincrby(r, 'foo', 10, 'one')
-
-
-def test_zrange_descending(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'three': 3})
-    assert r.zrange('foo', 0, -1, desc=True) == [b'three', b'two', b'one']
-
-
-def test_zrange_descending_with_scores(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'three': 3})
-    assert (
-            r.zrange('foo', 0, -1, desc=True, withscores=True)
-            == [(b'three', 3), (b'two', 2), (b'one', 1)]
-    )
-
-
-def test_zrange_with_positive_indices(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'three': 3})
-    assert r.zrange('foo', 0, 1) == [b'one', b'two']
-
-
-def test_zrange_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zrange('foo', 0, -1)
-
-
-def test_zrange_score_cast(r):
-    testtools.zadd(r, 'foo', {'one': 1.2})
-    testtools.zadd(r, 'foo', {'two': 2.2})
-
-    expected_without_cast_round = [(b'one', 1.2), (b'two', 2.2)]
-    expected_with_cast_round = [(b'one', 1.0), (b'two', 2.0)]
-    assert r.zrange('foo', 0, 2, withscores=True) == expected_without_cast_round
-    assert (
-            r.zrange('foo', 0, 2, withscores=True, score_cast_func=round_str)
-            == expected_with_cast_round
-    )
-
-
-def test_zrank(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'three': 3})
-    assert r.zrank('foo', 'one') == 0
-    assert r.zrank('foo', 'two') == 1
-    assert r.zrank('foo', 'three') == 2
-
-
-def test_zrank_non_existent_member(r):
-    assert r.zrank('foo', 'one') is None
-
-
-def test_zrank_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zrank('foo', 'one')
-
-
-def test_zrem(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'three': 3})
-    testtools.zadd(r, 'foo', {'four': 4})
-    assert r.zrem('foo', 'one') == 1
-    assert r.zrange('foo', 0, -1) == [b'two', b'three', b'four']
-    # Since redis>=2.7.6 returns number of deleted items.
-    assert r.zrem('foo', 'two', 'three') == 2
-    assert r.zrange('foo', 0, -1) == [b'four']
-    assert r.zrem('foo', 'three', 'four') == 1
-    assert r.zrange('foo', 0, -1) == []
-    assert r.zrem('foo', 'three', 'four') == 0
-
-
-def test_zrem_non_existent_member(r):
-    assert not r.zrem('foo', 'one')
-
-
-def test_zrem_numeric_member(r):
-    testtools.zadd(r, 'foo', {'128': 13.0, '129': 12.0})
-    assert r.zrem('foo', 128) == 1
-    assert r.zrange('foo', 0, -1) == [b'129']
-
-
-def test_zrem_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zrem('foo', 'bar')
-
-
-def test_zscore(r):
-    testtools.zadd(r, 'foo', {'one': 54})
-    assert r.zscore('foo', 'one') == 54
-
-
-def test_zscore_non_existent_member(r):
-    assert r.zscore('foo', 'one') is None
-
-
-def test_zscore_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zscore('foo', 'one')
-
-
-def test_zrevrank(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'three': 3})
-    assert r.zrevrank('foo', 'one') == 2
-    assert r.zrevrank('foo', 'two') == 1
-    assert r.zrevrank('foo', 'three') == 0
-
-
-def test_zrevrank_non_existent_member(r):
-    assert r.zrevrank('foo', 'one') is None
-
-
-def test_zrevrank_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zrevrank('foo', 'one')
-
-
-def test_zrevrange(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'three': 3})
-    assert r.zrevrange('foo', 0, 1) == [b'three', b'two']
-    assert r.zrevrange('foo', 0, -1) == [b'three', b'two', b'one']
-
-
-def test_zrevrange_sorted_keys(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'two_b': 2})
-    testtools.zadd(r, 'foo', {'three': 3})
-    assert r.zrevrange('foo', 0, 2) == [b'three', b'two_b', b'two']
-    assert r.zrevrange('foo', 0, -1) == [b'three', b'two_b', b'two', b'one']
-
-
-def test_zrevrange_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zrevrange('foo', 0, 2)
-
-
-def test_zrevrange_score_cast(r):
-    testtools.zadd(r, 'foo', {'one': 1.2})
-    testtools.zadd(r, 'foo', {'two': 2.2})
-
-    expected_without_cast_round = [(b'two', 2.2), (b'one', 1.2)]
-    expected_with_cast_round = [(b'two', 2.0), (b'one', 1.0)]
-    assert r.zrevrange('foo', 0, 2, withscores=True) == expected_without_cast_round
-    assert (
-            r.zrevrange('foo', 0, 2, withscores=True, score_cast_func=round_str)
-            == expected_with_cast_round
-    )
-
-
-def test_zrangebyscore(r):
-    testtools.zadd(r, 'foo', {'zero': 0})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'two_a_also': 2})
-    testtools.zadd(r, 'foo', {'two_b_also': 2})
-    testtools.zadd(r, 'foo', {'four': 4})
-    assert r.zrangebyscore('foo', 1, 3) == [b'two', b'two_a_also', b'two_b_also']
-    assert r.zrangebyscore('foo', 2, 3) == [b'two', b'two_a_also', b'two_b_also']
-    assert (
-            r.zrangebyscore('foo', 0, 4)
-            == [b'zero', b'two', b'two_a_also', b'two_b_also', b'four']
-    )
-    assert r.zrangebyscore('foo', '-inf', 1) == [b'zero']
-    assert (
-            r.zrangebyscore('foo', 2, '+inf')
-            == [b'two', b'two_a_also', b'two_b_also', b'four']
-    )
-    assert (
-            r.zrangebyscore('foo', '-inf', '+inf')
-            == [b'zero', b'two', b'two_a_also', b'two_b_also', b'four']
-    )
-
-
-def test_zrangebysore_exclusive(r):
-    testtools.zadd(r, 'foo', {'zero': 0})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'four': 4})
-    testtools.zadd(r, 'foo', {'five': 5})
-    assert r.zrangebyscore('foo', '(0', 6) == [b'two', b'four', b'five']
-    assert r.zrangebyscore('foo', '(2', '(5') == [b'four']
-    assert r.zrangebyscore('foo', 0, '(4') == [b'zero', b'two']
-
-
-def test_zrangebyscore_raises_error(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'three': 3})
-    with pytest.raises(redis.ResponseError):
-        r.zrangebyscore('foo', 'one', 2)
-    with pytest.raises(redis.ResponseError):
-        r.zrangebyscore('foo', 2, 'three')
-    with pytest.raises(redis.ResponseError):
-        r.zrangebyscore('foo', 2, '3)')
-    with pytest.raises(redis.RedisError):
-        r.zrangebyscore('foo', 2, '3)', 0, None)
-
-
-def test_zrangebyscore_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zrangebyscore('foo', '(1', '(2')
-
-
-def test_zrangebyscore_slice(r):
-    testtools.zadd(r, 'foo', {'two_a': 2})
-    testtools.zadd(r, 'foo', {'two_b': 2})
-    testtools.zadd(r, 'foo', {'two_c': 2})
-    testtools.zadd(r, 'foo', {'two_d': 2})
-    assert r.zrangebyscore('foo', 0, 4, 0, 2) == [b'two_a', b'two_b']
-    assert r.zrangebyscore('foo', 0, 4, 1, 3) == [b'two_b', b'two_c', b'two_d']
-
-
-def test_zrangebyscore_withscores(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'three': 3})
-    assert r.zrangebyscore('foo', 1, 3, 0, 2, True) == [(b'one', 1), (b'two', 2)]
-
-
-def test_zrangebyscore_cast_scores(r):
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'two_a_also': 2.2})
-
-    expected_without_cast_round = [(b'two', 2.0), (b'two_a_also', 2.2)]
-    expected_with_cast_round = [(b'two', 2.0), (b'two_a_also', 2.0)]
-    assert (
-            sorted(r.zrangebyscore('foo', 2, 3, withscores=True))
-            == sorted(expected_without_cast_round)
-    )
-    assert (
-            sorted(r.zrangebyscore('foo', 2, 3, withscores=True,
-                                   score_cast_func=round_str))
-            == sorted(expected_with_cast_round)
-    )
-
-
-def test_zrevrangebyscore(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'three': 3})
-    assert r.zrevrangebyscore('foo', 3, 1) == [b'three', b'two', b'one']
-    assert r.zrevrangebyscore('foo', 3, 2) == [b'three', b'two']
-    assert r.zrevrangebyscore('foo', 3, 1, 0, 1) == [b'three']
-    assert r.zrevrangebyscore('foo', 3, 1, 1, 2) == [b'two', b'one']
-
-
-def test_zrevrangebyscore_exclusive(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'three': 3})
-    assert r.zrevrangebyscore('foo', '(3', 1) == [b'two', b'one']
-    assert r.zrevrangebyscore('foo', 3, '(2') == [b'three']
-    assert r.zrevrangebyscore('foo', '(3', '(1') == [b'two']
-    assert r.zrevrangebyscore('foo', '(2', 1, 0, 1) == [b'one']
-    assert r.zrevrangebyscore('foo', '(2', '(1', 0, 1) == []
-    assert r.zrevrangebyscore('foo', '(3', '(0', 1, 2) == [b'one']
-
-
-def test_zrevrangebyscore_raises_error(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'three': 3})
-    with pytest.raises(redis.ResponseError):
-        r.zrevrangebyscore('foo', 'three', 1)
-    with pytest.raises(redis.ResponseError):
-        r.zrevrangebyscore('foo', 3, 'one')
-    with pytest.raises(redis.ResponseError):
-        r.zrevrangebyscore('foo', 3, '1)')
-    with pytest.raises(redis.ResponseError):
-        r.zrevrangebyscore('foo', '((3', '1)')
-
-
-def test_zrevrangebyscore_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zrevrangebyscore('foo', '(3', '(1')
-
-
-def test_zrevrangebyscore_cast_scores(r):
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'two_a_also': 2.2})
-
-    expected_without_cast_round = [(b'two_a_also', 2.2), (b'two', 2.0)]
-    expected_with_cast_round = [(b'two_a_also', 2.0), (b'two', 2.0)]
-    assert (
-            r.zrevrangebyscore('foo', 3, 2, withscores=True)
-            == expected_without_cast_round
-    )
-    assert (
-            r.zrevrangebyscore('foo', 3, 2, withscores=True,
-                               score_cast_func=round_str)
-            == expected_with_cast_round
-    )
-
-
-def test_zrangebylex(r):
-    testtools.zadd(r, 'foo', {'one_a': 0})
-    testtools.zadd(r, 'foo', {'two_a': 0})
-    testtools.zadd(r, 'foo', {'two_b': 0})
-    testtools.zadd(r, 'foo', {'three_a': 0})
-    assert r.zrangebylex('foo', b'(t', b'+') == [b'three_a', b'two_a', b'two_b']
-    assert r.zrangebylex('foo', b'(t', b'[two_b') == [b'three_a', b'two_a', b'two_b']
-    assert r.zrangebylex('foo', b'(t', b'(two_b') == [b'three_a', b'two_a']
-    assert (
-            r.zrangebylex('foo', b'[three_a', b'[two_b')
-            == [b'three_a', b'two_a', b'two_b']
-    )
-    assert r.zrangebylex('foo', b'(three_a', b'[two_b') == [b'two_a', b'two_b']
-    assert r.zrangebylex('foo', b'-', b'(two_b') == [b'one_a', b'three_a', b'two_a']
-    assert r.zrangebylex('foo', b'[two_b', b'(two_b') == []
-    # reversed max + and min - boundaries
-    # these will be always empty, but allowed by redis
-    assert r.zrangebylex('foo', b'+', b'-') == []
-    assert r.zrangebylex('foo', b'+', b'[three_a') == []
-    assert r.zrangebylex('foo', b'[o', b'-') == []
-
-
-def test_zrangebylex_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zrangebylex('foo', b'-', b'+')
-
-
-def test_zlexcount(r):
-    testtools.zadd(r, 'foo', {'one_a': 0})
-    testtools.zadd(r, 'foo', {'two_a': 0})
-    testtools.zadd(r, 'foo', {'two_b': 0})
-    testtools.zadd(r, 'foo', {'three_a': 0})
-    assert r.zlexcount('foo', b'(t', b'+') == 3
-    assert r.zlexcount('foo', b'(t', b'[two_b') == 3
-    assert r.zlexcount('foo', b'(t', b'(two_b') == 2
-    assert r.zlexcount('foo', b'[three_a', b'[two_b') == 3
-    assert r.zlexcount('foo', b'(three_a', b'[two_b') == 2
-    assert r.zlexcount('foo', b'-', b'(two_b') == 3
-    assert r.zlexcount('foo', b'[two_b', b'(two_b') == 0
-    # reversed max + and min - boundaries
-    # these will be always empty, but allowed by redis
-    assert r.zlexcount('foo', b'+', b'-') == 0
-    assert r.zlexcount('foo', b'+', b'[three_a') == 0
-    assert r.zlexcount('foo', b'[o', b'-') == 0
-
-
-def test_zlexcount_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zlexcount('foo', b'-', b'+')
-
-
-def test_zrangebylex_with_limit(r):
-    testtools.zadd(r, 'foo', {'one_a': 0})
-    testtools.zadd(r, 'foo', {'two_a': 0})
-    testtools.zadd(r, 'foo', {'two_b': 0})
-    testtools.zadd(r, 'foo', {'three_a': 0})
-    assert r.zrangebylex('foo', b'-', b'+', 1, 2) == [b'three_a', b'two_a']
-
-    # negative offset no results
-    assert r.zrangebylex('foo', b'-', b'+', -1, 3) == []
-
-    # negative limit ignored
-    assert (
-            r.zrangebylex('foo', b'-', b'+', 0, -2)
-            == [b'one_a', b'three_a', b'two_a', b'two_b']
-    )
-    assert r.zrangebylex('foo', b'-', b'+', 1, -2) == [b'three_a', b'two_a', b'two_b']
-    assert r.zrangebylex('foo', b'+', b'-', 1, 1) == []
-
-
-def test_zrangebylex_raises_error(r):
-    testtools.zadd(r, 'foo', {'one_a': 0})
-    testtools.zadd(r, 'foo', {'two_a': 0})
-    testtools.zadd(r, 'foo', {'two_b': 0})
-    testtools.zadd(r, 'foo', {'three_a': 0})
-
-    with pytest.raises(redis.ResponseError):
-        r.zrangebylex('foo', b'', b'[two_b')
-
-    with pytest.raises(redis.ResponseError):
-        r.zrangebylex('foo', b'-', b'two_b')
-
-    with pytest.raises(redis.ResponseError):
-        r.zrangebylex('foo', b'(t', b'two_b')
-
-    with pytest.raises(redis.ResponseError):
-        r.zrangebylex('foo', b't', b'+')
-
-    with pytest.raises(redis.ResponseError):
-        r.zrangebylex('foo', b'[two_a', b'')
-
-    with pytest.raises(redis.RedisError):
-        r.zrangebylex('foo', b'(two_a', b'[two_b', 1)
-
-
-def test_zrevrangebylex(r):
-    testtools.zadd(r, 'foo', {'one_a': 0})
-    testtools.zadd(r, 'foo', {'two_a': 0})
-    testtools.zadd(r, 'foo', {'two_b': 0})
-    testtools.zadd(r, 'foo', {'three_a': 0})
-    assert r.zrevrangebylex('foo', b'+', b'(t') == [b'two_b', b'two_a', b'three_a']
-    assert (
-            r.zrevrangebylex('foo', b'[two_b', b'(t')
-            == [b'two_b', b'two_a', b'three_a']
-    )
-    assert r.zrevrangebylex('foo', b'(two_b', b'(t') == [b'two_a', b'three_a']
-    assert (
-            r.zrevrangebylex('foo', b'[two_b', b'[three_a')
-            == [b'two_b', b'two_a', b'three_a']
-    )
-    assert r.zrevrangebylex('foo', b'[two_b', b'(three_a') == [b'two_b', b'two_a']
-    assert r.zrevrangebylex('foo', b'(two_b', b'-') == [b'two_a', b'three_a', b'one_a']
-    assert r.zrangebylex('foo', b'(two_b', b'[two_b') == []
-    # reversed max + and min - boundaries
-    # these will always be empty, but allowed by redis
-    assert r.zrevrangebylex('foo', b'-', b'+') == []
-    assert r.zrevrangebylex('foo', b'[three_a', b'+') == []
-    assert r.zrevrangebylex('foo', b'-', b'[o') == []
-
-
-def test_zrevrangebylex_with_limit(r):
-    testtools.zadd(r, 'foo', {'one_a': 0})
-    testtools.zadd(r, 'foo', {'two_a': 0})
-    testtools.zadd(r, 'foo', {'two_b': 0})
-    testtools.zadd(r, 'foo', {'three_a': 0})
-    assert r.zrevrangebylex('foo', b'+', b'-', 1, 2) == [b'two_a', b'three_a']
-
-
-def test_zrevrangebylex_raises_error(r):
-    testtools.zadd(r, 'foo', {'one_a': 0})
-    testtools.zadd(r, 'foo', {'two_a': 0})
-    testtools.zadd(r, 'foo', {'two_b': 0})
-    testtools.zadd(r, 'foo', {'three_a': 0})
-
-    with pytest.raises(redis.ResponseError):
-        r.zrevrangebylex('foo', b'[two_b', b'')
-
-    with pytest.raises(redis.ResponseError):
-        r.zrevrangebylex('foo', b'two_b', b'-')
-
-    with pytest.raises(redis.ResponseError):
-        r.zrevrangebylex('foo', b'two_b', b'(t')
-
-    with pytest.raises(redis.ResponseError):
-        r.zrevrangebylex('foo', b'+', b't')
-
-    with pytest.raises(redis.ResponseError):
-        r.zrevrangebylex('foo', b'', b'[two_a')
-
-    with pytest.raises(redis.RedisError):
-        r.zrevrangebylex('foo', b'[two_a', b'(two_b', 1)
-
-
-def test_zrevrangebylex_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zrevrangebylex('foo', b'+', b'-')
-
-
-def test_zremrangebyrank(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'three': 3})
-    assert r.zremrangebyrank('foo', 0, 1) == 2
-    assert r.zrange('foo', 0, -1) == [b'three']
-
-
-def test_zremrangebyrank_negative_indices(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'three': 3})
-    assert r.zremrangebyrank('foo', -2, -1) == 2
-    assert r.zrange('foo', 0, -1) == [b'one']
-
-
-def test_zremrangebyrank_out_of_bounds(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    assert r.zremrangebyrank('foo', 1, 3) == 0
-
-
-def test_zremrangebyrank_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zremrangebyrank('foo', 1, 3)
-
-
-def test_zremrangebyscore(r):
-    testtools.zadd(r, 'foo', {'zero': 0})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'four': 4})
-    # Outside of range.
-    assert r.zremrangebyscore('foo', 5, 10) == 0
-    assert r.zrange('foo', 0, -1) == [b'zero', b'two', b'four']
-    # Middle of range.
-    assert r.zremrangebyscore('foo', 1, 3) == 1
-    assert r.zrange('foo', 0, -1) == [b'zero', b'four']
-    assert r.zremrangebyscore('foo', 1, 3) == 0
-    # Entire range.
-    assert r.zremrangebyscore('foo', 0, 4) == 2
-    assert r.zrange('foo', 0, -1) == []
-
-
-def test_zremrangebyscore_exclusive(r):
-    testtools.zadd(r, 'foo', {'zero': 0})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'four': 4})
-    assert r.zremrangebyscore('foo', '(0', 1) == 0
-    assert r.zrange('foo', 0, -1) == [b'zero', b'two', b'four']
-    assert r.zremrangebyscore('foo', '-inf', '(0') == 0
-    assert r.zrange('foo', 0, -1) == [b'zero', b'two', b'four']
-    assert r.zremrangebyscore('foo', '(2', 5) == 1
-    assert r.zrange('foo', 0, -1) == [b'zero', b'two']
-    assert r.zremrangebyscore('foo', 0, '(2') == 1
-    assert r.zrange('foo', 0, -1) == [b'two']
-    assert r.zremrangebyscore('foo', '(1', '(3') == 1
-    assert r.zrange('foo', 0, -1) == []
-
-
-def test_zremrangebyscore_raises_error(r):
-    testtools.zadd(r, 'foo', {'zero': 0})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'foo', {'four': 4})
-    with pytest.raises(redis.ResponseError):
-        r.zremrangebyscore('foo', 'three', 1)
-    with pytest.raises(redis.ResponseError):
-        r.zremrangebyscore('foo', 3, 'one')
-    with pytest.raises(redis.ResponseError):
-        r.zremrangebyscore('foo', 3, '1)')
-    with pytest.raises(redis.ResponseError):
-        r.zremrangebyscore('foo', '((3', '1)')
-
-
-def test_zremrangebyscore_badkey(r):
-    assert r.zremrangebyscore('foo', 0, 2) == 0
-
-
-def test_zremrangebyscore_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zremrangebyscore('foo', 0, 2)
-
-
-def test_zremrangebylex(r):
-    testtools.zadd(r, 'foo', {'two_a': 0})
-    testtools.zadd(r, 'foo', {'two_b': 0})
-    testtools.zadd(r, 'foo', {'one_a': 0})
-    testtools.zadd(r, 'foo', {'three_a': 0})
-    assert r.zremrangebylex('foo', b'(three_a', b'[two_b') == 2
-    assert r.zremrangebylex('foo', b'(three_a', b'[two_b') == 0
-    assert r.zremrangebylex('foo', b'-', b'(o') == 0
-    assert r.zremrangebylex('foo', b'-', b'[one_a') == 1
-    assert r.zremrangebylex('foo', b'[tw', b'+') == 0
-    assert r.zremrangebylex('foo', b'[t', b'+') == 1
-    assert r.zremrangebylex('foo', b'[t', b'+') == 0
-
-
-def test_zremrangebylex_error(r):
-    testtools.zadd(r, 'foo', {'two_a': 0})
-    testtools.zadd(r, 'foo', {'two_b': 0})
-    testtools.zadd(r, 'foo', {'one_a': 0})
-    testtools.zadd(r, 'foo', {'three_a': 0})
-    with pytest.raises(redis.ResponseError):
-        r.zremrangebylex('foo', b'(t', b'two_b')
-
-    with pytest.raises(redis.ResponseError):
-        r.zremrangebylex('foo', b't', b'+')
-
-    with pytest.raises(redis.ResponseError):
-        r.zremrangebylex('foo', b'[two_a', b'')
-
-
-def test_zremrangebylex_badkey(r):
-    assert r.zremrangebylex('foo', b'(three_a', b'[two_b') == 0
-
-
-def test_zremrangebylex_wrong_type(r):
-    r.sadd('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zremrangebylex('foo', b'bar', b'baz')
-
-
-def test_zunionstore(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'bar', {'one': 1})
-    testtools.zadd(r, 'bar', {'two': 2})
-    testtools.zadd(r, 'bar', {'three': 3})
-    r.zunionstore('baz', ['foo', 'bar'])
-    assert (
-            r.zrange('baz', 0, -1, withscores=True)
-            == [(b'one', 2), (b'three', 3), (b'two', 4)]
-    )
-
-
-def test_zunionstore_sum(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'bar', {'one': 1})
-    testtools.zadd(r, 'bar', {'two': 2})
-    testtools.zadd(r, 'bar', {'three': 3})
-    r.zunionstore('baz', ['foo', 'bar'], aggregate='SUM')
-    assert (
-            r.zrange('baz', 0, -1, withscores=True)
-            == [(b'one', 2), (b'three', 3), (b'two', 4)]
-    )
-
-
-def test_zunionstore_max(r):
-    testtools.zadd(r, 'foo', {'one': 0})
-    testtools.zadd(r, 'foo', {'two': 0})
-    testtools.zadd(r, 'bar', {'one': 1})
-    testtools.zadd(r, 'bar', {'two': 2})
-    testtools.zadd(r, 'bar', {'three': 3})
-    r.zunionstore('baz', ['foo', 'bar'], aggregate='MAX')
-    assert (
-            r.zrange('baz', 0, -1, withscores=True)
-            == [(b'one', 1), (b'two', 2), (b'three', 3)]
-    )
-
-
-def test_zunionstore_min(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'bar', {'one': 0})
-    testtools.zadd(r, 'bar', {'two': 0})
-    testtools.zadd(r, 'bar', {'three': 3})
-    r.zunionstore('baz', ['foo', 'bar'], aggregate='MIN')
-    assert (
-            r.zrange('baz', 0, -1, withscores=True)
-            == [(b'one', 0), (b'two', 0), (b'three', 3)]
-    )
-
-
-def test_zunionstore_weights(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'bar', {'one': 1})
-    testtools.zadd(r, 'bar', {'two': 2})
-    testtools.zadd(r, 'bar', {'four': 4})
-    r.zunionstore('baz', {'foo': 1, 'bar': 2}, aggregate='SUM')
-    assert (
-            r.zrange('baz', 0, -1, withscores=True)
-            == [(b'one', 3), (b'two', 6), (b'four', 8)]
-    )
-
-
-def test_zunionstore_nan_to_zero(r):
-    testtools.zadd(r, 'foo', {'x': math.inf})
-    testtools.zadd(r, 'foo2', {'x': math.inf})
-    r.zunionstore('bar', OrderedDict([('foo', 1.0), ('foo2', 0.0)]))
-    # This is different from test_zinterstore_nan_to_zero because of a quirk
-    # in redis. See https://github.com/antirez/redis/issues/3954.
-    assert r.zscore('bar', 'x') == math.inf
-
-
-def test_zunionstore_nan_to_zero2(r):
-    testtools.zadd(r, 'foo', {'zero': 0})
-    testtools.zadd(r, 'foo2', {'one': 1})
-    testtools.zadd(r, 'foo3', {'one': 1})
-    r.zunionstore('bar', {'foo': math.inf}, aggregate='SUM')
-    assert r.zrange('bar', 0, -1, withscores=True) == [(b'zero', 0)]
-    r.zunionstore('bar', OrderedDict([('foo2', math.inf), ('foo3', -math.inf)]))
-    assert r.zrange('bar', 0, -1, withscores=True) == [(b'one', 0)]
-
-
-def test_zunionstore_nan_to_zero_ordering(r):
-    testtools.zadd(r, 'foo', {'e1': math.inf})
-    testtools.zadd(r, 'bar', {'e1': -math.inf, 'e2': 0.0})
-    r.zunionstore('baz', ['foo', 'bar', 'foo'])
-    assert r.zscore('baz', 'e1') == 0.0
-
-
-def test_zunionstore_mixed_set_types(r):
-    # No score given, so redis will use 1.0.
-    r.sadd('foo', 'one')
-    r.sadd('foo', 'two')
-    testtools.zadd(r, 'bar', {'one': 1})
-    testtools.zadd(r, 'bar', {'two': 2})
-    testtools.zadd(r, 'bar', {'three': 3})
-    r.zunionstore('baz', ['foo', 'bar'], aggregate='SUM')
-    assert (
-            r.zrange('baz', 0, -1, withscores=True)
-            == [(b'one', 2), (b'three', 3), (b'two', 3)]
-    )
-
-
-def test_zunionstore_badkey(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    r.zunionstore('baz', ['foo', 'bar'], aggregate='SUM')
-    assert r.zrange('baz', 0, -1, withscores=True) == [(b'one', 1), (b'two', 2)]
-    r.zunionstore('baz', {'foo': 1, 'bar': 2}, aggregate='SUM')
-    assert r.zrange('baz', 0, -1, withscores=True) == [(b'one', 1), (b'two', 2)]
-
-
-def test_zunionstore_wrong_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zunionstore('baz', ['foo', 'bar'])
-
-
-def test_zinterstore(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    testtools.zadd(r, 'foo', {'two': 2})
-    testtools.zadd(r, 'bar', {'one': 1})
-    testtools.zadd(r, 'bar', {'two': 2})
-    testtools.zadd(r, 'bar', {'three': 3})
-    r.zinterstore('baz', ['foo', 'bar'])
-    assert r.zrange('baz', 0, -1, withscores=True) == [(b'one', 2), (b'two', 4)]
-
-
-def test_zinterstore_mixed_set_types(r):
-    r.sadd('foo', 'one')
-    r.sadd('foo', 'two')
-    testtools.zadd(r, 'bar', {'one': 1})
-    testtools.zadd(r, 'bar', {'two': 2})
-    testtools.zadd(r, 'bar', {'three': 3})
-    r.zinterstore('baz', ['foo', 'bar'], aggregate='SUM')
-    assert r.zrange('baz', 0, -1, withscores=True) == [(b'one', 2), (b'two', 3)]
-
-
-def test_zinterstore_max(r):
-    testtools.zadd(r, 'foo', {'one': 0})
-    testtools.zadd(r, 'foo', {'two': 0})
-    testtools.zadd(r, 'bar', {'one': 1})
-    testtools.zadd(r, 'bar', {'two': 2})
-    testtools.zadd(r, 'bar', {'three': 3})
-    r.zinterstore('baz', ['foo', 'bar'], aggregate='MAX')
-    assert r.zrange('baz', 0, -1, withscores=True) == [(b'one', 1), (b'two', 2)]
-
-
-def test_zinterstore_onekey(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    r.zinterstore('baz', ['foo'], aggregate='MAX')
-    assert r.zrange('baz', 0, -1, withscores=True) == [(b'one', 1)]
-
-
-def test_zinterstore_nokey(r):
-    with pytest.raises(redis.ResponseError):
-        r.zinterstore('baz', [], aggregate='MAX')
-
-
-def test_zinterstore_nan_to_zero(r):
-    testtools.zadd(r, 'foo', {'x': math.inf})
-    testtools.zadd(r, 'foo2', {'x': math.inf})
-    r.zinterstore('bar', OrderedDict([('foo', 1.0), ('foo2', 0.0)]))
-    assert r.zscore('bar', 'x') == 0.0
-
-
-def test_zunionstore_nokey(r):
-    with pytest.raises(redis.ResponseError):
-        r.zunionstore('baz', [], aggregate='MAX')
-
-
-def test_zinterstore_wrong_type(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError):
-        r.zinterstore('baz', ['foo', 'bar'])
-
-
-def test_empty_zset(r):
-    testtools.zadd(r, 'foo', {'one': 1})
-    r.zrem('foo', 'one')
-    assert not r.exists('foo')
-
-
-def test_multidb(r, create_redis):
-    r1 = create_redis(db=0)
-    r2 = create_redis(db=1)
-
-    r1['r1'] = 'r1'
-    r2['r2'] = 'r2'
-
-    assert 'r2' not in r1
-    assert 'r1' not in r2
-
-    assert r1['r1'] == b'r1'
-    assert r2['r2'] == b'r2'
-
-    assert r1.flushall() is True
-
-    assert 'r1' not in r1
-    assert 'r2' not in r2
-
-
-def test_basic_sort(r):
-    r.rpush('foo', '2')
-    r.rpush('foo', '1')
-    r.rpush('foo', '3')
-
-    assert r.sort('foo') == [b'1', b'2', b'3']
-
-
-def test_empty_sort(r):
-    assert r.sort('foo') == []
-
-
-def test_sort_range_offset_range(r):
-    r.rpush('foo', '2')
-    r.rpush('foo', '1')
-    r.rpush('foo', '4')
-    r.rpush('foo', '3')
-
-    assert r.sort('foo', start=0, num=2) == [b'1', b'2']
-
-
-def test_sort_range_offset_range_and_desc(r):
-    r.rpush('foo', '2')
-    r.rpush('foo', '1')
-    r.rpush('foo', '4')
-    r.rpush('foo', '3')
-
-    assert r.sort("foo", start=0, num=1, desc=True) == [b"4"]
-
-
-def test_sort_range_offset_norange(r):
-    with pytest.raises(redis.RedisError):
-        r.sort('foo', start=1)
-
-
-def test_sort_range_with_large_range(r):
-    r.rpush('foo', '2')
-    r.rpush('foo', '1')
-    r.rpush('foo', '4')
-    r.rpush('foo', '3')
-    # num=20 even though len(foo) is 4.
-    assert r.sort('foo', start=1, num=20) == [b'2', b'3', b'4']
-
-
-def test_sort_descending(r):
-    r.rpush('foo', '1')
-    r.rpush('foo', '2')
-    r.rpush('foo', '3')
-    assert r.sort('foo', desc=True) == [b'3', b'2', b'1']
-
-
-def test_sort_alpha(r):
-    r.rpush('foo', '2a')
-    r.rpush('foo', '1b')
-    r.rpush('foo', '2b')
-    r.rpush('foo', '1a')
-
-    assert r.sort('foo', alpha=True) == [b'1a', b'1b', b'2a', b'2b']
-
-
-def test_sort_wrong_type(r):
-    r.set('string', '3')
-    with pytest.raises(redis.ResponseError):
-        r.sort('string')
-
-
-def test_foo(r):
-    r.rpush('foo', '2a')
-    r.rpush('foo', '1b')
-    r.rpush('foo', '2b')
-    r.rpush('foo', '1a')
-    with pytest.raises(redis.ResponseError):
-        r.sort('foo', alpha=False)
-
-
-def test_sort_with_store_option(r):
-    r.rpush('foo', '2')
-    r.rpush('foo', '1')
-    r.rpush('foo', '4')
-    r.rpush('foo', '3')
-
-    assert r.sort('foo', store='bar') == 4
-    assert r.lrange('bar', 0, -1) == [b'1', b'2', b'3', b'4']
-
-
-def test_sort_with_by_and_get_option(r):
-    r.rpush('foo', '2')
-    r.rpush('foo', '1')
-    r.rpush('foo', '4')
-    r.rpush('foo', '3')
-
-    r['weight_1'] = '4'
-    r['weight_2'] = '3'
-    r['weight_3'] = '2'
-    r['weight_4'] = '1'
-
-    r['data_1'] = 'one'
-    r['data_2'] = 'two'
-    r['data_3'] = 'three'
-    r['data_4'] = 'four'
-
-    assert (
-            r.sort('foo', by='weight_*', get='data_*')
-            == [b'four', b'three', b'two', b'one']
-    )
-    assert r.sort('foo', by='weight_*', get='#') == [b'4', b'3', b'2', b'1']
-    assert (
-            r.sort('foo', by='weight_*', get=('data_*', '#'))
-            == [b'four', b'4', b'three', b'3', b'two', b'2', b'one', b'1']
-    )
-    assert r.sort('foo', by='weight_*', get='data_1') == [None, None, None, None]
-
-
-def test_sort_with_hash(r):
-    r.rpush('foo', 'middle')
-    r.rpush('foo', 'eldest')
-    r.rpush('foo', 'youngest')
-    r.hset('record_youngest', 'age', 1)
-    r.hset('record_youngest', 'name', 'baby')
-
-    r.hset('record_middle', 'age', 10)
-    r.hset('record_middle', 'name', 'teen')
-
-    r.hset('record_eldest', 'age', 20)
-    r.hset('record_eldest', 'name', 'adult')
-
-    assert r.sort('foo', by='record_*->age') == [b'youngest', b'middle', b'eldest']
-    assert (
-            r.sort('foo', by='record_*->age', get='record_*->name')
-            == [b'baby', b'teen', b'adult']
-    )
-
-
-def test_sort_with_set(r):
-    r.sadd('foo', '3')
-    r.sadd('foo', '1')
-    r.sadd('foo', '2')
-    assert r.sort('foo') == [b'1', b'2', b'3']
-
-
-def test_pipeline(r):
-    # The pipeline method returns an object for
-    # issuing multiple commands in a batch.
-    p = r.pipeline()
-    p.watch('bam')
-    p.multi()
-    p.set('foo', 'bar').get('foo')
-    p.lpush('baz', 'quux')
-    p.lpush('baz', 'quux2').lrange('baz', 0, -1)
-    res = p.execute()
-
-    # Check return values returned as list.
-    assert res == [True, b'bar', 1, 2, [b'quux2', b'quux']]
-
-    # Check side effects happened as expected.
-    assert r.lrange('baz', 0, -1) == [b'quux2', b'quux']
-
-    # Check that the command buffer has been emptied.
-    assert p.execute() == []
-
-
-def test_pipeline_ignore_errors(r):
-    """Test the pipeline ignoring errors when asked."""
-    with r.pipeline() as p:
-        p.set('foo', 'bar')
-        p.rename('baz', 'bats')
-        with pytest.raises(redis.exceptions.ResponseError):
-            p.execute()
-        assert [] == p.execute()
-    with r.pipeline() as p:
-        p.set('foo', 'bar')
-        p.rename('baz', 'bats')
-        res = p.execute(raise_on_error=False)
-
-        assert [] == p.execute()
-
-        assert len(res) == 2
-        assert isinstance(res[1], redis.exceptions.ResponseError)
-
-
-def test_multiple_successful_watch_calls(r):
-    p = r.pipeline()
-    p.watch('bam')
-    p.multi()
-    p.set('foo', 'bar')
-    # Check that the watched keys buffer has been emptied.
-    p.execute()
-
-    # bam is no longer being watched, so it's ok to modify
-    # it now.
-    p.watch('foo')
-    r.set('bam', 'boo')
-    p.multi()
-    p.set('foo', 'bats')
-    assert p.execute() == [True]
-
-
-def test_pipeline_non_transactional(r):
-    # For our simple-minded model I don't think
-    # there is any observable difference.
-    p = r.pipeline(transaction=False)
-    res = p.set('baz', 'quux').get('baz').execute()
-
-    assert res == [True, b'quux']
-
-
-def test_pipeline_raises_when_watched_key_changed(r):
-    r.set('foo', 'bar')
-    r.rpush('greet', 'hello')
-    p = r.pipeline()
-    try:
-        p.watch('greet', 'foo')
-        nextf = six.ensure_binary(p.get('foo')) + b'baz'
-        # Simulate change happening on another thread.
-        r.rpush('greet', 'world')
-        # Begin pipelining.
-        p.multi()
-        p.set('foo', nextf)
-
-        with pytest.raises(redis.WatchError):
-            p.execute()
-    finally:
-        p.reset()
-
-
-def test_pipeline_succeeds_despite_unwatched_key_changed(r):
-    # Same setup as before except for the params to the WATCH command.
-    r.set('foo', 'bar')
-    r.rpush('greet', 'hello')
-    p = r.pipeline()
-    try:
-        # Only watch one of the 2 keys.
-        p.watch('foo')
-        nextf = six.ensure_binary(p.get('foo')) + b'baz'
-        # Simulate change happening on another thread.
-        r.rpush('greet', 'world')
-        p.multi()
-        p.set('foo', nextf)
-        p.execute()
-
-        # Check the commands were executed.
-        assert r.get('foo') == b'barbaz'
-    finally:
-        p.reset()
-
-
-def test_pipeline_succeeds_when_watching_nonexistent_key(r):
-    r.set('foo', 'bar')
-    r.rpush('greet', 'hello')
-    p = r.pipeline()
-    try:
-        # Also watch a nonexistent key.
-        p.watch('foo', 'bam')
-        nextf = six.ensure_binary(p.get('foo')) + b'baz'
-        # Simulate change happening on another thread.
-        r.rpush('greet', 'world')
-        p.multi()
-        p.set('foo', nextf)
-        p.execute()
-
-        # Check the commands were executed.
-        assert r.get('foo') == b'barbaz'
-    finally:
-        p.reset()
-
-
-def test_watch_state_is_cleared_across_multiple_watches(r):
-    r.set('foo', 'one')
-    r.set('bar', 'baz')
-    p = r.pipeline()
-
-    try:
-        p.watch('foo')
-        # Simulate change happening on another thread.
-        r.set('foo', 'three')
-        p.multi()
-        p.set('foo', 'three')
-        with pytest.raises(redis.WatchError):
-            p.execute()
-
-        # Now watch another key.  It should be ok to change
-        # foo as we're no longer watching it.
-        p.watch('bar')
-        r.set('foo', 'four')
-        p.multi()
-        p.set('bar', 'five')
-        assert p.execute() == [True]
-    finally:
-        p.reset()
-
-
-def test_watch_state_is_cleared_after_abort(r):
-    # redis-py's pipeline handling and connection pooling interferes with this
-    # test, so raw commands are used instead.
-    raw_command(r, 'watch', 'foo')
-    raw_command(r, 'multi')
-    with pytest.raises(redis.ResponseError):
-        raw_command(r, 'mget')  # Wrong number of arguments
-    with pytest.raises(redis.exceptions.ExecAbortError):
-        raw_command(r, 'exec')
-
-    raw_command(r, 'set', 'foo', 'bar')  # Should NOT trigger the watch from earlier
-    raw_command(r, 'multi')
-    raw_command(r, 'set', 'abc', 'done')
-    raw_command(r, 'exec')
-
-    assert r.get('abc') == b'done'
-
-
-def test_pipeline_transaction_shortcut(r):
-    # This example taken pretty much from the redis-py documentation.
-    r.set('OUR-SEQUENCE-KEY', 13)
-    calls = []
-
-    def client_side_incr(pipe):
-        calls.append((pipe,))
-        current_value = pipe.get('OUR-SEQUENCE-KEY')
-        next_value = int(current_value) + 1
-
-        if len(calls) < 3:
-            # Simulate a change from another thread.
-            r.set('OUR-SEQUENCE-KEY', next_value)
-
-        pipe.multi()
-        pipe.set('OUR-SEQUENCE-KEY', next_value)
-
-    res = r.transaction(client_side_incr, 'OUR-SEQUENCE-KEY')
-
-    assert res == [True]
-    assert int(r.get('OUR-SEQUENCE-KEY')) == 16
-    assert len(calls) == 3
-
-
-def test_pipeline_transaction_value_from_callable(r):
-    def callback(pipe):
-        # No need to do anything here since we only want the return value
-        return 'OUR-RETURN-VALUE'
-
-    res = r.transaction(callback, 'OUR-SEQUENCE-KEY', value_from_callable=True)
-    assert res == 'OUR-RETURN-VALUE'
-
-
-def test_pipeline_empty(r):
-    p = r.pipeline()
-    assert len(p) == 0
-
-
-def test_pipeline_length(r):
-    p = r.pipeline()
-    p.set('baz', 'quux').get('baz')
-    assert len(p) == 2
-
-
-def test_pipeline_no_commands(r):
-    # Prior to 3.4, redis-py's execute is a nop if there are no commands
-    # queued, so it succeeds even if watched keys have been changed.
-    r.set('foo', '1')
-    p = r.pipeline()
-    p.watch('foo')
-    r.set('foo', '2')
-    if REDIS_VERSION >= Version('3.4'):
-        with pytest.raises(redis.WatchError):
-            p.execute()
-    else:
-        assert p.execute() == []
-
-
-def test_pipeline_failed_transaction(r):
-    p = r.pipeline()
-    p.multi()
-    p.set('foo', 'bar')
-    # Deliberately induce a syntax error
-    p.execute_command('set')
-    # It should be an ExecAbortError, but redis-py tries to DISCARD after the
-    # failed EXEC, which raises a ResponseError.
-    with pytest.raises(redis.ResponseError):
-        p.execute()
-    assert not r.exists('foo')
-
-
-def test_pipeline_srem_no_change(r):
-    # A regression test for a case picked up by hypothesis tests.
-    p = r.pipeline()
-    p.watch('foo')
-    r.srem('foo', 'bar')
-    p.multi()
-    p.set('foo', 'baz')
-    p.execute()
-    assert r.get('foo') == b'baz'
-
-
-# The behaviour changed in redis 6.0 (see https://github.com/redis/redis/issues/6594).
-@pytest.mark.min_server('6.0')
-def test_pipeline_move(r):
-    # A regression test for a case picked up by hypothesis tests.
-    r.set('foo', 'bar')
-    p = r.pipeline()
-    p.watch('foo')
-    r.move('foo', 1)
-    # Ensure the transaction isn't empty, which had different behaviour in
-    # older versions of redis-py.
-    p.multi()
-    p.set('bar', 'baz')
-    with pytest.raises(redis.exceptions.WatchError):
-        p.execute()
-
-
-@pytest.mark.min_server('6.0.6')
-def test_exec_bad_arguments(r):
-    # Redis 6.0.6 changed the behaviour of exec so that it always fails with
-    # EXECABORT, even when it's just bad syntax.
-    with pytest.raises(redis.exceptions.ExecAbortError):
-        r.execute_command('exec', 'blahblah')
-
-
-@pytest.mark.min_server('6.0.6')
-def test_exec_bad_arguments_abort(r):
-    r.execute_command('multi')
-    with pytest.raises(redis.exceptions.ExecAbortError):
-        r.execute_command('exec', 'blahblah')
-    # Should have aborted the transaction, so we can run another one
-    p = r.pipeline()
-    p.multi()
-    p.set('bar', 'baz')
-    p.execute()
-    assert r.get('bar') == b'baz'
-
-
-def test_key_patterns(r):
-    r.mset({'one': 1, 'two': 2, 'three': 3, 'four': 4})
-    assert sorted(r.keys('*o*')) == [b'four', b'one', b'two']
-    assert r.keys('t??') == [b'two']
-    assert sorted(r.keys('*')) == [b'four', b'one', b'three', b'two']
-    assert sorted(r.keys()) == [b'four', b'one', b'three', b'two']
-
-
-def test_ping(r):
-    assert r.ping()
-    assert raw_command(r, 'ping', 'test') == b'test'
-
-
-@testtools.run_test_if_redis_ver('above', '3')
-def test_ping_pubsub(r):
-    p = r.pubsub()
-    p.subscribe('channel')
-    p.parse_response()  # Consume the subscribe reply
-    p.ping()
-    assert p.parse_response() == [b'pong', b'']
-    p.ping('test')
-    assert p.parse_response() == [b'pong', b'test']
-
-
-@testtools.run_test_if_redis_ver('above', '3')
-def test_swapdb(r, create_redis):
-    r1 = create_redis(1)
-    r.set('foo', 'abc')
-    r.set('bar', 'xyz')
-    r1.set('foo', 'foo')
-    r1.set('baz', 'baz')
-    assert r.swapdb(0, 1)
-    assert r.get('foo') == b'foo'
-    assert r.get('bar') is None
-    assert r.get('baz') == b'baz'
-    assert r1.get('foo') == b'abc'
-    assert r1.get('bar') == b'xyz'
-    assert r1.get('baz') is None
-
-
-@testtools.run_test_if_redis_ver('above', '3')
-def test_swapdb_same_db(r):
-    assert r.swapdb(1, 1)
-
-
-def test_save(r):
-    assert r.save()
-
-
-def test_bgsave(r):
-    assert r.bgsave()
-    with pytest.raises(ResponseError):
-        r.execute_command('BGSAVE', 'SCHEDULE', 'FOO')
-    with pytest.raises(ResponseError):
-        r.execute_command('BGSAVE', 'FOO')
-
-
-def test_lastsave(r):
-    assert isinstance(r.lastsave(), datetime)
-
-
-@fake_only
-def test_time(r, mocker):
-    fake_time = mocker.patch('time.time')
-    fake_time.return_value = 1234567890.1234567
-    assert r.time() == (1234567890, 123457)
-    fake_time.return_value = 1234567890.000001
-    assert r.time() == (1234567890, 1)
-    fake_time.return_value = 1234567890.9999999
-    assert r.time() == (1234567891, 0)
-
-
-@pytest.mark.slow
-def test_bgsave_timestamp_update(r):
-    early_timestamp = r.lastsave()
-    sleep(1)
-    assert r.bgsave()
-    sleep(1)
-    late_timestamp = r.lastsave()
-    assert early_timestamp < late_timestamp
-
-
-@pytest.mark.slow
-def test_save_timestamp_update(r):
-    early_timestamp = r.lastsave()
-    sleep(1)
-    assert r.save()
-    late_timestamp = r.lastsave()
-    assert early_timestamp < late_timestamp
-
-
-def test_type(r):
-    r.set('string_key', "value")
-    r.lpush("list_key", "value")
-    r.sadd("set_key", "value")
-    testtools.zadd(r, "zset_key", {"value": 1})
-    r.hset('hset_key', 'key', 'value')
-
-    assert r.type('string_key') == b'string'
-    assert r.type('list_key') == b'list'
-    assert r.type('set_key') == b'set'
-    assert r.type('zset_key') == b'zset'
-    assert r.type('hset_key') == b'hash'
-    assert r.type('none_key') == b'none'
-
-
-@pytest.mark.slow
-def test_pubsub_subscribe(r):
-    pubsub = r.pubsub()
-    pubsub.subscribe("channel")
-    sleep(1)
-    expected_message = {'type': 'subscribe', 'pattern': None,
-                        'channel': b'channel', 'data': 1}
-    message = pubsub.get_message()
-    keys = list(pubsub.channels.keys())
-
-    key = keys[0]
-    key = (key if type(key) == bytes
-           else bytes(key, encoding='utf-8'))
-
-    assert len(keys) == 1
-    assert key == b'channel'
-    assert message == expected_message
-
-
-@pytest.mark.slow
-def test_pubsub_psubscribe(r):
-    pubsub = r.pubsub()
-    pubsub.psubscribe("channel.*")
-    sleep(1)
-    expected_message = {'type': 'psubscribe', 'pattern': None,
-                        'channel': b'channel.*', 'data': 1}
-
-    message = pubsub.get_message()
-    keys = list(pubsub.patterns.keys())
-    assert len(keys) == 1
-    assert message == expected_message
-
-
-@pytest.mark.slow
-def test_pubsub_unsubscribe(r):
-    pubsub = r.pubsub()
-    pubsub.subscribe('channel-1', 'channel-2', 'channel-3')
-    sleep(1)
-    expected_message = {'type': 'unsubscribe', 'pattern': None,
-                        'channel': b'channel-1', 'data': 2}
-    pubsub.get_message()
-    pubsub.get_message()
-    pubsub.get_message()
-
-    # unsubscribe from one
-    pubsub.unsubscribe('channel-1')
-    sleep(1)
-    message = pubsub.get_message()
-    keys = list(pubsub.channels.keys())
-    assert message == expected_message
-    assert len(keys) == 2
-
-    # unsubscribe from multiple
-    pubsub.unsubscribe()
-    sleep(1)
-    pubsub.get_message()
-    pubsub.get_message()
-    keys = list(pubsub.channels.keys())
-    assert message == expected_message
-    assert len(keys) == 0
-
-
-@pytest.mark.slow
-def test_pubsub_punsubscribe(r):
-    pubsub = r.pubsub()
-    pubsub.psubscribe('channel-1.*', 'channel-2.*', 'channel-3.*')
-    sleep(1)
-    expected_message = {'type': 'punsubscribe', 'pattern': None,
-                        'channel': b'channel-1.*', 'data': 2}
-    pubsub.get_message()
-    pubsub.get_message()
-    pubsub.get_message()
-
-    # unsubscribe from one
-    pubsub.punsubscribe('channel-1.*')
-    sleep(1)
-    message = pubsub.get_message()
-    keys = list(pubsub.patterns.keys())
-    assert message == expected_message
-    assert len(keys) == 2
-
-    # unsubscribe from multiple
-    pubsub.punsubscribe()
-    sleep(1)
-    pubsub.get_message()
-    pubsub.get_message()
-    keys = list(pubsub.patterns.keys())
-    assert len(keys) == 0
-
-
-@pytest.mark.slow
-def test_pubsub_listen(r):
-    def _listen(pubsub, q):
-        count = 0
-        for message in pubsub.listen():
-            q.put(message)
-            count += 1
-            if count == 4:
-                pubsub.close()
-
-    channel = 'ch1'
-    patterns = ['ch1*', 'ch[1]', 'ch?']
-    pubsub = r.pubsub()
-    pubsub.subscribe(channel)
-    pubsub.psubscribe(*patterns)
-    sleep(1)
-    msg1 = pubsub.get_message()
-    msg2 = pubsub.get_message()
-    msg3 = pubsub.get_message()
-    msg4 = pubsub.get_message()
-    assert msg1['type'] == 'subscribe'
-    assert msg2['type'] == 'psubscribe'
-    assert msg3['type'] == 'psubscribe'
-    assert msg4['type'] == 'psubscribe'
-
-    q = Queue()
-    t = threading.Thread(target=_listen, args=(pubsub, q))
-    t.start()
-    msg = 'hello world'
-    r.publish(channel, msg)
-    t.join()
-
-    msg1 = q.get()
-    msg2 = q.get()
-    msg3 = q.get()
-    msg4 = q.get()
-
-    bpatterns = [pattern.encode() for pattern in patterns]
-    bpatterns.append(channel.encode())
-    msg = msg.encode()
-    assert msg1['data'] == msg
-    assert msg1['channel'] in bpatterns
-    assert msg2['data'] == msg
-    assert msg2['channel'] in bpatterns
-    assert msg3['data'] == msg
-    assert msg3['channel'] in bpatterns
-    assert msg4['data'] == msg
-    assert msg4['channel'] in bpatterns
-
-
-@pytest.mark.slow
-def test_pubsub_listen_handler(r):
-    def _handler(message):
-        calls.append(message)
-
-    channel = 'ch1'
-    patterns = {'ch?': _handler}
-    calls = []
-
-    pubsub = r.pubsub()
-    pubsub.subscribe(ch1=_handler)
-    pubsub.psubscribe(**patterns)
-    sleep(1)
-    msg1 = pubsub.get_message()
-    msg2 = pubsub.get_message()
-    assert msg1['type'] == 'subscribe'
-    assert msg2['type'] == 'psubscribe'
-    msg = 'hello world'
-    r.publish(channel, msg)
-    sleep(1)
-    for i in range(2):
-        msg = pubsub.get_message()
-        assert msg is None  # get_message returns None when handler is used
-    pubsub.close()
-    calls.sort(key=lambda call: call['type'])
-    assert calls == [
-        {'pattern': None, 'channel': b'ch1', 'data': b'hello world', 'type': 'message'},
-        {'pattern': b'ch?', 'channel': b'ch1', 'data': b'hello world', 'type': 'pmessage'}
-    ]
-
-
-@pytest.mark.slow
-def test_pubsub_ignore_sub_messages_listen(r):
-    def _listen(pubsub, q):
-        count = 0
-        for message in pubsub.listen():
-            q.put(message)
-            count += 1
-            if count == 4:
-                pubsub.close()
-
-    channel = 'ch1'
-    patterns = ['ch1*', 'ch[1]', 'ch?']
-    pubsub = r.pubsub(ignore_subscribe_messages=True)
-    pubsub.subscribe(channel)
-    pubsub.psubscribe(*patterns)
-    sleep(1)
-
-    q = Queue()
-    t = threading.Thread(target=_listen, args=(pubsub, q))
-    t.start()
-    msg = 'hello world'
-    r.publish(channel, msg)
-    t.join()
-
-    msg1 = q.get()
-    msg2 = q.get()
-    msg3 = q.get()
-    msg4 = q.get()
-
-    bpatterns = [pattern.encode() for pattern in patterns]
-    bpatterns.append(channel.encode())
-    msg = msg.encode()
-    assert msg1['data'] == msg
-    assert msg1['channel'] in bpatterns
-    assert msg2['data'] == msg
-    assert msg2['channel'] in bpatterns
-    assert msg3['data'] == msg
-    assert msg3['channel'] in bpatterns
-    assert msg4['data'] == msg
-    assert msg4['channel'] in bpatterns
-
-
-@pytest.mark.slow
-def test_pubsub_binary(r):
-    def _listen(pubsub, q):
-        for message in pubsub.listen():
-            q.put(message)
-            pubsub.close()
-
-    pubsub = r.pubsub(ignore_subscribe_messages=True)
-    pubsub.subscribe('channel\r\n\xff')
-    sleep(1)
-
-    q = Queue()
-    t = threading.Thread(target=_listen, args=(pubsub, q))
-    t.start()
-    msg = b'\x00hello world\r\n\xff'
-    r.publish('channel\r\n\xff', msg)
-    t.join()
-
-    received = q.get()
-    assert received['data'] == msg
-
-
-@pytest.mark.slow
-def test_pubsub_run_in_thread(r):
-    q = Queue()
-
-    pubsub = r.pubsub()
-    pubsub.subscribe(channel=q.put)
-    pubsub_thread = pubsub.run_in_thread()
-
-    msg = b"Hello World"
-    r.publish("channel", msg)
-
-    retrieved = q.get()
-    assert retrieved["data"] == msg
-
-    pubsub_thread.stop()
-    # Newer versions of redis wait for an unsubscribe message, which sometimes comes early
-    # https://github.com/andymccurdy/redis-py/issues/1150
-    if pubsub.channels:
-        pubsub.channels = {}
-    pubsub_thread.join()
-    assert not pubsub_thread.is_alive()
-
-    pubsub.subscribe(channel=None)
-    with pytest.raises(redis.exceptions.PubSubError):
-        pubsub_thread = pubsub.run_in_thread()
-
-    pubsub.unsubscribe("channel")
-
-    pubsub.psubscribe(channel=None)
-    with pytest.raises(redis.exceptions.PubSubError):
-        pubsub_thread = pubsub.run_in_thread()
-
-
-@pytest.mark.slow
-@pytest.mark.parametrize(
-    "timeout_value",
-    [
-        1,
-        pytest.param(
-            None,
-            marks=pytest.mark.skipif(
-                Version("3.2") <= REDIS_VERSION < Version("3.3"),
-                reason="This test is not applicable to redis-py 3.2"
-            )
-        )
-    ]
-)
-def test_pubsub_timeout(r, timeout_value):
-    def publish():
-        sleep(0.1)
-        r.publish('channel', 'hello')
-
-    p = r.pubsub()
-    p.subscribe('channel')
-    p.parse_response()  # Drains the subscribe message
-    publish_thread = threading.Thread(target=publish)
-    publish_thread.start()
-    message = p.get_message(timeout=timeout_value)
-    assert message == {
-        'type': 'message', 'pattern': None,
-        'channel': b'channel', 'data': b'hello'
-    }
-    publish_thread.join()
-
-    if timeout_value is not None:
-        # For infinite timeout case don't wait for the message that will never appear.
-        message = p.get_message(timeout=timeout_value)
-        assert message is None
-
-
-def test_pfadd(r):
-    key = "hll-pfadd"
-    assert r.pfadd(key, "a", "b", "c", "d", "e", "f", "g") == 1
-    assert r.pfcount(key) == 7
-
-
-def test_pfcount(r):
-    key1 = "hll-pfcount01"
-    key2 = "hll-pfcount02"
-    key3 = "hll-pfcount03"
-    assert r.pfadd(key1, "foo", "bar", "zap") == 1
-    assert r.pfadd(key1, "zap", "zap", "zap") == 0
-    assert r.pfadd(key1, "foo", "bar") == 0
-    assert r.pfcount(key1) == 3
-    assert r.pfadd(key2, "1", "2", "3") == 1
-    assert r.pfcount(key2) == 3
-    assert r.pfcount(key1, key2) == 6
-    assert r.pfadd(key3, "foo", "bar", "zip") == 1
-    assert r.pfcount(key3) == 3
-    assert r.pfcount(key1, key3) == 4
-    assert r.pfcount(key1, key2, key3) == 7
-
-
-def test_pfmerge(r):
-    key1 = "hll-pfmerge01"
-    key2 = "hll-pfmerge02"
-    key3 = "hll-pfmerge03"
-    assert r.pfadd(key1, "foo", "bar", "zap", "a") == 1
-    assert r.pfadd(key2, "a", "b", "c", "foo") == 1
-    assert r.pfmerge(key3, key1, key2)
-    assert r.pfcount(key3) == 6
-
-
-def test_scan(r):
-    # Setup the data
-    for ix in range(20):
-        k = 'scan-test:%s' % ix
-        v = 'result:%s' % ix
-        r.set(k, v)
-    expected = r.keys()
-    assert len(expected) == 20  # Ensure we know what we're testing
-
-    # Test that we page through the results and get everything out
-    results = []
-    cursor = '0'
-    while cursor != 0:
-        cursor, data = r.scan(cursor, count=6)
-        results.extend(data)
-    assert set(expected) == set(results)
-
-    # Now test that the MATCH functionality works
-    results = []
-    cursor = '0'
-    while cursor != 0:
-        cursor, data = r.scan(cursor, match='*7', count=100)
-        results.extend(data)
-    assert b'scan-test:7' in results
-    assert b'scan-test:17' in results
-    assert len(results) == 2
-
-    # Test the match on iterator
-    results = [r for r in r.scan_iter(match='*7')]
-    assert b'scan-test:7' in results
-    assert b'scan-test:17' in results
-    assert len(results) == 2
-
-
-def test_sscan(r):
-    # Setup the data
-    name = 'sscan-test'
-    for ix in range(20):
-        k = 'sscan-test:%s' % ix
-        r.sadd(name, k)
-    expected = r.smembers(name)
-    assert len(expected) == 20  # Ensure we know what we're testing
-
-    # Test that we page through the results and get everything out
-    results = []
-    cursor = '0'
-    while cursor != 0:
-        cursor, data = r.sscan(name, cursor, count=6)
-        results.extend(data)
-    assert set(expected) == set(results)
-
-    # Test the iterator version
-    results = [r for r in r.sscan_iter(name, count=6)]
-    assert set(expected) == set(results)
-
-    # Now test that the MATCH functionality works
-    results = []
-    cursor = '0'
-    while cursor != 0:
-        cursor, data = r.sscan(name, cursor, match='*7', count=100)
-        results.extend(data)
-    assert b'sscan-test:7' in results
-    assert b'sscan-test:17' in results
-    assert len(results) == 2
-
-    # Test the match on iterator
-    results = [r for r in r.sscan_iter(name, match='*7')]
-    assert b'sscan-test:7' in results
-    assert b'sscan-test:17' in results
-    assert len(results) == 2
-
-
-def test_hscan(r):
-    # Setup the data
-    name = 'hscan-test'
-    for ix in range(20):
-        k = 'key:%s' % ix
-        v = 'result:%s' % ix
-        r.hset(name, k, v)
-    expected = r.hgetall(name)
-    assert len(expected) == 20  # Ensure we know what we're testing
-
-    # Test that we page through the results and get everything out
-    results = {}
-    cursor = '0'
-    while cursor != 0:
-        cursor, data = r.hscan(name, cursor, count=6)
-        results.update(data)
-    assert expected == results
-
-    # Test the iterator version
-    results = {}
-    for key, val in r.hscan_iter(name, count=6):
-        results[key] = val
-    assert expected == results
-
-    # Now test that the MATCH functionality works
-    results = {}
-    cursor = '0'
-    while cursor != 0:
-        cursor, data = r.hscan(name, cursor, match='*7', count=100)
-        results.update(data)
-    assert b'key:7' in results
-    assert b'key:17' in results
-    assert len(results) == 2
-
-    # Test the match on iterator
-    results = {}
-    for key, val in r.hscan_iter(name, match='*7'):
-        results[key] = val
-    assert b'key:7' in results
-    assert b'key:17' in results
-    assert len(results) == 2
-
-
-def test_zscan(r):
-    # Setup the data
-    name = 'zscan-test'
-    for ix in range(20):
-        testtools.zadd(r, name, {'key:%s' % ix: ix})
-    expected = dict(r.zrange(name, 0, -1, withscores=True))
-
-    # Test the basic version
-    results = {}
-    for key, val in r.zscan_iter(name, count=6):
-        results[key] = val
-    assert results == expected
-
-    # Now test that the MATCH functionality works
-    results = {}
-    cursor = '0'
-    while cursor != 0:
-        cursor, data = r.zscan(name, cursor, match='*7', count=6)
-        results.update(data)
-    assert results == {b'key:7': 7.0, b'key:17': 17.0}
-
-
-@pytest.mark.slow
-def test_set_ex_should_expire_value(r):
-    r.set('foo', 'bar')
-    assert r.get('foo') == b'bar'
-    r.set('foo', 'bar', ex=1)
-    sleep(2)
-    assert r.get('foo') is None
-
-
-@pytest.mark.slow
-def test_set_px_should_expire_value(r):
-    r.set('foo', 'bar', px=500)
-    sleep(1.5)
-    assert r.get('foo') is None
-
-
-@pytest.mark.slow
-def test_psetex_expire_value(r):
-    with pytest.raises(ResponseError):
-        r.psetex('foo', 0, 'bar')
-    r.psetex('foo', 500, 'bar')
-    sleep(1.5)
-    assert r.get('foo') is None
-
-
-@pytest.mark.slow
-def test_psetex_expire_value_using_timedelta(r):
-    with pytest.raises(ResponseError):
-        r.psetex('foo', timedelta(seconds=0), 'bar')
-    r.psetex('foo', timedelta(seconds=0.5), 'bar')
-    sleep(1.5)
-    assert r.get('foo') is None
-
-
-@pytest.mark.slow
-def test_expire_should_expire_key(r):
-    r.set('foo', 'bar')
-    assert r.get('foo') == b'bar'
-    r.expire('foo', 1)
-    sleep(1.5)
-    assert r.get('foo') is None
-    assert r.expire('bar', 1) is False
-
-
-def test_expire_should_return_true_for_existing_key(r):
-    r.set('foo', 'bar')
-    assert r.expire('foo', 1) is True
-
-
-def test_expire_should_return_false_for_missing_key(r):
-    assert r.expire('missing', 1) is False
-
-
-@pytest.mark.slow
-def test_expire_should_expire_key_using_timedelta(r):
-    r.set('foo', 'bar')
-    assert r.get('foo') == b'bar'
-    r.expire('foo', timedelta(seconds=1))
-    sleep(1.5)
-    assert r.get('foo') is None
-    assert r.expire('bar', 1) is False
-
-
-@pytest.mark.slow
-def test_expire_should_expire_immediately_with_millisecond_timedelta(r):
-    r.set('foo', 'bar')
-    assert r.get('foo') == b'bar'
-    r.expire('foo', timedelta(milliseconds=750))
-    assert r.get('foo') is None
-    assert r.expire('bar', 1) is False
-
-
-def test_watch_expire(r):
-    """EXPIRE should mark a key as changed for WATCH."""
-    r.set('foo', 'bar')
-    with r.pipeline() as p:
-        p.watch('foo')
-        r.expire('foo', 10000)
-        p.multi()
-        p.get('foo')
-        with pytest.raises(redis.exceptions.WatchError):
-            p.execute()
-
-
-@pytest.mark.slow
-def test_pexpire_should_expire_key(r):
-    r.set('foo', 'bar')
-    assert r.get('foo') == b'bar'
-    r.pexpire('foo', 150)
-    sleep(0.2)
-    assert r.get('foo') is None
-    assert r.pexpire('bar', 1) == 0
-
-
-def test_pexpire_should_return_truthy_for_existing_key(r):
-    r.set('foo', 'bar')
-    assert r.pexpire('foo', 1)
-
-
-def test_pexpire_should_return_falsey_for_missing_key(r):
-    assert not r.pexpire('missing', 1)
-
-
-@pytest.mark.slow
-def test_pexpire_should_expire_key_using_timedelta(r):
-    r.set('foo', 'bar')
-    assert r.get('foo') == b'bar'
-    r.pexpire('foo', timedelta(milliseconds=750))
-    sleep(0.5)
-    assert r.get('foo') == b'bar'
-    sleep(0.5)
-    assert r.get('foo') is None
-    assert r.pexpire('bar', 1) == 0
-
-
-@pytest.mark.slow
-def test_expireat_should_expire_key_by_datetime(r):
-    r.set('foo', 'bar')
-    assert r.get('foo') == b'bar'
-    r.expireat('foo', datetime.now() + timedelta(seconds=1))
-    sleep(1.5)
-    assert r.get('foo') is None
-    assert r.expireat('bar', datetime.now()) is False
-
-
-@pytest.mark.slow
-def test_expireat_should_expire_key_by_timestamp(r):
-    r.set('foo', 'bar')
-    assert r.get('foo') == b'bar'
-    r.expireat('foo', int(time() + 1))
-    sleep(1.5)
-    assert r.get('foo') is None
-    assert r.expire('bar', 1) is False
-
-
-def test_expireat_should_return_true_for_existing_key(r):
-    r.set('foo', 'bar')
-    assert r.expireat('foo', int(time() + 1)) is True
-
-
-def test_expireat_should_return_false_for_missing_key(r):
-    assert r.expireat('missing', int(time() + 1)) is False
-
-
-@pytest.mark.slow
-def test_pexpireat_should_expire_key_by_datetime(r):
-    r.set('foo', 'bar')
-    assert r.get('foo') == b'bar'
-    r.pexpireat('foo', datetime.now() + timedelta(milliseconds=150))
-    sleep(0.2)
-    assert r.get('foo') is None
-    assert r.pexpireat('bar', datetime.now()) == 0
-
-
-@pytest.mark.slow
-def test_pexpireat_should_expire_key_by_timestamp(r):
-    r.set('foo', 'bar')
-    assert r.get('foo') == b'bar'
-    r.pexpireat('foo', int(time() * 1000 + 150))
-    sleep(0.2)
-    assert r.get('foo') is None
-    assert r.expire('bar', 1) is False
-
-
-def test_pexpireat_should_return_true_for_existing_key(r):
-    r.set('foo', 'bar')
-    assert r.pexpireat('foo', int(time() * 1000 + 150))
-
-
-def test_pexpireat_should_return_false_for_missing_key(r):
-    assert not r.pexpireat('missing', int(time() * 1000 + 150))
-
-
-def test_expire_should_not_handle_floating_point_values(r):
-    r.set('foo', 'bar')
-    with pytest.raises(redis.ResponseError, match='value is not an integer or out of range'):
-        r.expire('something_new', 1.2)
-        r.pexpire('something_new', 1000.2)
-        r.expire('some_unused_key', 1.2)
-        r.pexpire('some_unused_key', 1000.2)
-
-
-def test_ttl_should_return_minus_one_for_non_expiring_key(r):
-    r.set('foo', 'bar')
-    assert r.get('foo') == b'bar'
-    assert r.ttl('foo') == -1
-
-
-def test_ttl_should_return_minus_two_for_non_existent_key(r):
-    assert r.get('foo') is None
-    assert r.ttl('foo') == -2
-
-
-def test_pttl_should_return_minus_one_for_non_expiring_key(r):
-    r.set('foo', 'bar')
-    assert r.get('foo') == b'bar'
-    assert r.pttl('foo') == -1
-
-
-def test_pttl_should_return_minus_two_for_non_existent_key(r):
-    assert r.get('foo') is None
-    assert r.pttl('foo') == -2
-
-
-def test_persist(r):
-    r.set('foo', 'bar', ex=20)
-    assert r.persist('foo') == 1
-    assert r.ttl('foo') == -1
-    assert r.persist('foo') == 0
-
-
-def test_watch_persist(r):
-    """PERSIST should mark a variable as changed."""
-    r.set('foo', 'bar', ex=10000)
-    with r.pipeline() as p:
-        p.watch('foo')
-        r.persist('foo')
-        p.multi()
-        p.get('foo')
-        with pytest.raises(redis.exceptions.WatchError):
-            p.execute()
-
-
-def test_set_existing_key_persists(r):
-    r.set('foo', 'bar', ex=20)
-    r.set('foo', 'foo')
-    assert r.ttl('foo') == -1
-
-
-@pytest.mark.max_server('6.2.7')
-def test_script_exists(r):
-    # test response for no arguments by bypassing the py-redis command
-    # as it requires at least one argument
-    assert raw_command(r, "SCRIPT EXISTS") == []
-
-    # use single character characters for non-existing scripts, as those
-    # will never be equal to an actual sha1 hash digest
-    assert r.script_exists("a") == [0]
-    assert r.script_exists("a", "b", "c", "d", "e", "f") == [0, 0, 0, 0, 0, 0]
-
-    sha1_one = r.script_load("return 'a'")
-    assert r.script_exists(sha1_one) == [1]
-    assert r.script_exists(sha1_one, "a") == [1, 0]
-    assert r.script_exists("a", "b", "c", sha1_one, "e") == [0, 0, 0, 1, 0]
-
-    sha1_two = r.script_load("return 'b'")
-    assert r.script_exists(sha1_one, sha1_two) == [1, 1]
-    assert r.script_exists("a", sha1_one, "c", sha1_two, "e", "f") == [0, 1, 0, 1, 0, 0]
-
-
-@pytest.mark.parametrize("args", [("a",), tuple("abcdefghijklmn")])
-def test_script_flush_errors_with_args(r, args):
-    with pytest.raises(redis.ResponseError):
-        raw_command(r, "SCRIPT FLUSH %s" % " ".join(args))
-
-
-def test_script_flush(r):
-    # generate/load six unique scripts and store their sha1 hash values
-    sha1_values = [r.script_load("return '%s'" % char) for char in "abcdef"]
-
-    # assert the scripts all exist prior to flushing
-    assert r.script_exists(*sha1_values) == [1] * len(sha1_values)
-
-    # flush and assert OK response
-    assert r.script_flush() is True
-
-    # assert none of the scripts exists after flushing
-    assert r.script_exists(*sha1_values) == [0] * len(sha1_values)
-
-
-@testtools.run_test_if_redis_ver('above', '3')
-def test_unlink(r):
-    r.set('foo', 'bar')
-    r.unlink('foo')
-    assert r.get('foo') is None
-
-
-@testtools.run_test_if_redis_ver('above', '3.4')
-@pytest.mark.fake
-def test_socket_cleanup_pubsub(fake_server):
-    r1 = fakeredis.FakeStrictRedis(server=fake_server)
-    r2 = fakeredis.FakeStrictRedis(server=fake_server)
-    ps = r1.pubsub()
-    with ps:
-        ps.subscribe('test')
-        ps.psubscribe('test*')
-    r2.publish('test', 'foo')
-
-
-@pytest.mark.fake
-def test_socket_cleanup_watch(fake_server):
-    r1 = fakeredis.FakeStrictRedis(server=fake_server)
-    r2 = fakeredis.FakeStrictRedis(server=fake_server)
-    pipeline = r1.pipeline(transaction=False)
-    # This needs some poking into redis-py internals to ensure that we reach
-    # FakeSocket._cleanup. We need to close the socket while there is still
-    # a watch in place, but not allow it to be garbage collected (hence we
-    # set 'sock' even though it is unused).
-    with pipeline:
-        pipeline.watch('test')
-        sock = pipeline.connection._sock  # noqa: F841
-        pipeline.connection.disconnect()
-    r2.set('test', 'foo')
-
-
-@pytest.mark.decode_responses
-class TestDecodeResponses:
-    def test_decode_str(self, r):
-        r.set('foo', 'bar')
-        assert r.get('foo') == 'bar'
-
-    def test_decode_set(self, r):
-        r.sadd('foo', 'member1')
-        assert r.smembers('foo') == {'member1'}
-
-    def test_decode_list(self, r):
-        r.rpush('foo', 'a', 'b')
-        assert r.lrange('foo', 0, -1) == ['a', 'b']
-
-    def test_decode_dict(self, r):
-        r.hset('foo', 'key', 'value')
-        assert r.hgetall('foo') == {'key': 'value'}
-
-    def test_decode_error(self, r):
-        r.set('foo', 'bar')
-        with pytest.raises(ResponseError) as exc_info:
-            r.hset('foo', 'bar', 'baz')
-        assert isinstance(exc_info.value.args[0], str)
-
-
-@pytest.mark.disconnected
-@fake_only
-class TestFakeStrictRedisConnectionErrors:
-    def test_flushdb(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.flushdb()
-
-    def test_flushall(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.flushall()
-
-    def test_append(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.append('key', 'value')
-
-    def test_bitcount(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.bitcount('key', 0, 20)
-
-    def test_decr(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.decr('key', 2)
-
-    def test_exists(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.exists('key')
-
-    def test_expire(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.expire('key', 20)
-
-    def test_pexpire(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.pexpire('key', 20)
-
-    def test_echo(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.echo('value')
-
-    def test_get(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.get('key')
-
-    def test_getbit(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.getbit('key', 2)
-
-    def test_getset(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.getset('key', 'value')
-
-    def test_incr(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.incr('key')
-
-    def test_incrby(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.incrby('key')
-
-    def test_ncrbyfloat(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.incrbyfloat('key')
-
-    def test_keys(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.keys()
-
-    def test_mget(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.mget(['key1', 'key2'])
-
-    def test_mset(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.mset({'key': 'value'})
-
-    def test_msetnx(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.msetnx({'key': 'value'})
-
-    def test_persist(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.persist('key')
-
-    def test_rename(self, r):
-        server = r.connection_pool.connection_kwargs['server']
-        server.connected = True
-        r.set('key1', 'value')
-        server.connected = False
-        with pytest.raises(redis.ConnectionError):
-            r.rename('key1', 'key2')
-        server.connected = True
-        assert r.exists('key1')
-
-    def test_eval(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.eval('', 0)
-
-    def test_lpush(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.lpush('name', 1, 2)
-
-    def test_lrange(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.lrange('name', 1, 5)
-
-    def test_llen(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.llen('name')
-
-    def test_lrem(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.lrem('name', 2, 2)
-
-    def test_rpush(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.rpush('name', 1)
-
-    def test_lpop(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.lpop('name')
-
-    def test_lset(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.lset('name', 1, 4)
-
-    def test_rpushx(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.rpushx('name', 1)
-
-    def test_ltrim(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.ltrim('name', 1, 4)
-
-    def test_lindex(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.lindex('name', 1)
-
-    def test_lpushx(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.lpushx('name', 1)
-
-    def test_rpop(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.rpop('name')
-
-    def test_linsert(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.linsert('name', 'where', 'refvalue', 'value')
-
-    def test_rpoplpush(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.rpoplpush('src', 'dst')
-
-    def test_blpop(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.blpop('keys')
-
-    def test_brpop(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.brpop('keys')
-
-    def test_brpoplpush(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.brpoplpush('src', 'dst')
-
-    def test_hdel(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.hdel('name')
-
-    def test_hexists(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.hexists('name', 'key')
-
-    def test_hget(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.hget('name', 'key')
-
-    def test_hgetall(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.hgetall('name')
-
-    def test_hincrby(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.hincrby('name', 'key')
-
-    def test_hincrbyfloat(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.hincrbyfloat('name', 'key')
-
-    def test_hkeys(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.hkeys('name')
-
-    def test_hlen(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.hlen('name')
-
-    def test_hset(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.hset('name', 'key', 1)
-
-    def test_hsetnx(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.hsetnx('name', 'key', 2)
-
-    def test_hmset(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.hmset('name', {'key': 1})
-
-    def test_hmget(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.hmget('name', ['a', 'b'])
-
-    def test_hvals(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.hvals('name')
-
-    def test_sadd(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.sadd('name', 1, 2)
-
-    def test_scard(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.scard('name')
-
-    def test_sdiff(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.sdiff(['a', 'b'])
-
-    def test_sdiffstore(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.sdiffstore('dest', ['a', 'b'])
-
-    def test_sinter(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.sinter(['a', 'b'])
-
-    def test_sinterstore(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.sinterstore('dest', ['a', 'b'])
-
-    def test_sismember(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.sismember('name', 20)
-
-    def test_smembers(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.smembers('name')
-
-    def test_smove(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.smove('src', 'dest', 20)
-
-    def test_spop(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.spop('name')
-
-    def test_srandmember(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.srandmember('name')
-
-    def test_srem(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.srem('name')
-
-    def test_sunion(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.sunion(['a', 'b'])
-
-    def test_sunionstore(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.sunionstore('dest', ['a', 'b'])
-
-    def test_zadd(self, r):
-        with pytest.raises(redis.ConnectionError):
-            testtools.zadd(r, 'name', {'key': 'value'})
-
-    def test_zcard(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zcard('name')
-
-    def test_zcount(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zcount('name', 1, 5)
-
-    def test_zincrby(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zincrby('name', 1, 1)
-
-    def test_zinterstore(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zinterstore('dest', ['a', 'b'])
-
-    def test_zrange(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zrange('name', 1, 5)
-
-    def test_zrangebyscore(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zrangebyscore('name', 1, 5)
-
-    def test_rangebylex(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zrangebylex('name', 1, 4)
-
-    def test_zrem(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zrem('name', 'value')
-
-    def test_zremrangebyrank(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zremrangebyrank('name', 1, 5)
-
-    def test_zremrangebyscore(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zremrangebyscore('name', 1, 5)
-
-    def test_zremrangebylex(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zremrangebylex('name', 1, 5)
-
-    def test_zlexcount(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zlexcount('name', 1, 5)
-
-    def test_zrevrange(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zrevrange('name', 1, 5, 1)
-
-    def test_zrevrangebyscore(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zrevrangebyscore('name', 5, 1)
-
-    def test_zrevrangebylex(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zrevrangebylex('name', 5, 1)
-
-    def test_zrevran(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zrevrank('name', 2)
-
-    def test_zscore(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zscore('name', 2)
-
-    def test_zunionstor(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.zunionstore('dest', ['1', '2'])
-
-    def test_pipeline(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.pipeline().watch('key')
-
-    def test_transaction(self, r):
-        with pytest.raises(redis.ConnectionError):
-            def func(a):
-                return a * a
-
-            r.transaction(func, 3)
-
-    def test_lock(self, r):
-        with pytest.raises(redis.ConnectionError):
-            with r.lock('name'):
-                pass
-
-    def test_pubsub(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.pubsub().subscribe('channel')
-
-    def test_pfadd(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.pfadd('name', 1)
-
-    def test_pfmerge(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.pfmerge('dest', 'a', 'b')
-
-    def test_scan(self, r):
-        with pytest.raises(redis.ConnectionError):
-            list(r.scan())
-
-    def test_sscan(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.sscan('name')
-
-    def test_hscan(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.hscan('name')
-
-    def test_scan_iter(self, r):
-        with pytest.raises(redis.ConnectionError):
-            list(r.scan_iter())
-
-    def test_sscan_iter(self, r):
-        with pytest.raises(redis.ConnectionError):
-            list(r.sscan_iter('name'))
-
-    def test_hscan_iter(self, r):
-        with pytest.raises(redis.ConnectionError):
-            list(r.hscan_iter('name'))
-
-
-@pytest.mark.disconnected
-@fake_only
-class TestPubSubConnected:
-    @pytest.fixture
-    def pubsub(self, r):
-        return r.pubsub()
-
-    def test_basic_subscribe(self, pubsub):
-        with pytest.raises(redis.ConnectionError):
-            pubsub.subscribe('logs')
-
-    def test_subscription_conn_lost(self, fake_server, pubsub):
-        fake_server.connected = True
-        pubsub.subscribe('logs')
-        fake_server.connected = False
-        # The initial message is already in the pipe
-        msg = pubsub.get_message()
-        check = {
-            'type': 'subscribe',
-            'pattern': None,
-            'channel': b'logs',
-            'data': 1
-        }
-        assert msg == check, 'Message was not published to channel'
-        with pytest.raises(redis.ConnectionError):
-            pubsub.get_message()
-
-
-@testtools.run_test_if_redis_ver('below', '4.2.0')
-@testtools.run_test_if_no_aioredis
-def test_fakeredis_aioredis_raises_if_missing_aioredis():
-    with pytest.raises(
-            ImportError, match="aioredis is required for redis-py below 4.2.0"
-    ):
-        import fakeredis.aioredis
-        v = fakeredis.aioredis
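The deleted tests above all exercise one mechanism: once the fake server's `connected` flag is cleared, every command raises `redis.ConnectionError`. A minimal pure-stdlib sketch of that switch (the `DisconnectableServer` name is illustrative, not fakeredis internals; the builtin `ConnectionError` stands in for `redis.ConnectionError`):

```python
class DisconnectableServer:
    """Sketch of the fake_server.connected switch the removed tests flip:
    while connected is False, every command raises ConnectionError."""

    def __init__(self):
        self.connected = True
        self._store = {}

    def execute(self, op, key, value=None):
        if not self.connected:
            # Builtin ConnectionError stands in for redis.ConnectionError here.
            raise ConnectionError('server is disconnected')
        if op == 'set':
            self._store[key] = value
            return True
        return self._store.get(key)


server = DisconnectableServer()
assert server.execute('set', 'foo', 'bar') is True

server.connected = False
disconnected_raised = False
try:
    server.execute('get', 'foo')
except ConnectionError:
    disconnected_raised = True
assert disconnected_raised
```

Note the data survives disconnection: flipping the flag back to `True` makes the stored value reachable again, which is what `test_subscription_conn_lost` relies on when it toggles `fake_server.connected`.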
diff --git a/test/test_fakeredis7.py b/test/test_fakeredis7.py
deleted file mode 100644
index d2f583c..0000000
--- a/test/test_fakeredis7.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import pytest as pytest
-import redis
-
-from testtools import raw_command, zadd
-
-
-@pytest.mark.min_server('7')
-def test_script_exists(r):
-    # test response for no arguments by bypassing the py-redis command
-    # as it requires at least one argument
-    with pytest.raises(redis.ResponseError):
-        raw_command(r, "SCRIPT EXISTS")
-
-    # use single character characters for non-existing scripts, as those
-    # will never be equal to an actual sha1 hash digest
-    assert r.script_exists("a") == [0]
-    assert r.script_exists("a", "b", "c", "d", "e", "f") == [0, 0, 0, 0, 0, 0]
-
-    sha1_one = r.script_load("return 'a'")
-    assert r.script_exists(sha1_one) == [1]
-    assert r.script_exists(sha1_one, "a") == [1, 0]
-    assert r.script_exists("a", "b", "c", sha1_one, "e") == [0, 0, 0, 1, 0]
-
-    sha1_two = r.script_load("return 'b'")
-    assert r.script_exists(sha1_one, sha1_two) == [1, 1]
-    assert r.script_exists("a", sha1_one, "c", sha1_two, "e", "f") == [0, 1, 0, 1, 0, 0]
-
-
-@pytest.mark.min_server('7')
-def test_set_get_nx(r):
-    # Note: this will most likely fail on a 7.0 server, based on the docs for SET
-    assert raw_command(r, 'set', 'foo', 'bar', 'NX', 'GET') is None
-
-
-@pytest.mark.min_server('7.0')
-def test_zadd_minus_zero(r):
-    zadd(r, 'foo', {'a': -0.0})
-    zadd(r, 'foo', {'a': 0.0})
-    assert raw_command(r, 'zscore', 'foo', 'a') == b'0'
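The removed `test_zadd_minus_zero` relies on a subtlety of IEEE-754 floats: `-0.0` and `0.0` compare equal but format differently, which is why the test checks the raw `ZSCORE` reply (`b'0'`) rather than a parsed float. A stdlib sketch of the behaviour it exercises (the trailing formatting step is illustrative, not how Redis renders scores):

```python
# Negative zero equals positive zero numerically, but the sign survives
# formatting -- the reason the removed test overwrites -0.0 with 0.0 and
# then inspects the raw reply.
neg = -0.0
pos = 0.0

assert neg == pos            # equal as numbers
assert str(neg) == '-0.0'    # but formatting preserves the sign
assert str(pos) == '0.0'

# Rendering the overwritten (+0.0) score the way a minimal reply
# formatter might yields plain '0':
score = repr(pos).rstrip('0').rstrip('.')
assert score == '0'
```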
diff --git a/test/test_general.py b/test/test_general.py
new file mode 100644
index 0000000..b972a97
--- /dev/null
+++ b/test/test_general.py
@@ -0,0 +1,36 @@
+import pytest
+import redis
+
+import fakeredis
+from test.testtools import raw_command
+
+
+@pytest.mark.fake
+def test_singleton():
+    conn_generator = fakeredis.FakeRedisConnSingleton()
+    conn1 = conn_generator(dict(), False)
+    conn2 = conn_generator(dict(), False)
+    assert conn1.set('foo', 'bar') is True
+    assert conn2.get('foo') == b'bar'
+
+
+def test_asyncioio_is_used():
+    """Redis 4.2+ has support for asyncio and should be preferred over aioredis"""
+    from fakeredis import aioredis
+    assert not hasattr(aioredis, "__version__")
+
+
+def test_unknown_command(r: redis.Redis):
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, '0 3 3')
+
+
+def test_new_server_when_no_params():
+    from fakeredis import FakeRedis
+
+    fake_redis_1 = FakeRedis()
+    fake_redis_2 = FakeRedis()
+
+    fake_redis_1.set("foo", "bar")
+
+    assert fake_redis_2.get("foo") is None
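The new `test_new_server_when_no_params` and `test_singleton` pin down server-sharing semantics: each `FakeRedis()` gets a fresh server unless one is passed in explicitly. A pure-stdlib sketch of those semantics (`FakeServer`/`FakeClient` here are illustrative stand-ins, not the fakeredis implementation):

```python
class FakeServer:
    """Stand-in for fakeredis.FakeServer: one shared key space."""

    def __init__(self):
        self.store = {}


class FakeClient:
    """Each client gets its own server unless one is supplied."""

    def __init__(self, server=None):
        self.server = server if server is not None else FakeServer()

    def set(self, key, value):
        self.server.store[key] = value
        return True

    def get(self, key):
        return self.server.store.get(key)


# Two clients built with no arguments do not see each other's data.
c1, c2 = FakeClient(), FakeClient()
c1.set('foo', 'bar')
assert c2.get('foo') is None

# Clients sharing a server do.
shared = FakeServer()
a, b = FakeClient(server=shared), FakeClient(server=shared)
a.set('foo', 'bar')
assert b.get('foo') == 'bar'
```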
diff --git a/test/test_hypothesis.py b/test/test_hypothesis.py
index ee8ab08..e106eff 100644
--- a/test/test_hypothesis.py
+++ b/test/test_hypothesis.py
@@ -1,5 +1,6 @@
 import functools
 import operator
+import sys
 
 import hypothesis
 import hypothesis.stateful
@@ -77,9 +78,8 @@ class WrappedException:
             return NotImplemented
         if type(self.wrapped) != type(other.wrapped):  # noqa: E721
             return False
-        # TODO: re-enable after more carefully handling order of error checks
-        # return self.wrapped.args == other.wrapped.args
         return True
+        # return self.wrapped.args == other.wrapped.args
 
     def __ne__(self, other):
         if not isinstance(other, WrappedException):
@@ -210,8 +210,7 @@ def build_zstore(command, dest, sources, weights, aggregate):
 zset_no_score_create_commands = (
     commands(st.just('zadd'), keys, st.lists(st.tuples(st.just(0), fields), min_size=1))
 )
-zset_no_score_commands = (
-    # TODO: test incr
+zset_no_score_commands = (  # TODO: test incr
         commands(st.just('zadd'), keys,
                  st.none() | st.just('nx'),
                  st.none() | st.just('xx'),
@@ -255,7 +254,7 @@ class CommonMachine(hypothesis.stateful.RuleBasedStateMachine):
         if self.real.info('server').get('arch_bits') != 64:
             self.real.connection_pool.disconnect()
             pytest.skip('redis server is not 64-bit')
-        self.fake = fakeredis.FakeStrictRedis(version=redis_ver)
+        self.fake = fakeredis.FakeStrictRedis(server=fakeredis.FakeServer(version=redis_ver))
         # Disable the response parsing so that we can check the raw values returned
         self.fake.response_callbacks.clear()
         self.real.response_callbacks.clear()
@@ -276,7 +275,8 @@ class CommonMachine(hypothesis.stateful.RuleBasedStateMachine):
         self.fake.connection_pool.disconnect()
         super().teardown()
 
-    def _evaluate(self, client, command):
+    @staticmethod
+    def _evaluate(client, command):
         try:
             result = client.execute_command(*command.args)
             if result != 'QUEUED':
@@ -291,6 +291,7 @@ class CommonMachine(hypothesis.stateful.RuleBasedStateMachine):
         real_result, real_exc = self._evaluate(self.real, command)
 
         if fake_exc is not None and real_exc is None:
+            print('{} raised only on fake when running {}'.format(fake_exc, command), file=sys.stderr)
             raise fake_exc
         elif real_exc is not None and fake_exc is None:
             assert real_exc == fake_exc, "Expected exception {} not raised".format(real_exc)
@@ -305,7 +306,10 @@ class CommonMachine(hypothesis.stateful.RuleBasedStateMachine):
                 assert n(f) == n(r)
             self.transaction_normalize = []
         else:
-            assert fake_result == real_result
+            if fake_result != real_result:
+                print('{}!={} when running {}'.format(fake_result, real_result, command),
+                      file=sys.stderr)
+            assert fake_result == real_result, "Discrepancy when running command {}".format(command)
             if real_result == b'QUEUED':
                 # Since redis removes the distinction between simple strings and
                 # bulk strings, this might not actually indicate that we're in a
@@ -399,10 +403,8 @@ class TestString(BaseTest):
 
 
 class TestHash(BaseTest):
-    # TODO: add a test for hincrbyfloat. See incrbyfloat for why this is
-    # problematic.
     hash_commands = (
-            commands(st.just('hmset'), keys, st.lists(st.tuples(fields, values)))
+            commands(st.just('hset'), keys, st.lists(st.tuples(fields, values)))
             | commands(st.just('hdel'), keys, st.lists(fields))
             | commands(st.just('hexists'), keys, fields)
             | commands(st.just('hget'), keys, fields)
@@ -410,12 +412,12 @@ class TestHash(BaseTest):
             | commands(st.just('hincrby'), keys, fields, st.integers())
             | commands(st.just('hlen'), keys)
             | commands(st.just('hmget'), keys, st.lists(fields))
-            | commands(st.sampled_from(['hset', 'hmset']), keys, st.lists(st.tuples(fields, values)))
+            | commands(st.just('hset'), keys, st.lists(st.tuples(fields, values)))
             | commands(st.just('hsetnx'), keys, fields, values)
             | commands(st.just('hstrlen'), keys, fields)
     )
     create_command_strategy = (
-        commands(st.just('hmset'), keys, st.lists(st.tuples(fields, values), min_size=1))
+        commands(st.just('hset'), keys, st.lists(st.tuples(fields, values), min_size=1))
     )
     command_strategy = hash_commands | common_commands
 
@@ -511,10 +513,10 @@ class TestTransaction(BaseTest):
 
 
 class TestServer(BaseTest):
+    # TODO: real redis raises an error if there is a save already in progress.
+    # Find a better way to test this.
+    # commands(st.just('bgsave'))
     server_commands = (
-        # TODO: real redis raises an error if there is a save already in progress.
-        # Find a better way to test this.
-        # commands(st.just('bgsave'))
             commands(st.just('dbsize'))
             | commands(st.sampled_from(['flushdb', 'flushall']), st.sampled_from([[], 'async']))
             # TODO: result is non-deterministic
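The hypothesis suite's `_evaluate`/`_compare` pair above is a differential-testing pattern: run the same command against both the fake and a real server, capture either the result or the raised exception, and assert they agree. A self-contained sketch of the pattern (toy dict-backed clients; names are placeholders, not the suite's API):

```python
def evaluate(client, command_args):
    """Capture either a command's result or the exception it raised,
    mirroring CommonMachine._evaluate."""
    try:
        return client(*command_args), None
    except Exception as exc:
        return None, exc


def compare(fake, real, command_args):
    fake_result, fake_exc = evaluate(fake, command_args)
    real_result, real_exc = evaluate(real, command_args)
    if (fake_exc is None) != (real_exc is None):
        raise AssertionError('exception mismatch running {}'.format(command_args))
    assert fake_result == real_result, 'discrepancy running {}'.format(command_args)


def make_client(store):
    """Toy client: 'set' stores, 'get' reads (KeyError on missing keys)."""
    def client(op, key, value=None):
        if op == 'set':
            store[key] = value
            return 'OK'
        return store[key]
    return client


fake, real = make_client({}), make_client({})
compare(fake, real, ('set', 'k', 'v'))
compare(fake, real, ('get', 'k'))
```

A missing key raises `KeyError` on both sides, so `compare` still passes; only an exception on one side, or differing results, trips the assertions.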
diff --git a/test/test_init_args.py b/test/test_init_args.py
index 1043191..0ba18c7 100644
--- a/test/test_init_args.py
+++ b/test/test_init_args.py
@@ -1,17 +1,35 @@
 import pytest
 
 import fakeredis
-import testtools
+
+
+def test_multidb(r, create_redis):
+    r1 = create_redis(db=0)
+    r2 = create_redis(db=1)
+
+    r1['r1'] = 'r1'
+    r2['r2'] = 'r2'
+
+    assert 'r2' not in r1
+    assert 'r1' not in r2
+
+    assert r1['r1'] == b'r1'
+    assert r2['r2'] == b'r2'
+
+    assert r1.flushall() is True
+
+    assert 'r1' not in r1
+    assert 'r2' not in r2
 
 
 @pytest.mark.fake
 class TestInitArgs:
     def test_singleton(self):
         shared_server = fakeredis.FakeServer()
-        r1 = fakeredis.FakeStrictRedis()
-        r2 = fakeredis.FakeStrictRedis()
-        r3 = fakeredis.FakeStrictRedis(server=shared_server)
-        r4 = fakeredis.FakeStrictRedis(server=shared_server)
+        r1 = fakeredis.FakeRedis()
+        r2 = fakeredis.FakeRedis(server=fakeredis.FakeServer())
+        r3 = fakeredis.FakeRedis(server=shared_server)
+        r4 = fakeredis.FakeRedis(server=shared_server)
 
         r1.set('foo', 'bar')
         r3.set('bar', 'baz')
@@ -24,6 +42,11 @@ class TestInitArgs:
         assert 'bar' in r4
         assert 'bar' not in r1
 
+    def test_host_init_arg(self):
+        db = fakeredis.FakeStrictRedis(host='localhost')
+        db.set('foo', 'bar')
+        assert db.get('foo') == b'bar'
+
     def test_from_url(self):
         db = fakeredis.FakeStrictRedis.from_url(
             'redis://localhost:6379/0')
@@ -43,13 +66,9 @@ class TestInitArgs:
         assert db.get('foo') == b'bar'
 
     def test_from_url_with_db_arg(self):
-        db = fakeredis.FakeStrictRedis.from_url(
-            'redis://localhost:6379/0')
-        db1 = fakeredis.FakeStrictRedis.from_url(
-            'redis://localhost:6379/1')
-        db2 = fakeredis.FakeStrictRedis.from_url(
-            'redis://localhost:6379/',
-            db=2)
+        db = fakeredis.FakeStrictRedis.from_url('redis://localhost:6379/0')
+        db1 = fakeredis.FakeStrictRedis.from_url('redis://localhost:6379/1')
+        db2 = fakeredis.FakeStrictRedis.from_url('redis://localhost:6379/', db=2)
         db.set('foo', 'foo0')
         db1.set('foo', 'foo1')
         db2.set('foo', 'foo2')
@@ -59,18 +78,14 @@ class TestInitArgs:
 
     def test_from_url_db_value_error(self):
         # In case of ValueError, should default to 0, or be absent in redis-py 4.0
-        db = fakeredis.FakeStrictRedis.from_url(
-            'redis://localhost:6379/a')
+        db = fakeredis.FakeStrictRedis.from_url('redis://localhost:6379/a')
         assert db.connection_pool.connection_kwargs.get('db', 0) == 0
 
     def test_can_pass_through_extra_args(self):
-        db = fakeredis.FakeStrictRedis.from_url(
-            'redis://localhost:6379/0',
-            decode_responses=True)
+        db = fakeredis.FakeStrictRedis.from_url('redis://localhost:6379/0', decode_responses=True)
         db.set('foo', 'bar')
         assert db.get('foo') == 'bar'
 
-    @testtools.run_test_if_redis_ver('above', '3')
     def test_can_allow_extra_args(self):
         db = fakeredis.FakeStrictRedis.from_url(
             'redis://localhost:6379/0',
@@ -106,3 +121,11 @@ class TestInitArgs:
         db = fakeredis.FakeStrictRedis.from_url('unix://a/b/c')
         db.set('foo', 'bar')
         assert db.get('foo') == b'bar'
+
+    def test_same_connection_params(self):
+        r1 = fakeredis.FakeStrictRedis.from_url('redis://localhost:6379/11')
+        r2 = fakeredis.FakeStrictRedis.from_url('redis://localhost:6379/11')
+        r3 = fakeredis.FakeStrictRedis(server=fakeredis.FakeServer())
+        r1.set('foo', 'bar')
+        assert r2.get('foo') == b'bar'
+        assert not r3.exists('foo')
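The new `test_same_connection_params` shows that identical `from_url` parameters map to the same underlying server, while an explicitly supplied `FakeServer` stays private. A stdlib sketch of that registry behaviour (`client_from_url`/`_servers` are illustrative names, not the fakeredis API, and plain dicts stand in for servers):

```python
_servers = {}  # registry keyed by connection parameters


def client_from_url(url):
    """Identical connection parameters yield the same shared store."""
    return _servers.setdefault(url, {})


r1 = client_from_url('redis://localhost:6379/11')
r2 = client_from_url('redis://localhost:6379/11')
r3 = {}  # analogous to passing a fresh FakeServer explicitly

r1['foo'] = b'bar'
assert r2['foo'] == b'bar'   # same URL, same store
assert 'foo' not in r3       # private server is isolated
```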
diff --git a/test/test_json/__init__.py b/test/test_json/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/test/test_json/test_json.py b/test/test_json/test_json.py
new file mode 100644
index 0000000..fb4f2dd
--- /dev/null
+++ b/test/test_json/test_json.py
@@ -0,0 +1,557 @@
+"""
+Tests for `fakeredis-py`'s emulation of the Redis JSON command subset.
+"""
+
+from __future__ import annotations
+
+import json
+
+import pytest
+import redis
+from redis.commands.json.path import Path
+
+from test.testtools import raw_command
+
+json_tests = pytest.importorskip("jsonpath_ng")
+
+
+def test_jsonget(r: redis.Redis):
+    data = {'x': "bar", 'y': {'x': 33}}
+    r.json().set("foo", Path.root_path(), data)
+    assert r.json().get("foo") == data
+    assert r.json().get("foo", Path("$..x")) == ['bar', 33]
+
+    data2 = {'x': "bar"}
+    r.json().set("foo2", Path.root_path(), data2, )
+    assert r.json().get("foo2") == data2
+    assert r.json().get("foo2", "$") == [data2, ]
+    assert r.json().get("foo2", Path("$.a"), Path("$.x")) == {'$.a': [], '$.x': ['bar']}
+
+    assert r.json().get("non-existing-key") is None
+
+    r.json().set("foo2", Path.root_path(), {'x': "bar", 'y': {'x': 33}}, )
+    assert r.json().get("foo2") == {'x': "bar", 'y': {'x': 33}}
+    assert r.json().get("foo2", Path("$..x")) == ['bar', 33]
+
+    r.json().set("foo", Path.root_path(), {'x': "bar"}, )
+    assert r.json().get("foo") == {'x': "bar"}
+    assert r.json().get("foo", Path("$.a"), Path("$.x")) == {'$.a': [], '$.x': ['bar']}
+
+
+def test_json_setgetdeleteforget(r: redis.Redis):
+    data = {'x': "bar"}
+    assert r.json().set("foo", Path.root_path(), data) == 1
+    assert r.json().get("foo") == data
+    assert r.json().get("baz") is None
+    assert r.json().delete("foo") == 1
+    assert r.json().forget("foo") == 0  # second delete
+    assert r.exists("foo") == 0
+
+
+def test_json_delete_with_dollar(r: redis.Redis):
+    doc1 = {"a": 1, "nested": {"a": 2, "b": 3}}
+    assert r.json().set("doc1", Path.root_path(), doc1)
+    assert r.json().delete("doc1", "$..a") == 2
+    assert r.json().get("doc1", Path.root_path()) == {"nested": {"b": 3}}
+
+    doc2 = {"a": {"a": 2, "b": 3}, "b": ["a", "b"], "nested": {"b": [True, "a", "b"]}}
+    r.json().set("doc2", "$", doc2)
+    assert r.json().delete("doc2", "$..a") == 1
+    assert r.json().get("doc2", Path.root_path()) == {"nested": {"b": [True, "a", "b"]}, "b": ["a", "b"]}
+
+    doc3 = [{
+        "ciao": ["non ancora"],
+        "nested": [
+            {"ciao": [1, "a"]},
+            {"ciao": [2, "a"]},
+            {"ciaoc": [3, "non", "ciao"]},
+            {"ciao": [4, "a"]},
+            {"e": [5, "non", "ciao"]},
+        ],
+    }]
+    assert r.json().set("doc3", Path.root_path(), doc3)
+    assert r.json().delete("doc3", '$.[0]["nested"]..ciao') == 3
+
+    doc3val = [[{
+        "ciao": ["non ancora"],
+        "nested": [
+            {}, {}, {"ciaoc": [3, "non", "ciao"]}, {}, {"e": [5, "non", "ciao"]},
+        ],
+    }]]
+    assert r.json().get("doc3", Path.root_path()) == doc3val[0]
+
+    # Test default path
+    assert r.json().delete("doc3") == 1
+    assert r.json().get("doc3", Path.root_path()) is None
+
+    r.json().delete("not_a_document", "..a")
+
+
+def test_json_et_non_dict_value(r: redis.Redis):
+    r.json().set("str", Path.root_path(), 'str_val', )
+    assert r.json().get('str') == 'str_val'
+
+    r.json().set("bool", Path.root_path(), True)
+    assert r.json().get('bool') is True
+
+    r.json().set("bool", Path.root_path(), False)
+    assert r.json().get('bool') is False
+
+
+def test_jsonset_existential_modifiers_should_succeed(r: redis.Redis):
+    obj = {"foo": "bar"}
+    assert r.json().set("obj", Path.root_path(), obj)
+
+    # Test that flags prevent updates when conditions are unmet
+    assert r.json().set("obj", Path("foo"), "baz", nx=True, ) is None
+    assert r.json().get("obj") == obj
+
+    assert r.json().set("obj", Path("qaz"), "baz", xx=True, ) is None
+    assert r.json().get("obj") == obj
+
+    # Test that flags allow updates when conditions are met
+    assert r.json().set("obj", Path("foo"), "baz", xx=True) == 1
+    assert r.json().set("obj", Path("foo2"), "qaz", nx=True) == 1
+    assert r.json().get("obj") == {"foo": "baz", "foo2": "qaz"}
+
+    # Test with raw
+    obj = {"foo": "bar"}
+    raw_command(r, 'json.set', 'obj', '$', json.dumps(obj))
+    assert r.json().get('obj') == obj
+
+
+def test_jsonset_flags_should_be_mutually_exclusive(r: redis.Redis):
+    with pytest.raises(Exception):
+        r.json().set("obj", Path("foo"), "baz", nx=True, xx=True)
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, 'json.set', 'obj', '$', json.dumps({"foo": "bar"}), 'NX', 'XX')
+
+
+def test_json_unknown_param(r: redis.Redis):
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, 'json.set', 'obj', '$', json.dumps({"foo": "bar"}), 'unknown')
+
+
+def test_jsonmget(r: redis.Redis):
+    # Test mget with multi paths
+    r.json().set("doc1", "$", {"a": 1, "b": 2, "nested": {"a": 3}, "c": None, "nested2": {"a": None}})
+    r.json().set("doc2", "$", {"a": 4, "b": 5, "nested": {"a": 6}, "c": None, "nested2": {"a": [None]}})
+    r.json().set("doc3", "$", {"a": 5, "b": 5, "nested": {"a": 8}, "c": None, "nested2": {"a": {"b": "nested3"}}})
+    # Compare also to single JSON.GET
+    assert r.json().get("doc1", Path("$..a")) == [1, 3, None]
+    assert r.json().get("doc2", "$..a") == [4, 6, [None]]
+    assert r.json().get("doc3", "$..a") == [5, 8, {"b": "nested3"}]
+
+    # Test mget with single path
+    assert r.json().mget(["doc1"], "$..a") == [[1, 3, None]]
+
+    # Test mget with multi path
+    assert r.json().mget(["doc1", "doc2", "doc3"], "$..a") == [[1, 3, None], [4, 6, [None]], [5, 8, {"b": "nested3"}]]
+
+    # Test missing key
+    assert r.json().mget(["doc1", "missing_doc"], "$..a") == [[1, 3, None], None]
+
+    assert r.json().mget(["missing_doc1", "missing_doc2"], "$..a") == [None, None]
+
+
+def test_jsonmget_should_succeed(r: redis.Redis):
+    r.json().set("1", Path.root_path(), 1)
+    r.json().set("2", Path.root_path(), 2)
+
+    assert r.json().mget(["1"], Path.root_path()) == [1]
+
+    assert r.json().mget([1, 2], Path.root_path()) == [1, 2]
+
+
+def test_jsonclear(r: redis.Redis):
+    r.json().set("arr", Path.root_path(), [0, 1, 2, 3, 4], )
+
+    assert 1 == r.json().clear("arr", Path.root_path(), )
+    assert [] == r.json().get("arr")
+
+
+def test_jsonclear_dollar(r: redis.Redis):
+    data = {
+        "nested1": {"a": {"foo": 10, "bar": 20}},
+        "a": ["foo"],
+        "nested2": {"a": "claro"},
+        "nested3": {"a": {"baz": 50}}
+    }
+    r.json().set("doc1", "$", data)
+    # Test multi
+    assert r.json().clear("doc1", "$..a") == 3
+
+    assert r.json().get("doc1", "$") == [
+        {"nested1": {"a": {}}, "a": [], "nested2": {"a": "claro"}, "nested3": {"a": {}}}
+    ]
+
+    # Test single
+    r.json().set("doc1", "$", data)
+    assert r.json().clear("doc1", "$.nested1.a") == 1
+    assert r.json().get("doc1", "$") == [
+        {
+            "nested1": {"a": {}},
+            "a": ["foo"],
+            "nested2": {"a": "claro"},
+            "nested3": {"a": {"baz": 50}},
+        }
+    ]
+
+    # Test missing path (defaults to root)
+    assert r.json().clear("doc1") == 1
+    assert r.json().get("doc1", "$") == [{}]
+
+
+def test_jsonclear_no_doc(r: redis.Redis):
+    # Test missing key
+    with pytest.raises(redis.ResponseError):
+        r.json().clear("non_existing_doc", "$..a")
+
+
+def test_jsonstrlen(r: redis.Redis):
+    data = {'x': "bar", 'y': {'x': 33}}
+    r.json().set("foo", Path.root_path(), data)
+    assert r.json().strlen("foo", Path("$..x")) == [3, None]
+
+    r.json().set("foo2", Path.root_path(), "data2")
+    assert r.json().strlen('foo2') == 5
+    assert r.json().strlen('foo2', Path.root_path()) == 5
+
+    r.json().set("foo3", Path.root_path(), {'x': 'string'})
+    assert r.json().strlen("foo3", Path("$.x")) == [6, ]
+
+    assert r.json().strlen('non-existing') is None
+
+    r.json().set("str", Path.root_path(), "foo")
+    assert r.json().strlen("str", Path.root_path()) == 3
+    # Test multi
+    r.json().set("doc1", "$", {"a": "foo", "nested1": {"a": "hello"}, "nested2": {"a": 31}})
+    assert r.json().strlen("doc1", "$..a") == [3, 5, None]
+
+    res2 = r.json().strappend("doc1", "bar", "$..a")
+    res1 = r.json().strlen("doc1", "$..a")
+    assert res1 == res2
+
+    # Test single
+    assert r.json().strlen("doc1", "$.nested1.a") == [8]
+    assert r.json().strlen("doc1", "$.nested2.a") == [None]
+
+    # Test missing key
+    with pytest.raises(redis.ResponseError):
+        r.json().strlen("non_existing_doc", "$..a")
+
+
+def test_toggle(r: redis.Redis):
+    r.json().set("bool", Path.root_path(), False)
+    assert r.json().toggle("bool", Path.root_path())
+    assert r.json().toggle("bool", Path.root_path()) is False
+
+    r.json().set("num", Path.root_path(), 1)
+
+    with pytest.raises(redis.exceptions.ResponseError):
+        r.json().toggle("num", Path.root_path())
+
+
+def test_toggle_dollar(r: redis.Redis):
+    data = {
+        "a": ["foo"],
+        "nested1": {"a": False},
+        "nested2": {"a": 31},
+        "nested3": {"a": True},
+    }
+    r.json().set("doc1", "$", data)
+    # Test multi
+    assert r.json().toggle("doc1", "$..a") == [None, 1, None, 0]
+    data['nested1']['a'] = True
+    data['nested3']['a'] = False
+    assert r.json().get("doc1", "$") == [data]
+
+    # Test missing key
+    with pytest.raises(redis.exceptions.ResponseError):
+        r.json().toggle("non_existing_doc", "$..a")
+
+
+def test_json_commands_in_pipeline(r: redis.Redis):
+    p = r.json().pipeline()
+    p.set("foo", Path.root_path(), "bar")
+    p.get("foo")
+    p.delete("foo")
+    assert [True, "bar", 1] == p.execute()
+    assert r.keys() == []
+    assert r.get("foo") is None
+
+    # now with a true, json object
+    r.flushdb()
+    p = r.json().pipeline()
+    d = {"hello": "world", "oh": "snap"}
+
+    with pytest.deprecated_call():
+        p.jsonset("foo", Path.root_path(), d)
+        p.jsonget("foo")
+
+    p.exists("not-a-real-key")
+    p.delete("foo")
+
+    assert [True, d, 0, 1] == p.execute()
+    assert r.keys() == []
+    assert r.get("foo") is None
+
+
+def test_strappend(r: redis.Redis):
+    # Test single
+    r.json().set("json-key", Path.root_path(), "foo")
+    assert r.json().strappend("json-key", "bar") == 6
+    assert "foobar" == r.json().get("json-key", Path.root_path())
+
+    # Test multi
+    r.json().set("doc1", Path.root_path(), {"a": "foo", "nested1": {"a": "hello"}, "nested2": {"a": 31}, })
+    assert r.json().strappend("doc1", "bar", "$..a") == [6, 8, None]
+    assert r.json().get("doc1") == {"a": "foobar", "nested1": {"a": "hellobar"}, "nested2": {"a": 31}, }
+
+    # Test single
+    assert r.json().strappend("doc1", "baz", "$.nested1.a", ) == [11]
+    assert r.json().get("doc1") == {"a": "foobar", "nested1": {"a": "hellobarbaz"}, "nested2": {"a": 31}, }
+
+    # Test missing key
+    with pytest.raises(redis.exceptions.ResponseError):
+        r.json().strappend("non_existing_doc", "$..a", "err")
+
+    # Test multi
+    r.json().set("doc2", Path.root_path(), {"a": "foo", "nested1": {"a": "hello"}, "nested2": {"a": "hi"}, })
+    assert r.json().strappend("doc2", "bar", "$.*.a") == [8, 5]
+    assert r.json().get("doc2") == {"a": "foo", "nested1": {"a": "hellobar"}, "nested2": {"a": "hibar"}, }
+
+    # Test missing path
+    r.json().set("doc1", Path.root_path(), {"a": "foo", "nested1": {"a": "hello"}, "nested2": {"a": 31}, })
+    with pytest.raises(redis.exceptions.ResponseError):
+        r.json().strappend("doc1", "add", "piu")
+
+    # Test raw command with no arguments
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, 'json.strappend', '')
+
+
+@pytest.mark.decode_responses(True)
+def test_decode_null(r: redis.Redis):
+    assert r.json().get("abc") is None
+
+
+def test_decode_response_disabaled_null(r: redis.Redis):
+    assert r.json().get("abc") is None
+
+
+def test_json_get_jset(r: redis.Redis):
+    assert r.json().set("foo", Path.root_path(), "bar", ) == 1
+    assert "bar" == r.json().get("foo")
+    assert r.json().get("baz") is None
+    assert 1 == r.json().delete("foo")
+    assert r.exists("foo") == 0
+
+
+def test_nonascii_setgetdelete(r: redis.Redis):
+    assert r.json().set("not-ascii", Path.root_path(), "hyvää-élève", )
+    assert "hyvää-élève" == r.json().get("not-ascii", no_escape=True, )
+    assert 1 == r.json().delete("not-ascii")
+    assert r.exists("not-ascii") == 0
+
+
+def test_json_setbinarykey(r: redis.Redis):
+    data = {"hello": "world", b"some": "value"}
+
+    with pytest.raises(TypeError):
+        r.json().set("some-key", Path.root_path(), data)
+
+    assert r.json().set("some-key", Path.root_path(), data, decode_keys=True)
+
+
+def test_set_file(r: redis.Redis):
+    # Standard Library Imports
+    import json
+    import tempfile
+
+    obj = {"hello": "world"}
+    jsonfile = tempfile.NamedTemporaryFile(suffix=".json")
+    with open(jsonfile.name, "w+") as fp:
+        fp.write(json.dumps(obj))
+
+    no_json_file = tempfile.NamedTemporaryFile()
+    no_json_file.write(b"Hello World")
+
+    assert r.json().set_file("test", Path.root_path(), jsonfile.name)
+    assert r.json().get("test") == obj
+    with pytest.raises(json.JSONDecodeError):
+        r.json().set_file("test2", Path.root_path(), no_json_file.name)
+
+
+def test_set_path(r: redis.Redis):
+    # Standard Library Imports
+    import json
+    import tempfile
+
+    root = tempfile.mkdtemp()
+    jsonfile = tempfile.NamedTemporaryFile(mode="w+", dir=root, delete=False)
+    no_json_file = tempfile.NamedTemporaryFile(mode="a+", dir=root, delete=False)
+    jsonfile.write(json.dumps({"hello": "world"}))
+    jsonfile.close()
+    no_json_file.write("hello")
+
+    result = {jsonfile.name: True, no_json_file.name: False}
+    assert r.json().set_path(Path.root_path(), root) == result
+    assert r.json().get(jsonfile.name.rsplit(".")[0]) == {"hello": "world"}
+
+
+def test_type(r: redis.Redis):
+    r.json().set("1", Path.root_path(), 1, )
+
+    assert r.json().type("1", Path.root_path(), ) == b"integer"
+    assert r.json().type("1") == b"integer"
+
+    meta_data = {"object": {}, "array": [], "string": "str", "integer": 42, "number": 1.2, "boolean": False,
+                 "null": None, }
+    data = {k: {'a': meta_data[k]} for k in meta_data}
+    r.json().set("doc1", "$", data)
+    # Test multi
+    assert r.json().type("doc1", "$..a") == [k.encode() for k in meta_data.keys()]
+
+    # Test single
+    assert r.json().type("doc1", "$.integer.a") == [b'integer']
+    assert r.json().type("doc1") == b'object'
+
+    # Test missing key
+    assert r.json().type("non_existing_doc", "..a") is None
+
+
+def test_objlen(r: redis.Redis):
+    # Test missing key, and path
+    with pytest.raises(redis.ResponseError):
+        r.json().objlen("non_existing_doc", "$..a")
+
+    obj = {"foo": "bar", "baz": "qaz"}
+
+    r.json().set("obj", Path.root_path(), obj, )
+    assert len(obj) == r.json().objlen("obj", Path.root_path(), )
+
+    r.json().set("obj", Path.root_path(), obj)
+    assert len(obj) == r.json().objlen("obj")
+    r.json().set("doc1", "$", {
+        "a": ["foo"],
+        "nested1": {"a": {"foo": 10, "bar": 20}},
+        "nested2": {"a": {"baz": 50}},
+    })
+    # Test multi
+    assert r.json().objlen("doc1", "$..a") == [None, 2, 1]
+    # Test single
+    assert r.json().objlen("doc1", "$.nested1.a") == [2]
+
+    assert r.json().objlen("doc1", "$.nowhere") == []
+
+    # Test legacy
+    assert r.json().objlen("doc1", ".*.a") == 2
+
+    # Test single
+    assert r.json().objlen("doc1", ".nested2.a") == 1
+
+    # Test missing key
+    assert r.json().objlen("non_existing_doc", "..a") is None
+
+    # Test missing path
+    # with pytest.raises(exceptions.ResponseError):
+    r.json().objlen("doc1", ".nowhere")
+
+
+def test_objkeys(r: redis.Redis):
+    obj = {"foo": "bar", "baz": "qaz"}
+    r.json().set("obj", Path.root_path(), obj)
+    keys = r.json().objkeys("obj", Path.root_path())
+    keys.sort()
+    exp = list(obj.keys())
+    exp.sort()
+    assert exp == keys
+
+    r.json().set("obj", Path.root_path(), obj)
+    assert r.json().objkeys("obj") == list(obj.keys())
+
+    assert r.json().objkeys("fakekey") is None
+
+    r.json().set(
+        "doc1",
+        "$",
+        {
+            "nested1": {"a": {"foo": 10, "bar": 20}},
+            "a": ["foo"],
+            "nested2": {"a": {"baz": 50}},
+        },
+    )
+
+    # Test single
+    assert r.json().objkeys("doc1", "$.nested1.a") == [[b"foo", b"bar"]]
+
+    # Test legacy
+    assert r.json().objkeys("doc1", ".*.a") == ["foo", "bar"]
+    # Test single
+    assert r.json().objkeys("doc1", ".nested2.a") == ["baz"]
+
+    # Test missing key
+    assert r.json().objkeys("non_existing_doc", "..a") is None
+
+    # Test non existing doc
+    with pytest.raises(redis.ResponseError):
+        assert r.json().objkeys("non_existing_doc", "$..a") == []
+
+    assert r.json().objkeys("doc1", "$..nowhere") == []
+
+
+def test_numincrby(r: redis.Redis):
+    r.json().set("num", Path.root_path(), 1)
+
+    assert 2 == r.json().numincrby("num", Path.root_path(), 1)
+    assert 2.5 == r.json().numincrby("num", Path.root_path(), 0.5)
+    assert 1.25 == r.json().numincrby("num", Path.root_path(), -1.25)
+    # Test NUMINCRBY
+    r.json().set("doc1", "$", {"a": "b", "b": [{"a": 2}, {"a": 5.0}, {"a": "c"}]})
+    # Test multi
+    assert r.json().numincrby("doc1", "$..a", 2) == [None, 4, 7.0, None]
+
+    assert r.json().numincrby("doc1", "$..a", 2.5) == [None, 6.5, 9.5, None]
+    # Test single
+    assert r.json().numincrby("doc1", "$.b[1].a", 2) == [11.5]
+
+    assert r.json().numincrby("doc1", "$.b[2].a", 2) == [None]
+    assert r.json().numincrby("doc1", "$.b[1].a", 3.5) == [15.0]
+
+
+def test_nummultby(r: redis.Redis):
+    r.json().set("num", Path.root_path(), 1)
+
+    with pytest.deprecated_call():
+        assert r.json().nummultby("num", Path.root_path(), 2) == 2
+        assert r.json().nummultby("num", Path.root_path(), 2.5) == 5
+        assert r.json().nummultby("num", Path.root_path(), 0.5) == 2.5
+
+    r.json().set("doc1", "$", {"a": "b", "b": [{"a": 2}, {"a": 5.0}, {"a": "c"}]})
+
+    # test list
+    with pytest.deprecated_call():
+        assert r.json().nummultby("doc1", "$..a", 2) == [None, 4, 10, None]
+        assert r.json().nummultby("doc1", "$..a", 2.5) == [None, 10.0, 25.0, None]
+
+    # Test single
+    with pytest.deprecated_call():
+        assert r.json().nummultby("doc1", "$.b[1].a", 2) == [50.0]
+        assert r.json().nummultby("doc1", "$.b[2].a", 2) == [None]
+        assert r.json().nummultby("doc1", "$.b[1].a", 3) == [150.0]
+
+    # Test missing keys (each call in its own block; statements after the
+    # first raise inside pytest.raises would never execute)
+    with pytest.raises(redis.ResponseError):
+        r.json().numincrby("non_existing_doc", "$..a", 2)
+    with pytest.raises(redis.ResponseError):
+        r.json().nummultby("non_existing_doc", "$..a", 2)
+
+    # Test legacy NUMINCRBY
+    r.json().set("doc1", "$", {"a": "b", "b": [{"a": 2}, {"a": 5.0}, {"a": "c"}]})
+    assert r.json().numincrby("doc1", ".b[0].a", 3) == 5
+
+    # Test legacy NUMMULTBY
+    r.json().set("doc1", "$", {"a": "b", "b": [{"a": 2}, {"a": 5.0}, {"a": "c"}]})
+
+    with pytest.deprecated_call():
+        assert r.json().nummultby("doc1", ".b[0].a", 3) == 6
diff --git a/test/test_json/test_json_arr_commands.py b/test/test_json/test_json_arr_commands.py
new file mode 100644
index 0000000..df63754
--- /dev/null
+++ b/test/test_json/test_json_arr_commands.py
@@ -0,0 +1,306 @@
+import pytest
+import redis
+from redis.commands.json.path import Path
+
+from test.testtools import raw_command
+
+json_tests = pytest.importorskip("jsonpath_ng")
+
+
+def test_arrlen(r: redis.Redis):
+    r.json().set("arr", Path.root_path(), [0, 1, 2, 3, 4], )
+    assert r.json().arrlen("arr", Path.root_path(), ) == 5
+    assert r.json().arrlen("arr") == 5
+    assert r.json().arrlen("fake-key") is None
+
+    r.json().set("doc1", Path.root_path(),
+                 {"a": ["foo"], "nested1": {"a": ["hello", None, "world"]}, "nested2": {"a": 31}, })
+
+    assert r.json().arrlen("doc1", "$..a") == [1, 3, None]
+    assert r.json().arrlen("doc1", "$.nested1.a") == [3]
+
+    r.json().set("doc2", "$", {"a": ["foo"], "nested1": {"a": ["hello", 1, 1, None, "world"]}, "nested2": {"a": 31}, })
+    assert r.json().arrlen("doc2", "$..a") == [1, 5, None]
+    assert r.json().arrlen("doc2", ".nested1.a") == 5
+    r.json().set(
+        "doc1", "$", {
+            "a": ["foo"], "nested1": {"a": ["hello", None, "world"]}, "nested2": {"a": 31}, }, )
+
+    # Test multi
+    assert r.json().arrlen("doc1", "$..a") == [1, 3, None]
+    assert r.json().arrappend("doc1", "$..a", "non", "abba", "stanza") == [
+        4, 6, None, ]
+
+    r.json().clear("doc1", "$.a")
+    assert r.json().arrlen("doc1", "$..a") == [0, 6, None]
+    # Test single
+    assert r.json().arrlen("doc1", "$.nested1.a") == [6]
+
+    # Test missing key
+    with pytest.raises(redis.ResponseError):
+        r.json().arrappend("non_existing_doc", "$..a")
+
+    r.json().set(
+        "doc1", "$", {
+            "a": ["foo"], "nested1": {"a": ["hello", None, "world"]}, "nested2": {"a": 31}, }, )
+    # Test multi (return result of last path)
+    assert r.json().arrlen("doc1", "$..a") == [1, 3, None]
+    assert r.json().arrappend("doc1", "..a", "non", "abba", "stanza") == 6
+
+    # Test single
+    assert r.json().arrlen("doc1", ".nested1.a") == 6
+
+    # Test missing key
+    assert r.json().arrlen("non_existing_doc", "..a") is None
+
+
+def test_arrappend(r: redis.Redis):
+    with pytest.raises(redis.ResponseError):
+        r.json().arrappend("non-existing-key", Path.root_path(), 2)
+
+    r.json().set("arr", Path.root_path(), [1])
+    assert r.json().arrappend("arr", Path.root_path(), 2) == 2
+    assert r.json().arrappend("arr", Path.root_path(), 3, 4) == 4
+    assert r.json().arrappend("arr", Path.root_path(), *[5, 6, 7]) == 7
+    assert r.json().get("arr") == [1, 2, 3, 4, 5, 6, 7]
+    r.json().set(
+        "doc1", "$", {
+            "a": ["foo"], "nested1": {"a": ["hello", None, "world"]}, "nested2": {"a": 31}, }, )
+    # Test multi
+    assert r.json().arrappend("doc1", "$..a", "bar", "racuda") == [3, 5, None]
+    assert r.json().get("doc1", "$") == [{
+        "a": ["foo", "bar", "racuda"], "nested1": {"a": ["hello", None, "world", "bar", "racuda"]},
+        "nested2": {"a": 31}, }]
+    assert r.json().arrappend("doc1", "$.nested1.a", "baz") == [6]
+
+    # Test legacy
+    r.json().set("doc1", "$", {
+        "a": ["foo"], "nested1": {"a": ["hello", None, "world"]}, "nested2": {"a": 31}, })
+    # Test multi (all paths are updated, but return result of last path)
+    assert r.json().arrappend("doc1", "..a", "bar", "racuda") == 5
+
+    assert r.json().get("doc1", "$") == [{
+        "a": ["foo", "bar", "racuda"], "nested1": {"a": ["hello", None, "world", "bar", "racuda"]},
+        "nested2": {"a": 31}, }]
+    # Test single
+    assert r.json().arrappend("doc1", ".nested1.a", "baz") == 6
+    assert r.json().get("doc1", "$") == [{
+        "a": ["foo", "bar", "racuda"], "nested1": {"a": ["hello", None, "world", "bar", "racuda", "baz"]},
+        "nested2": {"a": 31}, }]
+
+    # Test missing key
+    with pytest.raises(redis.ResponseError):
+        r.json().arrappend("non_existing_doc", "$..a")
+
+
+def test_arrindex(r: redis.Redis):
+    r.json().set("foo", Path.root_path(), [0, 1, 2, 3, 4], )
+
+    assert r.json().arrindex("foo", Path.root_path(), 1) == 1
+    assert r.json().arrindex("foo", Path.root_path(), 1, 2) == -1
+
+    r.json().set("store", "$", {"store": {
+        "book": [{
+            "category": "reference", "author": "Nigel Rees", "title": "Sayings of the Century", "price": 8.95,
+            "size": [10, 20, 30, 40], }, {
+            "category": "fiction", "author": "Evelyn Waugh", "title": "Sword of Honour", "price": 12.99,
+            "size": [50, 60, 70, 80], }, {
+            "category": "fiction", "author": "Herman Melville", "title": "Moby Dick", "isbn": "0-553-21311-3",
+            "price": 8.99, "size": [5, 10, 20, 30], }, {
+            "category": "fiction", "author": "J. R. R. Tolkien", "title": "The Lord of the Rings",
+            "isbn": "0-395-19395-8", "price": 22.99, "size": [5, 6, 7, 8], }, ],
+        "bicycle": {"color": "red", "price": 19.95}, }})
+
+    assert r.json().get("store", "$.store.book[?(@.price<10)].size") == [
+        [10, 20, 30, 40], [5, 10, 20, 30], ]
+    assert r.json().arrindex("store", "$.store.book[?(@.price<10)].size", "20") == [-1, -1]
+
+    # Test index of int scalar in multi values
+    r.json().set("test_num", ".", [
+        {"arr": [0, 1, 3.0, 3, 2, 1, 0, 3]}, {"nested1_found": {"arr": [5, 4, 3, 2, 1, 0, 1, 2, 3.0, 2, 4, 5]}},
+        {"nested2_not_found": {"arr": [2, 4, 6]}}, {"nested3_scalar": {"arr": "3"}}, [
+            {"nested41_not_arr": {"arr_renamed": [1, 2, 3]}}, {"nested42_empty_arr": {"arr": []}}, ], ])
+
+    assert r.json().get("test_num", "$..arr") == [
+        [0, 1, 3.0, 3, 2, 1, 0, 3], [5, 4, 3, 2, 1, 0, 1, 2, 3.0, 2, 4, 5], [2, 4, 6], "3", [], ]
+
+    assert r.json().arrindex("test_num", "$..nonexistingpath", 3) == []
+    assert r.json().arrindex("test_num", "$..arr", 3) == [3, 2, -1, None, -1]
+
+    # Test index of double scalar in multi values
+    assert r.json().arrindex("test_num", "$..arr", 3.0) == [2, 8, -1, None, -1]
+
+    # Test index of string scalar in multi values
+    r.json().set("test_string", ".", [
+        {"arr": ["bazzz", "bar", 2, "baz", 2, "ba", "baz", 3]}, {
+            "nested1_found": {
+                "arr": [None, "baz2", "buzz", 2, 1, 0, 1, "2", "baz", 2, 4, 5]
+            }
+        }, {"nested2_not_found": {"arr": ["baz2", 4, 6]}}, {"nested3_scalar": {"arr": "3"}}, [
+            {"nested41_arr": {"arr_renamed": [1, "baz", 3]}}, {"nested42_empty_arr": {"arr": []}}, ], ])
+    assert r.json().get("test_string", "$..arr") == [
+        ["bazzz", "bar", 2, "baz", 2, "ba", "baz", 3], [None, "baz2", "buzz", 2, 1, 0, 1, "2", "baz", 2, 4, 5],
+        ["baz2", 4, 6], "3", [], ]
+
+    assert r.json().arrindex("test_string", "$..arr", "baz") == [3, 8, -1, None, -1, ]
+
+    assert r.json().arrindex("test_string", "$..arr", "baz", 2) == [3, 8, -1, None, -1, ]
+    assert r.json().arrindex("test_string", "$..arr", "baz", 4) == [6, 8, -1, None, -1, ]
+    assert r.json().arrindex("test_string", "$..arr", "baz", -5) == [3, 8, -1, None, -1, ]
+    assert r.json().arrindex("test_string", "$..arr", "baz", 4, 7) == [6, -1, -1, None, -1, ]
+    assert r.json().arrindex("test_string", "$..arr", "baz", 4, -1) == [6, 8, -1, None, -1, ]
+    assert r.json().arrindex("test_string", "$..arr", "baz", 4, 0) == [6, 8, -1, None, -1, ]
+    assert r.json().arrindex("test_string", "$..arr", "5", 7, -1) == [-1, -1, -1, None, -1, ]
+    assert r.json().arrindex("test_string", "$..arr", "5", 7, 0) == [-1, -1, -1, None, -1, ]
+
+    # Test index of None scalar in multi values
+    r.json().set("test_None", ".", [
+        {"arr": ["bazzz", "None", 2, None, 2, "ba", "baz", 3]}, {
+            "nested1_found": {
+                "arr": ["zaz", "baz2", "buzz", 2, 1, 0, 1, "2", None, 2, 4, 5]
+            }
+        }, {"nested2_not_found": {"arr": ["None", 4, 6]}}, {"nested3_scalar": {"arr": None}}, [
+            {"nested41_arr": {"arr_renamed": [1, None, 3]}}, {"nested42_empty_arr": {"arr": []}}, ], ])
+    assert r.json().get("test_None", "$..arr") == [
+        ["bazzz", "None", 2, None, 2, "ba", "baz", 3], ["zaz", "baz2", "buzz", 2, 1, 0, 1, "2", None, 2, 4, 5],
+        ["None", 4, 6], None, [], ]
+
+    # Test with none-scalar value
+    # assert r.json().arrindex("test_None", "$..nested42_empty_arr.arr", {"arr": []}) == [-1]
+
+    # Test legacy (path begins with dot)
+    # Test index of int scalar in single value
+    assert r.json().arrindex("test_num", ".[0].arr", 3) == 3
+    assert r.json().arrindex("test_num", ".[0].arr", 9) == -1
+
+    with pytest.raises(redis.ResponseError):
+        r.json().arrindex("test_num", ".[0].arr_not", 3)
+    # Test index of string scalar in single value
+    assert r.json().arrindex("test_string", ".[0].arr", "baz") == 3
+    assert r.json().arrindex("test_string", ".[0].arr", "faz") == -1
+    # Test index of None scalar in single value
+    assert r.json().arrindex("test_None", ".[0].arr", "None") == 1
+    assert r.json().arrindex("test_None", "..nested2_not_found.arr", "None") == 0
+
+
+def test_arrinsert(r: redis.Redis):
+    r.json().set("arr", Path.root_path(), [0, 4], )
+
+    assert r.json().arrinsert("arr", Path.root_path(), 1, *[1, 2, 3], ) == 5
+    assert r.json().get("arr") == [0, 1, 2, 3, 4]
+
+    # Test prepend (the list argument is inserted as a single element)
+    r.json().set("val2", Path.root_path(), [5, 6, 7, 8, 9], )
+    assert r.json().arrinsert("val2", Path.root_path(), 0, ["some", "thing"], ) == 6
+    assert r.json().get("val2") == [["some", "thing"], 5, 6, 7, 8, 9]
+    r.json().set("doc1", "$", {"a": ["foo"], "nested1": {"a": ["hello", None, "world"]}, "nested2": {"a": 31}, })
+    # Test multi
+    assert r.json().arrinsert("doc1", "$..a", "1", "bar", "racuda") == [3, 5, None]
+
+    assert r.json().get("doc1", "$") == [{
+        "a": ["foo", "bar", "racuda"], "nested1": {"a": ["hello", "bar", "racuda", None, "world"]},
+        "nested2": {"a": 31}, }]
+    # Test single
+    assert r.json().arrinsert("doc1", "$.nested1.a", -2, "baz") == [6]
+    assert r.json().get("doc1", "$") == [{
+        "a": ["foo", "bar", "racuda"], "nested1": {"a": ["hello", "bar", "racuda", "baz", None, "world"]},
+        "nested2": {"a": 31}, }]
+
+    # Test missing key
+    with pytest.raises(redis.ResponseError):
+        r.json().arrappend("non_existing_doc", "$..a")
+
+
+def test_arrpop(r: redis.Redis):
+    r.json().set("arr", Path.root_path(), [0, 1, 2, 3, 4], )
+    assert raw_command(r, 'json.arrpop', 'arr') == b'4'
+
+    r.json().set("arr", Path.root_path(), [0, 1, 2, 3, 4], )
+    assert r.json().arrpop("arr", Path.root_path(), 4, ) == 4
+    assert r.json().arrpop("arr", Path.root_path(), -1, ) == 3
+    assert r.json().arrpop("arr", Path.root_path(), ) == 2
+    assert r.json().arrpop("arr", Path.root_path(), 0, ) == 0
+    assert r.json().get("arr") == [1]
+
+    # Test out-of-bounds index (clamped to the last element)
+    r.json().set("arr", Path.root_path(), [0, 1, 2, 3, 4], )
+    assert r.json().arrpop("arr", Path.root_path(), 99, ) == 4
+
+    # Popping from an empty array returns None
+    r.json().set("arr", Path.root_path(), [], )
+    assert r.json().arrpop("arr") is None
+
+    r.json().set("doc1", "$", {"a": ["foo"], "nested1": {"a": ["hello", None, "world"]}, "nested2": {"a": 31}, })
+
+    # Test multi
+    assert r.json().arrpop("doc1", "$..a", 1) == ['"foo"', None, None]
+    assert r.json().get("doc1", "$") == [{"a": [], "nested1": {"a": ["hello", "world"]}, "nested2": {"a": 31}}]
+
+    # Test missing key
+    with pytest.raises(redis.ResponseError):
+        r.json().arrpop("non_existing_doc", "..a")
+
+    # Test legacy
+    r.json().set("doc1", "$", {"a": ["foo"], "nested1": {"a": ["hello", None, "world"]}, "nested2": {"a": 31}, })
+    # Test multi (all paths are updated, but return result of last path)
+    assert r.json().arrpop("doc1", "..a", "1") is None
+    assert r.json().get("doc1", "$") == [{"a": [], "nested1": {"a": ["hello", "world"]}, "nested2": {"a": 31}}]
+
+    # Test missing key
+    with pytest.raises(redis.ResponseError):
+        r.json().arrpop("non_existing_doc", "..a")
+
+
+def test_arrtrim(r: redis.Redis):
+    r.json().set("arr", Path.root_path(), [0, 1, 2, 3, 4], )
+
+    assert r.json().arrtrim("arr", Path.root_path(), 1, 3, ) == 3
+    assert r.json().get("arr") == [1, 2, 3]
+
+    # Negative start test; should be equivalent to 0
+    r.json().set("arr", Path.root_path(), [0, 1, 2, 3, 4], )
+    assert r.json().arrtrim("arr", Path.root_path(), -1, 3, ) == 0
+
+    # testing stop > end
+    r.json().set("arr", Path.root_path(), [0, 1, 2, 3, 4], )
+    assert r.json().arrtrim("arr", Path.root_path(), 3, 99, ) == 2
+
+    # start greater than both array size and stop
+    r.json().set("arr", Path.root_path(), [0, 1, 2, 3, 4], )
+    assert r.json().arrtrim("arr", Path.root_path(), 9, 1, ) == 0
+
+    # start and stop both larger than array size
+    r.json().set("arr", Path.root_path(), [0, 1, 2, 3, 4], )
+    assert r.json().arrtrim("arr", Path.root_path(), 9, 11, ) == 0
+
+    r.json().set("doc1", "$", {"a": ["foo"], "nested1": {"a": ["hello", None, "world"]}, "nested2": {"a": 31}, })
+    # Test multi
+    assert r.json().arrtrim("doc1", "$..a", "1", -1) == [0, 2, None]
+    assert r.json().get("doc1", "$") == [{"a": [], "nested1": {"a": [None, "world"]}, "nested2": {"a": 31}}]
+
+    r.json().set('doc1', '$', {"a": [], "nested1": {"a": [None, "world"]}, "nested2": {"a": 31}})
+    assert r.json().arrtrim("doc1", "$..a", "1", "1") == [0, 1, None]
+    assert r.json().get("doc1", "$") == [{"a": [], "nested1": {"a": ["world"]}, "nested2": {"a": 31}}]
+    # Test single
+    assert r.json().arrtrim("doc1", "$.nested1.a", 1, 0) == [0]
+    assert r.json().get("doc1", "$") == [{"a": [], "nested1": {"a": []}, "nested2": {"a": 31}}]
+
+    # Test missing key
+    with pytest.raises(redis.ResponseError):
+        r.json().arrtrim("non_existing_doc", "..a", "0", 1)
+
+    # Test legacy
+    r.json().set("doc1", "$", {"a": ["foo"], "nested1": {"a": ["hello", None, "world"]}, "nested2": {"a": 31}, })
+
+    # Test multi (all paths are updated, but return result of last path)
+    assert r.json().arrtrim("doc1", "..a", "1", "-1") == 2
+
+    # Test single
+    assert r.json().arrtrim("doc1", ".nested1.a", "1", "1") == 1
+    assert r.json().get("doc1", "$") == [
+        {"a": [], "nested1": {"a": ["world"]}, "nested2": {"a": 31}}
+    ]
+
+    # Test missing key
+    with pytest.raises(redis.ResponseError):
+        r.json().arrtrim("non_existing_doc", "..a", 1, 1)
diff --git a/test/test_json/test_json_commands.py b/test/test_json/test_json_commands.py
new file mode 100644
index 0000000..f67b710
--- /dev/null
+++ b/test/test_json/test_json_commands.py
@@ -0,0 +1,232 @@
+"""Tests for `fakeredis-py`'s emulation of Redis's JSON command subset."""
+
+from __future__ import annotations
+
+from typing import (Any, Dict, List, Tuple, )
+
+import pytest
+import redis
+from redis.commands.json.path import Path
+
+json_tests = pytest.importorskip("jsonpath_ng")
+
+SAMPLE_DATA = {
+    "a": ["foo"],
+    "nested1": {"a": ["hello", None, "world"]},
+    "nested2": {"a": 31},
+}
+
+
+@pytest.fixture(scope="function")
+def json_data() -> Dict[str, Any]:
+    """A function-scoped "blob" of JSON-encodable data."""
+    return {
+        "L1": {
+            "a": {
+                "A1_B1": 10,
+                "A1_B2": False,
+                "A1_B3": {
+                    "A1_B3_C1": None,
+                    "A1_B3_C2": [
+                        "A1_B3_C2_D1_1",
+                        "A1_B3_C2_D1_2",
+                        -19.5,
+                        "A1_B3_C2_D1_4",
+                        "A1_B3_C2_D1_5",
+                        {"A1_B3_C2_D1_6_E1": True},
+                    ],
+                    "A1_B3_C3": [1],
+                },
+                "A1_B4": {"A1_B4_C1": "foo"},
+            }
+        },
+        "L2": {
+            "a": {
+                "A2_B1": 20,
+                "A2_B2": False,
+                "A2_B3": {
+                    "A2_B3_C1": None,
+                    "A2_B3_C2": [
+                        "A2_B3_C2_D1_1",
+                        "A2_B3_C2_D1_2",
+                        -37.5,
+                        "A2_B3_C2_D1_4",
+                        "A2_B3_C2_D1_5",
+                        {"A2_B3_C2_D1_6_E1": False},
+                    ],
+                    "A2_B3_C3": [2],
+                },
+                "A2_B4": {"A2_B4_C1": "bar"},
+            }
+        },
+    }
+
+
+@pytest.mark.xfail
+def test_debug(r: redis.Redis):
+    r.json().set("str", Path.root_path(), "foo")
+    assert 24 == r.json().debug("MEMORY", "str", Path.root_path())
+    assert 24 == r.json().debug("MEMORY", "str")
+
+    # Technically, the HELP subcommand is valid
+    assert isinstance(r.json().debug("HELP"), list)
+
+
+@pytest.mark.xfail
+def test_resp(r: redis.Redis):
+    obj = {"foo": "bar", "baz": 1, "qaz": True, }
+    r.json().set("obj", Path.root_path(), obj, )
+
+    assert "bar" == r.json().resp("obj", Path("foo"), )
+    assert 1 == r.json().resp("obj", Path("baz"), )
+    assert r.json().resp(
+        "obj",
+        Path("qaz"),
+    )
+    assert isinstance(r.json().resp("obj"), list)
+
+
+def load_types_data(nested_key_name: str) -> Tuple[Dict[str, Any], List[str]]:
+    """Generate a structure with a sample of each JSON type,
+    plus the list of type names encoded as bytes."""
+    type_samples = {
+        "object": {},
+        "array": [],
+        "string": "str",
+        "integer": 42,
+        "number": 1.2,
+        "boolean": False,
+        "null": None,
+    }
+    jdata = {}
+
+    for (k, v) in type_samples.items():
+        jdata[f"nested_{k}"] = {nested_key_name: v}
+
+    return jdata, [k.encode() for k in type_samples.keys()]
+
+
+@pytest.mark.xfail
+def test_debug_dollar(r: redis.Redis):
+    jdata, jtypes = load_types_data("a")
+
+    r.json().set("doc1", "$", jdata)
+
+    # Test multi
+    assert r.json().debug("MEMORY", "doc1", "$..a") == [72, 24, 24, 16, 16, 1, 0]
+
+    # Test single
+    assert r.json().debug("MEMORY", "doc1", "$.nested2.a") == [24]
+
+    # Test legacy
+    assert r.json().debug("MEMORY", "doc1", "..a") == 72
+
+    # Test missing path (defaults to root)
+    assert r.json().debug("MEMORY", "doc1") == 72
+
+    # Test missing key
+    assert r.json().debug("MEMORY", "non_existing_doc", "$..a") == []
+
+
+@pytest.mark.xfail
+def test_resp_dollar(r: redis.Redis, json_data: Dict[str, Any]):
+    r.json().set("doc1", "$", json_data)
+
+    # Test multi
+    res = r.json().resp("doc1", "$..a")
+
+    assert res == [
+        [
+            "{",
+            "A1_B1",
+            10,
+            "A1_B2",
+            "false",
+            "A1_B3",
+            [
+                "{",
+                "A1_B3_C1",
+                None,
+                "A1_B3_C2",
+                [
+                    "[",
+                    "A1_B3_C2_D1_1",
+                    "A1_B3_C2_D1_2",
+                    "-19.5",
+                    "A1_B3_C2_D1_4",
+                    "A1_B3_C2_D1_5",
+                    ["{", "A1_B3_C2_D1_6_E1", "true"],
+                ],
+                "A1_B3_C3",
+                ["[", 1],
+            ],
+            "A1_B4",
+            ["{", "A1_B4_C1", "foo"],
+        ],
+        [
+            "{",
+            "A2_B1",
+            20,
+            "A2_B2",
+            "false",
+            "A2_B3",
+            [
+                "{",
+                "A2_B3_C1",
+                None,
+                "A2_B3_C2",
+                [
+                    "[",
+                    "A2_B3_C2_D1_1",
+                    "A2_B3_C2_D1_2",
+                    "-37.5",
+                    "A2_B3_C2_D1_4",
+                    "A2_B3_C2_D1_5",
+                    ["{", "A2_B3_C2_D1_6_E1", "false"],
+                ],
+                "A2_B3_C3",
+                ["[", 2],
+            ],
+            "A2_B4",
+            ["{", "A2_B4_C1", "bar"],
+        ],
+    ]
+
+    # Test single
+    res_single = r.json().resp("doc1", "$.L1.a")
+    assert res_single == [
+        [
+            "{",
+            "A1_B1",
+            10,
+            "A1_B2",
+            "false",
+            "A1_B3",
+            [
+                "{",
+                "A1_B3_C1",
+                None,
+                "A1_B3_C2",
+                [
+                    "[",
+                    "A1_B3_C2_D1_1",
+                    "A1_B3_C2_D1_2",
+                    "-19.5",
+                    "A1_B3_C2_D1_4",
+                    "A1_B3_C2_D1_5",
+                    ["{", "A1_B3_C2_D1_6_E1", "true"],
+                ],
+                "A1_B3_C3",
+                ["[", 1],
+            ],
+            "A1_B4",
+            ["{", "A1_B4_C1", "foo"],
+        ]
+    ]
+
+    # Test missing path
+    r.json().resp("doc1", "$.nowhere")
+
+    # Test missing key
+    # with pytest.raises(exceptions.ResponseError):
+    r.json().resp("non_existing_doc", "$..a")
diff --git a/test/test_mixins/__init__.py b/test/test_mixins/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/test/test_mixins/test_bitmap_commands.py b/test/test_mixins/test_bitmap_commands.py
new file mode 100644
index 0000000..576e6cb
--- /dev/null
+++ b/test/test_mixins/test_bitmap_commands.py
@@ -0,0 +1,210 @@
+import pytest
+import redis
+import redis.client
+
+from test.testtools import raw_command
+
+
+def test_getbit(r: redis.Redis):
+    r.setbit('foo', 3, 1)
+    assert r.getbit('foo', 0) == 0
+    assert r.getbit('foo', 1) == 0
+    assert r.getbit('foo', 2) == 0
+    assert r.getbit('foo', 3) == 1
+    assert r.getbit('foo', 4) == 0
+    assert r.getbit('foo', 100) == 0
+
+
+def test_getbit_wrong_type(r: redis.Redis):
+    r.rpush('foo', b'x')
+    with pytest.raises(redis.ResponseError):
+        r.getbit('foo', 1)
+
+
+def test_multiple_bits_set(r: redis.Redis):
+    r.setbit('foo', 1, 1)
+    r.setbit('foo', 3, 1)
+    r.setbit('foo', 5, 1)
+
+    assert r.getbit('foo', 0) == 0
+    assert r.getbit('foo', 1) == 1
+    assert r.getbit('foo', 2) == 0
+    assert r.getbit('foo', 3) == 1
+    assert r.getbit('foo', 4) == 0
+    assert r.getbit('foo', 5) == 1
+    assert r.getbit('foo', 6) == 0
+
+
+def test_unset_bits(r: redis.Redis):
+    r.setbit('foo', 1, 1)
+    r.setbit('foo', 2, 0)
+    r.setbit('foo', 3, 1)
+    assert r.getbit('foo', 1) == 1
+    r.setbit('foo', 1, 0)
+    assert r.getbit('foo', 1) == 0
+    r.setbit('foo', 3, 0)
+    assert r.getbit('foo', 3) == 0
+
+
+def test_get_set_bits(r: redis.Redis):
+    # set bit 5
+    assert not r.setbit('a', 5, True)
+    assert r.getbit('a', 5)
+    # unset bit 4
+    assert not r.setbit('a', 4, False)
+    assert not r.getbit('a', 4)
+    # set bit 4
+    assert not r.setbit('a', 4, True)
+    assert r.getbit('a', 4)
+    # set bit 5 again
+    assert r.setbit('a', 5, True)
+    assert r.getbit('a', 5)
+
+
+def test_setbits_and_getkeys(r: redis.Redis):
+    # The bit operations and the get commands
+    # should play nicely with each other.
+    r.setbit('foo', 1, 1)
+    assert r.get('foo') == b'@'
+    r.setbit('foo', 2, 1)
+    assert r.get('foo') == b'`'
+    r.setbit('foo', 3, 1)
+    assert r.get('foo') == b'p'
+    r.setbit('foo', 9, 1)
+    assert r.get('foo') == b'p@'
+    r.setbit('foo', 54, 1)
+    assert r.get('foo') == b'p@\x00\x00\x00\x00\x02'
+
+
+def test_setbit_wrong_type(r: redis.Redis):
+    r.rpush('foo', b'x')
+    with pytest.raises(redis.ResponseError):
+        r.setbit('foo', 0, 1)
+
+
+def test_setbit_expiry(r: redis.Redis):
+    r.set('foo', b'0x00', ex=10)
+    r.setbit('foo', 1, 1)
+    assert r.ttl('foo') > 0
+
+
+def test_bitcount(r: redis.Redis):
+    r.delete('foo')
+    assert r.bitcount('foo') == 0
+    r.setbit('foo', 1, 1)
+    assert r.bitcount('foo') == 1
+    r.setbit('foo', 8, 1)
+    assert r.bitcount('foo') == 2
+    assert r.bitcount('foo', 1, 1) == 1
+    r.setbit('foo', 57, 1)
+    assert r.bitcount('foo') == 3
+    r.set('foo', ' ')
+    assert r.bitcount('foo') == 1
+    r.set('key', 'foobar')
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, 'bitcount', 'key', '1', '2', 'dsd')
+    assert r.bitcount('key') == 26
+    assert r.bitcount('key', start=0, end=0) == 4
+    assert r.bitcount('key', start=1, end=1) == 6
+
+
+@pytest.mark.max_server('6.2.7')
+def test_bitcount_mode_redis6(r: redis.Redis):
+    r.set('key', 'foobar')
+    with pytest.raises(redis.ResponseError):
+        r.bitcount('key', start=1, end=1, mode='byte')
+    with pytest.raises(redis.ResponseError):
+        r.bitcount('key', start=1, end=1, mode='bit')
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, 'bitcount', 'key', '1', '2', 'dsd', 'cd')
+
+
+@pytest.mark.min_server('7')
+def test_bitcount_mode_redis7(r: redis.Redis):
+    r.set('key', 'foobar')
+    assert r.bitcount('key', start=1, end=1, mode='byte') == 6
+    assert r.bitcount('key', start=5, end=30, mode='bit') == 17
+    with pytest.raises(redis.ResponseError):
+        r.bitcount('key', start=5, end=30, mode='dscd')
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, 'bitcount', 'key', '1', '2', 'dsd', 'cd')
+
+
+def test_bitcount_wrong_type(r: redis.Redis):
+    r.rpush('foo', b'x')
+    with pytest.raises(redis.ResponseError):
+        r.bitcount('foo')
+
+
+def test_bitop(r: redis.Redis):
+    r.set('key1', 'foobar')
+    r.set('key2', 'abcdef')
+
+    assert r.bitop('and', 'dest', 'key1', 'key2') == 6
+    assert r.get('dest') == b'`bc`ab'
+
+    assert r.bitop('not', 'dest1', 'key1') == 6
+    assert r.get('dest1') == b'\x99\x90\x90\x9d\x9e\x8d'
+
+    assert r.bitop('or', 'dest-or', 'key1', 'key2') == 6
+    assert r.get('dest-or') == b'goofev'
+
+    assert r.bitop('xor', 'dest-xor', 'key1', 'key2') == 6
+    assert r.get('dest-xor') == b'\x07\r\x0c\x06\x04\x14'
+
+
+def test_bitop_errors(r: redis.Redis):
+    r.set('key1', 'foobar')
+    r.set('key2', 'abcdef')
+    r.sadd('key-set', 'member1')
+    with pytest.raises(redis.ResponseError):
+        r.bitop('not', 'dest', 'key1', 'key2')
+    with pytest.raises(redis.ResponseError):
+        r.bitop('badop', 'dest', 'key1', 'key2')
+    with pytest.raises(redis.ResponseError):
+        r.bitop('and', 'dest', 'key1', 'key-set')
+    with pytest.raises(redis.ResponseError):
+        r.bitop('and', 'dest')
+
+
+def test_bitpos(r: redis.Redis):
+    key = "key:bitpos"
+    r.set(key, b"\xff\xf0\x00")
+    assert r.bitpos(key, 0) == 12
+    assert r.bitpos(key, 0, 2, -1) == 16
+    assert r.bitpos(key, 0, -2, -1) == 12
+    r.set(key, b"\x00\xff\xf0")
+    assert r.bitpos(key, 1, 0) == 8
+    assert r.bitpos(key, 1, 1) == 8
+    r.set(key, b"\x00\x00\x00")
+    assert r.bitpos(key, 1) == -1
+    r.set(key, b"\xff\xf0\x00")
+
+
+@pytest.mark.min_server('7')
+def test_bitops_mode_redis7(r: redis.Redis):
+    key = "key:bitpos"
+    r.set(key, b"\xff\xf0\x00")
+    assert r.bitpos(key, 0, 8, -1, 'bit') == 12
+    assert r.bitpos(key, 1, 8, -1, 'bit') == 8
+    with pytest.raises(redis.ResponseError):
+        assert r.bitpos(key, 0, 8, -1, 'bad_mode') == 12
+
+
+@pytest.mark.max_server('6.2.7')
+def test_bitops_mode_redis6(r: redis.Redis):
+    key = "key:bitpos"
+    r.set(key, b"\xff\xf0\x00")
+    with pytest.raises(redis.ResponseError):
+        assert r.bitpos(key, 0, 8, -1, 'bit') == 12
+
+
+def test_bitpos_wrong_arguments(r: redis.Redis):
+    key = "key:bitpos:wrong:args"
+    r.set(key, b"\xff\xf0\x00")
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, 'bitpos', key, '7')
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, 'bitpos', key, 1, '6', '5', 'BYTE', '6')
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, 'bitpos', key)
diff --git a/test/test_mixins/test_generic_commands.py b/test/test_mixins/test_generic_commands.py
new file mode 100644
index 0000000..aecb5c6
--- /dev/null
+++ b/test/test_mixins/test_generic_commands.py
@@ -0,0 +1,666 @@
+from datetime import datetime, timedelta
+from time import sleep, time
+
+import pytest
+import redis
+from redis.exceptions import ResponseError
+
+from fakeredis import _msgs as msgs
+from test.testtools import raw_command
+
+
+@pytest.mark.slow
+def test_expireat_should_expire_key_by_datetime(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    r.expireat('foo', datetime.now() + timedelta(seconds=1))
+    sleep(1.5)
+    assert r.get('foo') is None
+    assert r.expireat('bar', datetime.now()) is False
+
+
+@pytest.mark.slow
+def test_expireat_should_expire_key_by_timestamp(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    r.expireat('foo', int(time() + 1))
+    sleep(1.5)
+    assert r.get('foo') is None
+    assert r.expire('bar', 1) is False
+
+
+def test_expireat_should_return_true_for_existing_key(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.expireat('foo', int(time() + 1)) is True
+
+
+def test_expireat_should_return_false_for_missing_key(r: redis.Redis):
+    assert r.expireat('missing', int(time() + 1)) is False
+
+
+def test_del_operator(r: redis.Redis):
+    r['foo'] = 'bar'
+    del r['foo']
+    assert r.get('foo') is None
+
+
+def test_expire_should_not_handle_floating_point_values(r: redis.Redis):
+    r.set('foo', 'bar')
+    # Each call needs its own pytest.raises block; statements after the
+    # first raise would never execute.
+    with pytest.raises(redis.ResponseError, match='value is not an integer or out of range'):
+        r.expire('something_new', 1.2)
+    with pytest.raises(redis.ResponseError, match='value is not an integer or out of range'):
+        r.pexpire('something_new', 1000.2)
+    with pytest.raises(redis.ResponseError, match='value is not an integer or out of range'):
+        r.expire('some_unused_key', 1.2)
+    with pytest.raises(redis.ResponseError, match='value is not an integer or out of range'):
+        r.pexpire('some_unused_key', 1000.2)
+
+
+def test_ttl_should_return_minus_one_for_non_expiring_key(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    assert r.ttl('foo') == -1
+
+
+def test_sort_range_offset_range(r: redis.Redis):
+    r.rpush('foo', '2')
+    r.rpush('foo', '1')
+    r.rpush('foo', '4')
+    r.rpush('foo', '3')
+
+    assert r.sort('foo', start=0, num=2) == [b'1', b'2']
+
+
+def test_sort_range_offset_range_and_desc(r: redis.Redis):
+    r.rpush('foo', '2')
+    r.rpush('foo', '1')
+    r.rpush('foo', '4')
+    r.rpush('foo', '3')
+
+    assert r.sort("foo", start=0, num=1, desc=True) == [b"4"]
+
+
+def test_sort_range_offset_norange(r: redis.Redis):
+    with pytest.raises(redis.RedisError):
+        r.sort('foo', start=1)
+
+
+def test_sort_range_with_large_range(r: redis.Redis):
+    r.rpush('foo', '2')
+    r.rpush('foo', '1')
+    r.rpush('foo', '4')
+    r.rpush('foo', '3')
+    # num=20 even though len(foo) is 4.
+    assert r.sort('foo', start=1, num=20) == [b'2', b'3', b'4']
+
+
+def test_sort_descending(r: redis.Redis):
+    r.rpush('foo', '1')
+    r.rpush('foo', '2')
+    r.rpush('foo', '3')
+    assert r.sort('foo', desc=True) == [b'3', b'2', b'1']
+
+
+def test_sort_alpha(r: redis.Redis):
+    r.rpush('foo', '2a')
+    r.rpush('foo', '1b')
+    r.rpush('foo', '2b')
+    r.rpush('foo', '1a')
+
+    assert r.sort('foo', alpha=True) == [b'1a', b'1b', b'2a', b'2b']
+
+
+def test_sort_foo(r: redis.Redis):
+    r.rpush('foo', '2a')
+    r.rpush('foo', '1b')
+    r.rpush('foo', '2b')
+    r.rpush('foo', '1a')
+    with pytest.raises(redis.ResponseError):
+        r.sort('foo', alpha=False)
+
+
+def test_sort_empty(r: redis.Redis):
+    assert r.sort('foo') == []
+
+
+def test_sort_wrong_type(r: redis.Redis):
+    r.set('string', '3')
+    with pytest.raises(redis.ResponseError):
+        r.sort('string')
+
+
+def test_sort_with_store_option(r: redis.Redis):
+    r.rpush('foo', '2')
+    r.rpush('foo', '1')
+    r.rpush('foo', '4')
+    r.rpush('foo', '3')
+
+    assert r.sort('foo', store='bar') == 4
+    assert r.lrange('bar', 0, -1) == [b'1', b'2', b'3', b'4']
+
+
+def test_sort_with_by_and_get_option(r: redis.Redis):
+    r.rpush('foo', '2')
+    r.rpush('foo', '1')
+    r.rpush('foo', '4')
+    r.rpush('foo', '3')
+
+    r['weight_1'] = '4'
+    r['weight_2'] = '3'
+    r['weight_3'] = '2'
+    r['weight_4'] = '1'
+
+    r['data_1'] = 'one'
+    r['data_2'] = 'two'
+    r['data_3'] = 'three'
+    r['data_4'] = 'four'
+
+    assert (
+            r.sort('foo', by='weight_*', get='data_*')
+            == [b'four', b'three', b'two', b'one']
+    )
+    assert r.sort('foo', by='weight_*', get='#') == [b'4', b'3', b'2', b'1']
+    assert (
+            r.sort('foo', by='weight_*', get=('data_*', '#'))
+            == [b'four', b'4', b'three', b'3', b'two', b'2', b'one', b'1']
+    )
+    assert r.sort('foo', by='weight_*', get='data_1') == [None, None, None, None]
+    # Test SORT with the parameters in a different order.
+    assert (
+            raw_command(r, 'sort', 'foo', 'get', 'data_*', 'by', 'weight_*', 'get', '#')
+            == [b'four', b'4', b'three', b'3', b'two', b'2', b'one', b'1']
+    )
+
+
+def test_sort_with_hash(r: redis.Redis):
+    r.rpush('foo', 'middle')
+    r.rpush('foo', 'eldest')
+    r.rpush('foo', 'youngest')
+    r.hset('record_youngest', 'age', 1)
+    r.hset('record_youngest', 'name', 'baby')
+
+    r.hset('record_middle', 'age', 10)
+    r.hset('record_middle', 'name', 'teen')
+
+    r.hset('record_eldest', 'age', 20)
+    r.hset('record_eldest', 'name', 'adult')
+
+    assert r.sort('foo', by='record_*->age') == [b'youngest', b'middle', b'eldest']
+    assert (
+            r.sort('foo', by='record_*->age', get='record_*->name')
+            == [b'baby', b'teen', b'adult']
+    )
+
+
+def test_sort_with_set(r: redis.Redis):
+    r.sadd('foo', '3')
+    r.sadd('foo', '1')
+    r.sadd('foo', '2')
+    assert r.sort('foo') == [b'1', b'2', b'3']
+
+
+def test_ttl_should_return_minus_two_for_non_existent_key(r: redis.Redis):
+    assert r.get('foo') is None
+    assert r.ttl('foo') == -2
+
+
+def test_type(r: redis.Redis):
+    r.set('string_key', "value")
+    r.lpush("list_key", "value")
+    r.sadd("set_key", "value")
+    r.zadd("zset_key", {"value": 1})
+    r.hset('hset_key', 'key', 'value')
+
+    assert r.type('string_key') == b'string'
+    assert r.type('list_key') == b'list'
+    assert r.type('set_key') == b'set'
+    assert r.type('zset_key') == b'zset'
+    assert r.type('hset_key') == b'hash'
+    assert r.type('none_key') == b'none'
+
+
+def test_unlink(r: redis.Redis):
+    r.set('foo', 'bar')
+    r.unlink('foo')
+    assert r.get('foo') is None
+
+
+def test_dump_missing(r: redis.Redis):
+    assert r.dump('foo') is None
+
+
+def test_dump_restore(r: redis.Redis):
+    r.set('foo', 'bar')
+    dump = r.dump('foo')
+    r.restore('baz', 0, dump)
+    assert r.get('baz') == b'bar'
+    assert r.ttl('baz') == -1
+
+
+def test_dump_restore_ttl(r: redis.Redis):
+    r.set('foo', 'bar')
+    dump = r.dump('foo')
+    r.restore('baz', 2000, dump)
+    assert r.get('baz') == b'bar'
+    assert 1000 <= r.pttl('baz') <= 2000
+
+
+def test_dump_restore_replace(r: redis.Redis):
+    r.set('foo', 'bar')
+    dump = r.dump('foo')
+    r.set('foo', 'baz')
+    r.restore('foo', 0, dump, replace=True)
+    assert r.get('foo') == b'bar'
+
+
+def test_restore_exists(r: redis.Redis):
+    r.set('foo', 'bar')
+    dump = r.dump('foo')
+    with pytest.raises(redis.exceptions.ResponseError):
+        r.restore('foo', 0, dump)
+
+
+def test_restore_invalid_dump(r: redis.Redis):
+    r.set('foo', 'bar')
+    dump = r.dump('foo')
+    with pytest.raises(redis.exceptions.ResponseError):
+        r.restore('baz', 0, dump[:-1])
+
+
+def test_restore_invalid_ttl(r: redis.Redis):
+    r.set('foo', 'bar')
+    dump = r.dump('foo')
+    with pytest.raises(redis.exceptions.ResponseError):
+        r.restore('baz', -1, dump)
+
+
+def test_set_then_get(r: redis.Redis):
+    assert r.set('foo', 'bar') is True
+    assert r.get('foo') == b'bar'
+
+
+def test_exists(r: redis.Redis):
+    assert 'foo' not in r
+    r.set('foo', 'bar')
+    assert 'foo' in r
+    with pytest.raises(redis.ResponseError, match=msgs.WRONG_ARGS_MSG6.format('exists')[4:]):
+        raw_command(r, 'exists')
+
+
+@pytest.mark.slow
+def test_expire_should_expire_key(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    r.expire('foo', 1)
+    sleep(1.5)
+    assert r.get('foo') is None
+    assert r.expire('bar', 1) is False
+
+
+def test_expire_should_throw_error(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    with pytest.raises(ResponseError):
+        r.expire('foo', 1, nx=True, xx=True)
+    with pytest.raises(ResponseError):
+        r.expire('foo', 1, nx=True, gt=True)
+    with pytest.raises(ResponseError):
+        r.expire('foo', 1, nx=True, lt=True)
+    with pytest.raises(ResponseError):
+        r.expire('foo', 1, gt=True, lt=True)
+
+
+@pytest.mark.max_server('7')
+def test_expire_extra_params_return_error(r: redis.Redis):
+    with pytest.raises(redis.exceptions.ResponseError):
+        r.expire('foo', 1, nx=True)
+
+
+def test_expire_should_return_true_for_existing_key(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.expire('foo', 1) is True
+
+
+def test_expire_should_return_false_for_missing_key(r: redis.Redis):
+    assert r.expire('missing', 1) is False
+
+
+@pytest.mark.slow
+def test_expire_should_expire_key_using_timedelta(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    r.expire('foo', timedelta(seconds=1))
+    sleep(1.5)
+    assert r.get('foo') is None
+    assert r.expire('bar', 1) is False
+
+
+@pytest.mark.slow
+def test_expire_should_expire_immediately_with_millisecond_timedelta(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    r.expire('foo', timedelta(milliseconds=750))
+    assert r.get('foo') is None
+    assert r.expire('bar', 1) is False
+
+
+def test_watch_expire(r: redis.Redis):
+    """EXPIRE should mark a key as changed for WATCH."""
+    r.set('foo', 'bar')
+    with r.pipeline() as p:
+        p.watch('foo')
+        r.expire('foo', 10000)
+        p.multi()
+        p.get('foo')
+        with pytest.raises(redis.exceptions.WatchError):
+            p.execute()
+
+
+@pytest.mark.slow
+def test_pexpire_should_expire_key(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    r.pexpire('foo', 150)
+    sleep(0.2)
+    assert r.get('foo') is None
+    assert r.pexpire('bar', 1) == 0
+
+
+def test_pexpire_should_return_truthy_for_existing_key(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.pexpire('foo', 1)
+
+
+def test_pexpire_should_return_falsey_for_missing_key(r: redis.Redis):
+    assert not r.pexpire('missing', 1)
+
+
+@pytest.mark.slow
+def test_pexpire_should_expire_key_using_timedelta(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    r.pexpire('foo', timedelta(milliseconds=750))
+    sleep(0.5)
+    assert r.get('foo') == b'bar'
+    sleep(0.5)
+    assert r.get('foo') is None
+    assert r.pexpire('bar', 1) == 0
+
+
+@pytest.mark.slow
+def test_pexpireat_should_expire_key_by_datetime(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    r.pexpireat('foo', datetime.now() + timedelta(milliseconds=150))
+    sleep(0.2)
+    assert r.get('foo') is None
+    assert r.pexpireat('bar', datetime.now()) == 0
+
+
+@pytest.mark.slow
+def test_pexpireat_should_expire_key_by_timestamp(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    r.pexpireat('foo', int(time() * 1000 + 150))
+    sleep(0.2)
+    assert r.get('foo') is None
+    assert r.expire('bar', 1) is False
+
+
+def test_pexpireat_should_return_true_for_existing_key(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.pexpireat('foo', int(time() * 1000 + 150))
+
+
+def test_pexpireat_should_return_false_for_missing_key(r: redis.Redis):
+    assert not r.pexpireat('missing', int(time() * 1000 + 150))
+
+
+def test_pttl_should_return_minus_one_for_non_expiring_key(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    assert r.pttl('foo') == -1
+
+
+def test_pttl_should_return_minus_two_for_non_existent_key(r: redis.Redis):
+    assert r.get('foo') is None
+    assert r.pttl('foo') == -2
+
+
+def test_randomkey_returns_none_on_empty_db(r: redis.Redis):
+    assert r.randomkey() is None
+
+
+def test_randomkey_returns_existing_key(r: redis.Redis):
+    r.set("foo", 1)
+    r.set("bar", 2)
+    r.set("baz", 3)
+    assert r.randomkey().decode() in ("foo", "bar", "baz")
+
+
+def test_persist(r: redis.Redis):
+    r.set('foo', 'bar', ex=20)
+    assert r.persist('foo') == 1
+    assert r.ttl('foo') == -1
+    assert r.persist('foo') == 0
+
+
+def test_watch_persist(r: redis.Redis):
+    """PERSIST should mark a variable as changed."""
+    r.set('foo', 'bar', ex=10000)
+    with r.pipeline() as p:
+        p.watch('foo')
+        r.persist('foo')
+        p.multi()
+        p.get('foo')
+        with pytest.raises(redis.exceptions.WatchError):
+            p.execute()
+
+
+def test_set_existing_key_persists(r: redis.Redis):
+    r.set('foo', 'bar', ex=20)
+    r.set('foo', 'foo')
+    assert r.ttl('foo') == -1
+
+
+def test_set_non_str_keys(r: redis.Redis):
+    assert r.set(2, 'bar') is True
+    assert r.get(2) == b'bar'
+    assert r.get('2') == b'bar'
+
+
+def test_getset_not_exist(r: redis.Redis):
+    val = r.getset('foo', 'bar')
+    assert val is None
+    assert r.get('foo') == b'bar'
+
+
+def test_get_float_type(r: redis.Redis):  # Test for issue #58
+    r.set('key', 123)
+    assert r.get('key') == b'123'
+    r.incr('key')
+    assert r.get('key') == b'124'
+
+
+def test_set_float_value(r: redis.Redis):
+    x = 1.23456789123456789
+    r.set('foo', x)
+    assert float(r.get('foo')) == x
+
+
+@pytest.mark.min_server('7')
+def test_expire_should_not_expire__when_no_expire_is_set(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    assert r.expire('foo', 1, xx=True) == 0
+
+
+@pytest.mark.min_server('7')
+def test_expire_should_not_expire__when_expire_is_set(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    assert r.expire('foo', 1, nx=True) == 1
+    assert r.expire('foo', 2, nx=True) == 0
+
+
+@pytest.mark.min_server('7')
+def test_expire_should_expire__when_expire_is_greater(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    assert r.expire('foo', 100) == 1
+    assert r.get('foo') == b'bar'
+    assert r.expire('foo', 200, gt=True) == 1
+
+
+@pytest.mark.min_server('7')
+def test_expire_should_expire__when_expire_is_lessthan(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    assert r.expire('foo', 20) == 1
+    assert r.expire('foo', 10, lt=True) == 1
+
+
+def test_rename(r: redis.Redis):
+    r.set('foo', 'unique value')
+    assert r.rename('foo', 'bar')
+    assert r.get('foo') is None
+    assert r.get('bar') == b'unique value'
+
+
+def test_rename_nonexistent_key(r: redis.Redis):
+    with pytest.raises(redis.ResponseError):
+        r.rename('foo', 'bar')
+
+
+def test_renamenx_doesnt_exist(r: redis.Redis):
+    r.set('foo', 'unique value')
+    assert r.renamenx('foo', 'bar')
+    assert r.get('foo') is None
+    assert r.get('bar') == b'unique value'
+
+
+def test_renamenx_does_exist(r: redis.Redis):
+    r.set('foo', 'unique value')
+    r.set('bar', 'unique value2')
+    assert not r.renamenx('foo', 'bar')
+    assert r.get('foo') == b'unique value'
+    assert r.get('bar') == b'unique value2'
+
+
+def test_rename_expiry(r: redis.Redis):
+    r.set('foo', 'value1', ex=10)
+    r.set('bar', 'value2')
+    r.rename('foo', 'bar')
+    assert r.ttl('bar') > 0
+
+
+def test_keys(r: redis.Redis):
+    r.set('', 'empty')
+    r.set('abc\n', '')
+    r.set('abc\\', '')
+    r.set('abcde', '')
+    r.set(b'\xfe\xcd', '')
+    assert sorted(r.keys()) == [b'', b'abc\n', b'abc\\', b'abcde', b'\xfe\xcd']
+    assert r.keys('??') == [b'\xfe\xcd']
+    # empty pattern not the same as no pattern
+    assert r.keys('') == [b'']
+    # ? must match \n
+    assert sorted(r.keys('abc?')) == [b'abc\n', b'abc\\']
+    # must be anchored at both ends
+    assert r.keys('abc') == []
+    assert r.keys('bcd') == []
+    # wildcard test
+    assert r.keys('a*de') == [b'abcde']
+    # positive groups
+    assert sorted(r.keys('abc[d\n]*')) == [b'abc\n', b'abcde']
+    assert r.keys('abc[c-e]?') == [b'abcde']
+    assert r.keys('abc[e-c]?') == [b'abcde']
+    assert r.keys('abc[e-e]?') == []
+    assert r.keys('abcd[ef') == [b'abcde']
+    assert r.keys('abcd[]') == []
+    # negative groups
+    assert r.keys('abc[^d\\\\]*') == [b'abc\n']
+    assert r.keys('abc[^]e') == [b'abcde']
+    # escaping
+    assert r.keys(r'abc\?e') == []
+    assert r.keys(r'abc\de') == [b'abcde']
+    assert r.keys(r'abc[\d]e') == [b'abcde']
+    # some escaping cases that redis handles strangely
+    assert r.keys('abc\\') == [b'abc\\']
+    assert r.keys(r'abc[\c-e]e') == []
+    assert r.keys(r'abc[c-\e]e') == []
+
+
+def test_contains(r: redis.Redis):
+    assert not r.exists('foo')
+    r.set('foo', 'bar')
+    assert r.exists('foo')
+
+
+def test_delete(r: redis.Redis):
+    r['foo'] = 'bar'
+    assert r.delete('foo') == 1
+    assert r.get('foo') is None
+
+
+@pytest.mark.slow
+def test_delete_expire(r: redis.Redis):
+    r.set("foo", "bar", ex=1)
+    r.delete("foo")
+    r.set("foo", "bar")
+    sleep(2)
+    assert r.get("foo") == b'bar'
+
+
+def test_delete_multiple(r: redis.Redis):
+    r['one'] = 'one'
+    r['two'] = 'two'
+    r['three'] = 'three'
+    # Since redis>=2.7.6 returns number of deleted items.
+    assert r.delete('one', 'two') == 2
+    assert r.get('one') is None
+    assert r.get('two') is None
+    assert r.get('three') == b'three'
+    assert r.delete('one', 'two') == 0
+    # Duplicate and already-deleted keys are not counted toward the total.
+    assert r.delete('two', 'three', 'three') == 1
+    assert r.get('three') is None
+
+
+def test_delete_nonexistent_key(r: redis.Redis):
+    assert r.delete('foo') == 0
+
+
+def test_basic_sort(r: redis.Redis):
+    r.rpush('foo', '2')
+    r.rpush('foo', '1')
+    r.rpush('foo', '3')
+
+    assert r.sort('foo') == [b'1', b'2', b'3']
+    assert raw_command(r, 'sort', 'foo', 'asc') == [b'1', b'2', b'3']
+
+
+def test_key_patterns(r: redis.Redis):
+    r.mset({'one': 1, 'two': 2, 'three': 3, 'four': 4})
+    assert sorted(r.keys('*o*')) == [b'four', b'one', b'two']
+    assert r.keys('t??') == [b'two']
+    assert sorted(r.keys('*')) == [b'four', b'one', b'three', b'two']
+    assert sorted(r.keys()) == [b'four', b'one', b'three', b'two']
+
+
+@pytest.mark.min_server('7')
+def test_watch_when_setbit_does_not_change_value(r: redis.Redis):
+    r.set('foo', b'0')
+
+    with r.pipeline() as p:
+        p.watch('foo')
+        assert r.setbit('foo', 0, 0) == 0
+        assert p.multi() is None
+        assert p.execute() == []
+
+
+def test_from_hypothesis_redis7(r: redis.Redis):
+    r.set('foo', b'0')
+    assert r.setbit('foo', 0, 0) == 0
+    assert r.append('foo', b'') == 1
+
+    r.set(b'', b'')
+    assert r.setbit(b'', 0, 0) == 0
+    assert r.get(b'') == b'\x00'
diff --git a/test/test_mixins/test_geo_commands.py b/test/test_mixins/test_geo_commands.py
new file mode 100644
index 0000000..1d2f1ea
--- /dev/null
+++ b/test/test_mixins/test_geo_commands.py
@@ -0,0 +1,210 @@
+from typing import Dict, Any
+
+import pytest
+import redis
+
+from test import testtools
+
+
+def test_geoadd_ch(r: redis.Redis):
+    values = (2.1909389952632, 41.433791470673, "place1")
+    assert r.geoadd("a", values) == 1
+    values = (2.1909389952632, 31.433791470673, "place1",
+              2.1873744593677, 41.406342043777, "place2",)
+    assert r.geoadd("a", values, ch=True) == 2
+    assert r.zrange("a", 0, -1) == [b"place1", b"place2"]
+
+
+def test_geoadd(r: redis.Redis):
+    values = (2.1909389952632, 41.433791470673, "place1",
+              2.1873744593677, 41.406342043777, "place2",)
+    assert r.geoadd("barcelona", values) == 2
+    assert r.zcard("barcelona") == 2
+
+    values = (2.1909389952632, 41.433791470673, "place1")
+    assert r.geoadd("a", values) == 1
+
+    with pytest.raises(redis.DataError):
+        r.geoadd("barcelona", (1, 2))
+    with pytest.raises(redis.DataError):
+        r.geoadd("t", values, ch=True, nx=True, xx=True)
+    with pytest.raises(redis.ResponseError):
+        testtools.raw_command(r, "geoadd", "barcelona", "1", "2")
+    with pytest.raises(redis.ResponseError):
+        testtools.raw_command(r, "geoadd", "barcelona", "nx", "xx", *values, )
+
+
+def test_geoadd_nx(r: redis.Redis):
+    values = (2.1909389952632, 41.433791470673, "place1",
+              2.1873744593677, 41.406342043777, "place2",)
+    assert r.geoadd("a", values) == 2
+    values = (
+        2.1909389952632, 41.433791470673, b"place1",
+        2.1873744593677, 41.406342043777, b"place2",
+        2.1804738294738, 41.405647879212, b"place3",
+    )
+    assert r.geoadd("a", values, nx=True) == 1
+    assert r.zrange("a", 0, -1) == [b"place3", b"place2", b"place1"]
+
+
+def test_geohash(r: redis.Redis):
+    values = (2.1909389952632, 41.433791470673, "place1",
+              2.1873744593677, 41.406342043777, "place2",)
+    r.geoadd("barcelona", values)
+    assert r.geohash("barcelona", "place1", "place2", "place3") == [
+        "sp3e9yg3kd0",
+        "sp3e9cbc3t0",
+        None,
+    ]
+
+
+def test_geopos(r: redis.Redis):
+    values = (2.1909389952632, 41.433791470673, "place1",
+              2.1873744593677, 41.406342043777, "place2",)
+    r.geoadd("barcelona", values)
+    # Small floating-point errors may be introduced by the geohash encoding.
+    assert r.geopos("barcelona", "place1", "place4", "place2") == [
+        pytest.approx((2.1909389952632, 41.433791470673), 0.00001),
+        None,
+        pytest.approx((2.1873744593677, 41.406342043777), 0.00001),
+    ]
+
+
+def test_geodist(r: redis.Redis):
+    values = (2.1909389952632, 41.433791470673, "place1",
+              2.1873744593677, 41.406342043777, "place2",)
+    assert r.geoadd("barcelona", values) == 2
+    assert r.geodist("barcelona", "place1", "place2") == pytest.approx(3067.4157, 0.0001)
+
+
+def test_geodist_units(r: redis.Redis):
+    values = (2.1909389952632, 41.433791470673, "place1",
+              2.1873744593677, 41.406342043777, "place2",)
+    r.geoadd("barcelona", values)
+    assert r.geodist("barcelona", "place1", "place2", "km") == pytest.approx(3.0674, 0.0001)
+    assert r.geodist("barcelona", "place1", "place2", "mi") == pytest.approx(1.906, 0.0001)
+    assert r.geodist("barcelona", "place1", "place2", "ft") == pytest.approx(10063.6998, 0.0001)
+    with pytest.raises(redis.RedisError):
+        assert r.geodist("x", "y", "z", "inches")
+
+
+def test_geodist_missing_one_member(r: redis.Redis):
+    values = (2.1909389952632, 41.433791470673, "place1")
+    r.geoadd("barcelona", values)
+    assert r.geodist("barcelona", "place1", "missing_member", "km") is None
+
+
+@pytest.mark.parametrize(
+    "long,lat,radius,extra,expected", [
+        (2.191, 41.433, 1000, {}, [b"place1"]),
+        (2.187, 41.406, 1000, {}, [b"place2"]),
+        (1, 2, 1000, {}, []),
+        (2.191, 41.433, 1, {"unit": "km"}, [b"place1"]),
+        (2.191, 41.433, 3000, {"count": 1}, [b"place1"]),
+    ])
+def test_georadius(
+        r: redis.Redis, long: float, lat: float, radius: float,
+        extra: Dict[str, Any],
+        expected):
+    values = (2.1909389952632, 41.433791470673, "place1",
+              2.1873744593677, 41.406342043777, "place2",)
+    r.geoadd("barcelona", values)
+    assert r.georadius("barcelona", long, lat, radius, **extra) == expected
+
+
+@pytest.mark.parametrize(
+    "member,radius,extra,expected", [
+        ('place1', 1000, {}, [b"place1"]),
+        ('place2', 1000, {}, [b"place2"]),
+        ('place1', 1, {"unit": "km"}, [b"place1"]),
+        ('place1', 3000, {"count": 1}, [b"place1"]),
+    ])
+def test_georadiusbymember(
+        r: redis.Redis, member: str, radius: float,
+        extra: Dict[str, Any],
+        expected):
+    values = (2.1909389952632, 41.433791470673, "place1",
+              2.1873744593677, 41.406342043777, b"place2",)
+    r.geoadd("barcelona", values)
+    assert r.georadiusbymember("barcelona", member, radius, **extra) == expected
+    assert r.georadiusbymember("barcelona", member, radius, **extra, store_dist='extract') == len(expected)
+    assert r.zcard("extract") == len(expected)
+
+
+def test_georadius_with(r: redis.Redis):
+    values = (2.1909389952632, 41.433791470673, "place1",
+              2.1873744593677, 41.406342043777, "place2",)
+
+    r.geoadd("barcelona", values)
+    # Test several option combinations to exercise the response parser.
+    res = r.georadius("barcelona", 2.191, 41.433, 1, unit="km", withdist=True, withcoord=True, )
+    assert res == [pytest.approx([b"place1", 0.0881, pytest.approx((2.1909, 41.4337), 0.0001)], 0.001)]
+
+    res = r.georadius("barcelona", 2.191, 41.433, 1, unit="km", withdist=True, withcoord=True)
+    assert res == [pytest.approx([b"place1", 0.0881, pytest.approx((2.1909, 41.4337), 0.0001)], 0.001)]
+
+    res = r.georadius("barcelona", 2.191, 41.433, 1, unit="km", withcoord=True)
+    assert res == [[b"place1", pytest.approx((2.1909, 41.4337), 0.0001)]]
+
+    # Test a query that matches no members.
+    assert (r.georadius("barcelona", 2, 1, 1, unit="km", withdist=True, withcoord=True, ) == [])
+
+
+def test_georadius_count(r: redis.Redis):
+    values = (2.1909389952632, 41.433791470673, "place1",
+              2.1873744593677, 41.406342043777, "place2",)
+
+    r.geoadd("barcelona", values)
+
+    assert r.georadius("barcelona", 2.191, 41.433, 3000, count=1, store='barcelona') == 1
+    assert r.georadius("barcelona", 2.191, 41.433, 3000, store_dist='extract') == 1
+    assert r.zcard("extract") == 1
+    res = r.georadius("barcelona", 2.191, 41.433, 3000, count=1, any=True)
+    assert (res == [b"place2"]) or res == [b'place1']
+
+    values = (13.361389, 38.115556, "Palermo",
+              15.087269, 37.502669, "Catania",)
+
+    r.geoadd("Sicily", values)
+    assert testtools.raw_command(
+        r, "GEORADIUS", "Sicily", "15", "37", "200", "km",
+        "STOREDIST", "neardist", "STORE", "near") == 2
+    assert r.zcard("near") == 2
+    assert r.zcard("neardist") == 0
+
+
+def test_georadius_errors(r: redis.Redis):
+    values = (13.361389, 38.115556, "Palermo",
+              15.087269, 37.502669, "Catania",)
+
+    r.geoadd("Sicily", values)
+
+    with pytest.raises(redis.DataError):  # Unsupported unit
+        r.georadius("barcelona", 2.191, 41.433, 3000, unit='dsf')
+    with pytest.raises(redis.ResponseError):  # Unsupported unit
+        testtools.raw_command(
+            r, "GEORADIUS", "Sicily", "15", "37", "200", "ddds",
+            "STOREDIST", "neardist", "STORE", "near")
+
+    bad_values = (13.361389, 38.115556, "Palermo", 15.087269, "Catania",)
+    with pytest.raises(redis.DataError):
+        r.geoadd('newgroup', bad_values)
+    with pytest.raises(redis.ResponseError):
+        testtools.raw_command(r, 'geoadd', 'newgroup', *bad_values)
+
+
+def test_geosearch(r: redis.Redis):
+    values = (
+        2.1909389952632, 41.433791470673, b"place1",
+        2.1873744593677, 41.406342043777, b"place2",
+        2.583333, 41.316667, b"place3",
+    )
+    r.geoadd("barcelona", values)
+    assert r.geosearch("barcelona", longitude=2.191, latitude=41.433, radius=1000) == [b"place1"]
+    assert r.geosearch("barcelona", longitude=2.187, latitude=41.406, radius=1000) == [b"place2"]
+    # assert r.geosearch("barcelona", longitude=2.191, latitude=41.433, height=1000, width=1000) == [b"place1"]
+    assert set(r.geosearch("barcelona", member="place3", radius=100, unit="km")) == {b"place2", b"place1", b"place3", }
+    # test count
+    assert r.geosearch("barcelona", member="place3", radius=100, unit="km", count=2) == [b"place3", b"place2"]
+    assert r.geosearch("barcelona", member="place3", radius=100, unit="km", count=1, any=True)[0] in [
+        b"place1", b"place3", b"place2"]
diff --git a/test/test_mixins/test_hash_commands.py b/test/test_mixins/test_hash_commands.py
new file mode 100644
index 0000000..82838d7
--- /dev/null
+++ b/test/test_mixins/test_hash_commands.py
@@ -0,0 +1,291 @@
+import pytest
+import redis
+import redis.client
+
+
+# Tests for the hash type.
+
+def test_hstrlen_missing(r: redis.Redis):
+    assert r.hstrlen('foo', 'doesnotexist') == 0
+
+    r.hset('foo', 'key', 'value')
+    assert r.hstrlen('foo', 'doesnotexist') == 0
+
+
+def test_hstrlen(r: redis.Redis):
+    r.hset('foo', 'key', 'value')
+    assert r.hstrlen('foo', 'key') == 5
+
+
+def test_hset_then_hget(r: redis.Redis):
+    assert r.hset('foo', 'key', 'value') == 1
+    assert r.hget('foo', 'key') == b'value'
+
+
+def test_hset_update(r: redis.Redis):
+    assert r.hset('foo', 'key', 'value') == 1
+    assert r.hset('foo', 'key', 'value') == 0
+
+
+def test_hset_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'bar': 1})
+    with pytest.raises(redis.ResponseError):
+        r.hset('foo', 'key', 'value')
+
+
+def test_hgetall(r: redis.Redis):
+    assert r.hset('foo', 'k1', 'v1') == 1
+    assert r.hset('foo', 'k2', 'v2') == 1
+    assert r.hset('foo', 'k3', 'v3') == 1
+    assert r.hgetall('foo') == {
+        b'k1': b'v1',
+        b'k2': b'v2',
+        b'k3': b'v3'
+    }
+
+
+def test_hgetall_empty_key(r: redis.Redis):
+    assert r.hgetall('foo') == {}
+
+
+def test_hgetall_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'bar': 1})
+    with pytest.raises(redis.ResponseError):
+        r.hgetall('foo')
+
+
+def test_hexists(r: redis.Redis):
+    r.hset('foo', 'bar', 'v1')
+    assert r.hexists('foo', 'bar') == 1
+    assert r.hexists('foo', 'baz') == 0
+    assert r.hexists('bar', 'bar') == 0
+
+
+def test_hexists_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'bar': 1})
+    with pytest.raises(redis.ResponseError):
+        r.hexists('foo', 'key')
+
+
+def test_hkeys(r: redis.Redis):
+    r.hset('foo', 'k1', 'v1')
+    r.hset('foo', 'k2', 'v2')
+    assert set(r.hkeys('foo')) == {b'k1', b'k2'}
+    assert set(r.hkeys('bar')) == set()
+
+
+def test_hkeys_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'bar': 1})
+    with pytest.raises(redis.ResponseError):
+        r.hkeys('foo')
+
+
+def test_hlen(r: redis.Redis):
+    r.hset('foo', 'k1', 'v1')
+    r.hset('foo', 'k2', 'v2')
+    assert r.hlen('foo') == 2
+
+
+def test_hlen_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'bar': 1})
+    with pytest.raises(redis.ResponseError):
+        r.hlen('foo')
+
+
+def test_hvals(r: redis.Redis):
+    r.hset('foo', 'k1', 'v1')
+    r.hset('foo', 'k2', 'v2')
+    assert set(r.hvals('foo')) == {b'v1', b'v2'}
+    assert set(r.hvals('bar')) == set()
+
+
+def test_hvals_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'bar': 1})
+    with pytest.raises(redis.ResponseError):
+        r.hvals('foo')
+
+
+def test_hmget(r: redis.Redis):
+    r.hset('foo', 'k1', 'v1')
+    r.hset('foo', 'k2', 'v2')
+    r.hset('foo', 'k3', 'v3')
+    # Normal case.
+    assert r.hmget('foo', ['k1', 'k3']) == [b'v1', b'v3']
+    assert r.hmget('foo', 'k1', 'k3') == [b'v1', b'v3']
+    # Key does not exist.
+    assert r.hmget('bar', ['k1', 'k3']) == [None, None]
+    assert r.hmget('bar', 'k1', 'k3') == [None, None]
+    # Some keys in the hash do not exist.
+    assert r.hmget('foo', ['k1', 'k500']) == [b'v1', None]
+    assert r.hmget('foo', 'k1', 'k500') == [b'v1', None]
+
+
+def test_hmget_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'bar': 1})
+    with pytest.raises(redis.ResponseError):
+        r.hmget('foo', 'key1', 'key2')
+
+
+def test_hdel(r: redis.Redis):
+    r.hset('foo', 'k1', 'v1')
+    r.hset('foo', 'k2', 'v2')
+    r.hset('foo', 'k3', 'v3')
+    assert r.hget('foo', 'k1') == b'v1'
+    assert r.hdel('foo', 'k1') == 1
+    assert r.hget('foo', 'k1') is None
+    assert r.hdel('foo', 'k1') == 0
+    # Since redis>=2.7.6 returns number of deleted items.
+    assert r.hdel('foo', 'k2', 'k3') == 2
+    assert r.hget('foo', 'k2') is None
+    assert r.hget('foo', 'k3') is None
+    assert r.hdel('foo', 'k2', 'k3') == 0
+
+
+def test_hdel_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'bar': 1})
+    with pytest.raises(redis.ResponseError):
+        r.hdel('foo', 'key')
+
+
+def test_hincrby(r: redis.Redis):
+    r.hset('foo', 'counter', 0)
+    assert r.hincrby('foo', 'counter') == 1
+    assert r.hincrby('foo', 'counter') == 2
+    assert r.hincrby('foo', 'counter') == 3
+
+
+def test_hincrby_with_no_starting_value(r: redis.Redis):
+    assert r.hincrby('foo', 'counter') == 1
+    assert r.hincrby('foo', 'counter') == 2
+    assert r.hincrby('foo', 'counter') == 3
+
+
+def test_hincrby_with_range_param(r: redis.Redis):
+    assert r.hincrby('foo', 'counter', 2) == 2
+    assert r.hincrby('foo', 'counter', 2) == 4
+    assert r.hincrby('foo', 'counter', 2) == 6
+
+
+def test_hincrby_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'bar': 1})
+    with pytest.raises(redis.ResponseError):
+        r.hincrby('foo', 'key', 2)
+
+
+def test_hincrbyfloat(r: redis.Redis):
+    r.hset('foo', 'counter', 0.0)
+    assert r.hincrbyfloat('foo', 'counter') == 1.0
+    assert r.hincrbyfloat('foo', 'counter') == 2.0
+    assert r.hincrbyfloat('foo', 'counter') == 3.0
+
+
+def test_hincrbyfloat_with_no_starting_value(r: redis.Redis):
+    assert r.hincrbyfloat('foo', 'counter') == 1.0
+    assert r.hincrbyfloat('foo', 'counter') == 2.0
+    assert r.hincrbyfloat('foo', 'counter') == 3.0
+
+
+def test_hincrbyfloat_with_range_param(r: redis.Redis):
+    assert r.hincrbyfloat('foo', 'counter', 0.1) == pytest.approx(0.1)
+    assert r.hincrbyfloat('foo', 'counter', 0.1) == pytest.approx(0.2)
+    assert r.hincrbyfloat('foo', 'counter', 0.1) == pytest.approx(0.3)
+
+
+def test_hincrbyfloat_on_non_float_value_raises_error(r: redis.Redis):
+    r.hset('foo', 'counter', 'cat')
+    with pytest.raises(redis.ResponseError):
+        r.hincrbyfloat('foo', 'counter')
+
+
+def test_hincrbyfloat_with_non_float_amount_raises_error(r: redis.Redis):
+    with pytest.raises(redis.ResponseError):
+        r.hincrbyfloat('foo', 'counter', 'cat')
+
+
+def test_hincrbyfloat_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'bar': 1})
+    with pytest.raises(redis.ResponseError):
+        r.hincrbyfloat('foo', 'key', 0.1)
+
+
+def test_hincrbyfloat_precision(r: redis.Redis):
+    x = 1.23456789123456789
+    assert r.hincrbyfloat('foo', 'bar', x) == x
+    assert float(r.hget('foo', 'bar')) == x
+
+
+def test_hsetnx(r: redis.Redis):
+    assert r.hsetnx('foo', 'newkey', 'v1') == 1
+    assert r.hsetnx('foo', 'newkey', 'v1') == 0
+    assert r.hget('foo', 'newkey') == b'v1'
+
+
+def test_hmset_empty_raises_error(r: redis.Redis):
+    with pytest.raises(redis.DataError):
+        r.hmset('foo', {})
+
+
+def test_hmset(r: redis.Redis):
+    r.hset('foo', 'k1', 'v1')
+    assert r.hmset('foo', {'k2': 'v2', 'k3': 'v3'}) is True
+
+
+def test_hmset_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'bar': 1})
+    with pytest.raises(redis.ResponseError):
+        r.hmset('foo', {'key': 'value'})
+
+
+def test_empty_hash(r: redis.Redis):
+    r.hset('foo', 'bar', 'baz')
+    r.hdel('foo', 'bar')
+    assert not r.exists('foo')
+
+
+def test_hset_removing_last_field_delete_key(r: redis.Redis):
+    r.hset(b'3L', b'f1', b'v1')
+    r.hdel(b'3L', b'f1')
+    assert r.keys('*') == []
+
+
+def test_hscan(r: redis.Redis):
+    # Set up the data
+    name = 'hscan-test'
+    for ix in range(20):
+        k = 'key:%s' % ix
+        v = 'result:%s' % ix
+        r.hset(name, k, v)
+    expected = r.hgetall(name)
+    assert len(expected) == 20  # Ensure we know what we're testing
+
+    # Test that we page through the results and get everything out
+    results = {}
+    cursor = '0'
+    while cursor != 0:
+        cursor, data = r.hscan(name, cursor, count=6)
+        results.update(data)
+    assert expected == results
+
+    # Test the iterator version
+    results = {}
+    for key, val in r.hscan_iter(name, count=6):
+        results[key] = val
+    assert expected == results
+
+    # Now test that the MATCH functionality works
+    results = {}
+    cursor = '0'
+    while cursor != 0:
+        cursor, data = r.hscan(name, cursor, match='*7', count=100)
+        results.update(data)
+    assert b'key:7' in results
+    assert b'key:17' in results
+    assert len(results) == 2
+
+    # Test the match on iterator
+    results = {}
+    for key, val in r.hscan_iter(name, match='*7'):
+        results[key] = val
+    assert b'key:7' in results
+    assert b'key:17' in results
+    assert len(results) == 2
diff --git a/test/test_mixins/test_list_commands.py b/test/test_mixins/test_list_commands.py
new file mode 100644
index 0000000..830e525
--- /dev/null
+++ b/test/test_mixins/test_list_commands.py
@@ -0,0 +1,599 @@
+import threading
+from time import sleep
+
+import pytest
+import redis
+import redis.client
+
+from .. import testtools
+
+
+def test_lpush_then_lrange_all(r: redis.Redis):
+    assert r.lpush('foo', 'bar') == 1
+    assert r.lpush('foo', 'baz') == 2
+    assert r.lpush('foo', 'bam', 'buzz') == 4
+    assert r.lrange('foo', 0, -1) == [b'buzz', b'bam', b'baz', b'bar']
+
+
+def test_lpush_then_lrange_portion(r: redis.Redis):
+    r.lpush('foo', 'one')
+    r.lpush('foo', 'two')
+    r.lpush('foo', 'three')
+    r.lpush('foo', 'four')
+    assert r.lrange('foo', 0, 2) == [b'four', b'three', b'two']
+    assert r.lrange('foo', 0, 3) == [b'four', b'three', b'two', b'one']
+
+
+def test_lrange_negative_indices(r: redis.Redis):
+    r.rpush('foo', 'a', 'b', 'c')
+    assert r.lrange('foo', -1, -2) == []
+    assert r.lrange('foo', -2, -1) == [b'b', b'c']
+
+
+def test_lpush_key_does_not_exist(r: redis.Redis):
+    assert r.lrange('foo', 0, -1) == []
+
+
+def test_lpush_with_nonstr_key(r: redis.Redis):
+    r.lpush(1, 'one')
+    r.lpush(1, 'two')
+    r.lpush(1, 'three')
+    assert r.lrange(1, 0, 2) == [b'three', b'two', b'one']
+    assert r.lrange('1', 0, 2) == [b'three', b'two', b'one']
+
+
+def test_lpush_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.lpush('foo', 'element')
+
+
+def test_llen(r: redis.Redis):
+    r.lpush('foo', 'one')
+    r.lpush('foo', 'two')
+    r.lpush('foo', 'three')
+    assert r.llen('foo') == 3
+
+
+def test_llen_no_exist(r: redis.Redis):
+    assert r.llen('foo') == 0
+
+
+def test_llen_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.llen('foo')
+
+
+def test_lrem_positive_count(r: redis.Redis):
+    r.lpush('foo', 'same')
+    r.lpush('foo', 'same')
+    r.lpush('foo', 'different')
+    r.lrem('foo', 2, 'same')
+    assert r.lrange('foo', 0, -1) == [b'different']
+
+
+def test_lrem_negative_count(r: redis.Redis):
+    r.lpush('foo', 'removeme')
+    r.lpush('foo', 'three')
+    r.lpush('foo', 'two')
+    r.lpush('foo', 'one')
+    r.lpush('foo', 'removeme')
+    r.lrem('foo', -1, 'removeme')
+    # Should remove it from the end of the list,
+    # leaving the 'removeme' at the front of the list alone.
+    assert r.lrange('foo', 0, -1) == [b'removeme', b'one', b'two', b'three']
+
+
+def test_lrem_zero_count(r: redis.Redis):
+    r.lpush('foo', 'one')
+    r.lpush('foo', 'one')
+    r.lpush('foo', 'one')
+    r.lrem('foo', 0, 'one')
+    assert r.lrange('foo', 0, -1) == []
+
+
+def test_lrem_default_value(r: redis.Redis):
+    r.lpush('foo', 'one')
+    r.lpush('foo', 'one')
+    r.lpush('foo', 'one')
+    r.lrem('foo', 0, 'one')
+    assert r.lrange('foo', 0, -1) == []
+
+
+def test_lrem_does_not_exist(r: redis.Redis):
+    r.lpush('foo', 'one')
+    r.lrem('foo', 0, 'one')
+    # These should be noops.
+    r.lrem('foo', -2, 'one')
+    r.lrem('foo', 2, 'one')
+
+
+def test_lrem_return_value(r: redis.Redis):
+    r.lpush('foo', 'one')
+    count = r.lrem('foo', 0, 'one')
+    assert count == 1
+    assert r.lrem('foo', 0, 'one') == 0
+
+
+def test_lrem_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.lrem('foo', 0, 'element')
+
+
+def test_rpush(r: redis.Redis):
+    r.rpush('foo', 'one')
+    r.rpush('foo', 'two')
+    r.rpush('foo', 'three')
+    r.rpush('foo', 'four', 'five')
+    assert r.lrange('foo', 0, -1) == [b'one', b'two', b'three', b'four', b'five']
+
+
+def test_rpush_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.rpush('foo', 'element')
+
+
+def test_lpop(r: redis.Redis):
+    assert r.rpush('foo', 'one') == 1
+    assert r.rpush('foo', 'two') == 2
+    assert r.rpush('foo', 'three') == 3
+    assert r.lpop('foo') == b'one'
+    assert r.lpop('foo') == b'two'
+    assert r.lpop('foo') == b'three'
+
+
+def test_lpop_empty_list(r: redis.Redis):
+    r.rpush('foo', 'one')
+    r.lpop('foo')
+    assert r.lpop('foo') is None
+    # Verify what happens if we try to pop from a key
+    # we've never seen before.
+    assert r.lpop('noexists') is None
+
+
+def test_lpop_zero_elem(r: redis.Redis):
+    r.rpush(b'\x00', b'')
+    assert r.lpop(b'\x00', 0) == []
+
+
+def test_lpop_zero_non_existing_list(r: redis.Redis):
+    assert r.lpop(b'', 0) is None
+
+
+def test_lpop_zero_wrong_type(r: redis.Redis):
+    r.set(b'', b'')
+    with pytest.raises(redis.ResponseError):
+        r.lpop(b'', 0)
+
+
+def test_lpop_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.lpop('foo')
+
+
+@pytest.mark.min_server('6.2')
+def test_lpop_count(r: redis.Redis):
+    assert r.rpush('foo', 'one') == 1
+    assert r.rpush('foo', 'two') == 2
+    assert r.rpush('foo', 'three') == 3
+    assert testtools.raw_command(r, 'lpop', 'foo', 2) == [b'one', b'two']
+    # See https://github.com/redis/redis/issues/9680
+    raw = testtools.raw_command(r, 'rpop', 'foo', 0)
+    assert raw is None or raw == []  # https://github.com/redis/redis/pull/10095
+
+
+@pytest.mark.min_server('6.2')
+def test_lpop_count_negative(r: redis.Redis):
+    with pytest.raises(redis.ResponseError):
+        testtools.raw_command(r, 'lpop', 'foo', -1)
+
+
+def test_lset(r: redis.Redis):
+    r.rpush('foo', 'one')
+    r.rpush('foo', 'two')
+    r.rpush('foo', 'three')
+    r.lset('foo', 0, 'four')
+    r.lset('foo', -2, 'five')
+    assert r.lrange('foo', 0, -1) == [b'four', b'five', b'three']
+
+
+def test_lset_index_out_of_range(r: redis.Redis):
+    r.rpush('foo', 'one')
+    with pytest.raises(redis.ResponseError):
+        r.lset('foo', 3, 'three')
+
+
+def test_lset_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.lset('foo', 0, 'element')
+
+
+def test_rpushx(r: redis.Redis):
+    r.rpush('foo', 'one')
+    r.rpushx('foo', 'two')
+    r.rpushx('bar', 'three')
+    assert r.lrange('foo', 0, -1) == [b'one', b'two']
+    assert r.lrange('bar', 0, -1) == []
+
+
+def test_rpushx_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.rpushx('foo', 'element')
+
+
+def test_ltrim(r: redis.Redis):
+    r.rpush('foo', 'one')
+    r.rpush('foo', 'two')
+    r.rpush('foo', 'three')
+    r.rpush('foo', 'four')
+
+    assert r.ltrim('foo', 1, 3)
+    assert r.lrange('foo', 0, -1) == [b'two', b'three', b'four']
+    assert r.ltrim('foo', 1, -1)
+    assert r.lrange('foo', 0, -1) == [b'three', b'four']
+
+
+def test_ltrim_with_non_existent_key(r: redis.Redis):
+    assert r.ltrim('foo', 0, -1)
+
+
+def test_ltrim_expiry(r: redis.Redis):
+    r.rpush('foo', 'one', 'two', 'three')
+    r.expire('foo', 10)
+    r.ltrim('foo', 1, 2)
+    assert r.ttl('foo') > 0
+
+
+def test_ltrim_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.ltrim('foo', 1, -1)
+
+
+def test_lindex(r: redis.Redis):
+    r.rpush('foo', 'one')
+    r.rpush('foo', 'two')
+    assert r.lindex('foo', 0) == b'one'
+    assert r.lindex('foo', 4) is None
+    assert r.lindex('bar', 4) is None
+
+
+def test_lindex_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.lindex('foo', 0)
+
+
+def test_lpushx(r: redis.Redis):
+    r.lpush('foo', 'two')
+    r.lpushx('foo', 'one')
+    r.lpushx('bar', 'one')
+    assert r.lrange('foo', 0, -1) == [b'one', b'two']
+    assert r.lrange('bar', 0, -1) == []
+
+
+def test_lpushx_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.lpushx('foo', 'element')
+
+
+def test_rpop(r: redis.Redis):
+    assert r.rpop('foo') is None
+    r.rpush('foo', 'one')
+    r.rpush('foo', 'two')
+    assert r.rpop('foo') == b'two'
+    assert r.rpop('foo') == b'one'
+    assert r.rpop('foo') is None
+
+
+def test_rpop_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.rpop('foo')
+
+
+@pytest.mark.min_server('6.2')
+def test_rpop_count(r: redis.Redis):
+    assert r.rpush('foo', 'one') == 1
+    assert r.rpush('foo', 'two') == 2
+    assert r.rpush('foo', 'three') == 3
+    assert testtools.raw_command(r, 'rpop', 'foo', 2) == [b'three', b'two']
+    # See https://github.com/redis/redis/issues/9680
+    raw = testtools.raw_command(r, 'rpop', 'foo', 0)
+    assert raw is None or raw == []  # https://github.com/redis/redis/pull/10095
+
+
+@pytest.mark.min_server('6.2')
+def test_rpop_count_negative(r: redis.Redis):
+    with pytest.raises(redis.ResponseError):
+        testtools.raw_command(r, 'rpop', 'foo', -1)
+
+
+def test_linsert_before(r: redis.Redis):
+    r.rpush('foo', 'hello')
+    r.rpush('foo', 'world')
+    assert r.linsert('foo', 'before', 'world', 'there') == 3
+    assert r.lrange('foo', 0, -1) == [b'hello', b'there', b'world']
+
+
+def test_linsert_after(r: redis.Redis):
+    r.rpush('foo', 'hello')
+    r.rpush('foo', 'world')
+    assert r.linsert('foo', 'after', 'hello', 'there') == 3
+    assert r.lrange('foo', 0, -1) == [b'hello', b'there', b'world']
+
+
+def test_linsert_no_pivot(r: redis.Redis):
+    r.rpush('foo', 'hello')
+    r.rpush('foo', 'world')
+    assert r.linsert('foo', 'after', 'goodbye', 'bar') == -1
+    assert r.lrange('foo', 0, -1) == [b'hello', b'world']
+
+
+def test_linsert_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.linsert('foo', 'after', 'bar', 'element')
+
+
+def test_rpoplpush(r: redis.Redis):
+    assert r.rpoplpush('foo', 'bar') is None
+    assert r.lpop('bar') is None
+    r.rpush('foo', 'one')
+    r.rpush('foo', 'two')
+    r.rpush('bar', 'one')
+
+    assert r.rpoplpush('foo', 'bar') == b'two'
+    assert r.lrange('foo', 0, -1) == [b'one']
+    assert r.lrange('bar', 0, -1) == [b'two', b'one']
+
+    # Catch instances where we store bytes and strings inconsistently
+    # and thus bar = ['two', b'one']
+    assert r.lrem('bar', -1, 'two') == 1
+
+
+def test_rpoplpush_to_nonexistent_destination(r: redis.Redis):
+    r.rpush('foo', 'one')
+    assert r.rpoplpush('foo', 'bar') == b'one'
+    assert r.rpop('bar') == b'one'
+
+
+def test_rpoplpush_expiry(r: redis.Redis):
+    r.rpush('foo', 'one')
+    r.rpush('bar', 'two')
+    r.expire('bar', 10)
+    r.rpoplpush('foo', 'bar')
+    assert r.ttl('bar') > 0
+
+
+def test_rpoplpush_one_to_self(r: redis.Redis):
+    r.rpush('list', 'element')
+    assert r.brpoplpush('list', 'list') == b'element'
+    assert r.lrange('list', 0, -1) == [b'element']
+
+
+def test_rpoplpush_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    r.rpush('list', 'element')
+    with pytest.raises(redis.ResponseError):
+        r.rpoplpush('foo', 'list')
+    assert r.get('foo') == b'bar'
+    assert r.lrange('list', 0, -1) == [b'element']
+    with pytest.raises(redis.ResponseError):
+        r.rpoplpush('list', 'foo')
+    assert r.get('foo') == b'bar'
+    assert r.lrange('list', 0, -1) == [b'element']
+
+
+def test_blpop_single_list(r: redis.Redis):
+    r.rpush('foo', 'one')
+    r.rpush('foo', 'two')
+    r.rpush('foo', 'three')
+    assert r.blpop(['foo'], timeout=1) == (b'foo', b'one')
+
+
+def test_blpop_test_multiple_lists(r: redis.Redis):
+    r.rpush('baz', 'zero')
+    assert r.blpop(['foo', 'baz'], timeout=1) == (b'baz', b'zero')
+    assert not r.exists('baz')
+
+    r.rpush('foo', 'one')
+    r.rpush('foo', 'two')
+    # bar has nothing, so the returned value should come
+    # from foo.
+    assert r.blpop(['bar', 'foo'], timeout=1) == (b'foo', b'one')
+    r.rpush('bar', 'three')
+    # bar now has something, so the returned value should come
+    # from bar.
+    assert r.blpop(['bar', 'foo'], timeout=1) == (b'bar', b'three')
+    assert r.blpop(['bar', 'foo'], timeout=1) == (b'foo', b'two')
+
+
+def test_blpop_allow_single_key(r: redis.Redis):
+    # blpop converts a single key argument to a one-element list.
+    r.rpush('foo', 'one')
+    assert r.blpop('foo', timeout=1) == (b'foo', b'one')
+
+
+@pytest.mark.slow
+def test_blpop_block(r: redis.Redis):
+    def push_thread():
+        sleep(0.5)
+        r.rpush('foo', 'value1')
+        sleep(0.5)
+        # Will wake the condition variable
+        r.set('bar', 'go back to sleep some more')
+        r.rpush('foo', 'value2')
+
+    thread = threading.Thread(target=push_thread)
+    thread.start()
+    try:
+        assert r.blpop('foo') == (b'foo', b'value1')
+        assert r.blpop('foo', timeout=5) == (b'foo', b'value2')
+    finally:
+        thread.join()
+
+
+def test_blpop_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.blpop('foo', timeout=1)
+
+
+def test_blpop_transaction(r: redis.Redis):
+    p = r.pipeline()
+    p.multi()
+    p.blpop('missing', timeout=1000)
+    result = p.execute()
+    # Blocking commands behave like non-blocking versions in transactions
+    assert result == [None]
+
+
+def test_brpop_test_multiple_lists(r: redis.Redis):
+    r.rpush('baz', 'zero')
+    assert r.brpop(['foo', 'baz'], timeout=1) == (b'baz', b'zero')
+    assert not r.exists('baz')
+
+    r.rpush('foo', 'one')
+    r.rpush('foo', 'two')
+    assert r.brpop(['bar', 'foo'], timeout=1) == (b'foo', b'two')
+
+
+def test_brpop_single_key(r: redis.Redis):
+    r.rpush('foo', 'one')
+    r.rpush('foo', 'two')
+    assert r.brpop('foo', timeout=1) == (b'foo', b'two')
+
+
+@pytest.mark.slow
+def test_brpop_block(r: redis.Redis):
+    def push_thread():
+        sleep(0.5)
+        r.rpush('foo', 'value1')
+        sleep(0.5)
+        # Will wake the condition variable
+        r.set('bar', 'go back to sleep some more')
+        r.rpush('foo', 'value2')
+
+    thread = threading.Thread(target=push_thread)
+    thread.start()
+    try:
+        assert r.brpop('foo') == (b'foo', b'value1')
+        assert r.brpop('foo', timeout=5) == (b'foo', b'value2')
+    finally:
+        thread.join()
+
+
+def test_brpop_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.brpop('foo', timeout=1)
+
+
+def test_brpoplpush_multi_keys(r: redis.Redis):
+    assert r.lpop('bar') is None
+    r.rpush('foo', 'one')
+    r.rpush('foo', 'two')
+    assert r.brpoplpush('foo', 'bar', timeout=1) == b'two'
+    assert r.lrange('bar', 0, -1) == [b'two']
+
+    # Catch instances where we store bytes and strings inconsistently
+    # and thus bar = ['two']
+    assert r.lrem('bar', -1, 'two') == 1
+
+
+def test_brpoplpush_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    r.rpush('list', 'element')
+    with pytest.raises(redis.ResponseError):
+        r.brpoplpush('foo', 'list')
+    assert r.get('foo') == b'bar'
+    assert r.lrange('list', 0, -1) == [b'element']
+    with pytest.raises(redis.ResponseError):
+        r.brpoplpush('list', 'foo')
+    assert r.get('foo') == b'bar'
+    assert r.lrange('list', 0, -1) == [b'element']
+
+
+@pytest.mark.slow
+def test_blocking_operations_when_empty(r: redis.Redis):
+    assert r.blpop(['foo'], timeout=1) is None
+    assert r.blpop(['bar', 'foo'], timeout=1) is None
+    assert r.brpop('foo', timeout=1) is None
+    assert r.brpoplpush('foo', 'bar', timeout=1) is None
+
+
+def test_empty_list(r: redis.Redis):
+    r.rpush('foo', 'bar')
+    r.rpop('foo')
+    assert not r.exists('foo')
+
+
+def test_lmove_to_nonexistent_destination(r: redis.Redis):
+    r.rpush('foo', 'one')
+    assert r.lmove('foo', 'bar', 'RIGHT', 'LEFT') == b'one'
+    assert r.rpop('bar') == b'one'
+
+
+def test_lmove_expiry(r: redis.Redis):
+    r.rpush('foo', 'one')
+    r.rpush('bar', 'two')
+    r.expire('bar', 10)
+    r.lmove('foo', 'bar', 'RIGHT', 'LEFT')
+    assert r.ttl('bar') > 0
+
+
+def test_lmove_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    r.rpush('list', 'element')
+    with pytest.raises(redis.ResponseError):
+        r.lmove('foo', 'list', 'RIGHT', 'LEFT')
+    assert r.get('foo') == b'bar'
+    assert r.lrange('list', 0, -1) == [b'element']
+    with pytest.raises(redis.ResponseError):
+        r.lmove('list', 'foo', 'RIGHT', 'LEFT')
+    assert r.get('foo') == b'bar'
+    assert r.lrange('list', 0, -1) == [b'element']
+
+
+def test_lmove(r: redis.Redis):
+    assert r.lmove('foo', 'bar', 'RIGHT', 'LEFT') is None
+    assert r.lpop('bar') is None
+    r.rpush('foo', 'one')
+    r.rpush('foo', 'two')
+    r.rpush('bar', 'one')
+
+    # RPOPLPUSH
+    assert r.lmove('foo', 'bar', 'RIGHT', 'LEFT') == b'two'
+    assert r.lrange('foo', 0, -1) == [b'one']
+    assert r.lrange('bar', 0, -1) == [b'two', b'one']
+    # LPOPRPUSH
+    assert r.lmove('bar', 'bar', 'LEFT', 'RIGHT') == b'two'
+    assert r.lrange('bar', 0, -1) == [b'one', b'two']
+    # RPOPRPUSH
+    r.rpush('foo', 'three')
+    assert r.lmove('foo', 'bar', 'RIGHT', 'RIGHT') == b'three'
+    assert r.lrange('foo', 0, -1) == [b'one']
+    assert r.lrange('bar', 0, -1) == [b'one', b'two', b'three']
+    # LPOPLPUSH
+    assert r.lmove('bar', 'foo', 'LEFT', 'LEFT') == b'one'
+    assert r.lrange('foo', 0, -1) == [b'one', b'one']
+    assert r.lrange('bar', 0, -1) == [b'two', b'three']
+
+    # Catch instances where we store bytes and strings inconsistently
+    # and thus bar = ['two', b'one']
+    assert r.lrem('bar', -1, 'two') == 1
+
+
+@pytest.mark.disconnected
+@testtools.fake_only
+def test_lmove_disconnected_raises_connection_error(r: redis.Redis):
+    with pytest.raises(redis.ConnectionError):
+        r.lmove(1, 2, 'LEFT', 'RIGHT')
diff --git a/test/test_mixins/test_pubsub_commands.py b/test/test_mixins/test_pubsub_commands.py
new file mode 100644
index 0000000..6dc8966
--- /dev/null
+++ b/test/test_mixins/test_pubsub_commands.py
@@ -0,0 +1,407 @@
+import threading
+import uuid
+from queue import Queue
+from time import sleep
+
+import pytest
+import redis
+
+import fakeredis
+from .. import testtools
+from ..testtools import raw_command
+
+
+def test_ping_pubsub(r: redis.Redis):
+    p = r.pubsub()
+    p.subscribe('channel')
+    p.parse_response()  # Consume the subscribe command reply
+    p.ping()
+    assert p.parse_response() == [b'pong', b'']
+    p.ping('test')
+    assert p.parse_response() == [b'pong', b'test']
+
+
+@pytest.mark.slow
+def test_pubsub_subscribe(r: redis.Redis):
+    pubsub = r.pubsub()
+    pubsub.subscribe("channel")
+    sleep(1)
+    expected_message = {'type': 'subscribe', 'pattern': None,
+                        'channel': b'channel', 'data': 1}
+    message = pubsub.get_message()
+    keys = list(pubsub.channels.keys())
+
+    key = keys[0]
+    key = (key if type(key) == bytes
+           else bytes(key, encoding='utf-8'))
+
+    assert len(keys) == 1
+    assert key == b'channel'
+    assert message == expected_message
+
+
+@pytest.mark.slow
+def test_pubsub_psubscribe(r: redis.Redis):
+    pubsub = r.pubsub()
+    pubsub.psubscribe("channel.*")
+    sleep(1)
+    expected_message = {'type': 'psubscribe', 'pattern': None,
+                        'channel': b'channel.*', 'data': 1}
+
+    message = pubsub.get_message()
+    keys = list(pubsub.patterns.keys())
+    assert len(keys) == 1
+    assert message == expected_message
+
+
+@pytest.mark.slow
+def test_pubsub_unsubscribe(r: redis.Redis):
+    pubsub = r.pubsub()
+    pubsub.subscribe('channel-1', 'channel-2', 'channel-3')
+    sleep(1)
+    expected_message = {'type': 'unsubscribe', 'pattern': None,
+                        'channel': b'channel-1', 'data': 2}
+    pubsub.get_message()
+    pubsub.get_message()
+    pubsub.get_message()
+
+    # unsubscribe from one
+    pubsub.unsubscribe('channel-1')
+    sleep(1)
+    message = pubsub.get_message()
+    keys = list(pubsub.channels.keys())
+    assert message == expected_message
+    assert len(keys) == 2
+
+    # unsubscribe from multiple
+    pubsub.unsubscribe()
+    sleep(1)
+    pubsub.get_message()
+    pubsub.get_message()
+    keys = list(pubsub.channels.keys())
+    assert message == expected_message
+    assert len(keys) == 0
+
+
+@pytest.mark.slow
+def test_pubsub_punsubscribe(r: redis.Redis):
+    pubsub = r.pubsub()
+    pubsub.psubscribe('channel-1.*', 'channel-2.*', 'channel-3.*')
+    sleep(1)
+    expected_message = {'type': 'punsubscribe', 'pattern': None,
+                        'channel': b'channel-1.*', 'data': 2}
+    pubsub.get_message()
+    pubsub.get_message()
+    pubsub.get_message()
+
+    # unsubscribe from one
+    pubsub.punsubscribe('channel-1.*')
+    sleep(1)
+    message = pubsub.get_message()
+    keys = list(pubsub.patterns.keys())
+    assert message == expected_message
+    assert len(keys) == 2
+
+    # unsubscribe from multiple
+    pubsub.punsubscribe()
+    sleep(1)
+    pubsub.get_message()
+    pubsub.get_message()
+    keys = list(pubsub.patterns.keys())
+    assert len(keys) == 0
+
+
+@pytest.mark.slow
+def test_pubsub_listen(r: redis.Redis):
+    def _listen(pubsub, q):
+        count = 0
+        for message in pubsub.listen():
+            q.put(message)
+            count += 1
+            if count == 4:
+                pubsub.close()
+
+    channel = 'ch1'
+    patterns = ['ch1*', 'ch[1]', 'ch?']
+    pubsub = r.pubsub()
+    pubsub.subscribe(channel)
+    pubsub.psubscribe(*patterns)
+    sleep(1)
+    msg1 = pubsub.get_message()
+    msg2 = pubsub.get_message()
+    msg3 = pubsub.get_message()
+    msg4 = pubsub.get_message()
+    assert msg1['type'] == 'subscribe'
+    assert msg2['type'] == 'psubscribe'
+    assert msg3['type'] == 'psubscribe'
+    assert msg4['type'] == 'psubscribe'
+
+    q = Queue()
+    t = threading.Thread(target=_listen, args=(pubsub, q))
+    t.start()
+    msg = 'hello world'
+    r.publish(channel, msg)
+    t.join()
+
+    msg1 = q.get()
+    msg2 = q.get()
+    msg3 = q.get()
+    msg4 = q.get()
+
+    bpatterns = [pattern.encode() for pattern in patterns]
+    bpatterns.append(channel.encode())
+    msg = msg.encode()
+    assert msg1['data'] == msg
+    assert msg1['channel'] in bpatterns
+    assert msg2['data'] == msg
+    assert msg2['channel'] in bpatterns
+    assert msg3['data'] == msg
+    assert msg3['channel'] in bpatterns
+    assert msg4['data'] == msg
+    assert msg4['channel'] in bpatterns
+
+
+@pytest.mark.slow
+def test_pubsub_listen_handler(r: redis.Redis):
+    def _handler(message):
+        calls.append(message)
+
+    channel = 'ch1'
+    patterns = {'ch?': _handler}
+    calls = []
+
+    pubsub = r.pubsub()
+    pubsub.subscribe(ch1=_handler)
+    pubsub.psubscribe(**patterns)
+    sleep(1)
+    msg1 = pubsub.get_message()
+    msg2 = pubsub.get_message()
+    assert msg1['type'] == 'subscribe'
+    assert msg2['type'] == 'psubscribe'
+    msg = 'hello world'
+    r.publish(channel, msg)
+    sleep(1)
+    for i in range(2):
+        msg = pubsub.get_message()
+        assert msg is None  # get_message returns None when handler is used
+    pubsub.close()
+    calls.sort(key=lambda call: call['type'])
+    assert calls == [
+        {'pattern': None, 'channel': b'ch1', 'data': b'hello world', 'type': 'message'},
+        {'pattern': b'ch?', 'channel': b'ch1', 'data': b'hello world', 'type': 'pmessage'}
+    ]
+
+
+@pytest.mark.slow
+def test_pubsub_ignore_sub_messages_listen(r: redis.Redis):
+    def _listen(pubsub, q):
+        count = 0
+        for message in pubsub.listen():
+            q.put(message)
+            count += 1
+            if count == 4:
+                pubsub.close()
+
+    channel = 'ch1'
+    patterns = ['ch1*', 'ch[1]', 'ch?']
+    pubsub = r.pubsub(ignore_subscribe_messages=True)
+    pubsub.subscribe(channel)
+    pubsub.psubscribe(*patterns)
+    sleep(1)
+
+    q = Queue()
+    t = threading.Thread(target=_listen, args=(pubsub, q))
+    t.start()
+    msg = 'hello world'
+    r.publish(channel, msg)
+    t.join()
+
+    msg1 = q.get()
+    msg2 = q.get()
+    msg3 = q.get()
+    msg4 = q.get()
+
+    bpatterns = [pattern.encode() for pattern in patterns]
+    bpatterns.append(channel.encode())
+    msg = msg.encode()
+    assert msg1['data'] == msg
+    assert msg1['channel'] in bpatterns
+    assert msg2['data'] == msg
+    assert msg2['channel'] in bpatterns
+    assert msg3['data'] == msg
+    assert msg3['channel'] in bpatterns
+    assert msg4['data'] == msg
+    assert msg4['channel'] in bpatterns
+
+
+@pytest.mark.slow
+def test_pubsub_binary(r: redis.Redis):
+    def _listen(pubsub, q):
+        for message in pubsub.listen():
+            q.put(message)
+            pubsub.close()
+
+    pubsub = r.pubsub(ignore_subscribe_messages=True)
+    pubsub.subscribe('channel\r\n\xff')
+    sleep(1)
+
+    q = Queue()
+    t = threading.Thread(target=_listen, args=(pubsub, q))
+    t.start()
+    msg = b'\x00hello world\r\n\xff'
+    r.publish('channel\r\n\xff', msg)
+    t.join()
+
+    received = q.get()
+    assert received['data'] == msg
+
+
+@pytest.mark.slow
+def test_pubsub_run_in_thread(r: redis.Redis):
+    q = Queue()
+
+    pubsub = r.pubsub()
+    pubsub.subscribe(channel=q.put)
+    pubsub_thread = pubsub.run_in_thread()
+
+    msg = b"Hello World"
+    r.publish("channel", msg)
+
+    retrieved = q.get()
+    assert retrieved["data"] == msg
+
+    pubsub_thread.stop()
+    # Newer versions of redis wait for an unsubscribe message, which sometimes comes early
+    # https://github.com/andymccurdy/redis-py/issues/1150
+    if pubsub.channels:
+        pubsub.channels = {}
+    pubsub_thread.join()
+    assert not pubsub_thread.is_alive()
+
+    pubsub.subscribe(channel=None)
+    with pytest.raises(redis.exceptions.PubSubError):
+        pubsub_thread = pubsub.run_in_thread()
+
+    pubsub.unsubscribe("channel")
+
+    pubsub.psubscribe(channel=None)
+    with pytest.raises(redis.exceptions.PubSubError):
+        pubsub_thread = pubsub.run_in_thread()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "timeout_value",
+    [
+        1,
+        pytest.param(
+            None,
+            marks=testtools.run_test_if_redispy_ver('above', '3.2')
+        )
+    ]
+)
+def test_pubsub_timeout(r, timeout_value):
+    def publish():
+        sleep(0.1)
+        r.publish('channel', 'hello')
+
+    p = r.pubsub()
+    p.subscribe('channel')
+    p.parse_response()  # Drains the subscribe command message
+    publish_thread = threading.Thread(target=publish)
+    publish_thread.start()
+    message = p.get_message(timeout=timeout_value)
+    assert message == {
+        'type': 'message', 'pattern': None,
+        'channel': b'channel', 'data': b'hello'
+    }
+    publish_thread.join()
+
+    if timeout_value is not None:
+        # For the infinite-timeout case, don't wait for a message that will never arrive.
+        message = p.get_message(timeout=timeout_value)
+        assert message is None
+
+
+@pytest.mark.fake
+def test_socket_cleanup_pubsub(fake_server):
+    r1 = fakeredis.FakeStrictRedis(server=fake_server)
+    r2 = fakeredis.FakeStrictRedis(server=fake_server)
+    ps = r1.pubsub()
+    with ps:
+        ps.subscribe('test')
+        ps.psubscribe('test*')
+    r2.publish('test', 'foo')
+
+
+def test_pubsub_channels(r: redis.Redis):
+    p = r.pubsub()
+    p.subscribe("foo", "bar", "baz", "test")
+    expected = {b"foo", b"bar", b"baz", b"test"}
+    assert set(r.pubsub_channels()) == expected
+
+
+def test_pubsub_channels_pattern(r: redis.Redis):
+    p = r.pubsub()
+    p.subscribe("foo", "bar", "baz", "test")
+    assert set(r.pubsub_channels("b*")) == {b"bar", b"baz", }
+
+
+def test_pubsub_no_subcommands(r: redis.Redis):
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, "PUBSUB")
+
+
+@pytest.mark.min_server('7')
+def test_pubsub_help_redis7(r: redis.Redis):
+    assert raw_command(r, "PUBSUB HELP") == [
+        b'PUBSUB <subcommand> [<arg> [value] [opt] ...]. Subcommands are:',
+        b'CHANNELS [<pattern>]',
+        b"    Return the currently active channels matching a <pattern> (default: '*')"
+        b'.',
+        b'NUMPAT',
+        b'    Return number of subscriptions to patterns.',
+        b'NUMSUB [<channel> ...]',
+        b'    Return the number of subscribers for the specified channels, excluding',
+        b'    pattern subscriptions(default: no channels).',
+        b'SHARDCHANNELS [<pattern>]',
+        b'    Return the currently active shard level channels matching a <pattern> (d'
+        b"efault: '*').",
+        b'SHARDNUMSUB [<shardchannel> ...]',
+        b'    Return the number of subscribers for the specified shard level channel(s'
+        b')',
+        b'HELP',
+        b'    Prints this help.'
+    ]
+
+
+@pytest.mark.max_server('6.2.7')
+def test_pubsub_help_redis6(r: redis.Redis):
+    assert raw_command(r, "PUBSUB HELP") == [
+        b'PUBSUB <subcommand> [<arg> [value] [opt] ...]. Subcommands are:',
+        b'CHANNELS [<pattern>]',
+        b"    Return the currently active channels matching a <pattern> (default: '*')"
+        b'.',
+        b'NUMPAT',
+        b'    Return number of subscriptions to patterns.',
+        b'NUMSUB [<channel> ...]',
+        b'    Return the number of subscribers for the specified channels, excluding',
+        b'    pattern subscriptions(default: no channels).',
+        b'HELP',
+        b'    Prints this help.'
+    ]
+
+
+def test_pubsub_numsub(r: redis.Redis):
+    a = uuid.uuid4().hex
+    b = uuid.uuid4().hex
+    c = uuid.uuid4().hex
+    p1 = r.pubsub()
+    p2 = r.pubsub()
+
+    p1.subscribe(a, b, c)
+    p2.subscribe(a, b)
+
+    assert r.pubsub_numsub(a, b, c) == [(a.encode(), 2), (b.encode(), 2), (c.encode(), 1), ]
+    assert r.pubsub_numsub() == []
+    assert r.pubsub_numsub(a, "non-existing") == [(a.encode(), 2), (b"non-existing", 0)]
diff --git a/test/test_mixins/test_scripting.py b/test/test_mixins/test_scripting.py
new file mode 100644
index 0000000..8430670
--- /dev/null
+++ b/test/test_mixins/test_scripting.py
@@ -0,0 +1,99 @@
+from __future__ import annotations
+
+import pytest
+import redis
+import redis.client
+
+from test.testtools import raw_command
+
+
+@pytest.mark.min_server('7')
+def test_script_exists_redis7(r: redis.Redis):
+    # test response for no arguments by bypassing the py-redis command
+    # as it requires at least one argument
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, "SCRIPT EXISTS")
+
+    # use single-character strings for non-existing scripts, as those
+    # will never be equal to an actual sha1 hash digest
+    assert r.script_exists("a") == [0]
+    assert r.script_exists("a", "b", "c", "d", "e", "f") == [0, 0, 0, 0, 0, 0]
+
+    sha1_one = r.script_load("return 'a'")
+    assert r.script_exists(sha1_one) == [1]
+    assert r.script_exists(sha1_one, "a") == [1, 0]
+    assert r.script_exists("a", "b", "c", sha1_one, "e") == [0, 0, 0, 1, 0]
+
+    sha1_two = r.script_load("return 'b'")
+    assert r.script_exists(sha1_one, sha1_two) == [1, 1]
+    assert r.script_exists("a", sha1_one, "c", sha1_two, "e", "f") == [0, 1, 0, 1, 0, 0]
+
+
+@pytest.mark.max_server('6.2.7')
+def test_script_exists_redis6(r: redis.Redis):
+    # test the response for no arguments by bypassing the redis-py client,
+    # since it requires at least one argument
+    assert raw_command(r, "SCRIPT EXISTS") == []
+
+    # use single-character strings for non-existing scripts, as those
+    # will never be equal to an actual sha1 hash digest
+    assert r.script_exists("a") == [0]
+    assert r.script_exists("a", "b", "c", "d", "e", "f") == [0, 0, 0, 0, 0, 0]
+
+    sha1_one = r.script_load("return 'a'")
+    assert r.script_exists(sha1_one) == [1]
+    assert r.script_exists(sha1_one, "a") == [1, 0]
+    assert r.script_exists("a", "b", "c", sha1_one, "e") == [0, 0, 0, 1, 0]
+
+    sha1_two = r.script_load("return 'b'")
+    assert r.script_exists(sha1_one, sha1_two) == [1, 1]
+    assert r.script_exists("a", sha1_one, "c", sha1_two, "e", "f") == [0, 1, 0, 1, 0, 0]
+
+
+@pytest.mark.parametrize("args", [("a",), tuple("abcdefghijklmn")])
+def test_script_flush_errors_with_args(r, args):
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, "SCRIPT FLUSH %s" % " ".join(args))
+
+
+def test_script_flush(r: redis.Redis):
+    # generate/load six unique scripts and store their sha1 hash values
+    sha1_values = [r.script_load("return '%s'" % char) for char in "abcdef"]
+
+    # assert the scripts all exist prior to flushing
+    assert r.script_exists(*sha1_values) == [1] * len(sha1_values)
+
+    # flush and assert OK response
+    assert r.script_flush() is True
+
+    # assert none of the scripts exists after flushing
+    assert r.script_exists(*sha1_values) == [0] * len(sha1_values)
+
+
+def test_script_no_subcommands(r: redis.Redis):
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, "SCRIPT")
+
+
+def test_script_help(r: redis.Redis):
+    assert raw_command(r, "SCRIPT HELP") == [
+        b'SCRIPT <subcommand> [<arg> [value] [opt] ...]. Subcommands are:',
+        b'DEBUG (YES|SYNC|NO)',
+        b'    Set the debug mode for subsequent scripts executed.',
+        b'EXISTS <sha1> [<sha1> ...]',
+        b'    Return information about the existence of the scripts in the script cach'
+        b'e.',
+        b'FLUSH [ASYNC|SYNC]',
+        b'    Flush the Lua scripts cache. Very dangerous on replicas.',
+        b'    When called without the optional mode argument, the behavior is determin'
+        b'ed by the',
+        b'    lazyfree-lazy-user-flush configuration directive. Valid modes are:',
+        b'    * ASYNC: Asynchronously flush the scripts cache.',
+        b'    * SYNC: Synchronously flush the scripts cache.',
+        b'KILL',
+        b'    Kill the currently executing Lua script.',
+        b'LOAD <script>',
+        b'    Load a script into the scripts cache without executing it.',
+        b'HELP',
+        b'    Prints this help.'
+    ]
diff --git a/test/test_mixins/test_server_commands.py b/test/test_mixins/test_server_commands.py
new file mode 100644
index 0000000..f96c808
--- /dev/null
+++ b/test/test_mixins/test_server_commands.py
@@ -0,0 +1,74 @@
+from datetime import datetime
+from time import sleep
+
+import pytest
+import redis
+from redis.exceptions import ResponseError
+
+
+def test_swapdb(r, create_redis):
+    r1 = create_redis(1)
+    r.set('foo', 'abc')
+    r.set('bar', 'xyz')
+    r1.set('foo', 'foo')
+    r1.set('baz', 'baz')
+    assert r.swapdb(0, 1)
+    assert r.get('foo') == b'foo'
+    assert r.get('bar') is None
+    assert r.get('baz') == b'baz'
+    assert r1.get('foo') == b'abc'
+    assert r1.get('bar') == b'xyz'
+    assert r1.get('baz') is None
+
+
+def test_swapdb_same_db(r: redis.Redis):
+    assert r.swapdb(1, 1)
+
+
+def test_save(r: redis.Redis):
+    assert r.save()
+
+
+def test_bgsave(r: redis.Redis):
+    assert r.bgsave()
+    with pytest.raises(ResponseError):
+        r.execute_command('BGSAVE', 'SCHEDULE', 'FOO')
+    with pytest.raises(ResponseError):
+        r.execute_command('BGSAVE', 'FOO')
+
+
+def test_lastsave(r: redis.Redis):
+    assert isinstance(r.lastsave(), datetime)
+
+
+@pytest.mark.slow
+def test_bgsave_timestamp_update(r: redis.Redis):
+    early_timestamp = r.lastsave()
+    sleep(1)
+    assert r.bgsave()
+    sleep(1)
+    late_timestamp = r.lastsave()
+    assert early_timestamp < late_timestamp
+
+
+@pytest.mark.slow
+def test_save_timestamp_update(r: redis.Redis):
+    early_timestamp = r.lastsave()
+    sleep(1)
+    assert r.save()
+    late_timestamp = r.lastsave()
+    assert early_timestamp < late_timestamp
+
+
+def test_dbsize(r: redis.Redis):
+    assert r.dbsize() == 0
+    r.set('foo', 'bar')
+    r.set('bar', 'foo')
+    assert r.dbsize() == 2
+
+
+def test_flushdb(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.keys() == [b'foo']
+    assert r.flushdb() is True
+    assert r.keys() == []
diff --git a/test/test_mixins/test_set_commands.py b/test/test_mixins/test_set_commands.py
new file mode 100644
index 0000000..3dc3b96
--- /dev/null
+++ b/test/test_mixins/test_set_commands.py
@@ -0,0 +1,467 @@
+from __future__ import annotations
+
+import os
+from datetime import timedelta
+from time import sleep
+
+import pytest
+import redis
+import redis.client
+from redis.exceptions import ResponseError
+
+
+def test_sadd(r: redis.Redis):
+    assert r.sadd('foo', 'member1') == 1
+    assert r.sadd('foo', 'member1') == 0
+    assert r.smembers('foo') == {b'member1'}
+    assert r.sadd('foo', 'member2', 'member3') == 2
+    assert r.smembers('foo') == {b'member1', b'member2', b'member3'}
+    assert r.sadd('foo', 'member3', 'member4') == 1
+    assert r.smembers('foo') == {b'member1', b'member2', b'member3', b'member4'}
+
+
+def test_sadd_as_str_type(r: redis.Redis):
+    assert r.sadd('foo', *range(3)) == 3
+    assert r.smembers('foo') == {b'0', b'1', b'2'}
+
+
+def test_sadd_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'member': 1})
+    with pytest.raises(redis.ResponseError):
+        r.sadd('foo', 'member2')
+
+
+def test_scard(r: redis.Redis):
+    r.sadd('foo', 'member1')
+    r.sadd('foo', 'member2')
+    r.sadd('foo', 'member2')
+    assert r.scard('foo') == 2
+
+
+def test_scard_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'member': 1})
+    with pytest.raises(redis.ResponseError):
+        r.scard('foo')
+
+
+def test_sdiff(r: redis.Redis):
+    r.sadd('foo', 'member1')
+    r.sadd('foo', 'member2')
+    r.sadd('bar', 'member2')
+    r.sadd('bar', 'member3')
+    assert r.sdiff('foo', 'bar') == {b'member1'}
+    # Original sets shouldn't be modified.
+    assert r.smembers('foo') == {b'member1', b'member2'}
+    assert r.smembers('bar') == {b'member2', b'member3'}
+
+
+def test_sdiff_one_key(r: redis.Redis):
+    r.sadd('foo', 'member1')
+    r.sadd('foo', 'member2')
+    assert r.sdiff('foo') == {b'member1', b'member2'}
+
+
+def test_sdiff_empty(r: redis.Redis):
+    assert r.sdiff('foo') == set()
+
+
+def test_sdiff_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'member': 1})
+    r.sadd('bar', 'member')
+    with pytest.raises(redis.ResponseError):
+        r.sdiff('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.sdiff('bar', 'foo')
+
+
+def test_sdiffstore(r: redis.Redis):
+    r.sadd('foo', 'member1')
+    r.sadd('foo', 'member2')
+    r.sadd('bar', 'member2')
+    r.sadd('bar', 'member3')
+    assert r.sdiffstore('baz', 'foo', 'bar') == 1
+
+    # Catch instances where we store bytes and strings inconsistently
+    # and thus baz = {'member1', b'member1'}
+    r.sadd('baz', 'member1')
+    assert r.scard('baz') == 1
+
+
+def test_sinter(r: redis.Redis):
+    r.sadd('foo', 'member1')
+    r.sadd('foo', 'member2')
+    r.sadd('bar', 'member2')
+    r.sadd('bar', 'member3')
+    assert r.sinter('foo', 'bar') == {b'member2'}
+    assert r.sinter('foo') == {b'member1', b'member2'}
+
+
+def test_sinter_bytes_keys(r: redis.Redis):
+    foo = os.urandom(10)
+    bar = os.urandom(10)
+    r.sadd(foo, 'member1')
+    r.sadd(foo, 'member2')
+    r.sadd(bar, 'member2')
+    r.sadd(bar, 'member3')
+    assert r.sinter(foo, bar) == {b'member2'}
+    assert r.sinter(foo) == {b'member1', b'member2'}
+
+
+def test_sinter_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'member': 1})
+    r.sadd('bar', 'member')
+    with pytest.raises(redis.ResponseError):
+        r.sinter('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.sinter('bar', 'foo')
+
+
+def test_sinterstore(r: redis.Redis):
+    r.sadd('foo', 'member1')
+    r.sadd('foo', 'member2')
+    r.sadd('bar', 'member2')
+    r.sadd('bar', 'member3')
+    assert r.sinterstore('baz', 'foo', 'bar') == 1
+
+    # Catch instances where we store bytes and strings inconsistently
+    # and thus baz = {'member2', b'member2'}
+    r.sadd('baz', 'member2')
+    assert r.scard('baz') == 1
+
+
+def test_sismember(r: redis.Redis):
+    assert r.sismember('foo', 'member1') is False
+    r.sadd('foo', 'member1')
+    assert r.sismember('foo', 'member1') is True
+
+
+def test_smismember(r: redis.Redis):
+    assert r.smismember('foo', ['member1', 'member2', 'member3']) == [0, 0, 0]
+    r.sadd('foo', 'member1', 'member2', 'member3')
+    assert r.smismember('foo', ['member1', 'member2', 'member3']) == [1, 1, 1]
+    assert r.smismember('foo', ['member1', 'member2', 'member3', 'member4']) == [1, 1, 1, 0]
+    assert r.smismember('foo', ['member4', 'member2', 'member3']) == [0, 1, 1]
+    # should also work when the values are provided as positional arguments
+    assert r.smismember('foo', 'member4', 'member2', 'member3') == [0, 1, 1]
+
+
+def test_smismember_wrong_type(r: redis.Redis):
+    # verify that command fails when the key itself is not a SET
+    r.zadd('foo', {'member': 1})
+    with pytest.raises(redis.ResponseError):
+        r.smismember('foo', 'member')
+
+    # verify that command fails if the input parameter is of wrong type
+    r.sadd('foo2', 'member1', 'member2', 'member3')
+    with pytest.raises(redis.DataError, match='Invalid input of type'):
+        r.smismember('foo2', [["member1", "member2"]])
+
+
+def test_sismember_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'member': 1})
+    with pytest.raises(redis.ResponseError):
+        r.sismember('foo', 'member')
+
+
+def test_smembers(r: redis.Redis):
+    assert r.smembers('foo') == set()
+
+
+def test_smembers_copy(r: redis.Redis):
+    r.sadd('foo', 'member1')
+    ret = r.smembers('foo')
+    r.sadd('foo', 'member2')
+    assert r.smembers('foo') != ret
+
+
+def test_smembers_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'member': 1})
+    with pytest.raises(redis.ResponseError):
+        r.smembers('foo')
+
+
+def test_smembers_runtime_error(r: redis.Redis):
+    r.sadd('foo', 'member1', 'member2')
+    for member in r.smembers('foo'):
+        r.srem('foo', member)
+
+
+def test_smove(r: redis.Redis):
+    r.sadd('foo', 'member1')
+    r.sadd('foo', 'member2')
+    assert r.smove('foo', 'bar', 'member1') is True
+    assert r.smembers('bar') == {b'member1'}
+
+
+def test_smove_non_existent_key(r: redis.Redis):
+    assert r.smove('foo', 'bar', 'member1') is False
+
+
+def test_smove_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'member': 1})
+    r.sadd('bar', 'member')
+    with pytest.raises(redis.ResponseError):
+        r.smove('bar', 'foo', 'member')
+    # Must raise the error before removing member from bar
+    assert r.smembers('bar') == {b'member'}
+    with pytest.raises(redis.ResponseError):
+        r.smove('foo', 'bar', 'member')
+
+
+def test_spop(r: redis.Redis):
+    # This is tricky because it pops a random element.
+    r.sadd('foo', 'member1')
+    assert r.spop('foo') == b'member1'
+    assert r.spop('foo') is None
+
+
+def test_spop_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'member': 1})
+    with pytest.raises(redis.ResponseError):
+        r.spop('foo')
+
+
+def test_srandmember(r: redis.Redis):
+    r.sadd('foo', 'member1')
+    assert r.srandmember('foo') == b'member1'
+    # Shouldn't be removed from the set.
+    assert r.srandmember('foo') == b'member1'
+
+
+def test_srandmember_number(r: redis.Redis):
+    """srandmember works with the number argument."""
+    assert r.srandmember('foo', 2) == []
+    r.sadd('foo', b'member1')
+    assert r.srandmember('foo', 2) == [b'member1']
+    r.sadd('foo', b'member2')
+    assert set(r.srandmember('foo', 2)) == {b'member1', b'member2'}
+    r.sadd('foo', b'member3')
+    res = r.srandmember('foo', 2)
+    assert len(res) == 2
+    for e in res:
+        assert e in {b'member1', b'member2', b'member3'}
+
+
+def test_srandmember_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'member': 1})
+    with pytest.raises(redis.ResponseError):
+        r.srandmember('foo')
+
+
+def test_srem(r: redis.Redis):
+    r.sadd('foo', 'member1', 'member2', 'member3', 'member4')
+    assert r.smembers('foo') == {b'member1', b'member2', b'member3', b'member4'}
+    assert r.srem('foo', 'member1') == 1
+    assert r.smembers('foo') == {b'member2', b'member3', b'member4'}
+    assert r.srem('foo', 'member1') == 0
+    # Since redis>=2.7.6 returns number of deleted items.
+    assert r.srem('foo', 'member2', 'member3') == 2
+    assert r.smembers('foo') == {b'member4'}
+    assert r.srem('foo', 'member3', 'member4') == 1
+    assert r.smembers('foo') == set()
+    assert r.srem('foo', 'member3', 'member4') == 0
+
+
+def test_srem_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'member': 1})
+    with pytest.raises(redis.ResponseError):
+        r.srem('foo', 'member')
+
+
+def test_sunion(r: redis.Redis):
+    r.sadd('foo', 'member1')
+    r.sadd('foo', 'member2')
+    r.sadd('bar', 'member2')
+    r.sadd('bar', 'member3')
+    assert r.sunion('foo', 'bar') == {b'member1', b'member2', b'member3'}
+
+
+def test_sunion_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'member': 1})
+    r.sadd('bar', 'member')
+    with pytest.raises(redis.ResponseError):
+        r.sunion('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.sunion('bar', 'foo')
+
+
+def test_sunionstore(r: redis.Redis):
+    r.sadd('foo', 'member1')
+    r.sadd('foo', 'member2')
+    r.sadd('bar', 'member2')
+    r.sadd('bar', 'member3')
+    assert r.sunionstore('baz', 'foo', 'bar') == 3
+    assert r.smembers('baz') == {b'member1', b'member2', b'member3'}
+
+    # Catch instances where we store bytes and strings inconsistently
+    # and thus baz = {b'member1', b'member2', b'member3', 'member3'}
+    r.sadd('baz', 'member3')
+    assert r.scard('baz') == 3
+
+
+def test_empty_set(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    r.srem('foo', 'bar')
+    assert not r.exists('foo')
+
+
+def test_sscan(r: redis.Redis):
+    # Set up the data
+    name = 'sscan-test'
+    for ix in range(20):
+        k = 'sscan-test:%s' % ix
+        r.sadd(name, k)
+    expected = r.smembers(name)
+    assert len(expected) == 20  # Ensure we know what we're testing
+
+    # Test that we page through the results and get everything out
+    results = []
+    cursor = '0'
+    while cursor != 0:
+        cursor, data = r.sscan(name, cursor, count=6)
+        results.extend(data)
+    assert set(expected) == set(results)
+
+    # Test the iterator version
+    results = list(r.sscan_iter(name, count=6))
+    assert set(expected) == set(results)
+
+    # Now test that the MATCH functionality works
+    results = []
+    cursor = '0'
+    while cursor != 0:
+        cursor, data = r.sscan(name, cursor, match='*7', count=100)
+        results.extend(data)
+    assert b'sscan-test:7' in results
+    assert b'sscan-test:17' in results
+    assert len(results) == 2
+
+    # Test the match on iterator
+    results = list(r.sscan_iter(name, match='*7'))
+    assert b'sscan-test:7' in results
+    assert b'sscan-test:17' in results
+    assert len(results) == 2
+
+
+@pytest.mark.min_server('7')
+def test_sintercard(r: redis.Redis):
+    r.sadd('foo', 'member1')
+    r.sadd('foo', 'member2')
+    r.sadd('bar', 'member2')
+    r.sadd('bar', 'member3')
+    assert r.sintercard(2, ['foo', 'bar']) == 1
+    assert r.sintercard(1, ['foo']) == 2
+
+
+@pytest.mark.min_server('7')
+def test_sintercard_key_doesnt_exist(r: redis.Redis):
+    r.sadd('foo', 'member1')
+    r.sadd('foo', 'member2')
+    r.sadd('bar', 'member2')
+    r.sadd('bar', 'member3')
+    assert r.sintercard(2, ['foo', 'bar']) == 1
+    assert r.sintercard(1, ['foo']) == 2
+    assert r.sintercard(1, ['foo'], limit=1) == 1
+    assert r.sintercard(3, ['foo', 'bar', 'ddd']) == 0
+
+
+@pytest.mark.min_server('7')
+def test_sintercard_bytes_keys(r: redis.Redis):
+    foo = os.urandom(10)
+    bar = os.urandom(10)
+    r.sadd(foo, 'member1')
+    r.sadd(foo, 'member2')
+    r.sadd(bar, 'member2')
+    r.sadd(bar, 'member3')
+    assert r.sintercard(2, [foo, bar]) == 1
+    assert r.sintercard(1, [foo]) == 2
+    assert r.sintercard(1, [foo], limit=1) == 1
+
+
+@pytest.mark.min_server('7')
+def test_sintercard_wrong_type(r: redis.Redis):
+    r.zadd('foo', {'member': 1})
+    r.sadd('bar', 'member')
+    with pytest.raises(redis.ResponseError):
+        r.sintercard(2, ['foo', 'bar'])
+    with pytest.raises(redis.ResponseError):
+        r.sintercard(2, ['bar', 'foo'])
+
+
+@pytest.mark.min_server('7')
+def test_sintercard_syntax_error(r: redis.Redis):
+    r.zadd('foo', {'member': 1})
+    r.sadd('bar', 'member')
+    with pytest.raises(redis.ResponseError):
+        r.sintercard(3, ['foo', 'bar'])
+    with pytest.raises(redis.ResponseError):
+        r.sintercard(1, ['bar', 'foo'])
+    with pytest.raises(redis.ResponseError):
+        r.sintercard(1, ['bar', 'foo'], limit='x')
+
+
+def test_pfadd(r: redis.Redis):
+    key = "hll-pfadd"
+    assert r.pfadd(key, "a", "b", "c", "d", "e", "f", "g") == 1
+    assert r.pfcount(key) == 7
+
+
+def test_pfcount(r: redis.Redis):
+    key1 = "hll-pfcount01"
+    key2 = "hll-pfcount02"
+    key3 = "hll-pfcount03"
+    assert r.pfadd(key1, "foo", "bar", "zap") == 1
+    assert r.pfadd(key1, "zap", "zap", "zap") == 0
+    assert r.pfadd(key1, "foo", "bar") == 0
+    assert r.pfcount(key1) == 3
+    assert r.pfadd(key2, "1", "2", "3") == 1
+    assert r.pfcount(key2) == 3
+    assert r.pfcount(key1, key2) == 6
+    assert r.pfadd(key3, "foo", "bar", "zip") == 1
+    assert r.pfcount(key3) == 3
+    assert r.pfcount(key1, key3) == 4
+    assert r.pfcount(key1, key2, key3) == 7
+
+
+def test_pfmerge(r: redis.Redis):
+    key1 = "hll-pfmerge01"
+    key2 = "hll-pfmerge02"
+    key3 = "hll-pfmerge03"
+    assert r.pfadd(key1, "foo", "bar", "zap", "a") == 1
+    assert r.pfadd(key2, "a", "b", "c", "foo") == 1
+    assert r.pfmerge(key3, key1, key2)
+    assert r.pfcount(key3) == 6
+
+
+@pytest.mark.slow
+def test_set_ex_should_expire_value(r: redis.Redis):
+    r.set('foo', 'bar')
+    assert r.get('foo') == b'bar'
+    r.set('foo', 'bar', ex=1)
+    sleep(2)
+    assert r.get('foo') is None
+
+
+@pytest.mark.slow
+def test_set_px_should_expire_value(r: redis.Redis):
+    r.set('foo', 'bar', px=500)
+    sleep(1.5)
+    assert r.get('foo') is None
+
+
+@pytest.mark.slow
+def test_psetex_expire_value(r: redis.Redis):
+    with pytest.raises(ResponseError):
+        r.psetex('foo', 0, 'bar')
+    r.psetex('foo', 500, 'bar')
+    sleep(1.5)
+    assert r.get('foo') is None
+
+
+@pytest.mark.slow
+def test_psetex_expire_value_using_timedelta(r: redis.Redis):
+    with pytest.raises(ResponseError):
+        r.psetex('foo', timedelta(seconds=0), 'bar')
+    r.psetex('foo', timedelta(seconds=0.5), 'bar')
+    sleep(1.5)
+    assert r.get('foo') is None
diff --git a/test/test_mixins/test_sortedset_commands.py b/test/test_mixins/test_sortedset_commands.py
new file mode 100644
index 0000000..622a2ed
--- /dev/null
+++ b/test/test_mixins/test_sortedset_commands.py
@@ -0,0 +1,1081 @@
+from __future__ import annotations
+
+import math
+from collections import OrderedDict
+from typing import Tuple, List, Optional
+
+import pytest
+import redis
+import redis.client
+from packaging.version import Version
+
+from test import testtools
+
+REDIS_VERSION = Version(redis.__version__)
+
+
+def round_str(x):
+    assert isinstance(x, bytes)
+    return round(float(x))
+
+
+def zincrby(r, key, amount, value):
+    return r.zincrby(key, amount, value)
+
+
+def test_zpopmin(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    assert r.zpopmin('foo', count=2) == [(b'one', 1.0), (b'two', 2.0)]
+    assert r.zpopmin('foo', count=2) == [(b'three', 3.0)]
+
+
+def test_zpopmin_too_many(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    assert r.zpopmin('foo', count=5) == [(b'one', 1.0), (b'two', 2.0), (b'three', 3.0)]
+
+
+def test_zpopmax(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    assert r.zpopmax('foo', count=2) == [(b'three', 3.0), (b'two', 2.0)]
+    assert r.zpopmax('foo', count=2) == [(b'one', 1.0)]
+
+
+def test_zrange_same_score(r: redis.Redis):
+    r.zadd('foo', {'two_a': 2})
+    r.zadd('foo', {'two_b': 2})
+    r.zadd('foo', {'two_c': 2})
+    r.zadd('foo', {'two_d': 2})
+    r.zadd('foo', {'two_e': 2})
+    assert r.zrange('foo', 2, 3) == [b'two_c', b'two_d']
+
+
+def test_zrange_with_bylex_and_byscore(r: redis.Redis):
+    r.zadd('foo', {'one_a': 0})
+    r.zadd('foo', {'two_a': 0})
+    r.zadd('foo', {'two_b': 0})
+    r.zadd('foo', {'three_a': 0})
+    with pytest.raises(redis.ResponseError):
+        testtools.raw_command(r, 'zrange', 'foo', '(t', '+', 'bylex', 'byscore')
+
+
+def test_zrange_with_rev_and_bylex(r: redis.Redis):
+    r.zadd('foo', {'one_a': 0})
+    r.zadd('foo', {'two_a': 0})
+    r.zadd('foo', {'two_b': 0})
+    r.zadd('foo', {'three_a': 0})
+    assert r.zrange('foo', b'+', b'(t', desc=True, bylex=True) == [b'two_b', b'two_a', b'three_a']
+    assert (
+            r.zrange('foo', b'[two_b', b'(t', desc=True, bylex=True)
+            == [b'two_b', b'two_a', b'three_a']
+    )
+    assert r.zrange('foo', b'(two_b', b'(t', desc=True, bylex=True) == [b'two_a', b'three_a']
+    assert (
+            r.zrange('foo', b'[two_b', b'[three_a', desc=True, bylex=True)
+            == [b'two_b', b'two_a', b'three_a']
+    )
+    assert r.zrange('foo', b'[two_b', b'(three_a', desc=True, bylex=True) == [b'two_b', b'two_a']
+    assert r.zrange('foo', b'(two_b', b'-', desc=True, bylex=True) == [b'two_a', b'three_a', b'one_a']
+    assert r.zrange('foo', b'(two_b', b'[two_b', bylex=True) == []
+    # reversed max + and min - boundaries
+    # these will always be empty, but are allowed by redis
+    assert r.zrange('foo', b'-', b'+', desc=True, bylex=True) == []
+    assert r.zrange('foo', b'[three_a', b'+', desc=True, bylex=True) == []
+    assert r.zrange('foo', b'-', b'[o', desc=True, bylex=True) == []
+
+
+def test_zrange_with_bylex(r: redis.Redis):
+    r.zadd('foo', {'one_a': 0})
+    r.zadd('foo', {'two_a': 0})
+    r.zadd('foo', {'two_b': 0})
+    r.zadd('foo', {'three_a': 0})
+    assert r.zrange('foo', b'(t', b'+', bylex=True) == [b'three_a', b'two_a', b'two_b']
+    assert r.zrange('foo', b'(t', b'[two_b', bylex=True) == [b'three_a', b'two_a', b'two_b']
+    assert r.zrange('foo', b'(t', b'(two_b', bylex=True) == [b'three_a', b'two_a']
+    assert (
+            r.zrange('foo', b'[three_a', b'[two_b', bylex=True)
+            == [b'three_a', b'two_a', b'two_b']
+    )
+    assert r.zrange('foo', b'(three_a', b'[two_b', bylex=True) == [b'two_a', b'two_b']
+    assert r.zrange('foo', b'-', b'(two_b', bylex=True) == [b'one_a', b'three_a', b'two_a']
+    assert r.zrange('foo', b'[two_b', b'(two_b', bylex=True) == []
+    # reversed max + and min - boundaries
+    # these will always be empty, but are allowed by redis
+    assert r.zrange('foo', b'+', b'-', bylex=True) == []
+    assert r.zrange('foo', b'+', b'[three_a', bylex=True) == []
+    assert r.zrange('foo', b'[o', b'-', bylex=True) == []
+
+
+def test_zrange_with_byscore(r: redis.Redis):
+    r.zadd('foo', {'zero': 0})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'two_a_also': 2})
+    r.zadd('foo', {'two_b_also': 2})
+    r.zadd('foo', {'four': 4})
+    assert r.zrange('foo', 1, 3, byscore=True) == [b'two', b'two_a_also', b'two_b_also']
+    assert r.zrange('foo', 2, 3, byscore=True) == [b'two', b'two_a_also', b'two_b_also']
+    assert (
+            r.zrange('foo', 0, 4, byscore=True)
+            == [b'zero', b'two', b'two_a_also', b'two_b_also', b'four']
+    )
+    assert r.zrange('foo', '-inf', 1, byscore=True) == [b'zero']
+    assert (
+            r.zrange('foo', 2, '+inf', byscore=True)
+            == [b'two', b'two_a_also', b'two_b_also', b'four']
+    )
+    assert (
+            r.zrange('foo', '-inf', '+inf', byscore=True)
+            == [b'zero', b'two', b'two_a_also', b'two_b_also', b'four']
+    )
+
+
+def test_zcard(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    assert r.zcard('foo') == 2
+
+
+def test_zcard_non_existent_key(r: redis.Redis):
+    assert r.zcard('foo') == 0
+
+
+def test_zcard_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zcard('foo')
+
+
+def test_zcount(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'three': 2})
+    r.zadd('foo', {'five': 5})
+    assert r.zcount('foo', 2, 4) == 1
+    assert r.zcount('foo', 1, 4) == 2
+    assert r.zcount('foo', 0, 5) == 3
+    assert r.zcount('foo', 4, '+inf') == 1
+    assert r.zcount('foo', '-inf', 4) == 2
+    assert r.zcount('foo', '-inf', '+inf') == 3
+
+
+def test_zcount_exclusive(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'three': 2})
+    r.zadd('foo', {'five': 5})
+    assert r.zcount('foo', '-inf', '(2') == 1
+    assert r.zcount('foo', '-inf', 2) == 2
+    assert r.zcount('foo', '(5', '+inf') == 0
+    assert r.zcount('foo', '(1', 5) == 2
+    assert r.zcount('foo', '(2', '(5') == 0
+    assert r.zcount('foo', '(1', '(5') == 1
+    assert r.zcount('foo', 2, '(5') == 1
+
+
+def test_zcount_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zcount('foo', '-inf', '+inf')
+
+
+def test_zincrby(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    assert zincrby(r, 'foo', 10, 'one') == 11
+    assert r.zrange('foo', 0, -1, withscores=True) == [(b'one', 11)]
+
+
+def test_zincrby_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        zincrby(r, 'foo', 10, 'one')
+
+
+def test_zrange_descending(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    assert r.zrange('foo', 0, -1, desc=True) == [b'three', b'two', b'one']
+
+
+def test_zrange_descending_with_scores(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    assert (
+            r.zrange('foo', 0, -1, desc=True, withscores=True)
+            == [(b'three', 3), (b'two', 2), (b'one', 1)]
+    )
+
+
+def test_zrange_with_positive_indices(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    assert r.zrange('foo', 0, 1) == [b'one', b'two']
+
+
+def test_zrange_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zrange('foo', 0, -1)
+
+
+def test_zrange_score_cast(r: redis.Redis):
+    r.zadd('foo', {'one': 1.2})
+    r.zadd('foo', {'two': 2.2})
+
+    expected_without_cast_round = [(b'one', 1.2), (b'two', 2.2)]
+    expected_with_cast_round = [(b'one', 1.0), (b'two', 2.0)]
+    assert r.zrange('foo', 0, 2, withscores=True) == expected_without_cast_round
+    assert (
+            r.zrange('foo', 0, 2, withscores=True, score_cast_func=round_str)
+            == expected_with_cast_round
+    )
+
+
+def test_zrank(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    assert r.zrank('foo', 'one') == 0
+    assert r.zrank('foo', 'two') == 1
+    assert r.zrank('foo', 'three') == 2
+
+
+def test_zrank_non_existent_member(r: redis.Redis):
+    assert r.zrank('foo', 'one') is None
+
+
+def test_zrank_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zrank('foo', 'one')
+
+
+def test_zrem(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    r.zadd('foo', {'four': 4})
+    assert r.zrem('foo', 'one') == 1
+    assert r.zrange('foo', 0, -1) == [b'two', b'three', b'four']
+    # Since redis>=2.7.6 returns number of deleted items.
+    assert r.zrem('foo', 'two', 'three') == 2
+    assert r.zrange('foo', 0, -1) == [b'four']
+    assert r.zrem('foo', 'three', 'four') == 1
+    assert r.zrange('foo', 0, -1) == []
+    assert r.zrem('foo', 'three', 'four') == 0
+
+
+def test_zrem_non_existent_member(r: redis.Redis):
+    assert not r.zrem('foo', 'one')
+
+
+def test_zrem_numeric_member(r: redis.Redis):
+    r.zadd('foo', {'128': 13.0, '129': 12.0})
+    assert r.zrem('foo', 128) == 1
+    assert r.zrange('foo', 0, -1) == [b'129']
+
+
+def test_zrem_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zrem('foo', 'bar')
+
+
+def test_zscore(r: redis.Redis):
+    r.zadd('foo', {'one': 54})
+    assert r.zscore('foo', 'one') == 54
+
+
+def test_zscore_non_existent_member(r: redis.Redis):
+    assert r.zscore('foo', 'one') is None
+
+
+def test_zscore_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zscore('foo', 'one')
+
+
+def test_zmscore(r: redis.Redis):
+    """When all the requested sorted-set members are in the cache, a valid
+    float value should be returned for each requested member.
+
+    The order of the returned scores should always match the order in
+    which the set members were supplied.
+    """
+    cache_key: str = "scored-set-members"
+    members: Tuple[str, ...] = ("one", "two", "three", "four", "five", "six")
+    scores: Tuple[float, ...] = (1.1, 2.2, 3.3, 4.4, 5.5, 6.6)
+
+    r.zadd(cache_key, dict(zip(members, scores)))
+    cached_scores: List[Optional[float]] = r.zmscore(
+        cache_key,
+        list(members),
+    )
+
+    assert all(cached_scores[idx] == score for idx, score in enumerate(scores))
+
+
+def test_zmscore_missing_members(r: redis.Redis):
+    """When none of the requested sorted-set members are in the cache, a value
+    of `None` should be returned once for each requested member."""
+    cache_key: str = "scored-set-members"
+    members: Tuple[str, ...] = ("one", "two", "three", "four", "five", "six")
+
+    r.zadd(cache_key, {"eight": 8.8})
+    cached_scores: List[Optional[float]] = r.zmscore(
+        cache_key,
+        list(members),
+    )
+
+    assert all(score is None for score in cached_scores)
+
+
+def test_zmscore_mixed_membership(r: redis.Redis):
+    """When only some requested sorted-set members are in the cache, a
+    valid float value should be returned for each present member and `None` for
+    each missing member.
+
+    The order of the returned scores should always match the order in
+    which the set members were supplied.
+    """
+    cache_key: str = "scored-set-members"
+    members: Tuple[str, ...] = ("one", "two", "three", "four", "five", "six")
+    scores: Tuple[float, ...] = (1.1, 2.2, 3.3, 4.4, 5.5, 6.6)
+
+    r.zadd(
+        cache_key,
+        dict((member, scores[idx]) for (idx, member) in enumerate(members) if idx % 2 != 0),
+    )
+
+    cached_scores: List[Optional[float]] = r.zmscore(cache_key, list(members))
+
+    assert all(cached_scores[idx] is None for (idx, score) in enumerate(scores) if idx % 2 == 0)
+    assert all(cached_scores[idx] == score for (idx, score) in enumerate(scores) if idx % 2 != 0)
+
+
+def test_zrevrank(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    assert r.zrevrank('foo', 'one') == 2
+    assert r.zrevrank('foo', 'two') == 1
+    assert r.zrevrank('foo', 'three') == 0
+
+
+def test_zrevrank_non_existent_member(r: redis.Redis):
+    assert r.zrevrank('foo', 'one') is None
+
+
+def test_zrevrank_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zrevrank('foo', 'one')
+
+
+def test_zrevrange(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    assert r.zrevrange('foo', 0, 1) == [b'three', b'two']
+    assert r.zrevrange('foo', 0, -1) == [b'three', b'two', b'one']
+
+
+def test_zrevrange_sorted_keys(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'two_b': 2})
+    r.zadd('foo', {'three': 3})
+    assert r.zrevrange('foo', 0, 2) == [b'three', b'two_b', b'two']
+    assert r.zrevrange('foo', 0, -1) == [b'three', b'two_b', b'two', b'one']
+
+
+def test_zrevrange_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zrevrange('foo', 0, 2)
+
+
+def test_zrevrange_score_cast(r: redis.Redis):
+    r.zadd('foo', {'one': 1.2})
+    r.zadd('foo', {'two': 2.2})
+
+    expected_without_cast_round = [(b'two', 2.2), (b'one', 1.2)]
+    expected_with_cast_round = [(b'two', 2.0), (b'one', 1.0)]
+    assert r.zrevrange('foo', 0, 2, withscores=True) == expected_without_cast_round
+    assert (
+            r.zrevrange('foo', 0, 2, withscores=True, score_cast_func=round_str)
+            == expected_with_cast_round
+    )
+
+
+def test_zrange_with_large_int(r: redis.Redis):
+    with pytest.raises(redis.ResponseError, match='value is not an integer or out of range'):
+        r.zrange('', 0, 9223372036854775808)
+    with pytest.raises(redis.ResponseError, match='value is not an integer or out of range'):
+        r.zrange('', 0, -9223372036854775809)
+
+
+def test_zrangebyscore(r: redis.Redis):
+    r.zadd('foo', {'zero': 0})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'two_a_also': 2})
+    r.zadd('foo', {'two_b_also': 2})
+    r.zadd('foo', {'four': 4})
+    assert r.zrangebyscore('foo', 1, 3) == [b'two', b'two_a_also', b'two_b_also']
+    assert r.zrangebyscore('foo', 2, 3) == [b'two', b'two_a_also', b'two_b_also']
+    assert (
+            r.zrangebyscore('foo', 0, 4)
+            == [b'zero', b'two', b'two_a_also', b'two_b_also', b'four']
+    )
+    assert r.zrangebyscore('foo', '-inf', 1) == [b'zero']
+    assert (
+            r.zrangebyscore('foo', 2, '+inf')
+            == [b'two', b'two_a_also', b'two_b_also', b'four']
+    )
+    assert (
+            r.zrangebyscore('foo', '-inf', '+inf')
+            == [b'zero', b'two', b'two_a_also', b'two_b_also', b'four']
+    )
+
+
+def test_zrangebyscore_exclusive(r: redis.Redis):
+    r.zadd('foo', {'zero': 0})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'four': 4})
+    r.zadd('foo', {'five': 5})
+    assert r.zrangebyscore('foo', '(0', 6) == [b'two', b'four', b'five']
+    assert r.zrangebyscore('foo', '(2', '(5') == [b'four']
+    assert r.zrangebyscore('foo', 0, '(4') == [b'zero', b'two']
+
+
+def test_zrangebyscore_raises_error(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    with pytest.raises(redis.ResponseError):
+        r.zrangebyscore('foo', 'one', 2)
+    with pytest.raises(redis.ResponseError):
+        r.zrangebyscore('foo', 2, 'three')
+    with pytest.raises(redis.ResponseError):
+        r.zrangebyscore('foo', 2, '3)')
+    with pytest.raises(redis.RedisError):
+        r.zrangebyscore('foo', 2, '3)', 0, None)
+
+
+def test_zrangebyscore_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zrangebyscore('foo', '(1', '(2')
+
+
+def test_zrangebyscore_slice(r: redis.Redis):
+    r.zadd('foo', {'two_a': 2})
+    r.zadd('foo', {'two_b': 2})
+    r.zadd('foo', {'two_c': 2})
+    r.zadd('foo', {'two_d': 2})
+    assert r.zrangebyscore('foo', 0, 4, 0, 2) == [b'two_a', b'two_b']
+    assert r.zrangebyscore('foo', 0, 4, 1, 3) == [b'two_b', b'two_c', b'two_d']
+
+
+def test_zrangebyscore_withscores(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    assert r.zrangebyscore('foo', 1, 3, 0, 2, True) == [(b'one', 1), (b'two', 2)]
+
+
+def test_zrangebyscore_cast_scores(r: redis.Redis):
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'two_a_also': 2.2})
+
+    expected_without_cast_round = [(b'two', 2.0), (b'two_a_also', 2.2)]
+    expected_with_cast_round = [(b'two', 2.0), (b'two_a_also', 2.0)]
+    assert (
+            sorted(r.zrangebyscore('foo', 2, 3, withscores=True))
+            == sorted(expected_without_cast_round)
+    )
+    assert (
+            sorted(r.zrangebyscore('foo', 2, 3, withscores=True,
+                                   score_cast_func=round_str))
+            == sorted(expected_with_cast_round)
+    )
+
+
+def test_zrevrangebyscore(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    assert r.zrevrangebyscore('foo', 3, 1) == [b'three', b'two', b'one']
+    assert r.zrevrangebyscore('foo', 3, 2) == [b'three', b'two']
+    assert r.zrevrangebyscore('foo', 3, 1, 0, 1) == [b'three']
+    assert r.zrevrangebyscore('foo', 3, 1, 1, 2) == [b'two', b'one']
+
+
+def test_zrevrangebyscore_exclusive(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    assert r.zrevrangebyscore('foo', '(3', 1) == [b'two', b'one']
+    assert r.zrevrangebyscore('foo', 3, '(2') == [b'three']
+    assert r.zrevrangebyscore('foo', '(3', '(1') == [b'two']
+    assert r.zrevrangebyscore('foo', '(2', 1, 0, 1) == [b'one']
+    assert r.zrevrangebyscore('foo', '(2', '(1', 0, 1) == []
+    assert r.zrevrangebyscore('foo', '(3', '(0', 1, 2) == [b'one']
+
+
+def test_zrevrangebyscore_raises_error(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    with pytest.raises(redis.ResponseError):
+        r.zrevrangebyscore('foo', 'three', 1)
+    with pytest.raises(redis.ResponseError):
+        r.zrevrangebyscore('foo', 3, 'one')
+    with pytest.raises(redis.ResponseError):
+        r.zrevrangebyscore('foo', 3, '1)')
+    with pytest.raises(redis.ResponseError):
+        r.zrevrangebyscore('foo', '((3', '1)')
+
+
+def test_zrevrangebyscore_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zrevrangebyscore('foo', '(3', '(1')
+
+
+def test_zrevrangebyscore_cast_scores(r: redis.Redis):
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'two_a_also': 2.2})
+
+    expected_without_cast_round = [(b'two_a_also', 2.2), (b'two', 2.0)]
+    expected_with_cast_round = [(b'two_a_also', 2.0), (b'two', 2.0)]
+    assert (
+            r.zrevrangebyscore('foo', 3, 2, withscores=True)
+            == expected_without_cast_round
+    )
+    assert (
+            r.zrevrangebyscore('foo', 3, 2, withscores=True,
+                               score_cast_func=round_str)
+            == expected_with_cast_round
+    )
+
+
+def test_zrangebylex(r: redis.Redis):
+    r.zadd('foo', {'one_a': 0})
+    r.zadd('foo', {'two_a': 0})
+    r.zadd('foo', {'two_b': 0})
+    r.zadd('foo', {'three_a': 0})
+    assert r.zrangebylex('foo', b'(t', b'+') == [b'three_a', b'two_a', b'two_b']
+    assert r.zrangebylex('foo', b'(t', b'[two_b') == [b'three_a', b'two_a', b'two_b']
+    assert r.zrangebylex('foo', b'(t', b'(two_b') == [b'three_a', b'two_a']
+    assert (
+            r.zrangebylex('foo', b'[three_a', b'[two_b')
+            == [b'three_a', b'two_a', b'two_b']
+    )
+    assert r.zrangebylex('foo', b'(three_a', b'[two_b') == [b'two_a', b'two_b']
+    assert r.zrangebylex('foo', b'-', b'(two_b') == [b'one_a', b'three_a', b'two_a']
+    assert r.zrangebylex('foo', b'[two_b', b'(two_b') == []
+    # reversed max + and min - boundaries
+    # these will always be empty, but are allowed by redis
+    assert r.zrangebylex('foo', b'+', b'-') == []
+    assert r.zrangebylex('foo', b'+', b'[three_a') == []
+    assert r.zrangebylex('foo', b'[o', b'-') == []
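The lex-range tokens exercised above follow redis's ZRANGEBYLEX grammar: `-` and `+` are the unbounded minimum and maximum, while `[` and `(` prefix inclusive and exclusive member bounds. A minimal sketch of that token parsing (the function name and the `+` sentinel are illustrative, not fakeredis internals):

```python
def parse_lex_bound(token: bytes):
    """Return (value, inclusive) for a ZRANGEBYLEX bound token."""
    if token == b'-':
        return b'', True             # unbounded minimum
    if token == b'+':
        return b'\xff' * 16, True    # stand-in for "greater than any member"
    if token.startswith(b'['):
        return token[1:], True       # inclusive bound
    if token.startswith(b'('):
        return token[1:], False      # exclusive bound
    raise ValueError('min or max is not a valid string range item')

assert parse_lex_bound(b'[two_b') == (b'two_b', True)
assert parse_lex_bound(b'(t') == (b't', False)
```

Bare member names without one of these prefixes are rejected, which is what the `*_raises_error` tests below assert.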
+
+
+def test_zrangebylex_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zrangebylex('foo', b'-', b'+')
+
+
+def test_zlexcount(r: redis.Redis):
+    r.zadd('foo', {'one_a': 0})
+    r.zadd('foo', {'two_a': 0})
+    r.zadd('foo', {'two_b': 0})
+    r.zadd('foo', {'three_a': 0})
+    assert r.zlexcount('foo', b'(t', b'+') == 3
+    assert r.zlexcount('foo', b'(t', b'[two_b') == 3
+    assert r.zlexcount('foo', b'(t', b'(two_b') == 2
+    assert r.zlexcount('foo', b'[three_a', b'[two_b') == 3
+    assert r.zlexcount('foo', b'(three_a', b'[two_b') == 2
+    assert r.zlexcount('foo', b'-', b'(two_b') == 3
+    assert r.zlexcount('foo', b'[two_b', b'(two_b') == 0
+    # reversed max + and min - boundaries
+    # these will always be empty, but are allowed by redis
+    assert r.zlexcount('foo', b'+', b'-') == 0
+    assert r.zlexcount('foo', b'+', b'[three_a') == 0
+    assert r.zlexcount('foo', b'[o', b'-') == 0
+
+
+def test_zlexcount_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zlexcount('foo', b'-', b'+')
+
+
+def test_zrangebylex_with_limit(r: redis.Redis):
+    r.zadd('foo', {'one_a': 0})
+    r.zadd('foo', {'two_a': 0})
+    r.zadd('foo', {'two_b': 0})
+    r.zadd('foo', {'three_a': 0})
+    assert r.zrangebylex('foo', b'-', b'+', 1, 2) == [b'three_a', b'two_a']
+
+    # negative offset no results
+    assert r.zrangebylex('foo', b'-', b'+', -1, 3) == []
+
+    # negative limit ignored
+    assert (
+            r.zrangebylex('foo', b'-', b'+', 0, -2)
+            == [b'one_a', b'three_a', b'two_a', b'two_b']
+    )
+    assert r.zrangebylex('foo', b'-', b'+', 1, -2) == [b'three_a', b'two_a', b'two_b']
+    assert r.zrangebylex('foo', b'+', b'-', 1, 1) == []
+
+
+def test_zrangebylex_raises_error(r: redis.Redis):
+    r.zadd('foo', {'one_a': 0})
+    r.zadd('foo', {'two_a': 0})
+    r.zadd('foo', {'two_b': 0})
+    r.zadd('foo', {'three_a': 0})
+
+    with pytest.raises(redis.ResponseError):
+        r.zrangebylex('foo', b'', b'[two_b')
+
+    with pytest.raises(redis.ResponseError):
+        r.zrangebylex('foo', b'-', b'two_b')
+
+    with pytest.raises(redis.ResponseError):
+        r.zrangebylex('foo', b'(t', b'two_b')
+
+    with pytest.raises(redis.ResponseError):
+        r.zrangebylex('foo', b't', b'+')
+
+    with pytest.raises(redis.ResponseError):
+        r.zrangebylex('foo', b'[two_a', b'')
+
+    with pytest.raises(redis.RedisError):
+        r.zrangebylex('foo', b'(two_a', b'[two_b', 1)
+
+
+def test_zrevrangebylex(r: redis.Redis):
+    r.zadd('foo', {'one_a': 0})
+    r.zadd('foo', {'two_a': 0})
+    r.zadd('foo', {'two_b': 0})
+    r.zadd('foo', {'three_a': 0})
+    assert r.zrevrangebylex('foo', b'+', b'(t') == [b'two_b', b'two_a', b'three_a']
+    assert (
+            r.zrevrangebylex('foo', b'[two_b', b'(t')
+            == [b'two_b', b'two_a', b'three_a']
+    )
+    assert r.zrevrangebylex('foo', b'(two_b', b'(t') == [b'two_a', b'three_a']
+    assert (
+            r.zrevrangebylex('foo', b'[two_b', b'[three_a')
+            == [b'two_b', b'two_a', b'three_a']
+    )
+    assert r.zrevrangebylex('foo', b'[two_b', b'(three_a') == [b'two_b', b'two_a']
+    assert r.zrevrangebylex('foo', b'(two_b', b'-') == [b'two_a', b'three_a', b'one_a']
+    assert r.zrangebylex('foo', b'(two_b', b'[two_b') == []
+    # reversed max + and min - boundaries
+    # these will always be empty, but are allowed by redis
+    assert r.zrevrangebylex('foo', b'-', b'+') == []
+    assert r.zrevrangebylex('foo', b'[three_a', b'+') == []
+    assert r.zrevrangebylex('foo', b'-', b'[o') == []
+
+
+def test_zrevrangebylex_with_limit(r: redis.Redis):
+    r.zadd('foo', {'one_a': 0})
+    r.zadd('foo', {'two_a': 0})
+    r.zadd('foo', {'two_b': 0})
+    r.zadd('foo', {'three_a': 0})
+    assert r.zrevrangebylex('foo', b'+', b'-', 1, 2) == [b'two_a', b'three_a']
+
+
+def test_zrevrangebylex_raises_error(r: redis.Redis):
+    r.zadd('foo', {'one_a': 0})
+    r.zadd('foo', {'two_a': 0})
+    r.zadd('foo', {'two_b': 0})
+    r.zadd('foo', {'three_a': 0})
+
+    with pytest.raises(redis.ResponseError):
+        r.zrevrangebylex('foo', b'[two_b', b'')
+
+    with pytest.raises(redis.ResponseError):
+        r.zrevrangebylex('foo', b'two_b', b'-')
+
+    with pytest.raises(redis.ResponseError):
+        r.zrevrangebylex('foo', b'two_b', b'(t')
+
+    with pytest.raises(redis.ResponseError):
+        r.zrevrangebylex('foo', b'+', b't')
+
+    with pytest.raises(redis.ResponseError):
+        r.zrevrangebylex('foo', b'', b'[two_a')
+
+    with pytest.raises(redis.RedisError):
+        r.zrevrangebylex('foo', b'[two_a', b'(two_b', 1)
+
+
+def test_zrevrangebylex_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zrevrangebylex('foo', b'+', b'-')
+
+
+def test_zremrangebyrank(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    assert r.zremrangebyrank('foo', 0, 1) == 2
+    assert r.zrange('foo', 0, -1) == [b'three']
+
+
+def test_zremrangebyrank_negative_indices(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    assert r.zremrangebyrank('foo', -2, -1) == 2
+    assert r.zrange('foo', 0, -1) == [b'one']
+
+
+def test_zremrangebyrank_out_of_bounds(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    assert r.zremrangebyrank('foo', 1, 3) == 0
+
+
+def test_zremrangebyrank_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zremrangebyrank('foo', 1, 3)
+
+
+def test_zremrangebyscore(r: redis.Redis):
+    r.zadd('foo', {'zero': 0})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'four': 4})
+    # Outside of range.
+    assert r.zremrangebyscore('foo', 5, 10) == 0
+    assert r.zrange('foo', 0, -1) == [b'zero', b'two', b'four']
+    # Middle of range.
+    assert r.zremrangebyscore('foo', 1, 3) == 1
+    assert r.zrange('foo', 0, -1) == [b'zero', b'four']
+    assert r.zremrangebyscore('foo', 1, 3) == 0
+    # Entire range.
+    assert r.zremrangebyscore('foo', 0, 4) == 2
+    assert r.zrange('foo', 0, -1) == []
+
+
+def test_zremrangebyscore_exclusive(r: redis.Redis):
+    r.zadd('foo', {'zero': 0})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'four': 4})
+    assert r.zremrangebyscore('foo', '(0', 1) == 0
+    assert r.zrange('foo', 0, -1) == [b'zero', b'two', b'four']
+    assert r.zremrangebyscore('foo', '-inf', '(0') == 0
+    assert r.zrange('foo', 0, -1) == [b'zero', b'two', b'four']
+    assert r.zremrangebyscore('foo', '(2', 5) == 1
+    assert r.zrange('foo', 0, -1) == [b'zero', b'two']
+    assert r.zremrangebyscore('foo', 0, '(2') == 1
+    assert r.zrange('foo', 0, -1) == [b'two']
+    assert r.zremrangebyscore('foo', '(1', '(3') == 1
+    assert r.zrange('foo', 0, -1) == []
+
+
+def test_zremrangebyscore_raises_error(r: redis.Redis):
+    r.zadd('foo', {'zero': 0})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'four': 4})
+    with pytest.raises(redis.ResponseError):
+        r.zremrangebyscore('foo', 'three', 1)
+    with pytest.raises(redis.ResponseError):
+        r.zremrangebyscore('foo', 3, 'one')
+    with pytest.raises(redis.ResponseError):
+        r.zremrangebyscore('foo', 3, '1)')
+    with pytest.raises(redis.ResponseError):
+        r.zremrangebyscore('foo', '((3', '1)')
+
+
+def test_zremrangebyscore_badkey(r: redis.Redis):
+    assert r.zremrangebyscore('foo', 0, 2) == 0
+
+
+def test_zremrangebyscore_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zremrangebyscore('foo', 0, 2)
+
+
+def test_zremrangebylex(r: redis.Redis):
+    r.zadd('foo', {'two_a': 0})
+    r.zadd('foo', {'two_b': 0})
+    r.zadd('foo', {'one_a': 0})
+    r.zadd('foo', {'three_a': 0})
+    assert r.zremrangebylex('foo', b'(three_a', b'[two_b') == 2
+    assert r.zremrangebylex('foo', b'(three_a', b'[two_b') == 0
+    assert r.zremrangebylex('foo', b'-', b'(o') == 0
+    assert r.zremrangebylex('foo', b'-', b'[one_a') == 1
+    assert r.zremrangebylex('foo', b'[tw', b'+') == 0
+    assert r.zremrangebylex('foo', b'[t', b'+') == 1
+    assert r.zremrangebylex('foo', b'[t', b'+') == 0
+
+
+def test_zremrangebylex_error(r: redis.Redis):
+    r.zadd('foo', {'two_a': 0})
+    r.zadd('foo', {'two_b': 0})
+    r.zadd('foo', {'one_a': 0})
+    r.zadd('foo', {'three_a': 0})
+    with pytest.raises(redis.ResponseError):
+        r.zremrangebylex('foo', b'(t', b'two_b')
+
+    with pytest.raises(redis.ResponseError):
+        r.zremrangebylex('foo', b't', b'+')
+
+    with pytest.raises(redis.ResponseError):
+        r.zremrangebylex('foo', b'[two_a', b'')
+
+
+def test_zremrangebylex_badkey(r: redis.Redis):
+    assert r.zremrangebylex('foo', b'(three_a', b'[two_b') == 0
+
+
+def test_zremrangebylex_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zremrangebylex('foo', b'bar', b'baz')
+
+
+def test_zunionstore(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('bar', {'one': 1})
+    r.zadd('bar', {'two': 2})
+    r.zadd('bar', {'three': 3})
+    r.zunionstore('baz', ['foo', 'bar'])
+    assert (
+            r.zrange('baz', 0, -1, withscores=True)
+            == [(b'one', 2), (b'three', 3), (b'two', 4)]
+    )
+
+
+def test_zunionstore_sum(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('bar', {'one': 1})
+    r.zadd('bar', {'two': 2})
+    r.zadd('bar', {'three': 3})
+    r.zunionstore('baz', ['foo', 'bar'], aggregate='SUM')
+    assert (
+            r.zrange('baz', 0, -1, withscores=True)
+            == [(b'one', 2), (b'three', 3), (b'two', 4)]
+    )
+
+
+def test_zunionstore_max(r: redis.Redis):
+    r.zadd('foo', {'one': 0})
+    r.zadd('foo', {'two': 0})
+    r.zadd('bar', {'one': 1})
+    r.zadd('bar', {'two': 2})
+    r.zadd('bar', {'three': 3})
+    r.zunionstore('baz', ['foo', 'bar'], aggregate='MAX')
+    assert (
+            r.zrange('baz', 0, -1, withscores=True)
+            == [(b'one', 1), (b'two', 2), (b'three', 3)]
+    )
+
+
+def test_zunionstore_min(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('bar', {'one': 0})
+    r.zadd('bar', {'two': 0})
+    r.zadd('bar', {'three': 3})
+    r.zunionstore('baz', ['foo', 'bar'], aggregate='MIN')
+    assert (
+            r.zrange('baz', 0, -1, withscores=True)
+            == [(b'one', 0), (b'two', 0), (b'three', 3)]
+    )
+
+
+def test_zunionstore_weights(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('bar', {'one': 1})
+    r.zadd('bar', {'two': 2})
+    r.zadd('bar', {'four': 4})
+    r.zunionstore('baz', {'foo': 1, 'bar': 2}, aggregate='SUM')
+    assert (
+            r.zrange('baz', 0, -1, withscores=True)
+            == [(b'one', 3), (b'two', 6), (b'four', 8)]
+    )
+
+
+def test_zunionstore_nan_to_zero(r: redis.Redis):
+    r.zadd('foo', {'x': math.inf})
+    r.zadd('foo2', {'x': math.inf})
+    r.zunionstore('bar', OrderedDict([('foo', 1.0), ('foo2', 0.0)]))
+    # This is different to test_zinterstore_nan_to_zero because of a quirk
+    # in redis. See https://github.com/antirez/redis/issues/3954.
+    assert r.zscore('bar', 'x') == math.inf
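The quirk referenced in the comment above comes down to IEEE-754 arithmetic: multiplying an `inf` score by a `0.0` weight yields NaN, and redis normalises NaN aggregate scores rather than propagating them. A short sketch of the floating-point behaviour the test depends on (pure Python, no redis required):

```python
import math

# inf * 0.0 is NaN in IEEE-754 floating point; redis replaces a NaN
# aggregate score with 0 in ZINTERSTORE, while ZUNIONSTORE keeps the
# running non-NaN partial sum (hence the inf asserted above).
weighted = math.inf * 0.0
assert math.isnan(weighted)

# The normalisation fakeredis must mimic: NaN becomes 0, not an error.
nan_to_zero = 0.0 if math.isnan(weighted) else weighted
assert nan_to_zero == 0.0
```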
+
+
+def test_zunionstore_nan_to_zero2(r: redis.Redis):
+    r.zadd('foo', {'zero': 0})
+    r.zadd('foo2', {'one': 1})
+    r.zadd('foo3', {'one': 1})
+    r.zunionstore('bar', {'foo': math.inf}, aggregate='SUM')
+    assert r.zrange('bar', 0, -1, withscores=True) == [(b'zero', 0)]
+    r.zunionstore('bar', OrderedDict([('foo2', math.inf), ('foo3', -math.inf)]))
+    assert r.zrange('bar', 0, -1, withscores=True) == [(b'one', 0)]
+
+
+def test_zunionstore_nan_to_zero_ordering(r: redis.Redis):
+    r.zadd('foo', {'e1': math.inf})
+    r.zadd('bar', {'e1': -math.inf, 'e2': 0.0})
+    r.zunionstore('baz', ['foo', 'bar', 'foo'])
+    assert r.zscore('baz', 'e1') == 0.0
+
+
+def test_zunionstore_mixed_set_types(r: redis.Redis):
+    # No score, redis will use 1.0.
+    r.sadd('foo', 'one')
+    r.sadd('foo', 'two')
+    r.zadd('bar', {'one': 1})
+    r.zadd('bar', {'two': 2})
+    r.zadd('bar', {'three': 3})
+    r.zunionstore('baz', ['foo', 'bar'], aggregate='SUM')
+    assert (
+            r.zrange('baz', 0, -1, withscores=True)
+            == [(b'one', 2), (b'three', 3), (b'two', 3)]
+    )
+
+
+def test_zunionstore_badkey(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zunionstore('baz', ['foo', 'bar'], aggregate='SUM')
+    assert r.zrange('baz', 0, -1, withscores=True) == [(b'one', 1), (b'two', 2)]
+    r.zunionstore('baz', {'foo': 1, 'bar': 2}, aggregate='SUM')
+    assert r.zrange('baz', 0, -1, withscores=True) == [(b'one', 1), (b'two', 2)]
+
+
+def test_zunionstore_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zunionstore('baz', ['foo', 'bar'])
+
+
+def test_zinterstore(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('bar', {'one': 1})
+    r.zadd('bar', {'two': 2})
+    r.zadd('bar', {'three': 3})
+    r.zinterstore('baz', ['foo', 'bar'])
+    assert r.zrange('baz', 0, -1, withscores=True) == [(b'one', 2), (b'two', 4)]
+
+
+def test_zinterstore_mixed_set_types(r: redis.Redis):
+    r.sadd('foo', 'one')
+    r.sadd('foo', 'two')
+    r.zadd('bar', {'one': 1})
+    r.zadd('bar', {'two': 2})
+    r.zadd('bar', {'three': 3})
+    r.zinterstore('baz', ['foo', 'bar'], aggregate='SUM')
+    assert r.zrange('baz', 0, -1, withscores=True) == [(b'one', 2), (b'two', 3)]
+
+
+def test_zinterstore_max(r: redis.Redis):
+    r.zadd('foo', {'one': 0})
+    r.zadd('foo', {'two': 0})
+    r.zadd('bar', {'one': 1})
+    r.zadd('bar', {'two': 2})
+    r.zadd('bar', {'three': 3})
+    r.zinterstore('baz', ['foo', 'bar'], aggregate='MAX')
+    assert r.zrange('baz', 0, -1, withscores=True) == [(b'one', 1), (b'two', 2)]
+
+
+def test_zinterstore_onekey(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zinterstore('baz', ['foo'], aggregate='MAX')
+    assert r.zrange('baz', 0, -1, withscores=True) == [(b'one', 1)]
+
+
+def test_zinterstore_nokey(r: redis.Redis):
+    with pytest.raises(redis.ResponseError):
+        r.zinterstore('baz', [], aggregate='MAX')
+
+
+def test_zinterstore_nan_to_zero(r: redis.Redis):
+    r.zadd('foo', {'x': math.inf})
+    r.zadd('foo2', {'x': math.inf})
+    r.zinterstore('bar', OrderedDict([('foo', 1.0), ('foo2', 0.0)]))
+    assert r.zscore('bar', 'x') == 0.0
+
+
+def test_zunionstore_nokey(r: redis.Redis):
+    with pytest.raises(redis.ResponseError):
+        r.zunionstore('baz', [], aggregate='MAX')
+
+
+def test_zinterstore_wrong_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zinterstore('baz', ['foo', 'bar'])
+
+
+def test_empty_zset(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zrem('foo', 'one')
+    assert not r.exists('foo')
+
+
+def test_zpopmax_too_many(r: redis.Redis):
+    r.zadd('foo', {'one': 1})
+    r.zadd('foo', {'two': 2})
+    r.zadd('foo', {'three': 3})
+    assert r.zpopmax('foo', count=5) == [(b'three', 3.0), (b'two', 2.0), (b'one', 1.0), ]
+
+
+def test_bzpopmin(r: redis.Redis):
+    r.zadd('foo', {'one': 1, 'two': 2, 'three': 3})
+    r.zadd('bar', {'a': 1.5, 'b': 2, 'c': 3})
+    assert r.bzpopmin(['foo', 'bar'], 0) == (b'foo', b'one', 1.0)
+    assert r.bzpopmin(['foo', 'bar'], 0) == (b'foo', b'two', 2.0)
+    assert r.bzpopmin(['foo', 'bar'], 0) == (b'foo', b'three', 3.0)
+    assert r.bzpopmin(['foo', 'bar'], 0) == (b'bar', b'a', 1.5)
+
+
+def test_bzpopmax(r: redis.Redis):
+    r.zadd('foo', {'one': 1, 'two': 2, 'three': 3})
+    r.zadd('bar', {'a': 1.5, 'b': 2.5, 'c': 3.5})
+    assert r.bzpopmax(['foo', 'bar'], 0) == (b'foo', b'three', 3.0)
+    assert r.bzpopmax(['foo', 'bar'], 0) == (b'foo', b'two', 2.0)
+    assert r.bzpopmax(['foo', 'bar'], 0) == (b'foo', b'one', 1.0)
+    assert r.bzpopmax(['foo', 'bar'], 0) == (b'bar', b'c', 3.5)
+
+
+def test_zscan(r: redis.Redis):
+    # Set up the data
+    name = 'zscan-test'
+    for ix in range(20):
+        r.zadd(name, {'key:%s' % ix: ix})
+    expected = dict(r.zrange(name, 0, -1, withscores=True))
+
+    # Test the basic version
+    results = {}
+    for key, val in r.zscan_iter(name, count=6):
+        results[key] = val
+    assert results == expected
+
+    # Now test that the MATCH functionality works
+    results = {}
+    cursor = '0'
+    while cursor != 0:
+        cursor, data = r.zscan(name, cursor, match='*7', count=6)
+        results.update(data)
+    assert results == {b'key:7': 7.0, b'key:17': 17.0}
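The `while cursor != 0` loop in `test_zscan` is the standard redis-py cursor pattern: the initial cursor is passed as the string `'0'`, each `zscan` call returns the next cursor plus a page of results, and the server signals completion by returning integer `0`. A self-contained simulation of that loop (the `pages` table is a hypothetical stand-in for server responses):

```python
def scan_all(pages):
    """Drain a simulated SCAN-style cursor: each lookup returns
    (next_cursor, page); cursor 0 means iteration is complete."""
    results = {}
    cursor = '0'           # redis-py accepts the initial cursor as a string
    while cursor != 0:     # the server returns integer 0 when done
        cursor, data = pages[int(cursor)]
        results.update(data)
    return results

# Hypothetical two-page scan: page 0 hands back cursor 3, page 3 ends it.
pages = {0: (3, {'a': 1}), 3: (0, {'b': 2})}
assert scan_all(pages) == {'a': 1, 'b': 2}
```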
diff --git a/test/test_mixins/test_streams_commands.py b/test/test_mixins/test_streams_commands.py
new file mode 100644
index 0000000..e31d94d
--- /dev/null
+++ b/test/test_mixins/test_streams_commands.py
@@ -0,0 +1,290 @@
+import time
+
+import pytest
+import redis
+
+from fakeredis import _msgs as msgs
+from fakeredis._stream import XStream
+from test import testtools
+
+
+def get_ids(results):
+    return [result[0] for result in results]
+
+
+def add_items(r: redis.Redis, stream: str, n: int):
+    id_list = list()
+    for i in range(n):
+        id_list.append(r.xadd(stream, {"k": i}))
+    return id_list
+
+
+def test_xstream(r: redis.Redis):
+    stream = XStream()
+    stream.add([0, 0, 1, 1, 2, 2, 3, 3], '0-1')
+    stream.add([1, 1, 2, 2, 3, 3, 4, 4], '1-2')
+    stream.add([2, 2, 3, 3, 4, 4], '1-3')
+    stream.add([3, 3, 4, 4], '2-1')
+    stream.add([3, 3, 4, 4], '2-2')
+    stream.add([3, 3, 4, 4], '3-1')
+    assert len(stream) == 6
+    i = iter(stream)
+    assert next(i) == [b'0-1', [0, 0, 1, 1, 2, 2, 3, 3]]
+    assert next(i) == [b'1-2', [1, 1, 2, 2, 3, 3, 4, 4]]
+    assert next(i) == [b'1-3', [2, 2, 3, 3, 4, 4]]
+    assert next(i) == [b'2-1', [3, 3, 4, 4]]
+    assert next(i) == [b'2-2', [3, 3, 4, 4]]
+
+    assert stream.find_index('1-2') == (1, True)
+    assert stream.find_index('0-1') == (0, True)
+    assert stream.find_index('2-1') == (3, True)
+    assert stream.find_index('1-4') == (3, False)
+
+    lst = stream.irange((0, 2), (3, 0))
+    assert len(lst) == 4
+
+    stream = XStream()
+    assert stream.delete(['1']) == 0
+    id_str = stream.add([0, 0, 1, 1, 2, 2, 3, 3])
+    assert stream.delete([id_str, ]) == 1
+    assert len(stream) == 0
+
+
+@pytest.mark.max_server('6.3')
+def test_xadd_redis6(r: redis.Redis):
+    stream = "stream"
+    before = time.time()
+    m1 = r.xadd(stream, {"some": "other"})
+    after = time.time()
+    ts1, seq1 = m1.decode().split('-')
+    seq1 = int(seq1)
+    m2 = r.xadd(stream, {'add': 'more'}, id=f'{ts1}-{seq1 + 1}')
+    ts2, seq2 = m2.decode().split('-')
+    assert int(1000 * before) <= int(ts1) <= int(1000 * after)
+    assert ts1 == ts2
+    assert int(seq2) == int(seq1) + 1
+
+    stream = "stream2"
+    m1 = r.xadd(stream, {"some": "other"})
+    ts1, seq1 = m1.decode().split('-')
+    ts1 = int(ts1) - 1
+    with pytest.raises(redis.ResponseError):
+        r.xadd(stream, {'add': 'more'}, id=f'{ts1}-*')
+    with pytest.raises(redis.ResponseError):
+        r.xadd(stream, {'add': 'more'}, id=f'{ts1}-1')
+
+
+@pytest.mark.min_server('7')
+def test_xadd_redis7(r: redis.Redis):
+    stream = "stream"
+    before = time.time()
+    m1 = r.xadd(stream, {"some": "other"})
+    after = time.time()
+    ts1, seq1 = m1.decode().split('-')
+    m2 = r.xadd(stream, {'add': 'more'}, id=f'{ts1}-*')
+    ts2, seq2 = m2.decode().split('-')
+    assert int(1000 * before) <= int(ts1) <= int(1000 * after)
+    assert ts1 == ts2
+    assert int(seq2) == int(seq1) + 1
+
+    stream = "stream2"
+    m1 = r.xadd(stream, {"some": "other"})
+    ts1, seq1 = m1.decode().split('-')
+    ts1 = int(ts1) - 1
+    with pytest.raises(redis.ResponseError):
+        r.xadd(stream, {'add': 'more'}, id=f'{ts1}-*')
+    with pytest.raises(redis.ResponseError):
+        r.xadd(stream, {'add': 'more'}, id=f'{ts1}-1')
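Both XADD tests above split returned entry IDs on `-` because a stream ID is `<millisecond-timestamp>-<sequence>`; passing `id=f'{ts}-*'` (Redis 7+) asks the server to auto-assign the next sequence for that timestamp, and IDs must be strictly increasing, which is why backdated timestamps raise. A small helper mirroring the parsing the tests do inline (the function name is illustrative):

```python
def split_stream_id(raw: bytes):
    """Split a redis stream entry ID of the form b'<ms>-<seq>'."""
    ts, seq = raw.decode().split('-')
    return int(ts), int(seq)

# Example ID as returned by XADD: millisecond timestamp, then sequence.
assert split_stream_id(b'1526919030474-55') == (1526919030474, 55)
```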
+
+
+def test_xadd_maxlen(r: redis.Redis):
+    stream = "stream"
+    id_list = add_items(r, stream, 10)
+    maxlen = 5
+    id_list.append(r.xadd(stream, {'k': 'new'}, maxlen=maxlen, approximate=False))
+    assert r.xlen(stream) == maxlen
+    results = r.xrange(stream, id_list[0])
+    assert get_ids(results) == id_list[len(id_list) - maxlen:]
+    with pytest.raises(redis.ResponseError):
+        testtools.raw_command(
+            r, 'xadd', stream,
+            'maxlen', '3', 'minid', 'sometestvalue', 'field', 'value')
+
+
+def test_xadd_minid(r: redis.Redis):
+    stream = "stream"
+    id_list = add_items(r, stream, 10)
+    minid = id_list[6]
+    id_list.append(r.xadd(stream, {'k': 'new'}, minid=minid, approximate=False))
+    assert r.xlen(stream) == len(id_list) - 6
+    results = r.xrange(stream, id_list[0])
+    assert get_ids(results) == id_list[6:]
+
+
+def test_xtrim(r: redis.Redis):
+    stream = "stream"
+
+    # trimming an empty key doesn't do anything
+    assert r.xtrim(stream, 1000) == 0
+    add_items(r, stream, 4)
+
+    # trimming to a maxlen larger than the number of messages
+    # doesn't do anything
+    assert r.xtrim(stream, 5, approximate=False) == 0
+
+    # 1 message is trimmed
+    assert r.xtrim(stream, 3, approximate=False) == 1
+
+
+@pytest.mark.min_server("6.2.4")
+def test_xtrim_minlen_and_length_args(r: redis.Redis):
+    stream = "stream"
+    add_items(r, stream, 4)
+
+    # Future self: No limits without approximate, according to the api
+    # with pytest.raises(redis.ResponseError):
+    #     assert r.xtrim(stream, 3, approximate=False, limit=2)
+
+    with pytest.raises(redis.DataError):
+        assert r.xtrim(stream, maxlen=3, minid="sometestvalue")
+
+    with pytest.raises(redis.ResponseError):
+        testtools.raw_command(r, 'xtrim', stream, 'maxlen', '3', 'minid', 'sometestvalue')
+    # minid with a limit
+    stream = 's2'
+    m1 = add_items(r, stream, 4)[0]
+    assert r.xtrim(stream, minid=m1, limit=3) == 0
+
+    # pure minid
+    m4 = add_items(r, stream, 4)[-1]
+    assert r.xtrim(stream, approximate=False, minid=m4) == 7
+
+    # minid approximate
+    r.xadd(stream, {"foo": "bar"})
+    r.xadd(stream, {"foo": "bar"})
+    m3 = r.xadd(stream, {"foo": "bar"})
+    r.xadd(stream, {"foo": "bar"})
+    assert r.xtrim(stream, approximate=False, minid=m3) == 3
+
+
+def test_xadd_nomkstream(r: redis.Redis):
+    r.xadd('stream2', {"some": "other"}, nomkstream=True)
+    assert r.xlen('stream2') == 0
+    # nomkstream option
+    stream = "stream"
+    r.xadd(stream, {"foo": "bar"})
+    r.xadd(stream, {"some": "other"}, nomkstream=False)
+    assert r.xlen(stream) == 2
+    r.xadd(stream, {"some": "other"}, nomkstream=True)
+    assert r.xlen(stream) == 3
+
+
+def test_xrevrange(r: redis.Redis):
+    stream = "stream"
+    m1 = r.xadd(stream, {"foo": "bar"})
+    m2 = r.xadd(stream, {"foo": "bar"})
+    m3 = r.xadd(stream, {"foo": "bar"})
+    m4 = r.xadd(stream, {"foo": "bar"})
+
+    results = r.xrevrange(stream, max=m4)
+    assert get_ids(results) == [m4, m3, m2, m1]
+
+    results = r.xrevrange(stream, max=m3, min=m2)
+    assert get_ids(results) == [m3, m2]
+
+    results = r.xrevrange(stream, min=m3)
+    assert get_ids(results) == [m4, m3]
+
+    results = r.xrevrange(stream, min=m2, count=1)
+    assert get_ids(results) == [m4]
+
+
+def test_xrange(r: redis.Redis):
+    m = r.xadd('stream1', {"foo": "bar"})
+    assert r.xrange('stream1') == [(m, {b"foo": b"bar"}), ]
+
+    stream = 'stream2'
+    m = testtools.raw_command(r, 'xadd', stream, '*', b'field', b'value', b'foo', b'bar')
+    results = r.xrevrange(stream)
+
+    assert results == [(m, {b'field': b'value', b'foo': b'bar'}), ]
+
+    stream = "stream"
+    m1 = r.xadd(stream, {"foo": "bar"})
+    m2 = r.xadd(stream, {"foo": "bar"})
+    m3 = r.xadd(stream, {"foo": "bar"})
+    m4 = r.xadd(stream, {"foo": "bar"})
+
+    results = r.xrange(stream, min=m1)
+    assert get_ids(results) == [m1, m2, m3, m4]
+
+    results = r.xrange(stream, min=m2, max=m3)
+    assert get_ids(results) == [m2, m3]
+
+    results = r.xrange(stream, max=m3)
+    assert get_ids(results) == [m1, m2, m3]
+
+    results = r.xrange(stream, max=m2, count=1)
+    assert get_ids(results) == [m1]
+
+
+def get_stream_message(client, stream, message_id):
+    """Fetch a stream message and format it as a (message_id, fields) pair"""
+    response = client.xrange(stream, min=message_id, max=message_id)
+    assert len(response) == 1
+    return response[0]
+
+
+def test_xread(r: redis.Redis):
+    stream = "stream"
+    m1 = r.xadd(stream, {"foo": "bar"})
+    m2 = r.xadd(stream, {"bing": "baz"})
+
+    expected = [
+        [
+            stream.encode(),
+            [get_stream_message(r, stream, m1), get_stream_message(r, stream, m2)],
+        ]
+    ]
+    # xread starting at 0 returns both messages
+    assert r.xread(streams={stream: 0}) == expected
+
+    expected = [[stream.encode(), [get_stream_message(r, stream, m1)]]]
+    # xread starting at 0 and count=1 returns only the first message
+    assert r.xread(streams={stream: 0}, count=1) == expected
+
+    expected = [[stream.encode(), [get_stream_message(r, stream, m2)]]]
+    # xread starting at m1 returns only the second message
+    assert r.xread(streams={stream: m1}) == expected
+
+    # xread starting at the last message returns an empty list
+    assert r.xread(streams={stream: m2}) == []
+
+
+def test_xread_bad_commands(r: redis.Redis):
+    with pytest.raises(redis.ResponseError) as exc_info:
+        testtools.raw_command(r, 'xread', 'foo', '11-1')
+    print(exc_info)
+    with pytest.raises(redis.ResponseError) as ex2:
+        testtools.raw_command(r, 'xread', 'streams', 'foo', )
+    print(ex2)
+
+
+def test_xdel(r: redis.Redis):
+    stream = "stream"
+
+    # deleting from an empty stream doesn't do anything
+    assert r.xdel(stream, 1) == 0
+
+    m1 = r.xadd(stream, {"foo": "bar"})
+    m2 = r.xadd(stream, {"foo": "bar"})
+    m3 = r.xadd(stream, {"foo": "bar"})
+
+    # xdel returns the number of deleted elements
+    assert r.xdel(stream, m1) == 1
+    assert r.xdel(stream, m2, m3) == 2
+
+    with pytest.raises(redis.ResponseError) as ex:
+        testtools.raw_command(r, 'XDEL', stream)
+    assert ex.value.args[0] == msgs.WRONG_ARGS_MSG6.format('xdel')[4:]
+    assert r.xdel('non-existing-key', '1-1') == 0
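The MAXLEN/MINID trimming rules exercised by the stream tests above can be modeled in plain Python. This is an illustrative sketch of the command's semantics only, not fakeredis's actual implementation; `xtrim_model` and its `(id, fields)` representation are hypothetical names introduced here:

```python
def xtrim_model(entries, maxlen=None, minid=None):
    """Model of XTRIM: entries is a list of (id, fields) pairs in id order.
    Exactly one of maxlen/minid may be given.
    Returns (trimmed_count, remaining_entries)."""
    if (maxlen is None) == (minid is None):
        raise ValueError("specify exactly one of maxlen or minid")
    if maxlen is not None:
        # MAXLEN keeps the newest maxlen entries; a maxlen larger than the
        # stream length trims nothing (as in test_xtrim above).
        keep = entries[-maxlen:] if maxlen > 0 else []
    else:
        # MINID drops entries whose id is strictly smaller than minid.
        keep = [e for e in entries if e[0] >= minid]
    return len(entries) - len(keep), keep
```

For the four-entry stream built by `add_items(r, stream, 4)`, trimming to `maxlen=3` removes one entry and `minid` at the third id removes two, mirroring the assertions in `test_xtrim` and `test_xtrim_minlen_and_length_args`.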
diff --git a/test/test_mixins/test_string_commands.py b/test/test_mixins/test_string_commands.py
new file mode 100644
index 0000000..5698536
--- /dev/null
+++ b/test/test_mixins/test_string_commands.py
@@ -0,0 +1,529 @@
+from __future__ import annotations
+
+import time
+from datetime import timedelta
+
+import pytest
+import redis
+import redis.client
+from redis.exceptions import ResponseError
+
+from ..testtools import raw_command
+
+
+def test_append(r: redis.Redis):
+    assert r.set('foo', 'bar')
+    assert r.append('foo', 'baz') == 6
+    assert r.get('foo') == b'barbaz'
+
+
+def test_append_with_no_preexisting_key(r: redis.Redis):
+    assert r.append('foo', 'bar') == 3
+    assert r.get('foo') == b'bar'
+
+
+def test_append_wrong_type(r: redis.Redis):
+    r.rpush('foo', b'x')
+    with pytest.raises(redis.ResponseError):
+        r.append('foo', b'x')
+
+
+def test_decr(r: redis.Redis):
+    r.set('foo', 10)
+    assert r.decr('foo') == 9
+    assert r.get('foo') == b'9'
+
+
+def test_decr_newkey(r: redis.Redis):
+    r.decr('foo')
+    assert r.get('foo') == b'-1'
+
+
+def test_decr_expiry(r: redis.Redis):
+    r.set('foo', 10, ex=10)
+    r.decr('foo', 5)
+    assert r.ttl('foo') > 0
+
+
+def test_decr_badtype(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.decr('foo', 15)
+    r.rpush('foo2', 1)
+    with pytest.raises(redis.ResponseError):
+        r.decr('foo2', 15)
+
+
+def test_get_does_not_exist(r: redis.Redis):
+    assert r.get('foo') is None
+
+
+def test_get_with_non_str_keys(r: redis.Redis):
+    assert r.set('2', 'bar') is True
+    assert r.get(2) == b'bar'
+
+
+def test_get_invalid_type(r: redis.Redis):
+    assert r.hset('foo', 'key', 'value') == 1
+    with pytest.raises(redis.ResponseError):
+        r.get('foo')
+
+
+def test_getset_exists(r: redis.Redis):
+    r.set('foo', 'bar')
+    val = r.getset('foo', b'baz')
+    assert val == b'bar'
+    val = r.getset('foo', b'baz2')
+    assert val == b'baz'
+
+
+def test_getset_wrong_type(r: redis.Redis):
+    r.rpush('foo', b'x')
+    with pytest.raises(redis.ResponseError):
+        r.getset('foo', 'bar')
+
+
+def test_getdel(r: redis.Redis):
+    r['foo'] = 'bar'
+    assert r.getdel('foo') == b'bar'
+    assert r.get('foo') is None
+
+
+def test_getdel_doesnt_exist(r: redis.Redis):
+    assert r.getdel('foo') is None
+
+
+def test_incr_with_no_preexisting_key(r: redis.Redis):
+    assert r.incr('foo') == 1
+    assert r.incr('bar', 2) == 2
+
+
+def test_incr_by(r: redis.Redis):
+    assert r.incrby('foo') == 1
+    assert r.incrby('bar', 2) == 2
+
+
+def test_incr_preexisting_key(r: redis.Redis):
+    r.set('foo', 15)
+    assert r.incr('foo', 5) == 20
+    assert r.get('foo') == b'20'
+
+
+def test_incr_expiry(r: redis.Redis):
+    r.set('foo', 15, ex=10)
+    r.incr('foo', 5)
+    assert r.ttl('foo') > 0
+
+
+def test_incr_bad_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.incr('foo', 15)
+    r.rpush('foo2', 1)
+    with pytest.raises(redis.ResponseError):
+        r.incr('foo2', 15)
+
+
+def test_incr_with_float(r: redis.Redis):
+    with pytest.raises(redis.ResponseError):
+        r.incr('foo', 2.0)
+
+
+def test_incr_followed_by_mget(r: redis.Redis):
+    r.set('foo', 15)
+    assert r.incr('foo', 5) == 20
+    assert r.get('foo') == b'20'
+
+
+def test_incr_followed_by_mget_returns_strings(r: redis.Redis):
+    r.incr('foo', 1)
+    assert r.mget(['foo']) == [b'1']
+
+
+def test_incrbyfloat(r: redis.Redis):
+    r.set('foo', 0)
+    assert r.incrbyfloat('foo', 1.0) == 1.0
+    assert r.incrbyfloat('foo', 1.0) == 2.0
+
+
+def test_incrbyfloat_with_noexist(r: redis.Redis):
+    assert r.incrbyfloat('foo', 1.0) == 1.0
+    assert r.incrbyfloat('foo', 1.0) == 2.0
+
+
+def test_incrbyfloat_expiry(r: redis.Redis):
+    r.set('foo', 1.5, ex=10)
+    r.incrbyfloat('foo', 2.5)
+    assert r.ttl('foo') > 0
+
+
+def test_incrbyfloat_bad_type(r: redis.Redis):
+    r.set('foo', 'bar')
+    with pytest.raises(redis.ResponseError, match='not a valid float'):
+        r.incrbyfloat('foo', 1.0)
+    r.rpush('foo2', 1)
+    with pytest.raises(redis.ResponseError):
+        r.incrbyfloat('foo2', 1.0)
+
+
+def test_incrbyfloat_precision(r: redis.Redis):
+    x = 1.23456789123456789
+    assert r.incrbyfloat('foo', x) == x
+    assert float(r.get('foo')) == x
+
+
+def test_mget(r: redis.Redis):
+    r.set('foo', 'one')
+    r.set('bar', 'two')
+    assert r.mget(['foo', 'bar']) == [b'one', b'two']
+    assert r.mget(['foo', 'bar', 'baz']) == [b'one', b'two', None]
+    assert r.mget('foo', 'bar') == [b'one', b'two']
+
+
+def test_mget_with_no_keys(r: redis.Redis):
+    assert r.mget([]) == []
+
+
+def test_mget_mixed_types(r: redis.Redis):
+    r.hset('hash', 'bar', 'baz')
+    r.zadd('zset', {'bar': 1})
+    r.sadd('set', 'member')
+    r.rpush('list', 'item1')
+    r.set('string', 'value')
+    assert (
+            r.mget(['hash', 'zset', 'set', 'string', 'absent'])
+            == [None, None, None, b'value', None]
+    )
+
+
+def test_mset_with_no_keys(r: redis.Redis):
+    with pytest.raises(redis.ResponseError):
+        r.mset({})
+
+
+def test_mset(r: redis.Redis):
+    assert r.mset({'foo': 'one', 'bar': 'two'}) is True
+    assert r.mset({'foo': 'one', 'bar': 'two'}) is True
+    assert r.mget('foo', 'bar') == [b'one', b'two']
+
+
+def test_msetnx(r: redis.Redis):
+    assert r.msetnx({'foo': 'one', 'bar': 'two'}) is True
+    assert r.msetnx({'bar': 'two', 'baz': 'three'}) is False
+    assert r.mget('foo', 'bar', 'baz') == [b'one', b'two', None]
+
+
+def test_setex(r: redis.Redis):
+    assert r.setex('foo', 100, 'bar') is True
+    assert r.get('foo') == b'bar'
+
+
+def test_setex_using_timedelta(r: redis.Redis):
+    assert r.setex('foo', timedelta(seconds=100), 'bar') is True
+    assert r.get('foo') == b'bar'
+
+
+def test_setex_using_float(r: redis.Redis):
+    with pytest.raises(redis.ResponseError, match='integer'):
+        r.setex('foo', 1.2, 'bar')
+
+
+@pytest.mark.min_server('6.2')
+def test_setex_overflow(r: redis.Redis):
+    with pytest.raises(ResponseError):
+        r.setex('foo', 18446744073709561, 'bar')  # Overflows longlong in ms
+
+
+def test_set_ex(r: redis.Redis):
+    assert r.set('foo', 'bar', ex=100) is True
+    assert r.get('foo') == b'bar'
+
+
+def test_set_ex_using_timedelta(r: redis.Redis):
+    assert r.set('foo', 'bar', ex=timedelta(seconds=100)) is True
+    assert r.get('foo') == b'bar'
+
+
+def test_set_ex_overflow(r: redis.Redis):
+    with pytest.raises(ResponseError):
+        r.set('foo', 'bar', ex=18446744073709561)  # Overflows longlong in ms
+
+
+def test_set_px_overflow(r: redis.Redis):
+    with pytest.raises(ResponseError):
+        r.set('foo', 'bar', px=2 ** 63 - 2)  # Overflows after adding current time
+
+
+def test_set_px(r: redis.Redis):
+    assert r.set('foo', 'bar', px=100) is True
+    assert r.get('foo') == b'bar'
+
+
+def test_set_px_using_timedelta(r: redis.Redis):
+    assert r.set('foo', 'bar', px=timedelta(milliseconds=100)) is True
+    assert r.get('foo') == b'bar'
+
+
+def test_set_conflicting_expire_options(r: redis.Redis):
+    with pytest.raises(ResponseError):
+        r.set('foo', 'bar', ex=1, px=1)
+
+
+def test_set_raises_wrong_ex(r: redis.Redis):
+    with pytest.raises(ResponseError):
+        r.set('foo', 'bar', ex=-100)
+    with pytest.raises(ResponseError):
+        r.set('foo', 'bar', ex=0)
+    assert not r.exists('foo')
+
+
+def test_set_using_timedelta_raises_wrong_ex(r: redis.Redis):
+    with pytest.raises(ResponseError):
+        r.set('foo', 'bar', ex=timedelta(seconds=-100))
+    with pytest.raises(ResponseError):
+        r.set('foo', 'bar', ex=timedelta(seconds=0))
+    assert not r.exists('foo')
+
+
+def test_set_raises_wrong_px(r: redis.Redis):
+    with pytest.raises(ResponseError):
+        r.set('foo', 'bar', px=-100)
+    with pytest.raises(ResponseError):
+        r.set('foo', 'bar', px=0)
+    assert not r.exists('foo')
+
+
+def test_set_using_timedelta_raises_wrong_px(r: redis.Redis):
+    with pytest.raises(ResponseError):
+        r.set('foo', 'bar', px=timedelta(milliseconds=-100))
+    with pytest.raises(ResponseError):
+        r.set('foo', 'bar', px=timedelta(milliseconds=0))
+    assert not r.exists('foo')
+
+
+def test_setex_raises_wrong_ex(r: redis.Redis):
+    with pytest.raises(ResponseError):
+        r.setex('foo', -100, 'bar')
+    with pytest.raises(ResponseError):
+        r.setex('foo', 0, 'bar')
+    assert not r.exists('foo')
+
+
+def test_setex_using_timedelta_raises_wrong_ex(r: redis.Redis):
+    with pytest.raises(ResponseError):
+        r.setex('foo', timedelta(seconds=-100), 'bar')
+    with pytest.raises(ResponseError):
+        r.setex('foo', timedelta(seconds=0), 'bar')
+    assert not r.exists('foo')
+
+
+def test_setnx(r: redis.Redis):
+    assert r.setnx('foo', 'bar') is True
+    assert r.get('foo') == b'bar'
+    assert r.setnx('foo', 'baz') is False
+    assert r.get('foo') == b'bar'
+
+
+def test_set_nx(r: redis.Redis):
+    assert r.set('foo', 'bar', nx=True) is True
+    assert r.get('foo') == b'bar'
+    assert r.set('foo', 'bar', nx=True) is None
+    assert r.get('foo') == b'bar'
+
+
+def test_set_xx(r: redis.Redis):
+    assert r.set('foo', 'bar', xx=True) is None
+    r.set('foo', 'bar')
+    assert r.set('foo', 'bar', xx=True) is True
+
+
+@pytest.mark.min_server('6.2')
+def test_set_get(r: redis.Redis):
+    assert raw_command(r, 'set', 'foo', 'bar', 'GET') is None
+    assert r.get('foo') == b'bar'
+    assert raw_command(r, 'set', 'foo', 'baz', 'GET') == b'bar'
+    assert r.get('foo') == b'baz'
+
+
+@pytest.mark.min_server('6.2')
+def test_set_get_xx(r: redis.Redis):
+    assert raw_command(r, 'set', 'foo', 'bar', 'XX', 'GET') is None
+    assert r.get('foo') is None
+    r.set('foo', 'bar')
+    assert raw_command(r, 'set', 'foo', 'baz', 'XX', 'GET') == b'bar'
+    assert r.get('foo') == b'baz'
+    assert raw_command(r, 'set', 'foo', 'baz', 'GET') == b'baz'
+
+
+@pytest.mark.min_server('6.2')
+@pytest.mark.max_server('6.2.7')
+def test_set_get_nx_redis6(r: redis.Redis):
+    # Note: this will most likely fail on a 7.0 server, based on the docs for SET
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, 'set', 'foo', 'bar', 'NX', 'GET')
+
+
+@pytest.mark.min_server('7')
+def test_set_get_nx_redis7(r: redis.Redis):
+    # On Redis 7+, NX and GET may be combined; GET returns nil when the key did not exist
+    assert raw_command(r, 'set', 'foo', 'bar', 'NX', 'GET') is None
+
+
+@pytest.mark.min_server('6.2')
+def test_set_get_wrongtype(r: redis.Redis):
+    r.lpush('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, 'set', 'foo', 'bar', 'GET')
+
+
+def test_substr(r: redis.Redis):
+    r['foo'] = 'one_two_three'
+    assert r.substr('foo', 0) == b'one_two_three'
+    assert r.substr('foo', 0, 2) == b'one'
+    assert r.substr('foo', 4, 6) == b'two'
+    assert r.substr('foo', -5) == b'three'
+    assert r.substr('foo', -4, -5) == b''
+    assert r.substr('foo', -5, -3) == b'thr'
+
+
+def test_substr_noexist_key(r: redis.Redis):
+    assert r.substr('foo', 0) == b''
+    assert r.substr('foo', 10) == b''
+    assert r.substr('foo', -5, -1) == b''
+
+
+def test_substr_wrong_type(r: redis.Redis):
+    r.rpush('foo', b'x')
+    with pytest.raises(redis.ResponseError):
+        r.substr('foo', 0)
+
+
+def test_strlen(r: redis.Redis):
+    r['foo'] = 'bar'
+
+    assert r.strlen('foo') == 3
+    assert r.strlen('noexists') == 0
+
+
+def test_strlen_wrong_type(r: redis.Redis):
+    r.rpush('foo', b'x')
+    with pytest.raises(redis.ResponseError):
+        r.strlen('foo')
+
+
+def test_setrange(r: redis.Redis):
+    r.set('foo', 'test')
+    assert r.setrange('foo', 1, 'aste') == 5
+    assert r.get('foo') == b'taste'
+
+    r.set('foo', 'test')
+    assert r.setrange('foo', 1, 'a') == 4
+    assert r.get('foo') == b'tast'
+
+    assert r.setrange('bar', 2, 'test') == 6
+    assert r.get('bar') == b'\x00\x00test'
+
+
+def test_setrange_expiry(r: redis.Redis):
+    r.set('foo', 'test', ex=10)
+    r.setrange('foo', 1, 'aste')
+    assert r.ttl('foo') > 0
+
+
+def test_large_command(r: redis.Redis):
+    r.set('foo', 'bar' * 10000)
+    assert r.get('foo') == b'bar' * 10000
+
+
+def test_saving_non_ascii_chars_as_value(r: redis.Redis):
+    assert r.set('foo', 'Ñandu') is True
+    assert r.get('foo') == 'Ñandu'.encode()
+
+
+def test_saving_unicode_type_as_value(r: redis.Redis):
+    assert r.set('foo', 'Ñandu') is True
+    assert r.get('foo') == 'Ñandu'.encode()
+
+
+def test_saving_non_ascii_chars_as_key(r: redis.Redis):
+    assert r.set('Ñandu', 'foo') is True
+    assert r.get('Ñandu') == b'foo'
+
+
+def test_saving_unicode_type_as_key(r: redis.Redis):
+    assert r.set('Ñandu', 'foo') is True
+    assert r.get('Ñandu') == b'foo'
+
+
+def test_future_newbytes(r: redis.Redis):
+    # bytes = pytest.importorskip('builtins', reason='future.types not available').bytes
+    r.set(bytes(b'\xc3\x91andu'), 'foo')
+    assert r.get('Ñandu') == b'foo'
+
+
+def test_future_newstr(r: redis.Redis):
+    # str = pytest.importorskip('builtins', reason='future.types not available').str
+    r.set(str('Ñandu'), 'foo')
+    assert r.get('Ñandu') == b'foo'
+
+
+def test_setitem_getitem(r: redis.Redis):
+    assert r.keys() == []
+    r['foo'] = 'bar'
+    assert r['foo'] == b'bar'
+
+
+def test_getitem_non_existent_key(r: redis.Redis):
+    assert r.keys() == []
+    assert 'noexists' not in r.keys()
+
+
+@pytest.mark.slow
+def test_getex(r: redis.Redis):
+    # Exceptions
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, 'getex', 'foo', 'px', 1000, 'ex', 1)
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, 'getex', 'foo', 'dsac', 1000, 'ex', 1)
+
+    r.set('foo', 'val')
+    assert r.getex('foo', ex=1) == b'val'
+    time.sleep(1.5)
+    assert r.get('foo') is None
+
+    r.set('foo2', 'val')
+    assert r.getex('foo2', px=1000) == b'val'
+    time.sleep(1.5)
+    assert r.get('foo2') is None
+
+    r.set('foo4', 'val')
+    r.getex('foo4', exat=int(time.time() + 1))
+    time.sleep(1.5)
+    assert r.get('foo4') is None
+
+    r.set('foo2', 'val')
+    r.getex('foo2', pxat=int(time.time() + 1) * 1000)
+    time.sleep(1.5)
+    assert r.get('foo2') is None
+
+    r.setex('foo5', 1, 'val')
+    r.getex('foo5', persist=True)
+    assert r.ttl('foo5') == -1
+    time.sleep(1.5)
+    assert r.get('foo5') == b'val'
+
+
+@pytest.mark.min_server('7')
+def test_lcs(r: redis.Redis):
+    r.mset({"key1": "ohmytext", "key2": "mynewtext"})
+    assert r.lcs('key1', 'key2') == b'mytext'
+    assert r.lcs('key1', 'key2', len=True) == 6
+
+    assert (r.lcs("key1", "key2", idx=True, minmatchlen=3, withmatchlen=True)
+            == [b"matches", [[[4, 7], [5, 8], 4]], b"len", 6])
+    assert r.lcs("key1", "key2", idx=True, minmatchlen=3) == [b"matches", [[[4, 7], [5, 8]]], b"len", 6]
+
+    with pytest.raises(redis.ResponseError):
+        assert r.lcs("key1", "key2", len=True, idx=True)
+    with pytest.raises(redis.ResponseError):
+        raw_command(r, 'lcs', "key1", "key2", 'not_supported_arg')
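The NX/XX/GET interplay checked in the `test_set_*` cases above can be summarized with a small dictionary-backed model. This is a hedged sketch of SET's option semantics (matching Redis 7 behaviour, where NX and GET may be combined), not the fakeredis implementation; `set_with_options` is a name invented for this example:

```python
def set_with_options(db, key, value, nx=False, xx=False, get=False):
    """Model of Redis SET with NX/XX/GET against a plain dict.
    Returns the old value when get=True, else True on success or
    None when NX/XX prevented the write."""
    old = db.get(key)
    exists = key in db
    if (nx and exists) or (xx and not exists):
        # Write refused; with GET, the old value is still returned.
        return old if get else None
    db[key] = value
    return old if get else True
```

This reproduces the pattern in `test_set_nx`, `test_set_xx`, and `test_set_get_xx`: XX on a missing key and NX on an existing key both return `None`, while GET always reports the previous value.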
diff --git a/test/test_mixins/test_transactions_commands.py b/test/test_mixins/test_transactions_commands.py
new file mode 100644
index 0000000..b6fd336
--- /dev/null
+++ b/test/test_mixins/test_transactions_commands.py
@@ -0,0 +1,309 @@
+from __future__ import annotations
+
+import pytest
+import redis
+import redis.client
+
+import fakeredis
+from .. import testtools
+
+
+def test_multiple_successful_watch_calls(r: redis.Redis):
+    p = r.pipeline()
+    p.watch('bam')
+    p.multi()
+    p.set('foo', 'bar')
+    # Check that the watched keys buffer has been emptied.
+    p.execute()
+
+    # bam is no longer being watched, so it's ok to modify
+    # it now.
+    p.watch('foo')
+    r.set('bam', 'boo')
+    p.multi()
+    p.set('foo', 'bats')
+    assert p.execute() == [True]
+
+
+def test_watch_state_is_cleared_after_abort(r: redis.Redis):
+    # redis-py's pipeline handling and connection pooling interferes with this
+    # test, so raw commands are used instead.
+    testtools.raw_command(r, 'watch', 'foo')
+    testtools.raw_command(r, 'multi')
+    with pytest.raises(redis.ResponseError):
+        testtools.raw_command(r, 'mget')  # Wrong number of arguments
+    with pytest.raises(redis.exceptions.ExecAbortError):
+        testtools.raw_command(r, 'exec')
+
+    testtools.raw_command(r, 'set', 'foo', 'bar')  # Should NOT trigger the watch from earlier
+    testtools.raw_command(r, 'multi')
+    testtools.raw_command(r, 'set', 'abc', 'done')
+    testtools.raw_command(r, 'exec')
+
+    assert r.get('abc') == b'done'
+
+
+def test_pipeline_transaction_shortcut(r: redis.Redis):
+    # This example taken pretty much from the redis-py documentation.
+    r.set('OUR-SEQUENCE-KEY', 13)
+    calls = []
+
+    def client_side_incr(pipe):
+        calls.append((pipe,))
+        current_value = pipe.get('OUR-SEQUENCE-KEY')
+        next_value = int(current_value) + 1
+
+        if len(calls) < 3:
+            # Simulate a change from another thread.
+            r.set('OUR-SEQUENCE-KEY', next_value)
+
+        pipe.multi()
+        pipe.set('OUR-SEQUENCE-KEY', next_value)
+
+    res = r.transaction(client_side_incr, 'OUR-SEQUENCE-KEY')
+
+    assert res == [True]
+    assert int(r.get('OUR-SEQUENCE-KEY')) == 16
+    assert len(calls) == 3
+
+
+def test_pipeline_transaction_value_from_callable(r: redis.Redis):
+    def callback(pipe):
+        # No need to do anything here since we only want the return value
+        return 'OUR-RETURN-VALUE'
+
+    res = r.transaction(callback, 'OUR-SEQUENCE-KEY', value_from_callable=True)
+    assert res == 'OUR-RETURN-VALUE'
+
+
+def test_pipeline_empty(r: redis.Redis):
+    p = r.pipeline()
+    assert len(p) == 0
+
+
+def test_pipeline_length(r: redis.Redis):
+    p = r.pipeline()
+    p.set('baz', 'quux').get('baz')
+    assert len(p) == 2
+
+
+def test_pipeline_no_commands(r: redis.Redis):
+    # Prior to 3.4, redis-py's execute is a nop if there are no commands
+    # queued, so it succeeds even if watched keys have been changed.
+    r.set('foo', '1')
+    p = r.pipeline()
+    p.watch('foo')
+    r.set('foo', '2')
+    with pytest.raises(redis.WatchError):
+        p.execute()
+
+
+def test_pipeline_failed_transaction(r: redis.Redis):
+    p = r.pipeline()
+    p.multi()
+    p.set('foo', 'bar')
+    # Deliberately induce a syntax error
+    p.execute_command('set')
+    # It should be an ExecAbortError, but redis-py tries to DISCARD after the
+    # failed EXEC, which raises a ResponseError.
+    with pytest.raises(redis.ResponseError):
+        p.execute()
+    assert not r.exists('foo')
+
+
+def test_pipeline_srem_no_change(r: redis.Redis):
+    # A regression test for a case picked up by hypothesis tests.
+    p = r.pipeline()
+    p.watch('foo')
+    r.srem('foo', 'bar')
+    p.multi()
+    p.set('foo', 'baz')
+    p.execute()
+    assert r.get('foo') == b'baz'
+
+
+# The behaviour changed in redis 6.0 (see https://github.com/redis/redis/issues/6594).
+@pytest.mark.min_server('6.0')
+def test_pipeline_move(r: redis.Redis):
+    # A regression test for a case picked up by hypothesis tests.
+    r.set('foo', 'bar')
+    p = r.pipeline()
+    p.watch('foo')
+    r.move('foo', 1)
+    # Ensure the transaction isn't empty, which had different behaviour in
+    # older versions of redis-py.
+    p.multi()
+    p.set('bar', 'baz')
+    with pytest.raises(redis.exceptions.WatchError):
+        p.execute()
+
+
+@pytest.mark.min_server('6.0.6')
+def test_exec_bad_arguments(r: redis.Redis):
+    # Redis 6.0.6 changed the behaviour of exec so that it always fails with
+    # EXECABORT, even when it's just bad syntax.
+    with pytest.raises(redis.exceptions.ExecAbortError):
+        r.execute_command('exec', 'blahblah')
+
+
+@pytest.mark.min_server('6.0.6')
+def test_exec_bad_arguments_abort(r: redis.Redis):
+    r.execute_command('multi')
+    with pytest.raises(redis.exceptions.ExecAbortError):
+        r.execute_command('exec', 'blahblah')
+    # Should have aborted the transaction, so we can run another one
+    p = r.pipeline()
+    p.multi()
+    p.set('bar', 'baz')
+    p.execute()
+    assert r.get('bar') == b'baz'
+
+
+def test_pipeline(r: redis.Redis):
+    # The pipeline method returns an object for
+    # issuing multiple commands in a batch.
+    p = r.pipeline()
+    p.watch('bam')
+    p.multi()
+    p.set('foo', 'bar').get('foo')
+    p.lpush('baz', 'quux')
+    p.lpush('baz', 'quux2').lrange('baz', 0, -1)
+    res = p.execute()
+
+    # Check return values returned as list.
+    assert res == [True, b'bar', 1, 2, [b'quux2', b'quux']]
+
+    # Check side effects happened as expected.
+    assert r.lrange('baz', 0, -1) == [b'quux2', b'quux']
+
+    # Check that the command buffer has been emptied.
+    assert p.execute() == []
+
+
+def test_pipeline_ignore_errors(r: redis.Redis):
+    """Test the pipeline ignoring errors when asked."""
+    with r.pipeline() as p:
+        p.set('foo', 'bar')
+        p.rename('baz', 'bats')
+        with pytest.raises(redis.exceptions.ResponseError):
+            p.execute()
+        assert [] == p.execute()
+    with r.pipeline() as p:
+        p.set('foo', 'bar')
+        p.rename('baz', 'bats')
+        res = p.execute(raise_on_error=False)
+
+        assert [] == p.execute()
+
+        assert len(res) == 2
+        assert isinstance(res[1], redis.exceptions.ResponseError)
+
+
+def test_pipeline_non_transactional(r: redis.Redis):
+    # For our simple-minded model I don't think
+    # there is any observable difference.
+    p = r.pipeline(transaction=False)
+    res = p.set('baz', 'quux').get('baz').execute()
+
+    assert res == [True, b'quux']
+
+
+def test_pipeline_raises_when_watched_key_changed(r: redis.Redis):
+    r.set('foo', 'bar')
+    r.rpush('greet', 'hello')
+    p = r.pipeline()
+    try:
+        p.watch('greet', 'foo')
+        nextf = bytes(p.get('foo')) + b'baz'
+        # Simulate change happening on another thread.
+        r.rpush('greet', 'world')
+        # Begin pipelining.
+        p.multi()
+        p.set('foo', nextf)
+
+        with pytest.raises(redis.WatchError):
+            p.execute()
+    finally:
+        p.reset()
+
+
+def test_pipeline_succeeds_despite_unwatched_key_changed(r: redis.Redis):
+    # Same setup as before except for the params to the WATCH command.
+    r.set('foo', 'bar')
+    r.rpush('greet', 'hello')
+    p = r.pipeline()
+    try:
+        # Only watch one of the 2 keys.
+        p.watch('foo')
+        nextf = bytes(p.get('foo')) + b'baz'
+        # Simulate change happening on another thread.
+        r.rpush('greet', 'world')
+        p.multi()
+        p.set('foo', nextf)
+        p.execute()
+
+        # Check the commands were executed.
+        assert r.get('foo') == b'barbaz'
+    finally:
+        p.reset()
+
+
+def test_pipeline_succeeds_when_watching_nonexistent_key(r: redis.Redis):
+    r.set('foo', 'bar')
+    r.rpush('greet', 'hello')
+    p = r.pipeline()
+    try:
+        # Also watch a nonexistent key.
+        p.watch('foo', 'bam')
+        nextf = bytes(p.get('foo')) + b'baz'
+        # Simulate change happening on another thread.
+        r.rpush('greet', 'world')
+        p.multi()
+        p.set('foo', nextf)
+        p.execute()
+
+        # Check the commands were executed.
+        assert r.get('foo') == b'barbaz'
+    finally:
+        p.reset()
+
+
+def test_watch_state_is_cleared_across_multiple_watches(r: redis.Redis):
+    r.set('foo', 'one')
+    r.set('bar', 'baz')
+    p = r.pipeline()
+
+    try:
+        p.watch('foo')
+        # Simulate change happening on another thread.
+        r.set('foo', 'three')
+        p.multi()
+        p.set('foo', 'three')
+        with pytest.raises(redis.WatchError):
+            p.execute()
+
+        # Now watch another key.  It should be ok to change
+        # foo as we're no longer watching it.
+        p.watch('bar')
+        r.set('foo', 'four')
+        p.multi()
+        p.set('bar', 'five')
+        assert p.execute() == [True]
+    finally:
+        p.reset()
+
+
+@pytest.mark.fake
+def test_socket_cleanup_watch(fake_server):
+    r1 = fakeredis.FakeStrictRedis(server=fake_server)
+    r2 = fakeredis.FakeStrictRedis(server=fake_server)
+    pipeline = r1.pipeline(transaction=False)
+    # This needs some poking into redis-py internals to ensure that we reach
+    # FakeSocket._cleanup. We need to close the socket while there is still
+    # a watch in place, but not allow it to be garbage collected (hence we
+    # set 'sock' even though it is unused).
+    with pipeline:
+        pipeline.watch('test')
+        sock = pipeline.connection._sock  # noqa: F841
+        pipeline.connection.disconnect()
+    r2.set('test', 'foo')
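The WATCH/MULTI/EXEC tests above all rely on the same optimistic-locking rule: EXEC aborts if a watched key was written between WATCH and EXEC, and each WATCH starts fresh. A minimal pure-Python model of that rule (an illustrative sketch with invented names, not how fakeredis tracks watches internally):

```python
class WatchError(Exception):
    pass


class MiniStore:
    """Toy model of WATCH/MULTI/EXEC optimistic locking."""

    def __init__(self):
        self.data = {}
        self.versions = {}  # key -> write counter

    def set(self, key, value):
        self.data[key] = value
        self.versions[key] = self.versions.get(key, 0) + 1

    def watch(self, *keys):
        # Snapshot versions at WATCH time; any later write bumps them.
        return {k: self.versions.get(k, 0) for k in keys}

    def execute(self, watched, commands):
        # EXEC fails if any watched key changed since the snapshot.
        if any(self.versions.get(k, 0) != v for k, v in watched.items()):
            raise WatchError("watched key changed")
        for key, value in commands:
            self.set(key, value)
        return [True] * len(commands)
```

As in `test_watch_state_is_cleared_across_multiple_watches`, a write to a watched key aborts the transaction, while a fresh watch on another key succeeds even though the previously watched key keeps changing.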
diff --git a/test/test_mock.py b/test/test_mock.py
index 3714e73..f5e3c26 100644
--- a/test/test_mock.py
+++ b/test/test_mock.py
@@ -1,24 +1,17 @@
-import redis
-
-
-def connect_redis_conn(redis_host: str, redis_port: int) -> redis.Redis:
-    redis_con = redis.Redis(redis_host, redis_port)
-    return redis_con
-
-
-def bar():
-    redis_con = connect_redis_conn('localhost', 6000)
-    pass
+from unittest.mock import patch
 
+import redis
 
-from unittest.mock import patch
 from fakeredis import FakeRedis
 
 
-def test_bar():
+def test_mock():
     # Mock Redis connection
+    def bar(redis_host: str, redis_port: int):
+        redis.Redis(redis_host, redis_port)
+
     with patch('redis.Redis', FakeRedis):
         # Call function
-        bar()
+        bar('localhost', 6000)
 
         # Related to #36 - this should fail if mocking Redis does not work
diff --git a/test/test_redis_asyncio.py b/test/test_redis_asyncio.py
new file mode 100644
index 0000000..e59e8ac
--- /dev/null
+++ b/test/test_redis_asyncio.py
@@ -0,0 +1,435 @@
+import asyncio
+import re
+import sys
+
+from test.conftest import _marker_version_value
+
+if sys.version_info >= (3, 11):
+    from asyncio import timeout as async_timeout
+else:
+    from async_timeout import timeout as async_timeout
+import pytest
+import pytest_asyncio
+import redis
+import redis.asyncio
+from packaging.version import Version
+
+from fakeredis import FakeServer, aioredis
+from test import testtools
+
+pytestmark = [
+]
+fake_only = pytest.mark.parametrize(
+    'req_aioredis2',
+    [pytest.param('fake', marks=pytest.mark.fake)],
+    indirect=True
+)
+pytestmark.extend([
+    pytest.mark.asyncio,
+])
+
+
+@pytest_asyncio.fixture(
+    name='req_aioredis2',
+    params=[
+        pytest.param('fake', marks=pytest.mark.fake),
+        pytest.param('real', marks=pytest.mark.real)
+    ]
+)
+async def _req_aioredis2(request) -> redis.asyncio.Redis:
+    server_version = request.getfixturevalue('real_redis_version')
+    min_server_marker = _marker_version_value(request, 'min_server')
+    max_server_marker = _marker_version_value(request, 'max_server')
+    if Version(server_version) < min_server_marker:
+        pytest.skip(f'Redis server {min_server_marker.base_version} or more required but {server_version} found')
+    if Version(server_version) > max_server_marker:
+        pytest.skip(f'Redis server {max_server_marker.base_version} or less required but {server_version} found')
+    if request.param == 'fake':
+        fake_server = request.getfixturevalue('fake_server')
+        ret = aioredis.FakeRedis(server=fake_server)
+    else:
+        if not server_version:
+            pytest.skip('Redis is not running')
+        ret = redis.asyncio.Redis()
+        fake_server = None
+    if not fake_server or fake_server.connected:
+        await ret.flushall()
+
+    yield ret
+
+    if not fake_server or fake_server.connected:
+        await ret.flushall()
+    await ret.connection_pool.disconnect()
+
+
+@pytest_asyncio.fixture
+async def conn(req_aioredis2: redis.asyncio.Redis):
+    """A single connection, rather than a pool."""
+    async with req_aioredis2.client() as conn:
+        yield conn
+
+
+async def test_ping(req_aioredis2: redis.asyncio.Redis):
+    pong = await req_aioredis2.ping()
+    assert pong is True
+
+
+async def test_types(req_aioredis2: redis.asyncio.Redis):
+    await req_aioredis2.hset('hash', mapping={'key1': 'value1', 'key2': 'value2', 'key3': 123})
+    result = await req_aioredis2.hgetall('hash')
+    assert result == {
+        b'key1': b'value1',
+        b'key2': b'value2',
+        b'key3': b'123'
+    }
+
+
+async def test_transaction(req_aioredis2: redis.asyncio.Redis):
+    async with req_aioredis2.pipeline(transaction=True) as tr:
+        tr.set('key1', 'value1')
+        tr.set('key2', 'value2')
+        ok1, ok2 = await tr.execute()
+    assert ok1
+    assert ok2
+    result = await req_aioredis2.get('key1')
+    assert result == b'value1'
+
+
+async def test_transaction_fail(req_aioredis2: redis.asyncio.Redis):
+    await req_aioredis2.set('foo', '1')
+    async with req_aioredis2.pipeline(transaction=True) as tr:
+        await tr.watch('foo')
+        await req_aioredis2.set('foo', '2')  # Different connection
+        tr.multi()
+        tr.get('foo')
+        with pytest.raises(redis.asyncio.WatchError):
+            await tr.execute()
+
+
+async def test_pubsub(req_aioredis2, event_loop):
+    queue = asyncio.Queue()
+
+    async def reader(ps):
+        while True:
+            message = await ps.get_message(ignore_subscribe_messages=True, timeout=5)
+            if message is not None:
+                if message.get('data') == b'stop':
+                    break
+                queue.put_nowait(message)
+
+    async with async_timeout(5), req_aioredis2.pubsub() as ps:
+        await ps.subscribe('channel')
+        task = event_loop.create_task(reader(ps))
+        await req_aioredis2.publish('channel', 'message1')
+        await req_aioredis2.publish('channel', 'message2')
+        result1 = await queue.get()
+        result2 = await queue.get()
+        assert result1 == {
+            'channel': b'channel',
+            'pattern': None,
+            'type': 'message',
+            'data': b'message1'
+        }
+        assert result2 == {
+            'channel': b'channel',
+            'pattern': None,
+            'type': 'message',
+            'data': b'message2'
+        }
+        await req_aioredis2.publish('channel', 'stop')
+        await task
+
+
+@pytest.mark.slow
+async def test_pubsub_timeout(req_aioredis2: redis.asyncio.Redis):
+    async with req_aioredis2.pubsub() as ps:
+        await ps.subscribe('channel')
+        await ps.get_message(timeout=0.5)  # Subscription message
+        message = await ps.get_message(timeout=0.5)
+        assert message is None
+
+
+@pytest.mark.slow
+async def test_pubsub_disconnect(req_aioredis2: redis.asyncio.Redis):
+    async with req_aioredis2.pubsub() as ps:
+        await ps.subscribe('channel')
+        await ps.connection.disconnect()
+        message = await ps.get_message(timeout=0.5)  # Subscription message
+        assert message is not None
+        message = await ps.get_message(timeout=0.5)
+        assert message is None
+
+
+async def test_blocking_ready(req_aioredis2, conn):
+    """Blocking command which does not need to block."""
+    await req_aioredis2.rpush('list', 'x')
+    result = await conn.blpop('list', timeout=1)
+    assert result == (b'list', b'x')
+
+
+@pytest.mark.slow
+async def test_blocking_timeout(conn):
+    """Blocking command that times out without completing."""
+    result = await conn.blpop('missing', timeout=1)
+    assert result is None
+
+
+@pytest.mark.slow
+async def test_blocking_unblock(req_aioredis2, conn, event_loop):
+    """Blocking command that gets unblocked after some time."""
+
+    async def unblock():
+        await asyncio.sleep(0.1)
+        await req_aioredis2.rpush('list', 'y')
+
+    task = event_loop.create_task(unblock())
+    result = await conn.blpop('list', timeout=1)
+    assert result == (b'list', b'y')
+    await task
+
+
+async def test_wrongtype_error(req_aioredis2: redis.asyncio.Redis):
+    await req_aioredis2.set('foo', 'bar')
+    with pytest.raises(redis.asyncio.ResponseError, match='^WRONGTYPE'):
+        await req_aioredis2.rpush('foo', 'baz')
+
+
+async def test_syntax_error(req_aioredis2: redis.asyncio.Redis):
+    with pytest.raises(redis.asyncio.ResponseError,
+                       match="^wrong number of arguments for 'get' command$"):
+        await req_aioredis2.execute_command('get')
+
+
+async def test_no_script_error(req_aioredis2: redis.asyncio.Redis):
+    with pytest.raises(redis.exceptions.NoScriptError):
+        await req_aioredis2.evalsha('0123456789abcdef0123456789abcdef', 0)
+
+
+@testtools.run_test_if_lupa
+class TestScripts:
+
+    @pytest.mark.max_server('6.2.7')
+    async def test_failed_script_error6(self, req_aioredis2):
+        await req_aioredis2.set('foo', 'bar')
+        with pytest.raises(redis.asyncio.ResponseError, match='^Error running script'):
+            await req_aioredis2.eval('return redis.call("ZCOUNT", KEYS[1])', 1, 'foo')
+
+    @pytest.mark.min_server('7')
+    async def test_failed_script_error7(self, req_aioredis2):
+        await req_aioredis2.set('foo', 'bar')
+        with pytest.raises(redis.asyncio.ResponseError):
+            await req_aioredis2.eval('return redis.call("ZCOUNT", KEYS[1])', 1, 'foo')
+
+
+@fake_only
+async def test_repr(req_aioredis2: redis.asyncio.Redis):
+    assert re.fullmatch(
+        r'ConnectionPool<FakeConnection<server=<fakeredis._server.FakeServer object at .*>,db=0>>',
+        repr(req_aioredis2.connection_pool)
+    )
+
+
+@fake_only
+@pytest.mark.disconnected
+async def test_not_connected(req_aioredis2: redis.asyncio.Redis):
+    with pytest.raises(redis.asyncio.ConnectionError):
+        await req_aioredis2.ping()
+
+
+@fake_only
+async def test_disconnect_server(req_aioredis2, fake_server):
+    await req_aioredis2.ping()
+    fake_server.connected = False
+    with pytest.raises(redis.asyncio.ConnectionError):
+        await req_aioredis2.ping()
+    fake_server.connected = True
+
+
+async def test_xdel(req_aioredis2: redis.asyncio.Redis):
+    stream = "stream"
+
+    # deleting from an empty stream doesn't do anything
+    assert await req_aioredis2.xdel(stream, 1) == 0
+
+    m1 = await req_aioredis2.xadd(stream, {"foo": "bar"})
+    m2 = await req_aioredis2.xadd(stream, {"foo": "bar"})
+    m3 = await req_aioredis2.xadd(stream, {"foo": "bar"})
+
+    # xdel returns the number of deleted elements
+    assert await req_aioredis2.xdel(stream, m1) == 1
+    assert await req_aioredis2.xdel(stream, m2, m3) == 2
+
+
+@pytest.mark.fake
+async def test_from_url():
+    r0 = aioredis.FakeRedis.from_url('redis://localhost?db=0')
+    r1 = aioredis.FakeRedis.from_url('redis://localhost?db=1')
+    # Check that they are indeed different databases
+    await r0.set('foo', 'a')
+    await r1.set('foo', 'b')
+    assert await r0.get('foo') == b'a'
+    assert await r1.get('foo') == b'b'
+    await r0.connection_pool.disconnect()
+    await r1.connection_pool.disconnect()
+
+
+@fake_only
+async def test_from_url_with_server(req_aioredis2, fake_server):
+    r2 = aioredis.FakeRedis.from_url('redis://localhost', server=fake_server)
+    await req_aioredis2.set('foo', 'bar')
+    assert await r2.get('foo') == b'bar'
+    await r2.connection_pool.disconnect()
+
+
+@pytest.mark.fake
+async def test_without_server():
+    r = aioredis.FakeRedis()
+    assert await r.ping()
+
+
+@pytest.mark.fake
+async def test_without_server_disconnected():
+    r = aioredis.FakeRedis(connected=False)
+    with pytest.raises(redis.asyncio.ConnectionError):
+        await r.ping()
+
+
+@pytest.mark.fake
+async def test_async():
+    # arrange
+    cache = aioredis.FakeRedis()
+    # act
+    await cache.set("fakeredis", "plz")
+    x = await cache.get("fakeredis")
+    # assert
+    assert x == b"plz"
+
+
+@testtools.run_test_if_redispy_ver('above', '4.4.0')
+@pytest.mark.parametrize('nowait', [False, True])
+@pytest.mark.fake
+async def test_connection_disconnect(nowait):
+    server = FakeServer()
+    r = aioredis.FakeRedis(server=server)
+    conn = await r.connection_pool.get_connection("_")
+    assert conn is not None
+
+    await conn.disconnect(nowait=nowait)
+
+    assert conn._sock is None
+
+
+async def test_connection_with_username_and_password():
+    server = FakeServer()
+    r = aioredis.FakeRedis(server=server, username='username', password='password')
+
+    test_value = "this_is_a_test"
+    await r.hset('test:key', "test_hash", test_value)
+    result = await r.hget('test:key', "test_hash")
+    assert result.decode() == test_value
+
+
+@pytest.mark.fake
+class TestInitArgs:
+    async def test_singleton(self):
+        shared_server = FakeServer()
+        r1 = aioredis.FakeRedis()
+        r2 = aioredis.FakeRedis(server=FakeServer())
+        r3 = aioredis.FakeRedis(server=shared_server)
+        r4 = aioredis.FakeRedis(server=shared_server)
+
+        await r1.set('foo', 'bar')
+        await r3.set('bar', 'baz')
+        assert await r1.get('foo') == b'bar'
+        assert not (await r2.exists('foo'))
+        assert not (await r3.exists('foo'))
+        assert await r3.get('bar') == b'baz'
+        assert await r4.get('bar') == b'baz'
+        assert not (await r1.exists('bar'))
+
+    async def test_host_init_arg(self):
+        db = aioredis.FakeRedis(host='localhost')
+        await db.set('foo', 'bar')
+        assert await db.get('foo') == b'bar'
+
+    async def test_from_url(self):
+        db = aioredis.FakeRedis.from_url(
+            'redis://localhost:6379/0')
+        await db.set('foo', 'bar')
+        assert await db.get('foo') == b'bar'
+
+    async def test_from_url_user(self):
+        db = aioredis.FakeRedis.from_url(
+            'redis://user@localhost:6379/0')
+        await db.set('foo', 'bar')
+        assert await db.get('foo') == b'bar'
+
+    async def test_from_url_user_password(self):
+        db = aioredis.FakeRedis.from_url(
+            'redis://user:password@localhost:6379/0')
+        await db.set('foo', 'bar')
+        assert await db.get('foo') == b'bar'
+
+    async def test_from_url_with_db_arg(self):
+        db = aioredis.FakeRedis.from_url(
+            'redis://localhost:6379/0')
+        db1 = aioredis.FakeRedis.from_url(
+            'redis://localhost:6379/1')
+        db2 = aioredis.FakeRedis.from_url(
+            'redis://localhost:6379/',
+            db=2)
+        await db.set('foo', 'foo0')
+        await db1.set('foo', 'foo1')
+        await db2.set('foo', 'foo2')
+        assert await db.get('foo') == b'foo0'
+        assert await db1.get('foo') == b'foo1'
+        assert await db2.get('foo') == b'foo2'
+
+    async def test_from_url_db_value_error(self):
+        # If the db number in the URL can't be parsed (ValueError), it should
+        # default to 0, or be absent entirely in redis-py 4.0+
+        db = aioredis.FakeRedis.from_url(
+            'redis://localhost:6379/a')
+        assert db.connection_pool.connection_kwargs.get('db', 0) == 0
+
+    async def test_can_pass_through_extra_args(self):
+        db = aioredis.FakeRedis.from_url(
+            'redis://localhost:6379/0',
+            decode_responses=True)
+        await db.set('foo', 'bar')
+        assert await db.get('foo') == 'bar'
+
+    async def test_can_allow_extra_args(self):
+        db = aioredis.FakeRedis.from_url(
+            'redis://localhost:6379/0',
+            socket_connect_timeout=11, socket_timeout=12, socket_keepalive=True,
+            socket_keepalive_options={60: 30}, socket_type=1,
+            retry_on_timeout=True,
+        )
+        fake_conn = db.connection_pool.make_connection()
+        assert fake_conn.socket_connect_timeout == 11
+        assert fake_conn.socket_timeout == 12
+        assert fake_conn.socket_keepalive is True
+        assert fake_conn.socket_keepalive_options == {60: 30}
+        assert fake_conn.socket_type == 1
+        assert fake_conn.retry_on_timeout is True
+
+        # Make fallback logic match redis-py
+        db = aioredis.FakeRedis.from_url(
+            'redis://localhost:6379/0',
+            socket_connect_timeout=None, socket_timeout=30
+        )
+        fake_conn = db.connection_pool.make_connection()
+        assert fake_conn.socket_connect_timeout == fake_conn.socket_timeout
+        assert fake_conn.socket_keepalive_options == {}
+
+    async def test_repr(self):
+        # repr is human-readable, so we only test that it doesn't crash,
+        # and that it contains the db number.
+        db = aioredis.FakeRedis.from_url('redis://localhost:6379/11')
+        rep = repr(db)
+        assert 'db=11' in rep
+
+    async def test_from_unix_socket(self):
+        db = aioredis.FakeRedis.from_url('unix://a/b/c')
+        await db.set('foo', 'bar')
+        assert await db.get('foo') == b'bar'
diff --git a/test/test_redispy2_only.py b/test/test_redispy2_only.py
deleted file mode 100644
index ba08354..0000000
--- a/test/test_redispy2_only.py
+++ /dev/null
@@ -1,303 +0,0 @@
-from datetime import timedelta
-from time import sleep
-
-import pytest
-import pytest_asyncio
-import redis
-
-import testtools
-
-pytestmark = [
-    testtools.run_test_if_redis_ver('below', '3'),
-]
-
-
-def test_zadd_uses_str(r):
-    r.zadd('foo', 12345, (1, 2, 3))
-    assert r.zrange('foo', 0, 0) == [b'(1, 2, 3)']
-
-
-def test_zadd_errors(r):
-    # The args are backwards, it should be 2, "two", so we
-    # expect an exception to be raised.
-    with pytest.raises(redis.ResponseError):
-        r.zadd('foo', 'two', 2)
-    with pytest.raises(redis.ResponseError):
-        r.zadd('foo', two='two')
-    # It's expected an equal number of values and scores
-    with pytest.raises(redis.RedisError):
-        r.zadd('foo', 'two')
-
-
-def test_mset_accepts_kwargs(r):
-    assert r.mset(foo='one', bar='two') is True
-    assert r.mset(foo='one', baz='three') is True
-    assert r.mget('foo', 'bar', 'baz') == [b'one', b'two', b'three']
-
-
-def test_mget_none(r):
-    r.set('foo', 'one')
-    r.set('bar', 'two')
-    assert r.mget('foo', 'bar', None) == [b'one', b'two', None]
-
-
-def test_set_None_value(r):
-    assert r.set('foo', None) is True
-    assert r.get('foo') == b'None'
-
-
-def test_rpush_then_lrange_with_nested_list1(r):
-    assert r.rpush('foo', [12345, 6789]) == 1
-    assert r.rpush('foo', [54321, 9876]) == 2
-    assert r.lrange('foo', 0, -1) == [b'[12345, 6789]', b'[54321, 9876]']
-
-
-def test_rpush_then_lrange_with_nested_list2(r):
-    assert r.rpush('foo', [12345, 'banana']) == 1
-    assert r.rpush('foo', [54321, 'elephant']) == 2
-    assert r.lrange('foo', 0, -1), [b'[12345, \'banana\']', b'[54321, \'elephant\']']
-
-
-def test_rpush_then_lrange_with_nested_list3(r):
-    assert r.rpush('foo', [12345, []]) == 1
-    assert r.rpush('foo', [54321, []]) == 2
-    assert r.lrange('foo', 0, -1) == [b'[12345, []]', b'[54321, []]']
-
-
-def test_hgetall_with_tuples(r):
-    assert r.hset('foo', (1, 2), (1, 2, 3)) == 1
-    assert r.hgetall('foo') == {b'(1, 2)': b'(1, 2, 3)'}
-
-
-def test_hmset_convert_values(r):
-    r.hmset('foo', {'k1': True, 'k2': 1})
-    assert r.hgetall('foo') == {b'k1': b'True', b'k2': b'1'}
-
-
-def test_hmset_does_not_mutate_input_params(r):
-    original = {'key': [123, 456]}
-    r.hmset('foo', original)
-    assert original == {'key': [123, 456]}
-
-
-@pytest.mark.parametrize(
-    'create_redis',
-    [
-        pytest.param('FakeRedis', marks=pytest.mark.fake),
-        pytest.param('Redis', marks=pytest.mark.real)
-    ],
-    indirect=True
-)
-class TestNonStrict:
-    def test_setex(self, r):
-        assert r.setex('foo', 'bar', 100) is True
-        assert r.get('foo') == b'bar'
-
-    def test_setex_using_timedelta(self, r):
-        assert r.setex('foo', 'bar', timedelta(seconds=100)) is True
-        assert r.get('foo') == b'bar'
-
-    def test_lrem_positive_count(self, r):
-        r.lpush('foo', 'same')
-        r.lpush('foo', 'same')
-        r.lpush('foo', 'different')
-        r.lrem('foo', 'same', 2)
-        assert r.lrange('foo', 0, -1) == [b'different']
-
-    def test_lrem_negative_count(self, r):
-        r.lpush('foo', 'removeme')
-        r.lpush('foo', 'three')
-        r.lpush('foo', 'two')
-        r.lpush('foo', 'one')
-        r.lpush('foo', 'removeme')
-        r.lrem('foo', 'removeme', -1)
-        # Should remove it from the end of the list,
-        # leaving the 'removeme' from the front of the list alone.
-        assert r.lrange('foo', 0, -1) == [b'removeme', b'one', b'two', b'three']
-
-    def test_lrem_zero_count(self, r):
-        r.lpush('foo', 'one')
-        r.lpush('foo', 'one')
-        r.lpush('foo', 'one')
-        r.lrem('foo', 'one')
-        assert r.lrange('foo', 0, -1) == []
-
-    def test_lrem_default_value(self, r):
-        r.lpush('foo', 'one')
-        r.lpush('foo', 'one')
-        r.lpush('foo', 'one')
-        r.lrem('foo', 'one')
-        assert r.lrange('foo', 0, -1) == []
-
-    def test_lrem_does_not_exist(self, r):
-        r.lpush('foo', 'one')
-        r.lrem('foo', 'one')
-        # These should be noops.
-        r.lrem('foo', 'one', -2)
-        r.lrem('foo', 'one', 2)
-
-    def test_lrem_return_value(self, r):
-        r.lpush('foo', 'one')
-        count = r.lrem('foo', 'one', 0)
-        assert count == 1
-        assert r.lrem('foo', 'one') == 0
-
-    def test_zadd_deprecated(self, r):
-        result = r.zadd('foo', 'one', 1)
-        assert result == 1
-        assert r.zrange('foo', 0, -1) == [b'one']
-
-    def test_zadd_missing_required_params(self, r):
-        with pytest.raises(redis.RedisError):
-            # Missing the 'score' param.
-            r.zadd('foo', 'one')
-        with pytest.raises(redis.RedisError):
-            # Missing the 'value' param.
-            r.zadd('foo', None, score=1)
-        with pytest.raises(redis.RedisError):
-            r.zadd('foo')
-
-    def test_zadd_with_single_keypair(self, r):
-        result = r.zadd('foo', bar=1)
-        assert result == 1
-        assert r.zrange('foo', 0, -1) == [b'bar']
-
-    def test_zadd_with_multiple_keypairs(self, r):
-        result = r.zadd('foo', bar=1, baz=9)
-        assert result == 2
-        assert r.zrange('foo', 0, -1) == [b'bar', b'baz']
-
-    def test_zadd_with_name_is_non_string(self, r):
-        result = r.zadd('foo', 1, 9)
-        assert result == 1
-        assert r.zrange('foo', 0, -1) == [b'1']
-
-    def test_ttl_should_return_none_for_non_expiring_key(self, r):
-        r.set('foo', 'bar')
-        assert r.get('foo') == b'bar'
-        assert r.ttl('foo') is None
-
-    def test_ttl_should_return_value_for_expiring_key(self, r):
-        r.set('foo', 'bar')
-        r.expire('foo', 1)
-        assert r.ttl('foo') == 1
-        r.expire('foo', 2)
-        assert r.ttl('foo') == 2
-        # See https://github.com/antirez/redis/blob/unstable/src/db.c#L632
-        ttl = 1000000000
-        r.expire('foo', ttl)
-        assert r.ttl('foo') == ttl
-
-    def test_pttl_should_return_none_for_non_expiring_key(self, r):
-        r.set('foo', 'bar')
-        assert r.get('foo') == b'bar'
-        assert r.pttl('foo') is None
-
-    def test_pttl_should_return_value_for_expiring_key(self, r):
-        d = 100
-        r.set('foo', 'bar')
-        r.expire('foo', 1)
-        assert 1000 - d <= r.pttl('foo') <= 1000
-        r.expire('foo', 2)
-        assert 2000 - d <= r.pttl('foo') <= 2000
-        ttl = 1000000000
-        # See https://github.com/antirez/redis/blob/unstable/src/db.c#L632
-        r.expire('foo', ttl)
-        assert ttl * 1000 - d <= r.pttl('foo') <= ttl * 1000
-
-    def test_expire_should_not_handle_floating_point_values(self, r):
-        r.set('foo', 'bar')
-        with pytest.raises(redis.ResponseError, match='value is not an integer or out of range'):
-            r.expire('something_new', 1.2)
-            r.pexpire('something_new', 1000.2)
-            r.expire('some_unused_key', 1.2)
-            r.pexpire('some_unused_key', 1000.2)
-
-    @testtools.run_test_if_lupa
-    def test_lock(self, r):
-        lock = r.lock('foo')
-        assert lock.acquire()
-        assert r.exists('foo')
-        lock.release()
-        assert not r.exists('foo')
-        with r.lock('bar'):
-            assert r.exists('bar')
-        assert not r.exists('bar')
-
-    def test_unlock_without_lock(self, r):
-        lock = r.lock('foo')
-        with pytest.raises(redis.exceptions.LockError):
-            lock.release()
-
-    @pytest.mark.slow
-    @testtools.run_test_if_lupa
-    def test_unlock_expired(self, r):
-        lock = r.lock('foo', timeout=0.01, sleep=0.001)
-        assert lock.acquire()
-        sleep(0.1)
-        with pytest.raises(redis.exceptions.LockError):
-            lock.release()
-
-    @pytest.mark.slow
-    @testtools.run_test_if_lupa
-    def test_lock_blocking_timeout(self, r):
-        lock = r.lock('foo')
-        assert lock.acquire()
-        lock2 = r.lock('foo')
-        assert not lock2.acquire(blocking_timeout=1)
-
-    @testtools.run_test_if_lupa
-    def test_lock_nonblocking(self, r):
-        lock = r.lock('foo')
-        assert lock.acquire()
-        lock2 = r.lock('foo')
-        assert not lock2.acquire(blocking=False)
-
-    @testtools.run_test_if_lupa
-    def test_lock_twice(self, r):
-        lock = r.lock('foo')
-        assert lock.acquire(blocking=False)
-        assert not lock.acquire(blocking=False)
-
-    @testtools.run_test_if_lupa
-    def test_acquiring_lock_different_lock_release(self, r):
-        lock1 = r.lock('foo')
-        lock2 = r.lock('foo')
-        assert lock1.acquire(blocking=False)
-        assert not lock2.acquire(blocking=False)
-
-        # Test only releasing lock1 actually releases the lock
-        with pytest.raises(redis.exceptions.LockError):
-            lock2.release()
-        assert not lock2.acquire(blocking=False)
-        lock1.release()
-        # Locking with lock2 now has the lock
-        assert lock2.acquire(blocking=False)
-        assert not lock1.acquire(blocking=False)
-
-    @testtools.run_test_if_lupa
-    def test_lock_extend(self, r):
-        lock = r.lock('foo', timeout=2)
-        lock.acquire()
-        lock.extend(3)
-        ttl = int(r.pttl('foo'))
-        assert 4000 < ttl <= 5000
-
-    @testtools.run_test_if_lupa
-    def test_lock_extend_exceptions(self, r):
-        lock1 = r.lock('foo', timeout=2)
-        with pytest.raises(redis.exceptions.LockError):
-            lock1.extend(3)
-        lock2 = r.lock('foo')
-        lock2.acquire()
-        with pytest.raises(redis.exceptions.LockError):
-            lock2.extend(3)  # Cannot extend a lock with no timeout
-
-    @pytest.mark.slow
-    @testtools.run_test_if_lupa
-    def test_lock_extend_expired(self, r):
-        lock = r.lock('foo', timeout=0.01, sleep=0.001)
-        lock.acquire()
-        sleep(0.1)
-        with pytest.raises(redis.exceptions.LockError):
-            lock.extend(3)
diff --git a/test/test_redispy4_plus.py b/test/test_redispy4_plus.py
deleted file mode 100644
index 3fe439a..0000000
--- a/test/test_redispy4_plus.py
+++ /dev/null
@@ -1,110 +0,0 @@
-import pytest
-import redis
-
-import testtools
-
-pytestmark = [
-    testtools.run_test_if_redis_ver('above', '4'),
-]
-fake_only = pytest.mark.parametrize(
-    'create_redis',
-    [pytest.param('FakeStrictRedis', marks=pytest.mark.fake)],
-    indirect=True
-)
-
-
-@testtools.run_test_if_redis_ver('above', '4.2.0')
-@testtools.run_test_if_no_aioredis
-def test_fakeredis_aioredis_uses_redis_asyncio():
-    import fakeredis.aioredis as aioredis
-
-    assert not hasattr(aioredis, "__version__")
-
-
-@testtools.run_test_if_redis_ver('above', '4.1.2')
-def test_lmove_to_nonexistent_destination(r):
-    r.rpush('foo', 'one')
-    assert r.lmove('foo', 'bar', 'RIGHT', 'LEFT') == b'one'
-    assert r.rpop('bar') == b'one'
-
-
-def test_lmove_expiry(r):
-    r.rpush('foo', 'one')
-    r.rpush('bar', 'two')
-    r.expire('bar', 10)
-    r.lmove('foo', 'bar', 'RIGHT', 'LEFT')
-    assert r.ttl('bar') > 0
-
-
-def test_lmove_wrong_type(r):
-    r.set('foo', 'bar')
-    r.rpush('list', 'element')
-    with pytest.raises(redis.ResponseError):
-        r.lmove('foo', 'list', 'RIGHT', 'LEFT')
-    assert r.get('foo') == b'bar'
-    assert r.lrange('list', 0, -1) == [b'element']
-    with pytest.raises(redis.ResponseError):
-        r.lmove('list', 'foo', 'RIGHT', 'LEFT')
-    assert r.get('foo') == b'bar'
-    assert r.lrange('list', 0, -1) == [b'element']
-
-
-@testtools.run_test_if_redis_ver('above', '4.1.2')
-def test_lmove(r):
-    assert r.lmove('foo', 'bar', 'RIGHT', 'LEFT') is None
-    assert r.lpop('bar') is None
-    r.rpush('foo', 'one')
-    r.rpush('foo', 'two')
-    r.rpush('bar', 'one')
-
-    # RPOPLPUSH
-    assert r.lmove('foo', 'bar', 'RIGHT', 'LEFT') == b'two'
-    assert r.lrange('foo', 0, -1) == [b'one']
-    assert r.lrange('bar', 0, -1) == [b'two', b'one']
-    # LPOPRPUSH
-    assert r.lmove('bar', 'bar', 'LEFT', 'RIGHT') == b'two'
-    assert r.lrange('bar', 0, -1) == [b'one', b'two']
-    # RPOPRPUSH
-    r.rpush('foo', 'three')
-    assert r.lmove('foo', 'bar', 'RIGHT', 'RIGHT') == b'three'
-    assert r.lrange('foo', 0, -1) == [b'one']
-    assert r.lrange('bar', 0, -1) == [b'one', b'two', b'three']
-    # LPOPLPUSH
-    assert r.lmove('bar', 'foo', 'LEFT', 'LEFT') == b'one'
-    assert r.lrange('foo', 0, -1) == [b'one', b'one']
-    assert r.lrange('bar', 0, -1) == [b'two', b'three']
-
-    # Catch instances where we store bytes and strings inconsistently
-    # and thus bar = ['two', b'one']
-    assert r.lrem('bar', -1, 'two') == 1
-
-
-def test_smismember(r):
-    assert r.smismember('foo', ['member1', 'member2', 'member3']) == [0, 0, 0]
-    r.sadd('foo', 'member1', 'member2', 'member3')
-    assert r.smismember('foo', ['member1', 'member2', 'member3']) == [1, 1, 1]
-    assert r.smismember('foo', ['member1', 'member2', 'member3', 'member4']) == [1, 1, 1, 0]
-    assert r.smismember('foo', ['member4', 'member2', 'member3']) == [0, 1, 1]
-    # should also work if provided values as arguments
-    assert r.smismember('foo', 'member4', 'member2', 'member3') == [0, 1, 1]
-
-
-def test_smismember_wrong_type(r):
-    # verify that command fails when the key itself is not a SET
-    testtools.zadd(r, 'foo', {'member': 1})
-    with pytest.raises(redis.ResponseError):
-        r.smismember('foo', 'member')
-
-    # verify that command fails if the input parameter is of wrong type
-    r.sadd('foo2', 'member1', 'member2', 'member3')
-    with pytest.raises(redis.DataError, match='Invalid input of type'):
-        r.smismember('foo2', [["member1", "member2"]])
-
-
-@pytest.mark.disconnected
-@fake_only
-class TestFakeStrictRedisConnectionErrors:
-
-    def test_lmove(self, r):
-        with pytest.raises(redis.ConnectionError):
-            r.lmove(1, 2, 'LEFT', 'RIGHT')
diff --git a/test/test_scan.py b/test/test_scan.py
new file mode 100644
index 0000000..decabc7
--- /dev/null
+++ b/test/test_scan.py
@@ -0,0 +1,193 @@
+from time import sleep
+
+import pytest
+import redis
+
+from test.testtools import key_val_dict
+
+
+def test_sscan_delete_key_while_scanning_should_not_return_it_in_scan(r: redis.Redis):
+    size = 600
+    name = 'sscan-test'
+    all_keys_set = {f'{i}'.encode() for i in range(size)}
+    r.sadd(name, *all_keys_set)
+    assert r.scard(name) == size
+
+    cursor, keys = r.sscan(name, 0)
+    assert len(keys) < len(all_keys_set)
+
+    key_to_remove = next(x for x in all_keys_set if x not in keys)
+    assert r.srem(name, key_to_remove) == 1
+    assert r.sismember(name, key_to_remove) is False
+    while cursor != 0:
+        cursor, data = r.sscan(name, cursor=cursor)
+        keys.extend(data)
+    assert len(set(keys)) == len(keys)
+    assert len(keys) == size - 1
+    assert key_to_remove not in keys
+
+
+def test_hscan_delete_key_while_scanning_should_not_return_it_in_scan(r: redis.Redis):
+    size = 600
+    name = 'hscan-test'
+    all_keys_dict = key_val_dict(size=size)
+    r.hset(name, mapping=all_keys_dict)
+    assert len(r.hgetall(name)) == size
+
+    cursor, keys = r.hscan(name, 0)
+    assert len(keys) < len(all_keys_dict)
+
+    key_to_remove = next(x for x in all_keys_dict if x not in keys)
+    assert r.hdel(name, key_to_remove) == 1
+    assert r.hget(name, key_to_remove) is None
+    while cursor != 0:
+        cursor, data = r.hscan(name, cursor=cursor)
+        keys.update(data)
+    assert len(set(keys)) == len(keys)
+    assert len(keys) == size - 1
+    assert key_to_remove not in keys
+
+
+def test_scan_delete_unseen_key_while_scanning_should_not_return_it_in_scan(r: redis.Redis):
+    size = 30
+    all_keys_dict = key_val_dict(size=size)
+    assert all(r.set(k, v) for k, v in all_keys_dict.items())
+    assert len(r.keys()) == size
+
+    cursor, keys = r.scan()
+
+    key_to_remove = next(x for x in all_keys_dict if x not in keys)
+    assert r.delete(key_to_remove) == 1
+    assert r.get(key_to_remove) is None
+    while cursor != 0:
+        cursor, data = r.scan(cursor=cursor)
+        keys.extend(data)
+    assert len(set(keys)) == len(keys)
+    assert len(keys) == size - 1
+    assert key_to_remove not in keys
+
+
+@pytest.mark.xfail
+def test_scan_delete_seen_key_while_scanning_should_return_all_keys(r: redis.Redis):
+    size = 30
+    all_keys_dict = key_val_dict(size=size)
+    assert all(r.set(k, v) for k, v in all_keys_dict.items())
+    assert len(r.keys()) == size
+
+    cursor, keys = r.scan()
+
+    key_to_remove = keys[0]
+    assert r.delete(keys[0]) == 1
+    assert r.get(key_to_remove) is None
+    while cursor != 0:
+        cursor, data = r.scan(cursor=cursor)
+        keys.extend(data)
+
+    assert len(set(keys)) == len(keys)
+    keys = set(keys)
+    assert len(keys) == size, f"{set(all_keys_dict).difference(keys)} is not empty but should be"
+    assert key_to_remove in keys
+
+
+def test_scan_add_key_while_scanning_should_return_all_keys(r: redis.Redis):
+    size = 30
+    all_keys_dict = key_val_dict(size=size)
+    assert all(r.set(k, v) for k, v in all_keys_dict.items())
+    assert len(r.keys()) == size
+
+    cursor, keys = r.scan()
+
+    r.set('new_key', 'new val')
+    while cursor != 0:
+        cursor, data = r.scan(cursor=cursor)
+        keys.extend(data)
+
+    keys = set(keys)
+    assert len(keys) >= size, f"{set(all_keys_dict).difference(keys)} is not empty but should be"
+
+
+def test_scan(r: redis.Redis):
+    # Set up the data
+    for ix in range(20):
+        k = 'scan-test:%s' % ix
+        v = 'result:%s' % ix
+        r.set(k, v)
+    expected = r.keys()
+    assert len(expected) == 20  # Ensure we know what we're testing
+
+    # Test that we page through the results and get everything out
+    results = []
+    cursor = '0'
+    while cursor != 0:
+        cursor, data = r.scan(cursor, count=6)
+        results.extend(data)
+    assert set(expected) == set(results)
+
+    # Now test that the MATCH functionality works
+    results = []
+    cursor = '0'
+    while cursor != 0:
+        cursor, data = r.scan(cursor, match='*7', count=100)
+        results.extend(data)
+    assert b'scan-test:7' in results
+    assert b'scan-test:17' in results
+    assert len(set(results)) == 2
+
+    # Test the match on iterator
+    results = list(r.scan_iter(match='*7'))
+    assert b'scan-test:7' in results
+    assert b'scan-test:17' in results
+    assert len(set(results)) == 2
+
+
+def test_scan_single(r: redis.Redis):
+    r.set('foo1', 'bar1')
+    assert r.scan(match="foo*") == (0, [b'foo1'])
+
+
+def test_scan_iter_single_page(r: redis.Redis):
+    r.set('foo1', 'bar1')
+    r.set('foo2', 'bar2')
+    assert set(r.scan_iter(match="foo*")) == {b'foo1', b'foo2'}
+    assert set(r.scan_iter()) == {b'foo1', b'foo2'}
+    assert set(r.scan_iter(match="")) == set()
+    assert set(r.scan_iter(match="foo1", _type="string")) == {b'foo1', }
+
+
+def test_scan_iter_multiple_pages(r: redis.Redis):
+    all_keys = key_val_dict(size=100)
+    assert all(r.set(k, v) for k, v in all_keys.items())
+    assert set(r.scan_iter()) == set(all_keys)
+
+
+def test_scan_iter_multiple_pages_with_match(r: redis.Redis):
+    all_keys = key_val_dict(size=100)
+    assert all(r.set(k, v) for k, v in all_keys.items())
+    # Now add a few keys that don't match the key:<number> pattern.
+    r.set('otherkey', 'foo')
+    r.set('andanother', 'bar')
+    actual = set(r.scan_iter(match='key:*'))
+    assert actual == set(all_keys)
+
+
+def test_scan_multiple_pages_with_count_arg(r: redis.Redis):
+    all_keys = key_val_dict(size=100)
+    assert all(r.set(k, v) for k, v in all_keys.items())
+    assert set(r.scan_iter(count=1000)) == set(all_keys)
+
+
+def test_scan_all_in_single_call(r: redis.Redis):
+    all_keys = key_val_dict(size=100)
+    assert all(r.set(k, v) for k, v in all_keys.items())
+    # Specify way more than the 100 keys we've added.
+    actual = r.scan(count=1000)
+    assert set(actual[1]) == set(all_keys)
+    assert actual[0] == 0
+
+
+@pytest.mark.slow
+def test_scan_expired_key(r: redis.Redis):
+    r.set('expiringkey', 'value')
+    r.pexpire('expiringkey', 1)
+    sleep(1)
+    assert r.scan()[1] == []
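
The cursor-paging contract exercised by `test_scan` above (call SCAN repeatedly until the server returns cursor 0) can be sketched without a live server. The `toy_scan` helper below is illustrative only: real Redis uses a reverse-binary cursor over hash-table buckets, not a list index.

```python
def toy_scan(data, cursor, count=10):
    """Toy stand-in for SCAN: page over a snapshot of keys.

    The cursor here is just an index into a sorted key list, which is
    enough to show the client-side contract: cursor 0 means done.
    """
    keys = sorted(data)
    page = keys[cursor:cursor + count]
    next_cursor = cursor + count
    if next_cursor >= len(keys):
        next_cursor = 0  # 0 signals the iteration is complete
    return next_cursor, page


data = {f'scan-test:{i}': f'result:{i}' for i in range(20)}

# Same shape as the loop in test_scan: scan once, then keep calling
# until the "server" hands back cursor 0 again.
results = []
cursor, chunk = toy_scan(data, 0, count=6)
results.extend(chunk)
while cursor != 0:
    cursor, chunk = toy_scan(data, cursor, count=6)
    results.extend(chunk)

assert set(results) == set(data)
```

The same paging shape is what `scan_iter` wraps for the caller, which is why the iterator-based tests above need no explicit cursor handling.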
diff --git a/test/test_lua.py b/test/test_scripting_lua_only.py
similarity index 78%
rename from test/test_lua.py
rename to test/test_scripting_lua_only.py
index fff5d77..b55d71e 100644
--- a/test/test_lua.py
+++ b/test/test_scripting_lua_only.py
@@ -5,34 +5,27 @@ import logging
 
 import pytest
 import redis
-import redis.client
 from redis.exceptions import ResponseError
 
 import fakeredis
-import testtools
+from test import testtools
 
 lupa = pytest.importorskip("lupa")
 
-fake_only = pytest.mark.parametrize(
-    'create_redis',
-    [pytest.param('FakeStrictRedis', marks=pytest.mark.fake)],
-    indirect=True
-)
 
-
-def test_eval_blpop(r):
+def test_eval_blpop(r: redis.Redis):
     r.rpush('foo', 'bar')
     with pytest.raises(redis.ResponseError, match='This Redis command is not allowed from script'):
         r.eval('return redis.pcall("BLPOP", KEYS[1], 1)', 1, 'foo')
 
 
-def test_eval_set_value_to_arg(r):
+def test_eval_set_value_to_arg(r: redis.Redis):
     r.eval('redis.call("SET", KEYS[1], ARGV[1])', 1, 'foo', 'bar')
     val = r.get('foo')
     assert val == b'bar'
 
 
-def test_eval_conditional(r):
+def test_eval_conditional(r: redis.Redis):
     lua = """
     local val = redis.call("GET", KEYS[1])
     if val == ARGV[1] then
@@ -49,7 +42,7 @@ def test_eval_conditional(r):
     assert val == b'baz'
 
 
-def test_eval_table(r):
+def test_eval_table(r: redis.Redis):
     lua = """
     local a = {}
     a[1] = "foo"
@@ -61,7 +54,7 @@ def test_eval_table(r):
     assert val == [b'foo', b'bar']
 
 
-def test_eval_table_with_nil(r):
+def test_eval_table_with_nil(r: redis.Redis):
     lua = """
     local a = {}
     a[1] = "foo"
@@ -73,7 +66,7 @@ def test_eval_table_with_nil(r):
     assert val == [b'foo']
 
 
-def test_eval_table_with_numbers(r):
+def test_eval_table_with_numbers(r: redis.Redis):
     lua = """
     local a = {}
     a[1] = 42
@@ -83,7 +76,7 @@ def test_eval_table_with_numbers(r):
     assert val == [42]
 
 
-def test_eval_nested_table(r):
+def test_eval_nested_table(r: redis.Redis):
     lua = """
     local a = {}
     a[1] = {}
@@ -94,7 +87,7 @@ def test_eval_nested_table(r):
     assert val == [[b'foo']]
 
 
-def test_eval_iterate_over_argv(r):
+def test_eval_iterate_over_argv(r: redis.Redis):
     lua = """
     for i, v in ipairs(ARGV) do
     end
@@ -104,7 +97,7 @@ def test_eval_iterate_over_argv(r):
     assert val == [b"a", b"b", b"c"]
 
 
-def test_eval_iterate_over_keys(r):
+def test_eval_iterate_over_keys(r: redis.Redis):
     lua = """
     for i, v in ipairs(KEYS) do
     end
@@ -114,27 +107,19 @@ def test_eval_iterate_over_keys(r):
     assert val == [b"a", b"b"]
 
 
-def test_eval_mget(r):
+def test_eval_mget(r: redis.Redis):
     r.set('foo1', 'bar1')
     r.set('foo2', 'bar2')
     val = r.eval('return redis.call("mget", "foo1", "foo2")', 2, 'foo1', 'foo2')
     assert val == [b'bar1', b'bar2']
 
 
-@testtools.run_test_if_redis_ver('below', '3')
-def test_eval_mget_none(r):
-    r.set('foo1', None)
-    r.set('foo2', None)
-    val = r.eval('return redis.call("mget", "foo1", "foo2")', 2, 'foo1', 'foo2')
-    assert val == [b'None', b'None']
-
-
-def test_eval_mget_not_set(r):
+def test_eval_mget_not_set(r: redis.Redis):
     val = r.eval('return redis.call("mget", "foo1", "foo2")', 2, 'foo1', 'foo2')
     assert val == [None, None]
 
 
-def test_eval_hgetall(r):
+def test_eval_hgetall(r: redis.Redis):
     r.hset('foo', 'k1', 'bar')
     r.hset('foo', 'k2', 'baz')
     val = r.eval('return redis.call("hgetall", "foo")', 1, 'foo')
@@ -142,7 +127,7 @@ def test_eval_hgetall(r):
     assert sorted_val == [[b'k1', b'bar'], [b'k2', b'baz']]
 
 
-def test_eval_hgetall_iterate(r):
+def test_eval_hgetall_iterate(r: redis.Redis):
     r.hset('foo', 'k1', 'bar')
     r.hset('foo', 'k2', 'baz')
     lua = """
@@ -156,16 +141,7 @@ def test_eval_hgetall_iterate(r):
     assert sorted_val == [[b'k1', b'bar'], [b'k2', b'baz']]
 
 
-@testtools.run_test_if_redis_ver('below', '3')
-def test_eval_list_with_nil(r):
-    r.lpush('foo', 'bar')
-    r.lpush('foo', None)
-    r.lpush('foo', 'baz')
-    val = r.eval('return redis.call("lrange", KEYS[1], 0, 2)', 1, 'foo')
-    assert val == [b'baz', b'None', b'bar']
-
-
-def test_eval_invalid_command(r):
+def test_eval_invalid_command(r: redis.Redis):
     with pytest.raises(ResponseError):
         r.eval(
             'return redis.call("FOO")',
@@ -173,48 +149,48 @@ def test_eval_invalid_command(r):
         )
 
 
-def test_eval_syntax_error(r):
+def test_eval_syntax_error(r: redis.Redis):
     with pytest.raises(ResponseError):
         r.eval('return "', 0)
 
 
-def test_eval_runtime_error(r):
+def test_eval_runtime_error(r: redis.Redis):
     with pytest.raises(ResponseError):
         r.eval('error("CRASH")', 0)
 
 
-def test_eval_more_keys_than_args(r):
+def test_eval_more_keys_than_args(r: redis.Redis):
     with pytest.raises(ResponseError):
         r.eval('return 1', 42)
 
 
-def test_eval_numkeys_float_string(r):
+def test_eval_numkeys_float_string(r: redis.Redis):
     with pytest.raises(ResponseError):
         r.eval('return KEYS[1]', '0.7', 'foo')
 
 
-def test_eval_numkeys_integer_string(r):
+def test_eval_numkeys_integer_string(r: redis.Redis):
     val = r.eval('return KEYS[1]', "1", "foo")
     assert val == b'foo'
 
 
-def test_eval_numkeys_negative(r):
+def test_eval_numkeys_negative(r: redis.Redis):
     with pytest.raises(ResponseError):
         r.eval('return KEYS[1]', -1, "foo")
 
 
-def test_eval_numkeys_float(r):
+def test_eval_numkeys_float(r: redis.Redis):
     with pytest.raises(ResponseError):
         r.eval('return KEYS[1]', 0.7, "foo")
 
 
-def test_eval_global_variable(r):
+def test_eval_global_variable(r: redis.Redis):
     # Redis doesn't allow script to define global variables
     with pytest.raises(ResponseError):
         r.eval('a=10', 0)
 
 
-def test_eval_global_and_return_ok(r):
+def test_eval_global_and_return_ok(r: redis.Redis):
     # Redis doesn't allow script to define global variables
     with pytest.raises(ResponseError):
         r.eval(
@@ -226,7 +202,7 @@ def test_eval_global_and_return_ok(r):
         )
 
 
-def test_eval_convert_number(r):
+def test_eval_convert_number(r: redis.Redis):
     # Redis forces all Lua numbers to integer
     val = r.eval('return 3.2', 0)
     assert val == 3
@@ -236,7 +212,7 @@ def test_eval_convert_number(r):
     assert val == -3
 
 
-def test_eval_convert_bool(r):
+def test_eval_convert_bool(r: redis.Redis):
     # Redis converts true to 1 and false to nil (which redis-py converts to None)
     assert r.eval('return false', 0) is None
     val = r.eval('return true', 0)
@@ -245,7 +221,7 @@ def test_eval_convert_bool(r):
 
 
 @pytest.mark.max_server('6.2.7')
-def test_eval_call_bool6(r):
+def test_eval_call_bool6(r: redis.Redis):
     # Redis doesn't allow Lua bools to be passed to [p]call
     with pytest.raises(redis.ResponseError,
                        match=r'Lua redis\(\) command arguments must be strings or integers'):
@@ -253,20 +229,14 @@ def test_eval_call_bool6(r):
 
 
 @pytest.mark.min_server('7')
-def test_eval_call_bool7(r):
+def test_eval_call_bool7(r: redis.Redis):
     # Redis doesn't allow Lua bools to be passed to [p]call
     with pytest.raises(redis.ResponseError,
                        match=r'Lua redis lib command arguments must be strings or integers'):
         r.eval('return redis.call("SET", KEYS[1], true)', 1, "testkey")
 
 
-@testtools.run_test_if_redis_ver('below', '3')
-def test_eval_none_arg(r):
-    val = r.eval('return ARGV[1] == "None"', 0, None)
-    assert val
-
-
-def test_eval_return_error(r):
+def test_eval_return_error(r: redis.Redis):
     with pytest.raises(redis.ResponseError, match='Testing') as exc_info:
         r.eval('return {err="Testing"}', 0)
     assert isinstance(exc_info.value.args[0], str)
@@ -275,20 +245,20 @@ def test_eval_return_error(r):
     assert isinstance(exc_info.value.args[0], str)
 
 
-def test_eval_return_redis_error(r):
+def test_eval_return_redis_error(r: redis.Redis):
     with pytest.raises(redis.ResponseError) as exc_info:
         r.eval('return redis.pcall("BADCOMMAND")', 0)
     assert isinstance(exc_info.value.args[0], str)
 
 
-def test_eval_return_ok(r):
+def test_eval_return_ok(r: redis.Redis):
     val = r.eval('return {ok="Testing"}', 0)
     assert val == b'Testing'
     val = r.eval('return redis.status_reply("Testing")', 0)
     assert val == b'Testing'
 
 
-def test_eval_return_ok_nested(r):
+def test_eval_return_ok_nested(r: redis.Redis):
     val = r.eval(
         '''
         local a = {}
@@ -300,12 +270,12 @@ def test_eval_return_ok_nested(r):
     assert val == [b'Testing']
 
 
-def test_eval_return_ok_wrong_type(r):
+def test_eval_return_ok_wrong_type(r: redis.Redis):
     with pytest.raises(redis.ResponseError):
         r.eval('return redis.status_reply(123)', 0)
 
 
-def test_eval_pcall(r):
+def test_eval_pcall(r: redis.Redis):
     val = r.eval(
         '''
         local a = {}
@@ -319,12 +289,12 @@ def test_eval_pcall(r):
     assert isinstance(val[0], ResponseError)
 
 
-def test_eval_pcall_return_value(r):
+def test_eval_pcall_return_value(r: redis.Redis):
     with pytest.raises(ResponseError):
         r.eval('return redis.pcall("foo")', 0)
 
 
-def test_eval_delete(r):
+def test_eval_delete(r: redis.Redis):
     r.set('foo', 'bar')
     val = r.get('foo')
     assert val == b'bar'
@@ -332,12 +302,12 @@ def test_eval_delete(r):
     assert val is None
 
 
-def test_eval_exists(r):
+def test_eval_exists(r: redis.Redis):
     val = r.eval('return redis.call("exists", KEYS[1]) == 0', 1, 'foo')
     assert val == 1
 
 
-def test_eval_flushdb(r):
+def test_eval_flushdb(r: redis.Redis):
     r.set('foo', 'bar')
     val = r.eval(
         '''
@@ -367,7 +337,7 @@ def test_eval_flushall(r, create_redis):
     assert 'r2' not in r2
 
 
-def test_eval_incrbyfloat(r):
+def test_eval_incrbyfloat(r: redis.Redis):
     r.set('foo', 0.5)
     val = r.eval(
         '''
@@ -378,7 +348,7 @@ def test_eval_incrbyfloat(r):
     assert val == 1
 
 
-def test_eval_lrange(r):
+def test_eval_lrange(r: redis.Redis):
     r.rpush('foo', 'a', 'b')
     val = r.eval(
         '''
@@ -389,7 +359,7 @@ def test_eval_lrange(r):
     assert val == 1
 
 
-def test_eval_ltrim(r):
+def test_eval_ltrim(r: redis.Redis):
     r.rpush('foo', 'a', 'b', 'c', 'd')
     val = r.eval(
         '''
@@ -401,7 +371,7 @@ def test_eval_ltrim(r):
     assert r.lrange('foo', 0, -1) == [b'b', b'c']
 
 
-def test_eval_lset(r):
+def test_eval_lset(r: redis.Redis):
     r.rpush('foo', 'a', 'b')
     val = r.eval(
         '''
@@ -413,7 +383,7 @@ def test_eval_lset(r):
     assert r.lrange('foo', 0, -1) == [b'z', b'b']
 
 
-def test_eval_sdiff(r):
+def test_eval_sdiff(r: redis.Redis):
     r.sadd('foo', 'a', 'b', 'c', 'f', 'e', 'd')
     r.sadd('bar', 'b')
     val = r.eval(
@@ -432,13 +402,13 @@ def test_eval_sdiff(r):
     assert sorted(val) == [b'a', b'c', b'd', b'e', b'f']
 
 
-def test_script(r):
+def test_script(r: redis.Redis):
     script = r.register_script('return ARGV[1]')
     result = script(args=[42])
     assert result == b'42'
 
 
-@fake_only
+@testtools.fake_only
 def test_lua_log(r, caplog):
     logger = fakeredis._server.LOGGER
     script = """
@@ -458,16 +428,16 @@ def test_lua_log(r, caplog):
     ]
 
 
-def test_lua_log_no_message(r):
+def test_lua_log_no_message(r: redis.Redis):
     script = "redis.log(redis.LOG_DEBUG)"
     script = r.register_script(script)
     with pytest.raises(redis.ResponseError):
         script()
 
 
-@fake_only
+@testtools.fake_only
 def test_lua_log_different_types(r, caplog):
-    logger = fakeredis._server.LOGGER
+    logger = logging.getLogger('fakeredis')
     script = "redis.log(redis.LOG_DEBUG, 'string', 1, true, 3.14, 'string')"
     script = r.register_script(script)
     with caplog.at_level('DEBUG'):
@@ -477,14 +447,14 @@ def test_lua_log_different_types(r, caplog):
     ]
 
 
-def test_lua_log_wrong_level(r):
+def test_lua_log_wrong_level(r: redis.Redis):
     script = "redis.log(10, 'string')"
     script = r.register_script(script)
     with pytest.raises(redis.ResponseError):
         script()
 
 
-@fake_only
+@testtools.fake_only
 def test_lua_log_defined_vars(r, caplog):
     logger = fakeredis._server.LOGGER
     script = """
@@ -497,7 +467,7 @@ def test_lua_log_defined_vars(r, caplog):
     assert caplog.record_tuples == [(logger.name, logging.DEBUG, 'string')]
 
 
-def test_hscan_cursors_are_bytes(r):
+def test_hscan_cursors_are_bytes(r: redis.Redis):
     r.hset('hkey', 'foo', 1)
 
     result = r.eval(
@@ -511,3 +481,28 @@ def test_hscan_cursors_are_bytes(r):
 
     assert result == b'0'
     assert isinstance(result, bytes)
+
+
+@pytest.mark.xfail  # TODO
+def test_deleting_while_scan(r: redis.Redis):
+    for i in range(100):
+        r.set(f'key-{i}', i)
+
+    assert len(r.keys()) == 100
+
+    script = """
+        local cursor = 0
+        local seen = {}
+        repeat
+            local result = redis.call('SCAN', cursor)
+            for _,key in ipairs(result[2]) do
+                seen[#seen+1] = key
+                redis.call('DEL', key)
+            end
+            cursor = tonumber(result[1])
+        until cursor == 0
+        return seen
+    """
+
+    assert len(r.register_script(script)()) == 100
+    assert len(r.keys()) == 0
diff --git a/test/test_singleton.py b/test/test_singleton.py
deleted file mode 100644
index d776fca..0000000
--- a/test/test_singleton.py
+++ /dev/null
@@ -1,9 +0,0 @@
-import fakeredis
-
-
-def test_singleton():
-    conn_generator = fakeredis.FakeRedisConnSingleton()
-    conn1 = conn_generator(dict(), False)
-    conn2 = conn_generator(dict(), False)
-    assert conn1.set('foo', 'bar') is True
-    assert conn2.get('foo') == b'bar'
diff --git a/test/test_zadd.py b/test/test_zadd.py
new file mode 100644
index 0000000..14132aa
--- /dev/null
+++ b/test/test_zadd.py
@@ -0,0 +1,160 @@
+import pytest
+import redis
+import redis.client
+from packaging.version import Version
+
+from test.testtools import raw_command
+
+REDIS_VERSION = Version(redis.__version__)
+
+
+def test_zadd(r: redis.Redis):
+    r.zadd('foo', {'four': 4})
+    r.zadd('foo', {'three': 3})
+    assert r.zadd('foo', {'two': 2, 'one': 1, 'zero': 0}) == 3
+    assert r.zrange('foo', 0, -1) == [b'zero', b'one', b'two', b'three', b'four']
+    assert r.zadd('foo', {'zero': 7, 'one': 1, 'five': 5}) == 1
+    assert (
+            r.zrange('foo', 0, -1)
+            == [b'one', b'two', b'three', b'four', b'five', b'zero']
+    )
+
+
+def test_zadd_empty(r: redis.Redis):
+    # Have to add at least one key/value pair
+    with pytest.raises(redis.RedisError):
+        r.zadd('foo', {})
+
+
+@pytest.mark.max_server('6.2.7')
+def test_zadd_minus_zero_redis6(r: redis.Redis):
+    # Changing -0 to +0 is ignored
+    r.zadd('foo', {'a': -0.0})
+    r.zadd('foo', {'a': 0.0})
+    assert raw_command(r, 'zscore', 'foo', 'a') == b'-0'
+
+
+@pytest.mark.min_server('7')
+def test_zadd_minus_zero_redis7(r: redis.Redis):
+    r.zadd('foo', {'a': -0.0})
+    r.zadd('foo', {'a': 0.0})
+    assert raw_command(r, 'zscore', 'foo', 'a') == b'0'
+
+
+def test_zadd_wrong_type(r: redis.Redis):
+    r.sadd('foo', 'bar')
+    with pytest.raises(redis.ResponseError):
+        r.zadd('foo', {'two': 2})
+
+
+def test_zadd_multiple(r: redis.Redis):
+    r.zadd('foo', {'one': 1, 'two': 2})
+    assert r.zrange('foo', 0, 0) == [b'one']
+    assert r.zrange('foo', 1, 1) == [b'two']
+
+
+@pytest.mark.parametrize(
+    'param,return_value,state',
+    [
+        ({'four': 2.0, 'three': 1.0}, 0, [(b'three', 3.0), (b'four', 4.0)]),
+        ({'four': 2.0, 'three': 1.0, 'zero': 0.0}, 1, [(b'zero', 0.0), (b'three', 3.0), (b'four', 4.0)]),
+        ({'two': 2.0, 'one': 1.0}, 2, [(b'one', 1.0), (b'two', 2.0), (b'three', 3.0), (b'four', 4.0)])
+    ]
+)
+@pytest.mark.parametrize('ch', [False, True])
+def test_zadd_with_nx(r, param, return_value, state, ch):
+    r.zadd('foo', {'four': 4.0, 'three': 3.0})
+    assert r.zadd('foo', param, nx=True, ch=ch) == return_value
+    assert r.zrange('foo', 0, -1, withscores=True) == state
+
+
+@pytest.mark.parametrize(
+    'param,return_value,state',
+    [
+        ({'four': 2.0, 'three': 1.0}, 0, [(b'three', 3.0), (b'four', 4.0)]),
+        ({'four': 5.0, 'three': 1.0, 'zero': 0.0}, 2, [(b'zero', 0.0), (b'three', 3.0), (b'four', 5.0), ]),
+        ({'two': 2.0, 'one': 1.0}, 2, [(b'one', 1.0), (b'two', 2.0), (b'three', 3.0), (b'four', 4.0)])
+    ]
+)
+def test_zadd_with_gt_and_ch(r, param, return_value, state):
+    r.zadd('foo', {'four': 4.0, 'three': 3.0})
+    assert r.zadd('foo', param, gt=True, ch=True) == return_value
+    assert r.zrange('foo', 0, -1, withscores=True) == state
+
+
+@pytest.mark.parametrize(
+    'param,return_value,state',
+    [
+        ({'four': 2.0, 'three': 1.0}, 0, [(b'three', 3.0), (b'four', 4.0)]),
+        ({'four': 5.0, 'three': 1.0, 'zero': 0.0}, 1, [(b'zero', 0.0), (b'three', 3.0), (b'four', 5.0)]),
+        ({'two': 2.0, 'one': 1.0}, 2, [(b'one', 1.0), (b'two', 2.0), (b'three', 3.0), (b'four', 4.0)])
+    ]
+)
+def test_zadd_with_gt(r, param, return_value, state):
+    r.zadd('foo', {'four': 4.0, 'three': 3.0})
+    assert r.zadd('foo', param, gt=True) == return_value
+    assert r.zrange('foo', 0, -1, withscores=True) == state
+
+
+@pytest.mark.parametrize(
+    'param,return_value,state',
+    [
+        ({'four': 4.0, 'three': 1.0}, 1, [(b'three', 1.0), (b'four', 4.0)]),
+        ({'four': 4.0, 'three': 1.0, 'zero': 0.0}, 2, [(b'zero', 0.0), (b'three', 1.0), (b'four', 4.0)]),
+        ({'two': 2.0, 'one': 1.0}, 2, [(b'one', 1.0), (b'two', 2.0), (b'three', 3.0), (b'four', 4.0)])
+    ]
+)
+def test_zadd_with_ch(r, param, return_value, state):
+    r.zadd('foo', {'four': 4.0, 'three': 3.0})
+    assert r.zadd('foo', param, ch=True) == return_value
+    assert r.zrange('foo', 0, -1, withscores=True) == state
+
+
+@pytest.mark.parametrize(
+    'param,changed,state',
+    [
+        ({'four': 2.0, 'three': 1.0}, 2, [(b'three', 1.0), (b'four', 2.0)]),
+        ({'four': 4.0, 'three': 3.0, 'zero': 0.0}, 0, [(b'three', 3.0), (b'four', 4.0)]),
+        ({'two': 2.0, 'one': 1.0}, 0, [(b'three', 3.0), (b'four', 4.0)])
+    ]
+)
+@pytest.mark.parametrize('ch', [False, True])
+def test_zadd_with_xx(r, param, changed, state, ch):
+    r.zadd('foo', {'four': 4.0, 'three': 3.0})
+    assert r.zadd('foo', param, xx=True, ch=ch) == (changed if ch else 0)
+    assert r.zrange('foo', 0, -1, withscores=True) == state
+
+
+@pytest.mark.parametrize('ch', [False, True])
+def test_zadd_with_nx_and_xx(r, ch):
+    r.zadd('foo', {'four': 4.0, 'three': 3.0})
+    with pytest.raises(redis.DataError):
+        r.zadd('foo', {'four': -4.0, 'three': -3.0}, nx=True, xx=True, ch=ch)
+
+
+@pytest.mark.parametrize('ch', [False, True])
+def test_zadd_incr(r, ch):
+    r.zadd('foo', {'four': 4.0, 'three': 3.0})
+    assert r.zadd('foo', {'four': 1.0}, incr=True, ch=ch) == 5.0
+    assert r.zadd('foo', {'three': 1.0}, incr=True, nx=True, ch=ch) is None
+    assert r.zscore('foo', 'three') == 3.0
+    assert r.zadd('foo', {'bar': 1.0}, incr=True, xx=True, ch=ch) is None
+    assert r.zadd('foo', {'three': 1.0}, incr=True, xx=True, ch=ch) == 4.0
+
+
+def test_zadd_with_xx_and_gt_and_ch(r: redis.Redis):
+    r.zadd('test', {"one": 1})
+    assert r.zscore("test", "one") == 1.0
+    assert r.zadd("test", {"one": 4}, xx=True, gt=True, ch=True) == 1
+    assert r.zscore("test", "one") == 4.0
+    assert r.zadd("test", {"one": 0}, xx=True, gt=True, ch=True) == 0
+    assert r.zscore("test", "one") == 4.0
+
+
+def test_zadd_and_zrangebyscore(r: redis.Redis):
+    raw_command(r, 'zadd', '', 0.0, '')
+    assert raw_command(r, 'zrangebyscore', '', 0.0, 0.0, 'limit', 0, 0) == []
+    with pytest.raises(redis.RedisError):
+        raw_command(r, 'zrangebyscore', '', 0.0, 0.0, 'limit', 0)
+    with pytest.raises(redis.RedisError):
+        raw_command(r, 'zadd', 't', 0.0, 'xx', '')
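
The NX/XX/GT/CH combinations asserted in the parametrized tests above follow a small decision table. The `zadd_model` function below is a minimal pure-Python model of that return-value logic over a plain dict, not fakeredis's implementation (it ignores ordering and the INCR mode).

```python
def zadd_model(zset, mapping, nx=False, xx=False, gt=False, ch=False):
    """Model ZADD's return value over a plain member -> score dict.

    Returns the number of added members, or added + changed members
    when ch=True, mirroring the cases in the tests above.
    """
    if nx and xx:
        raise ValueError('nx and xx are mutually exclusive')
    added = changed = 0
    for member, score in mapping.items():
        if member not in zset:
            if xx:
                continue  # XX: only update existing members
            zset[member] = score
            added += 1
        else:
            if nx:
                continue  # NX: never touch existing members
            if gt and score <= zset[member]:
                continue  # GT: only ever raise the score
            if score != zset[member]:
                zset[member] = score
                changed += 1
    return added + changed if ch else added


# Matches test_zadd_with_gt_and_ch: 'four' raised, 'zero' added,
# 'three' skipped because 1.0 is not greater than 3.0.
z = {'four': 4.0, 'three': 3.0}
assert zadd_model(z, {'four': 5.0, 'three': 1.0, 'zero': 0.0}, gt=True, ch=True) == 2
assert z == {'four': 5.0, 'three': 3.0, 'zero': 0.0}
```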
diff --git a/test/testtools.py b/test/testtools.py
index b88b215..93bc0ed 100644
--- a/test/testtools.py
+++ b/test/testtools.py
@@ -1,4 +1,4 @@
-import importlib
+import importlib.util
 
 import pytest
 import redis
@@ -7,7 +7,12 @@ from packaging.version import Version
 REDIS_VERSION = Version(redis.__version__)
 
 
-def raw_command(r, *args):
+def key_val_dict(size=100):
+    return {f'key:{i}'.encode(): f'val:{i}'.encode()
+            for i in range(size)}
+
+
+def raw_command(r: redis.Redis, *args):
     """Like execute_command, but does not do command-specific response parsing"""
     response_callbacks = r.response_callbacks
     try:
@@ -17,19 +22,16 @@ def raw_command(r, *args):
         r.response_callbacks = response_callbacks
 
 
-# Wrap some redis commands to abstract differences between redis-py 2 and 3.
-def zadd(r, key, d, *args, **kwargs):
-    if REDIS_VERSION >= Version('3'):
-        return r.zadd(key, d, *args, **kwargs)
-    else:
-        return r.zadd(key, **d)
+ALLOWED_CONDITIONS = {'above', 'below'}
 
 
-def run_test_if_redis_ver(condition: str, ver: str):
-    cond = REDIS_VERSION < Version(ver) if condition == 'above' else REDIS_VERSION > Version(ver)
+def run_test_if_redispy_ver(condition: str, ver: str):
+    if condition not in ALLOWED_CONDITIONS:
+        raise ValueError(f'condition {condition} is not in allowed conditions ({ALLOWED_CONDITIONS})')
+    cond = REDIS_VERSION >= Version(ver) if condition == 'above' else REDIS_VERSION <= Version(ver)
     return pytest.mark.skipif(
-        cond,
-        reason=f"Test is only applicable to redis-py {ver} and above"
+        not cond,
+        reason=f"Test is not applicable to redis-py {REDIS_VERSION} ({condition}, {ver})"
     )
 
 
@@ -39,8 +41,8 @@ run_test_if_lupa = pytest.mark.skipif(
     reason="Test is only applicable if lupa is installed"
 )
 
-_aioredis_module = importlib.util.find_spec("aioredis")
-run_test_if_no_aioredis = pytest.mark.skipif(
-    _aioredis_module is not None,
-    reason="Test is only applicable if aioredis is not installed",
+fake_only = pytest.mark.parametrize(
+    'create_redis',
+    [pytest.param('FakeStrictRedis', marks=pytest.mark.fake)],
+    indirect=True
 )
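
The corrected `run_test_if_redispy_ver` gate above boils down to a boolean over the installed redis-py version. The comparison can be illustrated with a standalone predicate; plain tuples stand in for `packaging.Version`, which the real helper uses.

```python
ALLOWED_CONDITIONS = {'above', 'below'}


def should_run(condition, installed, required):
    """Return True when a test gated on redis-py `required` should run.

    'above' means: run on `required` and anything newer (>=).
    'below' means: run on `required` and anything older (<=).
    Versions are plain tuples here, e.g. (4, 5, 1).
    """
    if condition not in ALLOWED_CONDITIONS:
        raise ValueError(f'condition {condition} is not in {ALLOWED_CONDITIONS}')
    return installed >= required if condition == 'above' else installed <= required


assert should_run('above', (4, 5, 1), (4, 0, 0))
assert not should_run('below', (4, 5, 1), (4, 0, 0))
```

The helper in the diff negates this predicate to build the `skipif` marker, so the skip reason fires exactly when `should_run` is false.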
diff --git a/tox.ini b/tox.ini
index 9977ba8..0f07ccb 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,14 +1,21 @@
 [tox]
 envlist =
-    py{37,38,39,310}
+    py{37,38,39,310,311}
 
 [testenv]
-whitelist_externals = poetry
+allowlist_externals =
+    poetry
+docker =
+    redis
 usedevelop = True
 commands =
-    poetry install -v
+    poetry install --extras "lua json" -v
     poetry run pytest -v
-extras = lua
 deps =
-    hypothesis
-    pytest
+    poetry
+
+[docker:redis]
+image = redis/redis-stack:7.0.6-RC8
+ports =
+    6379:6379/tcp
+healthcheck_cmd = python -c "import socket;print(True) if 0 == socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect_ex(('127.0.0.1',6379)) else False"
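
The healthcheck one-liner in the `[docker:redis]` section above expands to a small TCP probe. The function below is a readable equivalent (the function name and defaults are illustrative, not part of the tox config):

```python
import socket


def redis_port_open(host='127.0.0.1', port=6379, timeout=1.0):
    """Report whether a TCP connection to the Redis port succeeds.

    connect_ex returns 0 on success and an errno on failure, so this
    mirrors the one-liner's 0 == connect_ex(...) check.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        return sock.connect_ex((host, port)) == 0
    finally:
        sock.close()


print('redis reachable:', redis_port_open(timeout=0.5))
```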
