Compare commits

...

21 Commits

Author SHA1 Message Date
freddiewanah
a5a9658896
Refactor test cases to improve unit test quality (#2796)
Co-authored-by: Adam Hopkins <adam@amhopkins.com>
2023-09-07 15:26:56 +03:00
Adam Hopkins
91d7e6a77d
Add PAAS files for webhook deployment 2023-09-07 10:20:10 +03:00
Theodore Ni
eb3d78f687
Fix test_fast when there's only one CPU (#2801)
Co-authored-by: Néstor Pérez <25409753+prryplatypus@users.noreply.github.com>
Co-authored-by: Adam Hopkins <adam@amhopkins.com>
2023-09-06 21:26:28 +03:00
Adam Hopkins
d255d1aae1
Conversion of User Guide to the SHH stack (#2781) 2023-09-06 15:44:00 +03:00
Néstor Pérez
47215d4635
Run tests on push as well (#2814) 2023-08-30 20:03:22 +03:00
Adam Hopkins
38ff9069f3
Suppress task cancel traceback (#2812) 2023-08-30 09:05:37 +03:00
Néstor Pérez
4dde4572ec
Refactor GitHub Actions (#2808) 2023-08-29 21:47:40 +03:00
Néstor Pérez
31d14704cb
Update README (#2810) 2023-08-27 18:42:43 +03:00
Néstor Pérez
6a89f4b2fe
Add constraint for autodocsumm (#2807) 2023-08-23 20:47:23 +03:00
Adam Hopkins
16256522f6
Disable Test PyPI dist 2023-07-25 16:13:47 +03:00
Adam Hopkins
205795d1e8
Prepare for v23.6 release (#2797) 2023-07-25 15:57:29 +03:00
Adam Hopkins
9cbe1fb8ad
Add convenience method for exception reporting (#2792) 2023-07-18 00:21:55 +03:00
L. Kärkkäinen
31d7ba8f8c
Add request.client_ip (#2790)
Co-authored-by: L. Kärkkäinen <Tronic@users.noreply.github.com>
2023-07-13 23:01:02 +03:00
Adam Hopkins
dc3c4d1393
Add custom typing to config and ctx (#2785) 2023-07-12 23:47:58 +03:00
Adam Hopkins
929d270569
Update bug-report.yml (#2788) 2023-07-12 19:00:28 +03:00
Adam Hopkins
93714df051
Update bug-report.yml (#2787) 2023-07-12 18:53:14 +03:00
L. Kärkkäinen
6e61eab872
Increase KEEP_ALIVE_TIMEOUT default to 120 seconds (#2670)
Co-authored-by: L. Kärkkäinen <Tronic@users.noreply.github.com>
2023-07-12 08:45:30 +03:00
Adam Hopkins
6848ff24d8
Run keep alive tests in loop to get available port (#2779) 2023-07-09 22:58:17 +03:00
Adam Hopkins
666371bb92
Set multiprocessing start method early (#2776) 2023-07-09 22:34:15 +03:00
Adam Hopkins
4a2b82e42e
Remove Python3.7 support (#2777) 2023-07-09 22:00:14 +03:00
Moshe Nahmias
5dd1623192
Alow Blueprint routes to explicitly define error_format (#2773)
Co-authored-by: Adam Hopkins <adam@amhopkins.com>
2023-07-09 14:47:59 +03:00
381 changed files with 52654 additions and 2775 deletions

View File

@@ -21,7 +21,14 @@ body:
     id: code
     attributes:
       label: Code snippet
-      description: Relevant source code, make sure to remove what is not necessary.
+      description: |
+        Relevant source code, make sure to remove what is not necessary. Please try and format your code so that it is easier to read. For example:
+        ```python
+        from sanic import Sanic
+
+        app = Sanic("Example")
+        ```
     validations:
       required: false
   - type: textarea
@@ -42,11 +49,16 @@ body:
         - ASGI
     validations:
       required: true
-  - type: input
+  - type: dropdown
     id: os
     attributes:
       label: Operating System
       description: What OS?
+      options:
+        - Linux
+        - MacOS
+        - Windows
+        - Other (tell us in the description)
     validations:
       required: true
   - type: input

View File

@@ -12,29 +12,23 @@ on:
       - main
       - current-release
       - "*LTS"
 
 jobs:
-  test:
-    runs-on: ${{ matrix.os }}
+  coverage:
+    name: Check coverage
+    runs-on: ubuntu-latest
     strategy:
-      matrix:
-        python-version: [3.9]
-        os: [ubuntu-latest]
       fail-fast: false
     steps:
-      - uses: actions/checkout@v2
-      - uses: actions/setup-python@v1
-        with:
-          python-version: ${{ matrix.python-version }}
-      - name: Install dependencies 🔨
-        run: |
-          python -m pip install --upgrade pip
-          pip install tox
       - name: Run coverage
-        run: tox -e coverage
-        continue-on-error: true
-      - uses: codecov/codecov-action@v2
+        uses: sanic-org/simple-tox-action@v1
+        with:
+          python-version: "3.11"
+          tox-env: coverage
+          ignore-errors: true
+      - name: Run Codecov
+        uses: codecov/codecov-action@v3
         with:
           files: ./coverage.xml
           fail_ci_if_error: false

View File

@@ -1,39 +0,0 @@
name: On Demand Task
on:
workflow_dispatch:
inputs:
python-version:
description: 'Version of Python to use for running Test'
required: false
default: "3.8"
tox-env:
description: 'Test Environment to Run'
required: true
default: ''
os:
description: 'Operating System to Run Test on'
required: false
default: ubuntu-latest
jobs:
onDemand:
name: tox-${{ matrix.config.tox-env }}-on-${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: ["${{ github.event.inputs.os}}"]
config:
- { tox-env: "${{ github.event.inputs.tox-env }}", py-version: "${{ github.event.inputs.python-version }}"}
steps:
- name: Checkout Repository
uses: actions/checkout@v2
- name: Run tests
uses: harshanarayana/custom-actions@main
with:
python-version: ${{ matrix.config.py-version }}
test-infra-tool: tox
test-infra-version: latest
action: tests
test-additional-args: "-e=${{ matrix.config.tox-env }}"
experimental-ignore-error: "yes"

View File

@@ -1,38 +0,0 @@
name: Security Analysis
on:
pull_request:
branches:
- main
- current-release
- "*LTS"
types: [opened, synchronize, reopened, ready_for_review]
jobs:
bandit:
if: github.event.pull_request.draft == false
name: type-check-${{ matrix.config.python-version }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest]
config:
- { python-version: 3.7, tox-env: security}
- { python-version: 3.8, tox-env: security}
- { python-version: 3.9, tox-env: security}
- { python-version: "3.10", tox-env: security}
- { python-version: "3.11", tox-env: security}
steps:
- name: Checkout the repository
uses: actions/checkout@v2
id: checkout-branch
- name: Run Linter Checks
id: linter-check
uses: harshanarayana/custom-actions@main
with:
python-version: ${{ matrix.config.python-version }}
test-infra-tool: tox
test-infra-version: latest
action: tests
test-additional-args: "-e=${{ matrix.config.tox-env }}"

View File

@@ -1,33 +0,0 @@
name: Document Linter
on:
pull_request:
branches:
- main
- current-release
- "*LTS"
types: [opened, synchronize, reopened, ready_for_review]
jobs:
docsLinter:
if: github.event.pull_request.draft == false
name: Lint Documentation
runs-on: ubuntu-latest
strategy:
matrix:
config:
- {python-version: "3.10", tox-env: "docs"}
fail-fast: false
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Run Document Linter
uses: harshanarayana/custom-actions@main
with:
python-version: ${{ matrix.config.python-version }}
test-infra-tool: tox
test-infra-version: latest
action: tests
test-additional-args: "-e=${{ matrix.config.tox-env }}"

View File

@@ -1,34 +0,0 @@
name: Linter Checks
on:
pull_request:
branches:
- main
- current-release
- "*LTS"
types: [opened, synchronize, reopened, ready_for_review]
jobs:
linter:
if: github.event.pull_request.draft == false
name: lint
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest]
config:
- { python-version: "3.10", tox-env: lint}
steps:
- name: Checkout the repository
uses: actions/checkout@v2
id: checkout-branch
- name: Run Linter Checks
id: linter-check
uses: harshanarayana/custom-actions@main
with:
python-version: ${{ matrix.config.python-version }}
test-infra-tool: tox
test-infra-version: latest
action: tests
test-additional-args: "-e=${{ matrix.config.tox-env }}"

View File

@@ -1,41 +0,0 @@
name: Python PyPy Tests
on:
workflow_dispatch:
inputs:
tox-env:
description: "Tox Env to run on the PyPy Infra"
required: false
default: "pypy37"
pypy-version:
description: "Version of PyPy to use"
required: false
default: "pypy-3.7"
jobs:
testPyPy:
name: ut-${{ matrix.config.tox-env }}-${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
# os: [ubuntu-latest, macos-latest]
os: [ubuntu-latest]
config:
- {
python-version: "${{ github.event.inputs.pypy-version }}",
tox-env: "${{ github.event.inputs.tox-env }}",
}
steps:
- name: Checkout the Repository
uses: actions/checkout@v2
id: checkout-branch
- name: Run Unit Tests
uses: harshanarayana/custom-actions@main
with:
python-version: ${{ matrix.config.python-version }}
test-infra-tool: tox
test-infra-version: latest
action: tests
test-additional-args: "-e=${{ matrix.config.tox-env }}"
experimental-ignore-error: "true"
command-timeout: "600000"

View File

@@ -1,48 +0,0 @@
name: Python 3.10 Tests
on:
pull_request:
branches:
- main
- current-release
- "*LTS"
types: [opened, synchronize, reopened, ready_for_review]
jobs:
testPy310:
if: github.event.pull_request.draft == false
name: ut-${{ matrix.config.tox-env }}-${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
# os: [ubuntu-latest, macos-latest]
os: [ubuntu-latest]
config:
- {
python-version: "3.10",
tox-env: py310,
ignore-error-flake: "false",
command-timeout: "0",
}
- {
python-version: "3.10",
tox-env: py310-no-ext,
ignore-error-flake: "true",
command-timeout: "600000",
}
steps:
- name: Checkout the Repository
uses: actions/checkout@v2
id: checkout-branch
- name: Run Unit Tests
uses: harshanarayana/custom-actions@main
with:
python-version: ${{ matrix.config.python-version }}
test-infra-tool: tox
test-infra-version: latest
action: tests
test-additional-args: "-e=${{ matrix.config.tox-env }},-vv=''"
experimental-ignore-error: "${{ matrix.config.ignore-error-flake }}"
command-timeout: "${{ matrix.config.command-timeout }}"
test-failure-retry: "3"

View File

@@ -1,48 +0,0 @@
name: Python 3.11 Tests
on:
pull_request:
branches:
- main
- current-release
- "*LTS"
types: [opened, synchronize, reopened, ready_for_review]
jobs:
testPy311:
if: github.event.pull_request.draft == false
name: ut-${{ matrix.config.tox-env }}-${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
# os: [ubuntu-latest, macos-latest]
os: [ubuntu-latest]
config:
- {
python-version: "3.11",
tox-env: py311,
ignore-error-flake: "false",
command-timeout: "0",
}
- {
python-version: "3.11",
tox-env: py311-no-ext,
ignore-error-flake: "true",
command-timeout: "600000",
}
steps:
- name: Checkout the Repository
uses: actions/checkout@v2
id: checkout-branch
- name: Run Unit Tests
uses: harshanarayana/custom-actions@main
with:
python-version: ${{ matrix.config.python-version }}
test-infra-tool: tox
test-infra-version: latest
action: tests
test-additional-args: "-e=${{ matrix.config.tox-env }},-vv=''"
experimental-ignore-error: "${{ matrix.config.ignore-error-flake }}"
command-timeout: "${{ matrix.config.command-timeout }}"
test-failure-retry: "3"

View File

@@ -1,36 +0,0 @@
name: Python 3.7 Tests
on:
pull_request:
branches:
- main
- current-release
- "*LTS"
types: [opened, synchronize, reopened, ready_for_review]
jobs:
testPy37:
if: github.event.pull_request.draft == false
name: ut-${{ matrix.config.tox-env }}-${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: true
matrix:
# os: [ubuntu-latest, macos-latest]
os: [ubuntu-latest]
config:
- { python-version: 3.7, tox-env: py37 }
- { python-version: 3.7, tox-env: py37-no-ext }
steps:
- name: Checkout the Repository
uses: actions/checkout@v2
id: checkout-branch
- name: Run Unit Tests
uses: harshanarayana/custom-actions@main
with:
python-version: ${{ matrix.config.python-version }}
test-infra-tool: tox
test-infra-version: latest
action: tests
test-additional-args: "-e=${{ matrix.config.tox-env }}"
test-failure-retry: "3"

View File

@@ -1,36 +0,0 @@
name: Python 3.8 Tests
on:
pull_request:
branches:
- main
- current-release
- "*LTS"
types: [opened, synchronize, reopened, ready_for_review]
jobs:
testPy38:
if: github.event.pull_request.draft == false
name: ut-${{ matrix.config.tox-env }}-${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: true
matrix:
# os: [ubuntu-latest, macos-latest]
os: [ubuntu-latest]
config:
- { python-version: 3.8, tox-env: py38 }
- { python-version: 3.8, tox-env: py38-no-ext }
steps:
- name: Checkout the Repository
uses: actions/checkout@v2
id: checkout-branch
- name: Run Unit Tests
uses: harshanarayana/custom-actions@main
with:
python-version: ${{ matrix.config.python-version }}
test-infra-tool: tox
test-infra-version: latest
action: tests
test-additional-args: "-e=${{ matrix.config.tox-env }}"
test-failure-retry: "3"

View File

@@ -1,48 +0,0 @@
name: Python 3.9 Tests
on:
pull_request:
branches:
- main
- current-release
- "*LTS"
types: [opened, synchronize, reopened, ready_for_review]
jobs:
testPy39:
if: github.event.pull_request.draft == false
name: ut-${{ matrix.config.tox-env }}-${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: true
matrix:
# os: [ubuntu-latest, macos-latest]
os: [ubuntu-latest]
config:
- {
python-version: 3.9,
tox-env: py39,
ignore-error-flake: "false",
command-timeout: "0",
}
- {
python-version: 3.9,
tox-env: py39-no-ext,
ignore-error-flake: "true",
command-timeout: "600000",
}
steps:
- name: Checkout the Repository
uses: actions/checkout@v2
id: checkout-branch
- name: Run Unit Tests
uses: harshanarayana/custom-actions@main
with:
python-version: ${{ matrix.config.python-version }}
test-infra-tool: tox
test-infra-version: latest
action: tests
test-additional-args: "-e=${{ matrix.config.tox-env }},-vv=''"
experimental-ignore-error: "${{ matrix.config.ignore-error-flake }}"
command-timeout: "${{ matrix.config.command-timeout }}"
test-failure-retry: "3"

View File

@@ -1,38 +0,0 @@
name: Typing Checks
on:
pull_request:
branches:
- main
- current-release
- "*LTS"
types: [opened, synchronize, reopened, ready_for_review]
jobs:
typeChecking:
if: github.event.pull_request.draft == false
name: type-check-${{ matrix.config.python-version }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest]
config:
# - { python-version: 3.7, tox-env: type-checking}
- { python-version: 3.8, tox-env: type-checking}
- { python-version: 3.9, tox-env: type-checking}
- { python-version: "3.10", tox-env: type-checking}
- { python-version: "3.11", tox-env: type-checking}
steps:
- name: Checkout the repository
uses: actions/checkout@v2
id: checkout-branch
- name: Run Linter Checks
id: linter-check
uses: harshanarayana/custom-actions@main
with:
python-version: ${{ matrix.config.python-version }}
test-infra-tool: tox
test-infra-version: latest
action: tests
test-additional-args: "-e=${{ matrix.config.tox-env }}"

View File

@@ -1,40 +0,0 @@
name: Run Unit Tests on Windows
on:
pull_request:
branches:
- main
- current-release
- "*LTS"
types: [opened, synchronize, reopened, ready_for_review]
jobs:
testsOnWindows:
if: github.event.pull_request.draft == false
name: ut-${{ matrix.config.tox-env }}
runs-on: windows-latest
strategy:
fail-fast: false
matrix:
config:
- { python-version: 3.7, tox-env: py37-no-ext }
- { python-version: 3.8, tox-env: py38-no-ext }
- { python-version: 3.9, tox-env: py39-no-ext }
- { python-version: "3.10", tox-env: py310-no-ext }
- { python-version: "3.11", tox-env: py310-no-ext }
- { python-version: pypy-3.7, tox-env: pypy37-no-ext }
steps:
- name: Checkout Repository
uses: actions/checkout@v2
- name: Run Unit Tests
uses: ahopkins/custom-actions@pip-extra-args
with:
python-version: ${{ matrix.config.python-version }}
test-infra-tool: tox
test-infra-version: latest
action: tests
test-additional-args: "-e=${{ matrix.config.tox-env }}"
experimental-ignore-error: "true"
command-timeout: "600000"
pip-extra-args: "--user"

View File

@@ -1,48 +0,0 @@
name: Publish Docker Images
on:
workflow_run:
workflows:
- 'Publish Artifacts'
types:
- completed
jobs:
publishDockerImages:
name: Docker Image Build [${{ matrix.python-version }}]
runs-on: ubuntu-latest
strategy:
fail-fast: true
matrix:
python-version: ["3.7", "3.8", "3.9", "3.10", "3.11"]
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Build Latest Base images for ${{ matrix.python-version }}
uses: harshanarayana/custom-actions@main
with:
docker-image-base-name: sanicframework/sanic-build
ignore-python-setup: 'true'
dockerfile-base-dir: './docker'
action: 'image-publish'
docker-image-tag: "${{ matrix.python-version }}"
docker-file-suffix: "base"
docker-build-args: "PYTHON_VERSION=${{ matrix.python-version }}"
registry-auth-user: ${{ secrets.DOCKER_ACCESS_USER }}
registry-auth-password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
push-images: 'true'
- name: Publish Sanic Docker Image for ${{ matrix.python-version }}
uses: harshanarayana/custom-actions@main
with:
docker-image-base-name: sanicframework/sanic
ignore-python-setup: 'true'
dockerfile-base-dir: './docker'
action: 'image-publish'
docker-build-args: "BASE_IMAGE_TAG=${{ matrix.python-version }}"
docker-image-prefix: "${{ matrix.python-version }}"
registry-auth-user: ${{ secrets.DOCKER_ACCESS_USER }}
registry-auth-password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
push-images: 'true'

View File

@@ -1,28 +0,0 @@
name: Publish Artifacts
on:
release:
types: [created]
jobs:
publishPythonPackage:
name: Publishing Sanic Release Artifacts
runs-on: ubuntu-latest
strategy:
fail-fast: true
matrix:
python-version: ["3.10"]
steps:
- name: Checkout Repository
uses: actions/checkout@v2
- name: Publish Python Package
uses: harshanarayana/custom-actions@main
with:
python-version: ${{ matrix.python-version }}
package-infra-name: "twine"
pypi-user: __token__
pypi-access-token: ${{ secrets.PYPI_ACCESS_TOKEN }}
action: "package-publish"
pypi-verify-metadata: "true"

.github/workflows/publish-release.yml (new file, 174 lines)
View File

@@ -0,0 +1,174 @@
name: Publish release
on:
release:
types: [created]
env:
IS_TEST: false
DOCKER_ORG_NAME: sanicframework
DOCKER_IMAGE_NAME: sanic
DOCKER_BASE_IMAGE_NAME: sanic-build
DOCKER_IMAGE_DOCKERFILE: ./docker/Dockerfile
DOCKER_BASE_IMAGE_DOCKERFILE: ./docker/Dockerfile-base
jobs:
generate_info:
name: Generate info
runs-on: ubuntu-latest
outputs:
docker-tags: ${{ steps.generate_docker_info.outputs.tags }}
pypi-version: ${{ steps.parse_version_tag.outputs.pypi-version }}
steps:
- name: Parse version tag
id: parse_version_tag
env:
TAG_NAME: ${{ github.event.release.tag_name }}
run: |
tag_name="${{ env.TAG_NAME }}"
if [[ ! "${tag_name}" =~ ^v([0-9]{2})\.([0-9]{1,2})\.([0-9]+)$ ]]; then
echo "::error::Tag name must be in the format vYY.MM.MICRO"
exit 1
fi
year_output="year=${BASH_REMATCH[1]}"
month_output="month=${BASH_REMATCH[2]}"
pypi_output="pypi-version=${tag_name#v}"
echo "${year_output}"
echo "${month_output}"
echo "${pypi_output}"
echo "${year_output}" >> $GITHUB_OUTPUT
echo "${month_output}" >> $GITHUB_OUTPUT
echo "${pypi_output}" >> $GITHUB_OUTPUT
- name: Get latest release
id: get_latest_release
run: |
latest_tag=$(
curl -L \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${{ github.token }}" \
-H "X-GitHub-Api-Version: 2022-11-28" \
https://api.github.com/repos/${{ github.repository }}/releases/latest \
| jq -r '.tag_name'
)
echo "latest_tag=$latest_tag" >> $GITHUB_OUTPUT
- name: Generate Docker info
id: generate_docker_info
run: |
tag_year="${{ steps.parse_version_tag.outputs.year }}"
tag_month="${{ steps.parse_version_tag.outputs.month }}"
latest_tag="${{ steps.get_latest_release.outputs.latest_tag }}"
tag="${{ github.event.release.tag_name }}"
tags="${tag_year}.${tag_month}"
if [[ "${tag_month}" == "12" ]]; then
tags+=",LTS"
echo "::notice::Tag ${tag} is LTS version"
else
echo "::notice::Tag ${tag} is not LTS version"
fi
if [[ "${latest_tag}" == "${{ github.event.release.tag_name }}" ]]; then
tags+=",latest"
echo "::notice::Tag ${tag} is marked as latest"
else
echo "::notice::Tag ${tag} is not marked as latest"
fi
tags_output="tags=${tags}"
echo "${tags_output}"
echo "${tags_output}" >> $GITHUB_OUTPUT
publish_package:
name: Build and publish package
runs-on: ubuntu-latest
needs: generate_info
steps:
- name: Checkout repo
uses: actions/checkout@v3
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: "3.11"
- name: Install dependencies
run: pip install build twine
- name: Update package version
run: |
echo "__version__ = \"${{ needs.generate_info.outputs.pypi-version }}\"" > sanic/__version__.py
- name: Build a binary wheel and a source tarball
run: python -m build --sdist --wheel --outdir dist/ .
- name: Publish to PyPi 🚀
run: twine upload --non-interactive --disable-progress-bar dist/*
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ env.IS_TEST == 'true' && secrets.SANIC_TEST_PYPI_API_TOKEN || secrets.SANIC_PYPI_API_TOKEN }}
TWINE_REPOSITORY: ${{ env.IS_TEST == 'true' && 'testpypi' || 'pypi' }}
publish_docker:
name: Publish Docker / Python ${{ matrix.python-version }}
needs: [generate_info, publish_package]
runs-on: ubuntu-latest
strategy:
fail-fast: true
matrix:
python-version: ["3.10", "3.11"]
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_ACCESS_USER }}
password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
- name: Build and push base image
uses: docker/build-push-action@v4
with:
push: ${{ env.IS_TEST == 'false' }}
file: ${{ env.DOCKER_BASE_IMAGE_DOCKERFILE }}
tags: ${{ env.DOCKER_ORG_NAME }}/${{ env.DOCKER_BASE_IMAGE_NAME }}:${{ matrix.python-version }}
build-args: |
PYTHON_VERSION=${{ matrix.python-version }}
- name: Parse tags for this Python version
id: parse_tags
run: |
IFS=',' read -ra tags <<< "${{ needs.generate_info.outputs.docker-tags }}"
tag_args=""
for tag in "${tags[@]}"; do
tag_args+=",${{ env.DOCKER_ORG_NAME }}/${{ env.DOCKER_IMAGE_NAME }}:${tag}-py${{ matrix.python-version }}"
done
tag_args_output="tag_args=${tag_args:1}"
echo "${tag_args_output}"
echo "${tag_args_output}" >> $GITHUB_OUTPUT
- name: Build and push Sanic image
uses: docker/build-push-action@v4
with:
push: ${{ env.IS_TEST == 'false' }}
file: ${{ env.DOCKER_IMAGE_DOCKERFILE }}
tags: ${{ steps.parse_tags.outputs.tag_args }}
build-args: |
BASE_IMAGE_ORG=${{ env.DOCKER_ORG_NAME }}
BASE_IMAGE_NAME=${{ env.DOCKER_BASE_IMAGE_NAME }}
BASE_IMAGE_TAG=${{ matrix.python-version }}
SANIC_PYPI_VERSION=${{ needs.generate_info.outputs.pypi-version }}
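
To make the release logic above easier to follow, here is the tag parsing and Docker-tag derivation from the `generate_info` job sketched in Python. This is a reading aid only, not part of the workflow; the function name is hypothetical and the real job runs the bash steps shown above.

```python
import re


def release_info(tag_name: str, latest_tag: str) -> dict:
    """Mirror the generate_info job: vYY.MM.MICRO -> PyPI version + Docker tags."""
    match = re.match(r"^v([0-9]{2})\.([0-9]{1,2})\.([0-9]+)$", tag_name)
    if not match:
        raise ValueError("Tag name must be in the format vYY.MM.MICRO")
    year, month, _micro = match.groups()

    tags = [f"{year}.{month}"]
    if month == "12":            # December releases are additionally tagged LTS
        tags.append("LTS")
    if tag_name == latest_tag:   # the newest GitHub release is additionally tagged latest
        tags.append("latest")

    return {
        "pypi-version": tag_name[1:],   # drop the leading "v"
        "docker-tags": ",".join(tags),
    }


# release_info("v23.6.0", "v23.6.0")
# -> {"pypi-version": "23.6.0", "docker-tags": "23.6,latest"}
# The publish_docker job then expands each tag per Python version, e.g.
# sanicframework/sanic:23.6-py3.11 and sanicframework/sanic:latest-py3.11.
```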

.github/workflows/tests.yml (new file, 56 lines)
View File

@@ -0,0 +1,56 @@
name: Tests
on:
push:
branches:
- main
- current-release
- "*LTS"
tags:
- "!*"
pull_request:
branches:
- main
- current-release
- "*LTS"
types: [opened, synchronize, reopened, ready_for_review]
jobs:
run_tests:
name: "${{ matrix.config.platform == 'windows-latest' && 'Windows' || 'Linux' }} / Python ${{ matrix.config.python-version }} / tox -e ${{ matrix.config.tox-env }}"
if: github.event.pull_request.draft == false
runs-on: ${{ matrix.config.platform || 'ubuntu-latest' }}
strategy:
fail-fast: true
matrix:
config:
- { python-version: "3.8", tox-env: security }
- { python-version: "3.9", tox-env: security }
- { python-version: "3.10", tox-env: security }
- { python-version: "3.11", tox-env: security }
- { python-version: "3.10", tox-env: lint }
# - { python-version: "3.10", tox-env: docs }
- { python-version: "3.8", tox-env: type-checking }
- { python-version: "3.9", tox-env: type-checking }
- { python-version: "3.10", tox-env: type-checking }
- { python-version: "3.11", tox-env: type-checking }
- { python-version: "3.8", tox-env: py38, max-attempts: 3 }
- { python-version: "3.8", tox-env: py38-no-ext, max-attempts: 3 }
- { python-version: "3.9", tox-env: py39, max-attempts: 3 }
- { python-version: "3.9", tox-env: py39-no-ext, max-attempts: 3 }
- { python-version: "3.10", tox-env: py310, max-attempts: 3 }
- { python-version: "3.10", tox-env: py310-no-ext, max-attempts: 3 }
- { python-version: "3.11", tox-env: py311, max-attempts: 3 }
- { python-version: "3.11", tox-env: py311-no-ext, max-attempts: 3 }
- { python-version: "3.8", tox-env: py38-no-ext, platform: windows-latest, ignore-errors: true }
- { python-version: "3.9", tox-env: py39-no-ext, platform: windows-latest, ignore-errors: true }
- { python-version: "3.10", tox-env: py310-no-ext, platform: windows-latest, ignore-errors: true }
- { python-version: "3.11", tox-env: py310-no-ext, platform: windows-latest, ignore-errors: true }
steps:
- name: Run tests
uses: sanic-org/simple-tox-action@v1
with:
python-version: ${{ matrix.config.python-version }}
tox-env: ${{ matrix.config.tox-env }}
max-attempts: ${{ matrix.config.max-attempts || 1 }}
ignore-errors: ${{ matrix.config.ignore-errors || false }}

.gitignore (1 line changed)
View File

@@ -23,3 +23,4 @@ pip-wheel-metadata/
 .venv/*
 venv/*
 .vscode/*
+guide/node_modules/

View File

@@ -93,6 +93,9 @@ docs-serve:
 changelog:
     python scripts/changelog.py
 
+guide-serve:
+    cd guide && sanic server:app -r -R ./content -R ./style
+
 release:
 ifdef version
     python scripts/release.py --release-version ${version} --generate-changelog

View File

@@ -11,7 +11,7 @@ Sanic | Build fast. Run fast.
     :stub-columns: 1
 
     * - Build
-      - | |Py310Test| |Py39Test| |Py38Test| |Py37Test|
+      - | |Tests|
     * - Docs
       - | |UserGuide| |Documentation|
     * - Package
@@ -19,7 +19,7 @@ Sanic | Build fast. Run fast.
     * - Support
       - | |Forums| |Discord| |Awesome|
     * - Stats
-      - | |Downloads| |WkDownloads| |Conda downloads|
+      - | |Monthly Downloads| |Weekly Downloads| |Conda downloads|
 
 .. |UserGuide| image:: https://img.shields.io/badge/user%20guide-sanic-ff0068
    :target: https://sanicframework.org/
@@ -27,14 +27,8 @@ Sanic | Build fast. Run fast.
    :target: https://community.sanicframework.org/
 .. |Discord| image:: https://img.shields.io/discord/812221182594121728?logo=discord
    :target: https://discord.gg/FARQzAEMAA
-.. |Py310Test| image:: https://github.com/sanic-org/sanic/actions/workflows/pr-python310.yml/badge.svg?branch=main
-   :target: https://github.com/sanic-org/sanic/actions/workflows/pr-python310.yml
-.. |Py39Test| image:: https://github.com/sanic-org/sanic/actions/workflows/pr-python39.yml/badge.svg?branch=main
-   :target: https://github.com/sanic-org/sanic/actions/workflows/pr-python39.yml
-.. |Py38Test| image:: https://github.com/sanic-org/sanic/actions/workflows/pr-python38.yml/badge.svg?branch=main
-   :target: https://github.com/sanic-org/sanic/actions/workflows/pr-python38.yml
-.. |Py37Test| image:: https://github.com/sanic-org/sanic/actions/workflows/pr-python37.yml/badge.svg?branch=main
-   :target: https://github.com/sanic-org/sanic/actions/workflows/pr-python37.yml
+.. |Tests| image:: https://github.com/sanic-org/sanic/actions/workflows/tests.yml/badge.svg?branch=main
+   :target: https://github.com/sanic-org/sanic/actions/workflows/tests.yml
 .. |Documentation| image:: https://readthedocs.org/projects/sanic/badge/?version=latest
    :target: http://sanic.readthedocs.io/en/latest/?badge=latest
 .. |PyPI| image:: https://img.shields.io/pypi/v/sanic.svg
@@ -52,19 +46,23 @@ Sanic | Build fast. Run fast.
 .. |Awesome| image:: https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg
    :alt: Awesome Sanic List
    :target: https://github.com/mekicha/awesome-sanic
-.. |Downloads| image:: https://pepy.tech/badge/sanic/month
+.. |Monthly Downloads| image:: https://img.shields.io/pypi/dm/sanic.svg
    :alt: Downloads
    :target: https://pepy.tech/project/sanic
-.. |WkDownloads| image:: https://pepy.tech/badge/sanic/week
+.. |Weekly Downloads| image:: https://img.shields.io/pypi/dw/sanic.svg
    :alt: Downloads
    :target: https://pepy.tech/project/sanic
 .. |Conda downloads| image:: https://img.shields.io/conda/dn/conda-forge/sanic.svg
    :alt: Downloads
    :target: https://anaconda.org/conda-forge/sanic
+.. |Linode| image:: https://www.linode.com/wp-content/uploads/2021/01/Linode-Logo-Black.svg
+   :alt: Linode
+   :target: https://www.linode.com
+   :width: 200px
 
 .. end-badges
 
-Sanic is a **Python 3.7+** web server and web framework that's written to go fast. It allows the usage of the ``async/await`` syntax added in Python 3.5, which makes your code non-blocking and speedy.
+Sanic is a **Python 3.8+** web server and web framework that's written to go fast. It allows the usage of the ``async/await`` syntax added in Python 3.5, which makes your code non-blocking and speedy.
 
 Sanic is also ASGI compliant, so you can deploy it with an `alternative ASGI webserver <https://sanicframework.org/en/guide/deployment/running.html#asgi>`_.
@@ -141,17 +139,17 @@ And, we can verify it is working: ``curl localhost:8000 -i``
 
 **Now, let's go build something fast!**
 
-Minimum Python version is 3.7. If you need Python 3.6 support, please use v20.12LTS.
+Minimum Python version is 3.8. If you need Python 3.7 support, please use v22.12LTS.
 
 Documentation
 -------------
 
-`User Guide <https://sanicframework.org>`__ and `API Documentation <http://sanic.readthedocs.io/>`__.
+`User Guide <https://sanic.dev>`__ and `API Documentation <http://sanic.readthedocs.io/>`__.
 
 Changelog
 ---------
 
-`Release Changelogs <https://github.com/sanic-org/sanic/blob/master/CHANGELOG.rst>`__.
+`Release Changelogs <https://sanic.readthedocs.io/en/stable/sanic/changelog.html>`__.
 
 Questions and Discussion
@@ -163,8 +161,3 @@ Contribution
 ------------
 
 We are always happy to have new contributions. We have `marked issues good for anyone looking to get started <https://github.com/sanic-org/sanic/issues?q=is%3Aopen+is%3Aissue+label%3Abeginner>`_, and welcome `questions on the forums <https://community.sanicframework.org/>`_. Please take a look at our `Contribution guidelines <https://github.com/sanic-org/sanic/blob/master/CONTRIBUTING.rst>`_.
-
-.. |Linode| image:: https://www.linode.com/wp-content/uploads/2021/01/Linode-Logo-Black.svg
-   :alt: Linode
-   :target: https://www.linode.com
-   :width: 200px

View File

@@ -1,9 +1,13 @@
+ARG BASE_IMAGE_ORG
+ARG BASE_IMAGE_NAME
 ARG BASE_IMAGE_TAG
 
-FROM sanicframework/sanic-build:${BASE_IMAGE_TAG}
+FROM ${BASE_IMAGE_ORG}/${BASE_IMAGE_NAME}:${BASE_IMAGE_TAG}
 
 RUN apk update
 RUN update-ca-certificates
 
-RUN pip install sanic
+ARG SANIC_PYPI_VERSION
+RUN pip install -U pip && pip install sanic==${SANIC_PYPI_VERSION}
 
 RUN apk del build-base

View File

@@ -5,6 +5,7 @@
 | 🔷 In support release
 |
 
+.. mdinclude:: ./releases/23/23.6.md
 .. mdinclude:: ./releases/23/23.3.md
 .. mdinclude:: ./releases/22/22.12.md
 .. mdinclude:: ./releases/22/22.9.md

View File

@@ -1,4 +1,4 @@
-## Version 23.3.0 🔶
+## Version 23.3.0
 
 ### Features
 
 - [#2545](https://github.com/sanic-org/sanic/pull/2545) Standardize init of exceptions for more consistent control of HTTP responses using exceptions

View File

@@ -0,0 +1,33 @@
## Version 23.6.0 🔶
### Features
- [#2670](https://github.com/sanic-org/sanic/pull/2670) Increase `KEEP_ALIVE_TIMEOUT` default to 120 seconds
- [#2716](https://github.com/sanic-org/sanic/pull/2716) Adding allow route overwrite option in blueprint
- [#2724](https://github.com/sanic-org/sanic/pull/2724) and [#2792](https://github.com/sanic-org/sanic/pull/2792) Add a new exception signal for ALL exceptions raised anywhere in application
- [#2727](https://github.com/sanic-org/sanic/pull/2727) Add name prefixing to BP groups
- [#2754](https://github.com/sanic-org/sanic/pull/2754) Update request type on middleware types
- [#2770](https://github.com/sanic-org/sanic/pull/2770) Better exception message on startup time application induced import error
- [#2776](https://github.com/sanic-org/sanic/pull/2776) Set multiprocessing start method early
- [#2785](https://github.com/sanic-org/sanic/pull/2785) Add custom typing to config and ctx objects
- [#2790](https://github.com/sanic-org/sanic/pull/2790) Add `request.client_ip`
### Bugfixes
- [#2728](https://github.com/sanic-org/sanic/pull/2728) Fix traversals for intended results
- [#2729](https://github.com/sanic-org/sanic/pull/2729) Handle case when headers argument of ResponseStream constructor is None
- [#2737](https://github.com/sanic-org/sanic/pull/2737) Fix type annotation for `JSONREsponse` default content type
- [#2740](https://github.com/sanic-org/sanic/pull/2740) Use Sanic's serializer for JSON responses in the Inspector
- [#2760](https://github.com/sanic-org/sanic/pull/2760) Support for `Request.get_current` in ASGI mode
- [#2773](https://github.com/sanic-org/sanic/pull/2773) Alow Blueprint routes to explicitly define error_format
- [#2774](https://github.com/sanic-org/sanic/pull/2774) Resolve headers on different renderers
- [#2782](https://github.com/sanic-org/sanic/pull/2782) Resolve pypy compatibility issues
### Deprecations and Removals
- [#2777](https://github.com/sanic-org/sanic/pull/2777) Remove Python 3.7 support
### Developer infrastructure
- [#2766](https://github.com/sanic-org/sanic/pull/2766) Unpin setuptools version
- [#2779](https://github.com/sanic-org/sanic/pull/2779) Run keep alive tests in loop to get available port
### Improved Documentation
- [#2741](https://github.com/sanic-org/sanic/pull/2741) Better documentation examples about running Sanic
From that list, the items to highlight in the release notes:
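
As a reading aid (not part of the changelog or the diff), one of the features highlighted above, `request.client_ip` from #2790, is used in a handler roughly like this; a minimal sketch:

```python
from sanic import Request, Sanic
from sanic.response import json

app = Sanic("Example")


@app.get("/ip")
async def client_ip_handler(request: Request):
    # client_ip resolves the real client address, honoring the proxy
    # settings (FORWARDED_SECRET, REAL_IP_HEADER, PROXIES_COUNT) when set.
    return json({"ip": request.client_ip})
```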

guide/Procfile (new file, 1 line)
View File

@@ -0,0 +1 @@
web: sanic --port=${PORT} --host=0.0.0.0 --workers=1 server:app

View File

@@ -0,0 +1 @@
current_version: "23.6"

View File

@@ -0,0 +1,15 @@
root:
- label: Home
path: index.html
- label: Community
items:
- label: Forums
href: https://community.sanicframework.org
- label: Discord
href: https://discord.gg/FARQzAEMAA
- label: Twitter
href: https://twitter.com/sanicframework
- label: Help
path: ./help.html
- label: GitHub
href: https://github.com/sanic-org/sanic

View File

@@ -0,0 +1,257 @@
root:
- label: User Guide
items:
- label: General
items:
- label: Introduction
path: guide/introduction.html
- label: Getting Started
path: guide/getting-started.html
- label: Basics
items:
- label: Sanic Application
path: guide/basics/app.html
- label: Handlers
path: guide/basics/handlers.html
- label: Request
path: guide/basics/request.html
- label: Response
path: guide/basics/response.html
- label: Routing
path: guide/basics/routing.html
- label: Listeners
path: guide/basics/listeners.html
- label: Middleware
path: guide/basics/middleware.html
- label: Headers
path: guide/basics/headers.html
- label: Cookies
path: guide/basics/cookies.html
- label: Background Tasks
path: guide/basics/tasks.html
- label: Advanced
items:
- label: Class Based Views
path: guide/advanced/class-based-views.html
- label: Proxy Configuration
path: guide/advanced/proxy-headers.html
- label: Streaming
path: guide/advanced/streaming.html
- label: Websockets
path: guide/advanced/websockets.html
- label: Versioning
path: guide/advanced/versioning.html
- label: Signals
path: guide/advanced/signals.html
- label: Best Practices
items:
- label: Blueprints
path: guide/best-practices/blueprints.html
- label: Exceptions
path: guide/best-practices/exceptions.html
- label: Decorators
path: guide/best-practices/decorators.html
- label: Logging
path: guide/best-practices/logging.html
- label: Testing
path: guide/best-practices/testing.html
- label: Running Sanic
items:
- label: Configuration
path: guide/running/configuration.html
- label: Development
path: guide/running/development.html
- label: Server
path: guide/running/running.html
- label: Worker Manager
path: guide/running/manager.html
- label: Dynamic Applications
path: guide/running/app-loader.html
- label: Inspector
path: guide/running/inspector.html
- label: Deployment
items:
- label: Caddy
path: guide/deployment/caddy.html
- label: Nginx
path: guide/deployment/nginx.html
- label: Docker
path: guide/deployment/docker.html
- label: How to ...
items:
- label: Table of Contents
path: guide/how-to/table-of-contents.html
- label: Application Mounting
path: guide/how-to/mounting.html
- label: Authentication
path: guide/how-to/authentication.html
- label: Autodiscovery
path: guide/how-to/autodiscovery.html
- label: CORS
path: guide/how-to/cors.html
- label: ORM
path: guide/how-to/orm.html
- label: Static Redirects
path: guide/how-to/static-redirects.html
- label: TLS/SSL/HTTPS
path: guide/how-to/tls.html
- label: Plugins
items:
- label: Sanic Extensions
items:
- label: Getting Started
path: plugins/sanic-ext/getting-started.html
- label: HTTP - Methods
path: plugins/sanic-ext/http/methods.html
- label: HTTP - CORS Protection
path: plugins/sanic-ext/http/cors.html
- label: OpenAPI - Basics
path: plugins/sanic-ext/openapi/basics.html
- label: OpenAPI - UI
path: plugins/sanic-ext/openapi/ui.html
- label: OpenAPI - Decorators
path: plugins/sanic-ext/openapi/decorators.html
# - label: OpenAPI - Advanced
# path: plugins/sanic-ext/openapi/advanced.html
- label: OpenAPI - Auto Documentation
path: plugins/sanic-ext/openapi/autodoc.html
- label: OpenAPI - Security
path: plugins/sanic-ext/openapi/security.html
- label: Convenience
path: plugins/sanic-ext/convenience.html
- label: Templating - Jinja
path: plugins/sanic-ext/templating/jinja.html
- label: Templating - html5tagger
path: plugins/sanic-ext/templating/html5tagger.html
- label: Dependency Injection
path: plugins/sanic-ext/injection.html
- label: Validation
path: plugins/sanic-ext/validation.html
- label: Health Monitor
path: plugins/sanic-ext/health-monitor.html
- label: Background Logger
path: plugins/sanic-ext/logger.html
- label: Configuration
path: plugins/sanic-ext/configuration.html
- label: Custom Extensions
path: plugins/sanic-ext/custom.html
- label: Sanic Testing
items:
- label: Getting Started
path: plugins/sanic-testing/getting-started.html
- label: Test Clients
path: plugins/sanic-testing/clients.html
- label: Release Notes
items:
- label: "2023"
items:
- label: Sanic 23.6
path: release-notes/2023/v23.6.html
- label: Sanic 23.3
path: release-notes/2023/v23.3.html
- label: "2022"
items:
- label: Sanic 22.12
path: release-notes/2022/v22.12.html
- label: Sanic 22.9
path: release-notes/2022/v22.9.html
- label: Sanic 22.6
path: release-notes/2022/v22.6.html
- label: Sanic 22.3
path: release-notes/2022/v22.3.html
- label: "2021"
items:
- label: Sanic 21.12
path: release-notes/2021/v21.12.html
- label: Sanic 21.9
path: release-notes/2021/v21.9.html
- label: Sanic 21.6
path: release-notes/2021/v21.6.html
- label: Sanic 21.3
path: release-notes/2021/v21.3.html
- label: Organization
items:
- label: Contributing
path: organization/contributing.html
- label: Code of Conduct
path: organization/code-of-conduct.html
- label: S.C.O.P.E. (Governance)
path: organization/scope.html
- label: Policies
path: organization/policies.html
- label: API Reference
items:
- label: Application
items:
- label: sanic.app
path: /api/sanic.app.html
- label: sanic.config
path: /api/sanic.config.html
- label: sanic.application
path: /api/sanic.application.html
- label: Blueprint
items:
- label: sanic.blueprints
path: /api/sanic.blueprints.html
- label: sanic.blueprint_group
path: /api/sanic.blueprint_group.html
- label: Constant
items:
- label: sanic.constants
path: /api/sanic.constants.html
- label: Core
items:
- label: sanic.cookies
path: /api/sanic.cookies.html
- label: sanic.handlers
path: /api/sanic.handlers.html
- label: sanic.headers
path: /api/sanic.headers.html
- label: sanic.middleware
path: /api/sanic.middleware.html
- label: sanic.mixins
path: /api/sanic.mixins.html
- label: sanic.request
path: /api/sanic.request.html
- label: sanic.response
path: /api/sanic.response.html
- label: sanic.views
path: /api/sanic.views.html
- label: Display
items:
- label: sanic.pages
path: /api/sanic.pages.html
- label: Exception
items:
- label: sanic.errorpages
path: /api/sanic.errorpages.html
- label: sanic.exceptions
path: /api/sanic.exceptions.html
- label: Model
items:
- label: sanic.models
path: /api/sanic.models.html
- label: Routing
items:
- label: sanic.router
path: /api/sanic.router.html
- label: sanic.signals
path: /api/sanic.signals.html
- label: Server
items:
- label: sanic.http
path: /api/sanic.http.html
- label: sanic.server
path: /api/sanic.server.html
- label: sanic.worker
path: /api/sanic.worker.html
- label: Utility
items:
- label: sanic.compat
path: /api/sanic.compat.html
- label: sanic.helpers
path: /api/sanic.helpers.html
- label: sanic.log
path: /api/sanic.log.html
- label: sanic.utils
path: /api/sanic.utils.html

guide/content/en/emoji.py (new file, 3668 lines)

File diff suppressed because it is too large.

View File

@@ -0,0 +1,202 @@
# Class Based Views
## Why use them?
.. column::
### The problem
A common pattern when designing an API is to have multiple functionality on the same endpoint that depends upon the HTTP method.
While both of these options work, they are not good design practices and may be hard to maintain over time as your project grows.
.. column::
```python
@app.get("/foo")
async def foo_get(request):
...
@app.post("/foo")
async def foo_post(request):
...
@app.put("/foo")
async def foo_put(request):
...
@app.route("/bar", methods=["GET", "POST", "PATCH"])
async def bar(request):
if request.method == "GET":
...
elif request.method == "POST":
...
elif request.method == "PATCH":
...
```
.. column::
### The solution
Class-based views are simply classes that implement response behavior to requests. They provide a way to compartmentalize handling of different HTTP request types at the same endpoint.
.. column::
```python
from sanic.views import HTTPMethodView
class FooBar(HTTPMethodView):
async def get(self, request):
...
async def post(self, request):
...
async def put(self, request):
...
app.add_route(FooBar.as_view(), "/foobar")
```
## Defining a view
A class-based view should subclass `HTTPMethodView`. You can then implement class methods with the name of the corresponding HTTP method. If a request is received that has no defined method, a `405: Method not allowed` response will be generated.
.. column::
To register a class-based view on an endpoint, the `app.add_route` method is used. The first argument should be the defined class with the method `as_view` invoked, and the second should be the URL endpoint.
The available methods are:
- get
- post
- put
- patch
- delete
- head
- options
.. column::
```python
from sanic.views import HTTPMethodView
from sanic.response import text
class SimpleView(HTTPMethodView):
def get(self, request):
return text("I am get method")
# You can also use async syntax
async def post(self, request):
return text("I am post method")
def put(self, request):
return text("I am put method")
def patch(self, request):
return text("I am patch method")
def delete(self, request):
return text("I am delete method")
app.add_route(SimpleView.as_view(), "/")
```
## Path parameters
.. column::
You can use path parameters exactly as discussed in [the routing section](/guide/basics/routing.md).
.. column::
```python
class NameView(HTTPMethodView):
def get(self, request, name):
return text("Hello {}".format(name))
app.add_route(NameView.as_view(), "/<name>")
```
## Decorators
As discussed in [the decorators section](/guide/best-practices/decorators.md), often you will need to add functionality to endpoints with the use of decorators. You have two options with CBV:
1. Apply to _all_ HTTP methods in the view
2. Apply individually to HTTP methods in the view
Let's see what the options look like:
.. column::
### Apply to all methods
If you want to add any decorators to the class, you can set the `decorators` class variable. These will be applied to the class when `as_view` is called.
.. column::
```python
class ViewWithDecorator(HTTPMethodView):
decorators = [some_decorator_here]
def get(self, request, name):
return text("Hello I have a decorator")
def post(self, request, name):
return text("Hello I also have a decorator")
app.add_route(ViewWithDecorator.as_view(), "/url")
```
.. column::
### Apply to individual methods
But if you just want to decorate some methods and not all methods, you can as shown here.
.. column::
```python
class ViewWithSomeDecorator(HTTPMethodView):
@staticmethod
@some_decorator_here
def get(request, name):
return text("Hello I have a decorator")
def post(self, request, name):
return text("Hello I do not have any decorators")
@some_decorator_here
def patch(self, request, name):
return text("Hello I have a decorator")
```
## Generating a URL
.. column::
This works just like [generating any other URL](/guide/basics/routing.md#generating-a-url), except that the class name is a part of the endpoint.
.. column::
```python
@app.route("/")
def index(request):
url = app.url_for("SpecialClassView")
return redirect(url)
class SpecialClassView(HTTPMethodView):
def get(self, request):
return text("Hello from the Special Class View!")
app.add_route(SpecialClassView.as_view(), "/special_class_view")
```

View File

@@ -0,0 +1,477 @@
# Proxy configuration
When you use a reverse proxy server (e.g. nginx), the value of `request.ip` will contain the IP of a proxy, typically `127.0.0.1`. Almost always, this is **not** what you will want.
Sanic may be configured to use proxy headers for determining the true client IP, available as `request.remote_addr`. The full external URL is also constructed from header fields _if available_.
.. tip:: Heads up
Without proper precautions, a malicious client may use proxy headers to spoof its own IP. To avoid such issues, Sanic does not use any proxy headers unless explicitly enabled.
.. column::
Services behind reverse proxies must configure one or more of the following [configuration values](/guide/deployment/configuration.md):
- `FORWARDED_SECRET`
- `REAL_IP_HEADER`
- `PROXIES_COUNT`
.. column::
```python
app.config.FORWARDED_SECRET = "super-duper-secret"
app.config.REAL_IP_HEADER = "CF-Connecting-IP"
app.config.PROXIES_COUNT = 2
```
## Forwarded header
In order to use the `Forwarded` header, you should set `app.config.FORWARDED_SECRET` to a value known to the trusted proxy server. The secret is used to securely identify a specific proxy server.
Sanic ignores any elements without the secret key, and will not even parse the header if no secret is set.
All other proxy headers are ignored once a trusted forwarded element is found, as it already carries complete information about the client.
To learn more about the `Forwarded` header, read the related [MDN](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Forwarded) and [Nginx](https://www.nginx.com/resources/wiki/start/topics/examples/forwarded/) articles.
## Traditional proxy headers
### IP Headers
When your proxy forwards you the IP address in a known header, you can tell Sanic what that is with the `REAL_IP_HEADER` config value.
### X-Forwarded-For
This header typically contains a chain of IP addresses through each layer of a proxy. Setting `PROXIES_COUNT` tells Sanic how deep to look to get an actual IP address for the client. This value should equal the _expected_ number of IP addresses in the chain.
### Other X-headers
If a client IP is found by one of these methods, Sanic uses the following headers for URL parts:
- x-forwarded-proto
- x-forwarded-host
- x-forwarded-port
- x-forwarded-path
- x-scheme
## Examples
In the following examples, all requests will assume that the endpoint looks like this:
```python
@app.route("/fwd")
async def forwarded(request):
return json(
{
"remote_addr": request.remote_addr,
"scheme": request.scheme,
"server_name": request.server_name,
"server_port": request.server_port,
"forwarded": request.forwarded,
}
)
```
.. column::
---
##### Example 1
Without configured FORWARDED_SECRET, x-headers should be respected
```python
app.config.PROXIES_COUNT = 1
app.config.REAL_IP_HEADER = "x-real-ip"
```
```bash
$ curl localhost:8000/fwd \
-H 'Forwarded: for=1.1.1.1, for=injected;host=", for="[::2]";proto=https;host=me.tld;path="/app/";secret=mySecret,for=broken;;secret=b0rked, for=127.0.0.3;scheme=http;port=1234' \
-H "X-Real-IP: 127.0.0.2" \
-H "X-Forwarded-For: 127.0.1.1" \
-H "X-Scheme: ws" \
-H "Host: local.site" | jq
```
.. column::
```bash
# curl response
{
"remote_addr": "127.0.0.2",
"scheme": "ws",
"server_name": "local.site",
"server_port": 80,
"forwarded": {
"for": "127.0.0.2",
"proto": "ws"
}
}
```
---
.. column::
##### Example 2
FORWARDED_SECRET now configured
```python
app.config.PROXIES_COUNT = 1
app.config.REAL_IP_HEADER = "x-real-ip"
app.config.FORWARDED_SECRET = "mySecret"
```
```bash
$ curl localhost:8000/fwd \
-H 'Forwarded: for=1.1.1.1, for=injected;host=", for="[::2]";proto=https;host=me.tld;path="/app/";secret=mySecret,for=broken;;secret=b0rked, for=127.0.0.3;scheme=http;port=1234' \
-H "X-Real-IP: 127.0.0.2" \
-H "X-Forwarded-For: 127.0.1.1" \
-H "X-Scheme: ws" \
-H "Host: local.site" | jq
```
.. column::
```bash
# curl response
{
"remote_addr": "[::2]",
"scheme": "https",
"server_name": "me.tld",
"server_port": 443,
"forwarded": {
"for": "[::2]",
"proto": "https",
"host": "me.tld",
"path": "/app/",
"secret": "mySecret"
}
}
```
---
.. column::
##### Example 3
Empty Forwarded header -> use X-headers
```python
app.config.PROXIES_COUNT = 1
app.config.REAL_IP_HEADER = "x-real-ip"
app.config.FORWARDED_SECRET = "mySecret"
```
```bash
$ curl localhost:8000/fwd \
-H "X-Real-IP: 127.0.0.2" \
-H "X-Forwarded-For: 127.0.1.1" \
-H "X-Scheme: ws" \
-H "Host: local.site" | jq
```
.. column::
```bash
# curl response
{
"remote_addr": "127.0.0.2",
"scheme": "ws",
"server_name": "local.site",
"server_port": 80,
"forwarded": {
"for": "127.0.0.2",
"proto": "ws"
}
}
```
---
.. column::
##### Example 4
Header present but not matching anything
```python
app.config.PROXIES_COUNT = 1
app.config.REAL_IP_HEADER = "x-real-ip"
app.config.FORWARDED_SECRET = "mySecret"
```
```bash
$ curl localhost:8000/fwd \
-H "Forwarded: nomatch" | jq
```
.. column::
```bash
# curl response
{
"remote_addr": "",
"scheme": "http",
"server_name": "localhost",
"server_port": 8000,
"forwarded": {}
}
```
---
.. column::
##### Example 5
Forwarded header present but no matching secret -> use X-headers
```python
app.config.PROXIES_COUNT = 1
app.config.REAL_IP_HEADER = "x-real-ip"
app.config.FORWARDED_SECRET = "mySecret"
```
```bash
$ curl localhost:8000/fwd \
-H "Forwarded: for=1.1.1.1;secret=x, for=127.0.0.1" \
-H "X-Real-IP: 127.0.0.2" | jq
```
.. column::
```bash
# curl response
{
"remote_addr": "127.0.0.2",
"scheme": "http",
"server_name": "localhost",
"server_port": 8000,
"forwarded": {
"for": "127.0.0.2"
}
}
```
---
.. column::
##### Example 6
Different formatting and hitting both ends of the header
```python
app.config.PROXIES_COUNT = 1
app.config.REAL_IP_HEADER = "x-real-ip"
app.config.FORWARDED_SECRET = "mySecret"
```
```bash
$ curl localhost:8000/fwd \
-H 'Forwarded: Secret="mySecret";For=127.0.0.4;Port=1234' | jq
```
.. column::
```bash
# curl response
{
"remote_addr": "127.0.0.4",
"scheme": "http",
"server_name": "localhost",
"server_port": 1234,
"forwarded": {
"secret": "mySecret",
"for": "127.0.0.4",
"port": 1234
}
}
```
---
.. column::
##### Example 7
Test escapes (modify this if you see anyone implementing quoted-pairs)
```python
app.config.PROXIES_COUNT = 1
app.config.REAL_IP_HEADER = "x-real-ip"
app.config.FORWARDED_SECRET = "mySecret"
```
```bash
$ curl localhost:8000/fwd \
-H 'Forwarded: for=test;quoted="\,x=x;y=\";secret=mySecret' | jq
```
.. column::
```bash
# curl response
{
"remote_addr": "test",
"scheme": "http",
"server_name": "localhost",
"server_port": 8000,
"forwarded": {
"for": "test",
"quoted": "\\,x=x;y=\\",
"secret": "mySecret"
}
}
```
---
.. column::
##### Example 8
Secret insulated by malformed field #1
```python
app.config.PROXIES_COUNT = 1
app.config.REAL_IP_HEADER = "x-real-ip"
app.config.FORWARDED_SECRET = "mySecret"
```
```bash
$ curl localhost:8000/fwd \
-H 'Forwarded: for=test;secret=mySecret;b0rked;proto=wss;' | jq
```
.. column::
```bash
# curl response
{
"remote_addr": "test",
"scheme": "http",
"server_name": "localhost",
"server_port": 8000,
"forwarded": {
"for": "test",
"secret": "mySecret"
}
}
```
---
.. column::
##### Example 9
Secret insulated by malformed field #2
```python
app.config.PROXIES_COUNT = 1
app.config.REAL_IP_HEADER = "x-real-ip"
app.config.FORWARDED_SECRET = "mySecret"
```
```bash
$ curl localhost:8000/fwd \
-H 'Forwarded: for=test;b0rked;secret=mySecret;proto=wss' | jq
```
.. column::
```bash
# curl response
{
"remote_addr": "",
"scheme": "wss",
"server_name": "localhost",
"server_port": 8000,
"forwarded": {
"secret": "mySecret",
"proto": "wss"
}
}
```
---
.. column::
##### Example 10
Unexpected termination should not lose existing acceptable values
```python
app.config.PROXIES_COUNT = 1
app.config.REAL_IP_HEADER = "x-real-ip"
app.config.FORWARDED_SECRET = "mySecret"
```
```bash
$ curl localhost:8000/fwd \
-H 'Forwarded: b0rked;secret=mySecret;proto=wss' | jq
```
.. column::
```bash
# curl response
{
"remote_addr": "",
"scheme": "wss",
"server_name": "localhost",
"server_port": 8000,
"forwarded": {
"secret": "mySecret",
"proto": "wss"
}
}
```
---
.. column::
##### Example 11
Field normalization
```python
app.config.PROXIES_COUNT = 1
app.config.REAL_IP_HEADER = "x-real-ip"
app.config.FORWARDED_SECRET = "mySecret"
```
```bash
$ curl localhost:8000/fwd \
-H 'Forwarded: PROTO=WSS;BY="CAFE::8000";FOR=unknown;PORT=X;HOST="A:2";PATH="/With%20Spaces%22Quoted%22/sanicApp?key=val";SECRET=mySecret' | jq
```
.. column::
```bash
# curl response
{
"remote_addr": "",
"scheme": "wss",
"server_name": "a",
"server_port": 2,
"forwarded": {
"proto": "wss",
"by": "[cafe::8000]",
"host": "a:2",
"path": "/With Spaces\"Quoted\"/sanicApp?key=val",
"secret": "mySecret"
}
}
```
---
.. column::
##### Example 12
Using "by" field as secret
```python
app.config.PROXIES_COUNT = 1
app.config.REAL_IP_HEADER = "x-real-ip"
app.config.FORWARDED_SECRET = "_proxySecret"
```
```bash
$ curl localhost:8000/fwd \
-H 'Forwarded: for=1.2.3.4; by=_proxySecret' | jq
```
.. column::
```bash
# curl response
{
"remote_addr": "1.2.3.4",
"scheme": "http",
"server_name": "localhost",
"server_port": 8000,
"forwarded": {
"for": "1.2.3.4",
"by": "_proxySecret"
}
}
```

View File

@@ -0,0 +1,346 @@
# Signals
Signals provide a way for one part of your application to tell another part that something happened.
```python
@app.signal("user.registration.created")
async def send_registration_email(**context):
await send_email(context["email"], template="registration")
@app.post("/register")
async def handle_registration(request):
await do_registration(request)
await request.app.dispatch(
"user.registration.created",
context={"email": request.json.email}
})
```
## Adding a signal
.. column::
The API for adding a signal is very similar to adding a route.
.. column::
```python
async def my_signal_handler():
print("something happened")
app.add_signal(my_signal_handler, "something.happened.ohmy")
```
.. column::
But, perhaps a slightly more convenient method is to use the built-in decorators.
.. column::
```python
@app.signal("something.happened.ohmy")
async def my_signal_handler():
print("something happened")
```
.. column::
If the signal requires conditions, make sure to add them while adding the handler.
.. column::
```python
async def my_signal_handler1():
print("something happened")
app.add_signal(
    my_signal_handler1,
"something.happened.ohmy1",
conditions={"some_condition": "value"}
)
@app.signal("something.happened.ohmy2", conditions={"some_condition": "value"})
async def my_signal_handler2():
print("something happened")
```
.. column::
Signals can also be declared on blueprints
.. column::
```python
bp = Blueprint("foo")
@bp.signal("something.happened.ohmy")
async def my_signal_handler():
print("something happened")
```
## Built-in signals
In addition to creating a new signal, there are a number of built-in signals that are dispatched from Sanic itself. These signals exist to provide developers with more opportunities to add functionality into the request and server lifecycles.
*Added in v21.9*
.. column::
You can attach them just like any other signal to an application or blueprint instance.
.. column::
```python
@app.signal("http.lifecycle.complete")
async def my_signal_handler(conn_info):
print("Connection has been closed")
```
The following signals are available, along with the arguments that the handlers take, and the conditions that attach (if any).
| Event name | Arguments | Conditions |
| -------------------------- | ------------------------------- | --------------------------------------------------------- |
| `http.routing.before` | request | |
| `http.routing.after` | request, route, kwargs, handler | |
| `http.handler.before` | request | |
| `http.handler.after` | request | |
| `http.lifecycle.begin` | conn_info | |
| `http.lifecycle.read_head` | head | |
| `http.lifecycle.request` | request | |
| `http.lifecycle.handle` | request | |
| `http.lifecycle.read_body` | body | |
| `http.lifecycle.exception` | request, exception | |
| `http.lifecycle.response` | request, response | |
| `http.lifecycle.send` | data | |
| `http.lifecycle.complete` | conn_info | |
| `http.middleware.before` | request, response | `{"attach_to": "request"}` or `{"attach_to": "response"}` |
| `http.middleware.after` | request, response | `{"attach_to": "request"}` or `{"attach_to": "response"}` |
| `server.exception.report` | app, exception | |
| `server.init.before` | app, loop | |
| `server.init.after` | app, loop | |
| `server.shutdown.before` | app, loop | |
| `server.shutdown.after` | app, loop | |
Version 22.9 added `http.handler.before` and `http.handler.after`.
Version 23.6 added `server.exception.report`.
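For instance, a minimal sketch of attaching a handler to one of the built-in signals above, using the arguments from the table (the logging message itself is just illustrative):
```python
from sanic.log import logger

@app.signal("http.lifecycle.exception")
async def log_lifecycle_exception(request, exception):
    # per the table above, this signal's handler receives request and exception
    logger.info("Exception while handling %s: %s", request.path, exception)
```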
.. column::
To make using the built-in signals easier, there is an `Enum` object that contains all of the allowed built-ins. With a modern IDE this will help so that you do not need to remember the full list of event names as strings.
*Added in v21.12*
.. column::
```python
from sanic.signals import Event
@app.signal(Event.HTTP_LIFECYCLE_COMPLETE)
async def my_signal_handler(conn_info):
print("Connection has been closed")
```
## Events
.. column::
Signals are based off of an _event_. An event is simply a string in the following pattern:
.. column::
```
namespace.reference.action
```
.. tip:: Events must have three parts. If you do not know what to use, try these patterns:
- `my_app.something.happened`
- `sanic.notice.hello`
### Event parameters
.. column::
An event can be "dynamic" and declared using the same syntax as [path parameters](../basics/routing.md#path-parameters). This allows matching based upon arbitrary values.
.. column::
```python
@app.signal("foo.bar.<thing>")
async def signal_handler(thing):
print(f"[signal_handler] {thing=}")
@app.get("/")
async def trigger(request):
await app.dispatch("foo.bar.baz")
return response.text("Done.")
```
Check out [path parameters](../basics/routing.md#path-parameters) for more information on allowed type definitions.
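For example, a minimal sketch of attaching a type to the dynamic part (this assumes typed parameters convert the matched value the same way route parameters do):
```python
@app.signal("foo.bar.<thing:int>")
async def typed_signal_handler(thing: int):
    # assuming route-style conversion, thing arrives as an int
    print(f"[typed_signal_handler] {thing=}")

@app.get("/trigger")
async def trigger_typed(request):
    await app.dispatch("foo.bar.99")
    return response.text("Dispatched.")
```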
.. warning:: Only the third part of an event (the action) may be dynamic:
- `foo.bar.<thing>` 🆗
- `foo.<bar>.baz` ❌
### Waiting
.. column::
In addition to executing a signal handler, your application can wait for an event to be triggered.
.. column::
```python
await app.event("foo.bar.baz")
```
.. column::
**IMPORTANT**: waiting is a blocking function. Therefore, you likely will want this to run in a [background task](../basics/tasks.md).
.. column::
```python
async def wait_for_event(app):
while True:
print("> waiting")
await app.event("foo.bar.baz")
print("> event found\n")
@app.after_server_start
async def after_server_start(app, loop):
app.add_task(wait_for_event(app))
```
.. column::
If your event was defined with a dynamic path, you can use `*` to catch any action.
.. column::
```python
@app.signal("foo.bar.<thing>")
...
await app.event("foo.bar.*")
```
## Dispatching
*In the future, Sanic will dispatch some events automatically to assist developers to hook into life cycle events.*
.. column::
Dispatching an event will do two things:
1. execute any signal handlers defined on the event, and
2. resolve anything that is "waiting" for the event to complete.
.. column::
```python
@app.signal("foo.bar.<thing>")
async def foo_bar(thing):
print(f"{thing=}")
await app.dispatch("foo.bar.baz")
```
```
thing=baz
```
### Context
.. column::
Sometimes you may find the need to pass extra information into the signal handler. In our first example above, we wanted our email registration process to have the email address for the user.
.. column::
```python
@app.signal("user.registration.created")
async def send_registration_email(**context):
print(context)
await app.dispatch(
"user.registration.created",
context={"hello": "world"}
)
```
```
{'hello': 'world'}
```
.. tip:: FYI
Signals are dispatched in a background task.
### Blueprints
Dispatching blueprint signals works similarly in concept to [middleware](../basics/middleware.md). Anything that is done at the app level will trickle down to the blueprints. However, dispatching on a blueprint will only execute the signals that are defined on that blueprint.
.. column::
Perhaps an example is easier to explain:
.. column::
```python
bp = Blueprint("bp")
app_counter = 0
bp_counter = 0
@app.signal("foo.bar.baz")
def app_signal():
nonlocal app_counter
app_counter += 1
@bp.signal("foo.bar.baz")
def bp_signal():
nonlocal bp_counter
bp_counter += 1
```
.. column::
Running `app.dispatch("foo.bar.baz")` will execute both signals.
.. column::
```python
await app.dispatch("foo.bar.baz")
assert app_counter == 1
assert bp_counter == 1
```
.. column::
Running `bp.dispatch("foo.bar.baz")` will execute only the blueprint signal.
.. column::
```python
await bp.dispatch("foo.bar.baz")
assert app_counter == 1
assert bp_counter == 2
```
View File
@ -0,0 +1,151 @@
# Streaming
## Request streaming
Sanic allows you to stream data sent by the client to begin processing data as the bytes arrive.
.. column::
When enabled on an endpoint, you can stream the request body using `await request.stream.read()`.
That method will return `None` when the body is completed.
.. column::
```python
from sanic.response import text
from sanic.views import HTTPMethodView, stream
class SimpleView(HTTPMethodView):
@stream
async def post(self, request):
result = ""
while True:
body = await request.stream.read()
if body is None:
break
result += body.decode("utf-8")
return text(result)
```
.. column::
It also can be enabled with a keyword argument in the decorator...
.. column::
```python
@app.post("/stream", stream=True)
async def handler(request):
...
body = await request.stream.read()
...
```
.. column::
... or the `add_route()` method.
.. column::
```python
bp.add_route(
bp_handler,
"/bp_stream",
methods=["POST"],
stream=True,
)
```
.. tip:: FYI
Only post, put and patch decorators have stream argument.
## Response streaming
.. column::
Sanic allows you to stream content to the client.
.. column::
```python
@app.route("/")
async def test(request):
response = await request.respond(content_type="text/csv")
await response.send("foo,")
await response.send("bar")
# Optionally, you can explicitly end the stream by calling:
await response.eof()
```
This is useful in situations where you want to stream content to the client that originates in an external service, like a database. For example, you can stream database records to the client with the asynchronous cursor that `asyncpg` provides.
```python
import asyncpg

@app.route("/")
async def index(request):
response = await request.respond()
conn = await asyncpg.connect(database='test')
async with conn.transaction():
async for record in conn.cursor('SELECT generate_series(0, 10)'):
await response.send(record[0])
```
You can explicitly end a stream by calling `await response.eof()`. It is a convenience method to replace `await response.send("", True)`. It should be called **one time** *after* your handler has determined that it has nothing left to send back to the client. While it is *optional* to use with Sanic server, if you are running Sanic in ASGI mode, then you **must** explicitly terminate the stream.
*Calling `eof` became optional in v21.6*
## File streaming
.. column::
Sanic provides the `sanic.response.file_stream` function, which is useful when you want to send a large file. It returns a `StreamingHTTPResponse` object and will use chunked transfer encoding by default; for this reason, Sanic doesn't add a `Content-Length` HTTP header to the response.
A typical use case might be streaming a video file.
.. column::
```python
@app.route("/mp4")
async def handler_file_stream(request):
return await response.file_stream(
"/path/to/sample.mp4",
chunk_size=1024,
mime_type="application/metalink4+xml",
headers={
"Content-Disposition": 'Attachment; filename="nicer_name.meta4"',
"Content-Type": "application/metalink4+xml",
},
)
```
.. column::
If you want to use the `Content-Length` header, you can disable chunked transfer encoding simply by adding the `Content-Length` header manually.
.. column::
```python
from aiofiles import os as async_os
from sanic.response import file_stream
@app.route("/")
async def index(request):
file_path = "/srv/www/whatever.png"
file_stat = await async_os.stat(file_path)
headers = {"Content-Length": str(file_stat.st_size)}
return await file_stream(
file_path,
headers=headers,
)
```
View File
@ -0,0 +1,170 @@
# Versioning
It is standard practice in API building to add versions to your endpoints. This allows you to easily differentiate incompatible endpoints when you later change your API in a breaking manner.
Adding a version will add a `/v{version}` url prefix to your endpoints.
The version can be an `int`, `float`, or `str`. Acceptable values:
- `1`, `2`, `3`
- `1.1`, `2.25`, `3.0`
- `"1"`, `"v1"`, `"v1.1"`
## Per route
.. column::
You can pass a version number to the routes directly.
.. column::
```python
# /v1/text
@app.route("/text", version=1)
def handle_request(request):
return response.text("Hello world! Version 1")
# /v2/text
@app.route("/text", version=2)
def handle_request(request):
return response.text("Hello world! Version 2")
```
## Per Blueprint
.. column::
You can also pass a version number to the blueprint, which will apply to all routes in that blueprint.
.. column::
```python
bp = Blueprint("test", url_prefix="/foo", version=1)
# /v1/foo/html
@bp.route("/html")
def handle_request(request):
return response.html("<p>Hello world!</p>")
```
## Per Blueprint Group
.. column::
In order to simplify the management of versioned blueprints, you can provide a version number in the blueprint group. The same will be inherited by all the blueprints grouped under it if the blueprints do not already override the same information with a value specified while creating the blueprint instance.
When using blueprint groups for managing the versions, the following order is followed to apply the Version prefix to
the routes being registered.
1. Route Level configuration
2. Blueprint level configuration
3. Blueprint Group level configuration
If we find a more pointed versioning specification, we will pick that over the more generic versioning specification provided under the Blueprint or Blueprint Group.
.. column::
```python
from sanic.blueprints import Blueprint
from sanic.response import json
bp1 = Blueprint(
name="blueprint-1",
url_prefix="/bp1",
version=1.25,
)
bp2 = Blueprint(
name="blueprint-2",
url_prefix="/bp2",
)
group = Blueprint.group(
[bp1, bp2],
url_prefix="/bp-group",
version="v2",
)
# GET /v1.25/bp-group/bp1/endpoint-1
@bp1.get("/endpoint-1")
async def handle_endpoint_1_bp1(request):
return json({"Source": "blueprint-1/endpoint-1"})
# GET /v2/bp-group/bp2/endpoint-1
@bp2.get("/endpoint-1")
async def handle_endpoint_1_bp2(request):
return json({"Source": "blueprint-2/endpoint-1"})
# GET /v1/bp-group/bp2/endpoint-2
@bp2.get("/endpoint-2", version=1)
async def handle_endpoint_2_bp2(request):
return json({"Source": "blueprint-2/endpoint-2"})
```
## Version prefix
As seen above, the `version` that is applied to a route is **always** the first segment in the generated URI path. Therefore, to make it possible to add path segments before the version, every place that a `version` argument is passed, you can also pass `version_prefix`.
The `version_prefix` argument can be defined in:
- `app.route` and `bp.route` decorators (and all the convenience decorators also)
- `Blueprint` instantiation
- `Blueprint.group` constructor
- `BlueprintGroup` instantiation
- `app.blueprint` registration
If there are definitions in multiple places, a more specific definition overrides a more general. This list provides that hierarchy.
The default value of `version_prefix` is `/v`.
.. column::
An often requested feature is to be able to mount versioned routes on `/api`. This can easily be accomplished with `version_prefix`.
.. column::
```python
# /api/v1/my/path
app.route("/my/path", version=1, version_prefix="/api/v")
```
.. column::
Perhaps a more compelling usage is to load all `/api` routes into a single `BlueprintGroup`.
.. column::
```python
from sanic import Blueprint, Sanic
from sanic.response import text

app = Sanic(__name__)
v2ip = Blueprint("v2ip", url_prefix="/ip", version=2)
api = Blueprint.group(v2ip, version_prefix="/api/version")
# /api/version2/ip
@v2ip.get("/")
async def handler(request):
return text(request.ip)
app.blueprint(api)
```
We can therefore learn that a route's URI is:
```
version_prefix + version + url_prefix + URI definition
```
.. tip::
Just like with `url_prefix`, it is possible to define path parameters inside a `version_prefix`. It is perfectly legitimate to do this. Just remember that every route will have that parameter injected into the handler.
```python
version_prefix="/<foo:str>/v"
```
*Added in v21.6*
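As a concrete sketch of the tip above (the `tenant` parameter name and route are illustrative), the parameter declared in `version_prefix` is injected into every handler it applies to:
```python
from sanic.response import text

# e.g. GET /acme/v1/my/path
@app.route("/my/path", version=1, version_prefix="/<tenant:str>/v")
async def tenant_handler(request, tenant: str):
    return text(f"Hello from {tenant}")
```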
View File
@ -0,0 +1,82 @@
# Websockets
Sanic provides an easy to use abstraction on top of [websockets](https://websockets.readthedocs.io/en/stable/).
## Routing
.. column::
Websocket handlers can be hooked up to the router similar to regular handlers.
.. column::
```python
from sanic import Request, Websocket
async def feed(request: Request, ws: Websocket):
pass
app.add_websocket_route(feed, "/feed")
```
```python
from sanic import Request, Websocket
@app.websocket("/feed")
async def feed(request: Request, ws: Websocket):
pass
```
## Handler
.. column::
Typically, a websocket handler will want to hold open a loop.
It can then use the `send()` and `recv()` methods on the second object injected into the handler.
This example is a simple endpoint that echoes back to the client messages that it receives.
.. column::
```python
from sanic import Request, Websocket
@app.websocket("/feed")
async def feed(request: Request, ws: Websocket):
while True:
data = "hello!"
print("Sending: " + data)
await ws.send(data)
data = await ws.recv()
print("Received: " + data)
```
.. column::
You can simplify your loop by just iterating over the `Websocket` object in a for loop.
*Added in v22.9*
.. column::
```python
from sanic import Request, Websocket
@app.websocket("/feed")
async def feed(request: Request, ws: Websocket):
async for msg in ws:
await ws.send(msg)
```
## Configuration
See [configuration section](/guide/deployment/configuration.md) for more details, however the defaults are shown below.
```python
app.config.WEBSOCKET_MAX_SIZE = 2 ** 20
app.config.WEBSOCKET_PING_INTERVAL = 20
app.config.WEBSOCKET_PING_TIMEOUT = 20
```
View File
@ -0,0 +1 @@
# Basics
View File
@ -0,0 +1,517 @@
# Sanic Application
## Instance
.. column::
The most basic building block is the `Sanic()` instance. It is not required, but the custom is to instantiate this in a file called `server.py`.
.. column::
```python
# /path/to/server.py
from sanic import Sanic
app = Sanic("MyHelloWorldApp")
```
## Application context
Most applications will have the need to share/reuse data or objects across different parts of the code base. The most common example is DB connections.
.. column::
In versions of Sanic prior to v21.3, this was commonly done by attaching an attribute to the application instance
.. column::
```python
# Raises a warning as deprecated feature in 21.3
app = Sanic("MyApp")
app.db = Database()
```
.. column::
Because this can create potential problems with name conflicts, and to be consistent with [request context](./request.md#context) objects, v21.3 introduced the application-level context object.
.. column::
```python
# Correct way to attach objects to the application
app = Sanic("MyApp")
app.ctx.db = Database()
```
## App Registry
.. column::
When you instantiate a Sanic instance, it can be retrieved at a later time from the Sanic app registry. This can be useful, for example, if you need to access your Sanic instance from a location where it is not otherwise accessible.
.. column::
```python
# ./path/to/server.py
from sanic import Sanic
app = Sanic("my_awesome_server")
# ./path/to/somewhere_else.py
from sanic import Sanic
app = Sanic.get_app("my_awesome_server")
```
.. column::
If you call `Sanic.get_app("non-existing")` on an app that does not exist, it will raise `SanicException` by default. You can, instead, force the method to return a new instance of Sanic with that name.
.. column::
```python
app = Sanic.get_app(
"non-existing",
force_create=True,
)
```
.. column::
If there is **only one** Sanic instance registered, then calling `Sanic.get_app()` with no arguments will return that instance
.. column::
```python
Sanic("My only app")
app = Sanic.get_app()
```
## Configuration
.. column::
Sanic holds the configuration in the `config` attribute of the `Sanic` instance. Configuration can be modified **either** using dot-notation **OR** like a dictionary.
.. column::
```python
app = Sanic('myapp')
app.config.DB_NAME = 'appdb'
app.config['DB_USER'] = 'appuser'
db_settings = {
'DB_HOST': 'localhost',
'DB_NAME': 'appdb',
'DB_USER': 'appuser'
}
app.config.update(db_settings)
```
.. note:: Heads up
Config keys _should_ be uppercase. But, this is mainly by convention, and lowercase will work most of the time.
```
app.config.GOOD = "yay!"
app.config.bad = "boo"
```
There is much [more detail about configuration](/guide/deployment/configuration.md) later on.
## Customization
The Sanic application instance can be customized for your application needs in a variety of ways at instantiation.
### Custom configuration
.. column::
The simplest form of custom configuration would be to pass your own object directly into the Sanic application instance.
If you create a custom configuration object, it is *highly* recommended that you subclass the Sanic `Config` class to inherit its behavior. You could use this option for adding properties, or your own set of custom logic.
*Added in v21.6*
.. column::
```python
from sanic.config import Config
class MyConfig(Config):
FOO = "bar"
app = Sanic(..., config=MyConfig())
```
.. column::
A useful example of this feature would be if you wanted to use a config file in a form that differs from what is [supported](../deployment/configuration.md#using-sanic-update-config).
.. column::
```python
from typing import Any, Dict

import toml

from sanic import Sanic, text
from sanic.config import Config
class TomlConfig(Config):
def __init__(self, *args, path: str, **kwargs):
super().__init__(*args, **kwargs)
with open(path, "r") as f:
self.apply(toml.load(f))
def apply(self, config):
self.update(self._to_uppercase(config))
def _to_uppercase(self, obj: Dict[str, Any]) -> Dict[str, Any]:
retval: Dict[str, Any] = {}
for key, value in obj.items():
upper_key = key.upper()
if isinstance(value, list):
retval[upper_key] = [
self._to_uppercase(item) for item in value
]
elif isinstance(value, dict):
retval[upper_key] = self._to_uppercase(value)
else:
retval[upper_key] = value
return retval
toml_config = TomlConfig(path="/path/to/config.toml")
app = Sanic(toml_config.APP_NAME, config=toml_config)
```
### Custom context
.. column::
By default, the application context is a [`SimpleNamespace()`](https://docs.python.org/3/library/types.html#types.SimpleNamespace) that allows you to set any properties you want on it. However, you also have the option of passing any object whatsoever instead.
*Added in v21.6*
.. column::
```python
app = Sanic(..., ctx=1)
```
```python
app = Sanic(..., ctx={})
```
```python
class MyContext:
...
app = Sanic(..., ctx=MyContext())
```
### Custom requests
.. column::
It is sometimes helpful to have your own `Request` class, and tell Sanic to use that instead of the default. One example is if you wanted to modify the default `request.id` generator.
.. note:: Important
It is important to remember that you are passing the *class* not an instance of the class.
.. column::
```python
import time
from sanic import Request, Sanic, text
class NanoSecondRequest(Request):
@classmethod
def generate_id(*_):
return time.time_ns()
app = Sanic(..., request_class=NanoSecondRequest)
@app.get("/")
async def handler(request):
return text(str(request.id))
```
### Custom error handler
.. column::
See [exception handling](../best-practices/exceptions.md#custom-error-handling) for more
.. column::
```python
from sanic.handlers import ErrorHandler
class CustomErrorHandler(ErrorHandler):
def default(self, request, exception):
''' handles errors that have no error handlers assigned '''
# Your custom error handling logic...
return super().default(request, exception)
app = Sanic(..., error_handler=CustomErrorHandler())
```
### Custom dumps function
.. column::
It may sometimes be necessary or desirable to provide a custom function that serializes an object to JSON data.
.. column::
```python
from functools import partial

import ujson

dumps = partial(ujson.dumps, escape_forward_slashes=False)
app = Sanic(__name__, dumps=dumps)
```
.. column::
Or, perhaps use another library or create your own.
.. column::
```python
from orjson import dumps
app = Sanic(__name__, dumps=dumps)
```
### Custom loads function
.. column::
Similar to `dumps`, you can also provide a custom function for deserializing data.
*Added in v22.9*
.. column::
```python
from orjson import loads
app = Sanic(__name__, loads=loads)
```
.. new:: NEW in v23.6
### Custom typed application
The correct, default type of a Sanic application instance is:
```python
sanic.app.Sanic[sanic.config.Config, types.SimpleNamespace]
```
It refers to two generic types:
1. The first is the type of the configuration object. It defaults to `sanic.config.Config`, but can be any subclass of that.
2. The second is the type of the application context. It defaults to `types.SimpleNamespace`, but can be **any object** as shown above.
Let's look at some examples of how the type will change.
.. column::
Consider this example where we pass a custom subclass of `Config` and a custom context object.
.. column::
```python
from sanic import Sanic
from sanic.config import Config
class CustomConfig(Config):
pass
app = Sanic("test", config=CustomConfig())
reveal_type(app) # N: Revealed type is "sanic.app.Sanic[main.CustomConfig, types.SimpleNamespace]"
```
```
sanic.app.Sanic[main.CustomConfig, types.SimpleNamespace]
```
.. column::
Similarly, when passing a custom context object, the type will change to reflect that.
.. column::
```python
from sanic import Sanic
class Foo:
pass
app = Sanic("test", ctx=Foo())
reveal_type(app) # N: Revealed type is "sanic.app.Sanic[sanic.config.Config, main.Foo]"
```
```
sanic.app.Sanic[sanic.config.Config, main.Foo]
```
.. column::
Of course, you can set both the config and context to custom types.
.. column::
```python
from sanic import Sanic
from sanic.config import Config
class CustomConfig(Config):
pass
class Foo:
pass
app = Sanic("test", config=CustomConfig(), ctx=Foo())
reveal_type(app) # N: Revealed type is "sanic.app.Sanic[main.CustomConfig, main.Foo]"
```
```
sanic.app.Sanic[main.CustomConfig, main.Foo]
```
This pattern is particularly useful if you create a custom type alias for your application instance so that you can use it to annotate listeners and handlers.
```python
# ./path/to/types.py
from sanic.app import Sanic
from sanic.config import Config
from myapp.context import MyContext
from typing import TypeAlias
MyApp: TypeAlias = Sanic[Config, MyContext]
```
```python
# ./path/to/listeners.py
from myapp.types import MyApp
def add_listeners(app: MyApp):
@app.before_server_start
async def before_server_start(app: MyApp):
# do something with your fully typed app instance
await app.ctx.db.connect()
```
```python
# ./path/to/server.py
from myapp.types import MyApp
from myapp.context import MyContext
from myapp.config import MyConfig
from myapp.listeners import add_listeners
app = Sanic("myapp", config=MyConfig(), ctx=MyContext())
add_listeners(app)
```
*Added in v23.6*
### Custom typed request
Sanic also allows you to customize the type of the request object. This is useful if you want to add custom properties to the request object, or be able to access your custom properties of a typed application instance.
The correct, default type of a Sanic request instance is:
```python
sanic.request.Request[
sanic.app.Sanic[sanic.config.Config, types.SimpleNamespace],
types.SimpleNamespace
]
```
It refers to two generic types:
1. The first is the type of the application instance. It defaults to `sanic.app.Sanic[sanic.config.Config, types.SimpleNamespace]`, but can be any subclass of that.
2. The second is the type of the request context. It defaults to `types.SimpleNamespace`, but can be **any object** as shown above in [custom requests](#custom-requests).
Let's look at some examples of how the type will change.
.. column::
Expanding upon the full example above where there is a type alias for a customized application instance, we can also create a custom request type so that we can access those same type annotations.
Of course, you do not need type aliases for this to work. We are only showing them here to cut down on the amount of code shown.
.. column::
```python
from sanic import Request
from myapp.types import MyApp
from types import SimpleNamespace
def add_routes(app: MyApp):
@app.get("/")
async def handler(request: Request[MyApp, SimpleNamespace]):
# do something with your fully typed app instance
results = await request.app.ctx.db.query("SELECT * FROM foo")
```
.. column::
Perhaps you have a custom request object that generates a custom context object. You can type annotate it to properly access those properties with your IDE as shown here.
.. column::
```python
from sanic import Request, Sanic
from sanic.config import Config
class CustomConfig(Config):
pass
class Foo:
pass
class RequestContext:
foo: Foo
class CustomRequest(Request[Sanic[CustomConfig, Foo], RequestContext]):
@staticmethod
def make_context() -> RequestContext:
ctx = RequestContext()
ctx.foo = Foo()
return ctx
app = Sanic(
"test", config=CustomConfig(), ctx=Foo(), request_class=CustomRequest
)
@app.get("/")
async def handler(request: CustomRequest):
# Full access to typed:
# - custom application configuration object
# - custom application context object
# - custom request context object
pass
```
See more information in the [custom request context](./request.md#custom-request-context) section.
*Added in v23.6*
View File
@ -0,0 +1,108 @@
# Cookies
## Reading
.. column::
Cookies can be accessed via the `Request` object's `cookies` dictionary.
.. column::
```python
@app.route("/cookie")
async def test(request):
test_cookie = request.cookies.get("test")
return text(f"Test cookie: {test_cookie}")
```
.. tip:: FYI
💡 The `request.cookies` object is one of a few types that is a dictionary with each value being a `list`. This is because HTTP allows a single key to be reused to send multiple values.
Most of the time you will want to use the `.get()` method to access the first element and not a `list`. If you do want a `list` of all items, you can use `.getlist()`.
*Added in v23.3*
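A quick sketch of the difference (the cookie name here is illustrative):
```python
from sanic.response import text

@app.route("/all-cookies")
async def all_cookies(request):
    first = request.cookies.get("test")      # only the first value
    every = request.cookies.getlist("test")  # a list of every value sent
    return text(f"first={first} all={every}")
```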
## Writing
.. column::
When returning a response, cookies can be set on the `Response` object: `response.cookies`. This object is an instance of `CookieJar` which is a special sort of dictionary that automatically will write the response headers for you.
.. column::
```python
@app.route("/cookie")
async def test(request):
response = text("There's a cookie up in this response")
response.add_cookie(
"test",
"It worked!",
domain=".yummy-yummy-cookie.com",
httponly=True
)
return response
```
Response cookies can be set like dictionary values and have the following parameters available:
- `path: str` - The subset of URLs to which this cookie applies. Defaults to `/`.
- `domain: str` - Specifies the domain for which the cookie is valid. An explicitly specified domain must always start with a dot.
- `max_age: int` - Number of seconds the cookie should live for.
- `expires: datetime` - The time for the cookie to expire on the client's browser. Usually it is better to use max-age instead.
- `secure: bool` - Specifies whether the cookie will only be sent via HTTPS. Defaults to `True`.
- `httponly: bool` - When `True`, the cookie cannot be read by JavaScript.
- `samesite: str` - Available values: Lax, Strict, and None. Defaults to `Lax`.
- `comment: str` - A comment (metadata).
- `host_prefix: bool` - Whether to add the `__Host-` prefix to the cookie.
- `secure_prefix: bool` - Whether to add the `__Secure-` prefix to the cookie.
- `partitioned: bool` - Whether to mark the cookie as partitioned.
To better understand the implications and usage of these values, it might be helpful to read the [MDN documentation](https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies) on [setting cookies](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie).
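For example, a sketch combining several of the parameters listed above (all values here are purely illustrative):
```python
from sanic.response import text

@app.route("/login")
async def login(request):
    response = text("Logged in")
    response.add_cookie(
        "session",
        "abc123",
        max_age=3600,      # expire after one hour
        secure=True,       # HTTPS only (the default)
        httponly=True,     # not readable from JavaScript
        samesite="Strict",
        path="/dashboard",
    )
    return response
```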
.. tip:: FYI
By default, Sanic will set the `secure` flag to `True` to ensure that cookies are only sent over HTTPS as a sensible default. This should not be impactful for local development since secure cookies over HTTP should still be sent to `localhost`. For more information, you should read the [MDN documentation](https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies#restrict_access_to_cookies) on [secure cookies](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#Secure).
## Deleting
.. column::
Cookies can be removed semantically or explicitly.
.. column::
```python
@app.route("/cookie")
async def test(request):
response = text("Time to eat some cookies muahaha")
# This cookie will be set to expire in 0 seconds
response.delete_cookie("eat_me")
# This cookie will self destruct in 5 seconds
response.add_cookie("fast_bake", "Be quick!", max_age=5)
return response
```
*Don't forget to add `path` or `domain` if needed!*
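Continuing the idea from that note, a minimal sketch (the `path` and `domain` values are illustrative) of deleting a cookie that was set with a specific path and domain:
```python
# the deletion must match how the cookie was originally set
response.delete_cookie(
    "eat_me",
    path="/snacks",
    domain=".yummy-yummy-cookie.com",
)
```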
## Eating
.. column::
Sanic likes cookies
.. column::
.. attrs::
:class: is-size-1 has-text-centered
🍪
View File
@ -0,0 +1,131 @@
# Handlers
The next important building blocks are your _handlers_. These are also sometimes called "views".
In Sanic, a handler is any callable that takes at least a `Request` instance as an argument, and returns either an `HTTPResponse` instance, or a coroutine that does the same.
.. column::
Huh? 😕
It is a **function**; either synchronous or asynchronous.
The job of the handler is to respond to an endpoint and do something. This is where the majority of your business logic will go.
.. column::
```python
def i_am_a_handler(request):
return HTTPResponse()
async def i_am_ALSO_a_handler(request):
return HTTPResponse()
```
.. tip:: Heads up
If you want to learn more about encapsulating your logic, check out [class based views](/guide/advanced/class-based-views.md).
.. column::
Then, all you need to do is wire it up to an endpoint. We'll learn more about [routing soon](./routing.md).
Let's look at a practical example.
- We use a convenience decorator on our app instance: `@app.get()`
- And a handy convenience method for generating our response object: `text()`
Mission accomplished :muscle:
.. column::
```python
from sanic.response import text
@app.get("/foo")
async def foo_handler(request):
return text("I said foo!")
```
---
## A word about _async_...
.. column::
It is entirely possible to write handlers that are synchronous.
In this example, we are using the _blocking_ `time.sleep()` to simulate 100ms of processing time. Perhaps this represents fetching data from a DB, or a 3rd-party website.
Using four (4) worker processes and a common benchmarking tool:
- **956** requests in 30.10s
- Or, about **31.76** requests/second
.. column::
```python
@app.get("/sync")
def sync_handler(request):
time.sleep(0.1)
return text("Done.")
```
.. column::
Just by changing to the asynchronous alternative `asyncio.sleep()`, we see an incredible change in performance. 🚀
Using the same four (4) worker processes:
- **115,590** requests in 30.08s
- Or, about **3,843.17** requests/second
.. attrs::
:class: is-size-3
🤯
Okay... this is a ridiculously overdramatic result. And any benchmark you see is inherently very biased. This example is meant to show, in an over-the-top way, the benefit of `async/await` in the web world. Results will certainly vary. Tools like Sanic and other async Python libraries are not magic bullets that make things faster. They make them _more efficient_.
In our example, the asynchronous version is so much better because while one request is sleeping, it is able to start another one, and another one, and another one, and another one...
But, this is the point! Sanic is fast because it takes the available resources and squeezes performance out of them. It can handle many requests concurrently, which means more requests per second.
.. column::
```python
@app.get("/async")
async def async_handler(request):
await asyncio.sleep(0.1)
return text("Done.")
```
.. warning:: A common mistake!
Don't do this! You need to ping a website. What do you use? `pip install your-fav-request-library` 🙈
Instead, try using a client that is `async/await` capable. Your server will thank you. Avoid using blocking tools, and favor those that play well in the asynchronous ecosystem. If you need recommendations, check out [Awesome Sanic](https://github.com/mekicha/awesome-sanic).
Sanic uses [httpx](https://www.python-httpx.org/) inside of its testing package (sanic-testing) :wink:.
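For example, a minimal sketch of calling another service from a handler with an `async`-capable client (httpx is used here only as an illustration; any async-capable client works the same way, and the URL is a placeholder):
```python
import httpx

from sanic.response import text

@app.get("/fetch")
async def fetch(request):
    # the event loop stays free to serve other requests while we await the upstream call
    async with httpx.AsyncClient() as client:
        upstream = await client.get("https://example.com")
    return text(upstream.text)
```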
---
## A fully annotated handler
For those that are using type annotations...
```python
from sanic.response import HTTPResponse, text
from sanic.request import Request
@app.get("/typed")
async def typed_handler(request: Request) -> HTTPResponse:
return text("Done.")
```
View File
@ -0,0 +1,229 @@
# Headers
Request and response headers are available in the `Request` and `HTTPResponse` objects, respectively. They make use of the [`multidict` package](https://multidict.readthedocs.io/en/stable/multidict.html#cimultidict) that allows a single key to have multiple values.
.. tip:: FYI
Header keys are converted to *lowercase* when parsed. Capitalization is not considered for headers.
## Request
Sanic does attempt to do some normalization on request headers before presenting them to the developer, and also makes some potentially meaningful extractions for common use cases.
.. column::
#### Tokens
Authorization tokens in the form `Token <token>` or `Bearer <token>` are extracted to the request object: `request.token`.
.. column::
```python
@app.route("/")
async def handler(request):
return text(request.token)
```
```bash
$ curl localhost:8000 \
-H "Authorization: Token ABCDEF12345679"
ABCDEF12345679
```
```bash
$ curl localhost:8000 \
-H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c"
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
```
### Proxy headers
Sanic has special handling for proxy headers. See the [proxy headers](/guide/advanced/proxy-headers.md) section for more details.
### Host header and dynamic URL construction
.. column::
The *effective host* is available via `request.host`. This is not necessarily the same as the host header, as it prefers proxy-forwarded host and can be forced by the server name setting.
Webapps should generally use this accessor so that they can function the same no matter how they are deployed. The actual host header, if needed, can be found via `request.headers`.
The effective host is also used in dynamic URL construction via `request.url_for`, which uses the request to determine the external address of a handler.
.. tip:: Be wary of malicious clients
These URLs can be manipulated by sending misleading host headers. `app.url_for` should be used instead if this is a concern.
.. column::
```python
app.config.SERVER_NAME = "https://example.com"
@app.route("/hosts", name="foo")
async def handler(request):
return json(
{
"effective host": request.host,
"host header": request.headers.get("host"),
"forwarded host": request.forwarded.get("host"),
"you are here": request.url_for("foo"),
}
)
```
```bash
$ curl localhost:8000/hosts
{
"effective host": "example.com",
"host header": "localhost:8000",
"forwarded host": null,
"you are here": "https://example.com/hosts"
}
```
### Other headers
.. column::
All request headers are available on `request.headers`, and can be accessed in dictionary form. Capitalization is not considered for headers, and can be accessed using either uppercase or lowercase keys.
.. column::
```python
@app.route("/")
async def handler(request):
return json(
{
"foo_weakref": request.headers["foo"],
"foo_get": request.headers.get("Foo"),
"foo_getone": request.headers.getone("FOO"),
"foo_getall": request.headers.getall("fOo"),
"all": list(request.headers.items()),
}
)
```
```bash
$ curl localhost:9999/headers -H "Foo: one" -H "FOO: two"|jq
{
"foo_weakref": "one",
"foo_get": "one",
"foo_getone": "one",
"foo_getall": [
"one",
"two"
],
"all": [
[
"host",
"localhost:9999"
],
[
"user-agent",
"curl/7.76.1"
],
[
"accept",
"*/*"
],
[
"foo",
"one"
],
[
"foo",
"two"
]
]
}
```
.. tip:: FYI
💡 The `request.headers` object is one of a few types that is a dictionary with each value being a `list`. This is because HTTP allows a single key to be reused to send multiple values.
Most of the time you will want to use the `.get()` or `.getone()` methods to access the first element and not a `list`. If you do want a `list` of all items, you can use `.getall()`.
### Request ID
.. column::
Often it is convenient or necessary to track a request by its `X-Request-ID` header. You can easily access that as: `request.id`.
.. column::
```python
@app.route("/")
async def handler(request):
return text(request.id)
```
```bash
$ curl localhost:8000 \
-H "X-Request-ID: ABCDEF12345679"
ABCDEF12345679
```
## Response
Sanic will automatically set the following response headers (when appropriate) for you:
- `content-length`
- `content-type`
- `connection`
- `transfer-encoding`
In most circumstances, you should never need to worry about setting these headers.
.. column::
Any other header that you would like to set can be done either in the route handler, or a response middleware.
.. column::
```python
@app.route("/")
async def handler(request):
return text("Done.", headers={"content-language": "en-US"})
@app.middleware("response")
async def add_csp(request, response):
response.headers["content-security-policy"] = "default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';base-uri 'self';form-action 'self'"
```
.. column::
A common [middleware](middleware.md) you might want is to add a `X-Request-ID` header to every response. As stated above: `request.id` will provide the ID from the incoming request. But, even if no ID was supplied in the request headers, one will be automatically supplied for you.
[See API docs for more details](https://sanic.readthedocs.io/en/latest/sanic/api_reference.html#sanic.request.Request.id)
.. column::
```python
@app.route("/")
async def handler(request):
return text(str(request.id))
@app.on_response
async def add_request_id_header(request, response):
response.headers["X-Request-ID"] = request.id
```
```bash
$ curl localhost:8000 -i
HTTP/1.1 200 OK
X-Request-ID: 805a958e-9906-4e7a-8fe0-cbe83590431b
content-length: 36
connection: keep-alive
content-type: text/plain; charset=utf-8
805a958e-9906-4e7a-8fe0-cbe83590431b
```
View File
@ -0,0 +1,243 @@
# Listeners
Sanic provides you with eight (8) opportunities to inject an operation into the life cycle of your application server. This does not include the [signals](../advanced/signals.md), which allow further injection customization.
There are two (2) that run **only** on your main Sanic process (i.e., once per call to `sanic server.app`).
- `main_process_start`
- `main_process_stop`
There are also two (2) that run **only** in a reloader process if auto-reload has been turned on.
- `reload_process_start`
- `reload_process_stop`
*Added `reload_process_start` and `reload_process_stop` in v22.3*
There are four (4) that enable you to execute startup/teardown code as your server starts or closes.
- `before_server_start`
- `after_server_start`
- `before_server_stop`
- `after_server_stop`
The life cycle of a worker process looks like this:
.. mermaid::
sequenceDiagram
autonumber
participant Process
participant Worker
participant Listener
participant Handler
Note over Process: sanic server.app
loop
Process->>Listener: @app.main_process_start
Listener->>Handler: Invoke event handler
end
Process->>Worker: Run workers
loop Start each worker
loop
Worker->>Listener: @app.before_server_start
Listener->>Handler: Invoke event handler
end
Note over Worker: Server status: started
loop
Worker->>Listener: @app.after_server_start
Listener->>Handler: Invoke event handler
end
Note over Worker: Server status: ready
end
Process->>Worker: Graceful shutdown
loop Stop each worker
loop
Worker->>Listener: @app.before_server_stop
Listener->>Handler: Invoke event handler
end
Note over Worker: Server status: stopped
loop
Worker->>Listener: @app.after_server_stop
Listener->>Handler: Invoke event handler
end
Note over Worker: Server status: closed
end
loop
Process->>Listener: @app.main_process_stop
Listener->>Handler: Invoke event handler
end
Note over Process: exit
The reloader process lives outside of the worker processes, inside a process that is responsible for starting and stopping the Sanic processes. Consider the following example:
```python
@app.reload_process_start
async def reload_start(*_):
print(">>>>>> reload_start <<<<<<")
@app.main_process_start
async def main_start(*_):
print(">>>>>> main_start <<<<<<")
```
If this application were run with auto-reload turned on, the `reload_start` function would be called once. This is contrasted with `main_start`, which would be run every time a file is saved and the reloader restarts the application process.
## Attaching a listener
.. column::
The process to set up a function as a listener is similar to declaring a route.
The currently running `Sanic()` instance is injected into the listener.
.. column::
```python
async def setup_db(app):
app.ctx.db = await db_setup()
app.register_listener(setup_db, "before_server_start")
```
.. column::
The `Sanic` app instance also has a convenience decorator.
.. column::
```python
@app.listener("before_server_start")
async def setup_db(app):
app.ctx.db = await db_setup()
```
.. column::
Prior to v22.3, both the application instance and the current event loop were injected into the function. However, only the application instance is injected by default. If your function signature will accept both, then both the application and the loop will be injected as shown here.
.. column::
```python
@app.listener("before_server_start")
async def setup_db(app, loop):
app.ctx.db = await db_setup()
```
.. column::
You can shorten the decorator even further. This is helpful if you have an IDE with autocomplete.
.. column::
```python
@app.before_server_start
async def setup_db(app):
app.ctx.db = await db_setup()
```
## Order of execution
Listeners are executed in the order they are declared during startup, and in reverse order of declaration during teardown.
| | Phase | Order |
|-----------------------|-----------------|---------|
| `main_process_start` | main startup | regular 🙂 ⬇️ |
| `before_server_start` | worker startup | regular 🙂 ⬇️ |
| `after_server_start` | worker startup | regular 🙂 ⬇️ |
| `before_server_stop` | worker shutdown | 🙃 ⬆️ reverse |
| `after_server_stop` | worker shutdown | 🙃 ⬆️ reverse |
| `main_process_stop` | main shutdown | 🙃 ⬆️ reverse |
Given the following setup, we should expect to see this in the console if we run two workers.
.. column::
```python
@app.main_process_start
async def listener_0(app, loop):
    print("listener_0")

@app.listener("before_server_start")
async def listener_1(app, loop):
print("listener_1")
@app.before_server_start
async def listener_2(app, loop):
print("listener_2")
@app.listener("after_server_start")
async def listener_3(app, loop):
print("listener_3")
@app.after_server_start
async def listener_4(app, loop):
print("listener_4")
@app.listener("before_server_stop")
async def listener_5(app, loop):
print("listener_5")
@app.before_server_stop
async def listener_6(app, loop):
print("listener_6")
@app.listener("after_server_stop")
async def listener_7(app, loop):
print("listener_7")
@app.after_server_stop
async def listener_8(app, loop):
    print("listener_8")

@app.main_process_stop
async def listener_9(app, loop):
    print("listener_9")
```
.. column::
```bash
[pid: 1000000] [INFO] Goin' Fast @ http://127.0.0.1:9999
[pid: 1000000] [INFO] listener_0
[pid: 1111111] [INFO] listener_1
[pid: 1111111] [INFO] listener_2
[pid: 1111111] [INFO] listener_3
[pid: 1111111] [INFO] listener_4
[pid: 1111111] [INFO] Starting worker [1111111]
[pid: 1222222] [INFO] listener_1
[pid: 1222222] [INFO] listener_2
[pid: 1222222] [INFO] listener_3
[pid: 1222222] [INFO] listener_4
[pid: 1222222] [INFO] Starting worker [1222222]
[pid: 1111111] [INFO] Stopping worker [1111111]
[pid: 1222222] [INFO] Stopping worker [1222222]
[pid: 1222222] [INFO] listener_6
[pid: 1222222] [INFO] listener_5
[pid: 1222222] [INFO] listener_8
[pid: 1222222] [INFO] listener_7
[pid: 1111111] [INFO] listener_6
[pid: 1111111] [INFO] listener_5
[pid: 1111111] [INFO] listener_8
[pid: 1111111] [INFO] listener_7
[pid: 1000000] [INFO] listener_9
[pid: 1000000] [INFO] Server Stopped
```
In the above example, notice how there are three processes running:
- `pid: 1000000` - The *main* process
- `pid: 1111111` - Worker 1
- `pid: 1222222` - Worker 2
*Although our example groups all of one worker's output and then all of another's, in reality, since these are running in separate processes, the ordering between processes is not guaranteed. But, you can be sure that a single worker will **always** maintain its order.*
.. tip:: FYI
The practical result of this is that if the first `before_server_start` listener sets up a database connection, listeners that are registered after it can rely upon that connection being alive both when they are started and stopped.
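A small sketch of that guarantee (the `db_setup()` and `fetch_reference_data()` helpers are hypothetical):
```python
@app.before_server_start
async def connect_db(app, _):
    # declared first, so it runs first during startup
    app.ctx.db = await db_setup()  # hypothetical helper

@app.before_server_start
async def warm_cache(app, _):
    # declared later, so app.ctx.db is already available here
    app.ctx.cache = await app.ctx.db.fetch_reference_data()
```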
## ASGI Mode
If you are running your application with an ASGI server, then make note of the following changes:
- `reload_process_start` and `reload_process_stop` will be **ignored**
- `main_process_start` and `main_process_stop` will be **ignored**
- `before_server_start` will run as early as it can, and will be before `after_server_start`, but technically, the server is already running at that point
- `after_server_stop` will run as late as it can, and will be after `before_server_stop`, but technically, the server is still running at that point
View File
@ -0,0 +1,229 @@
# Middleware
Whereas listeners allow you to attach functionality to the lifecycle of a worker process, middleware allows you to attach functionality to the lifecycle of an HTTP stream.
You can execute middleware either _before_ the handler is executed, or _after_.
.. mermaid::
sequenceDiagram
autonumber
participant Worker
participant Middleware
participant MiddlewareHandler
participant RouteHandler
Note over Worker: Incoming HTTP request
loop
Worker->>Middleware: @app.on_request
Middleware->>MiddlewareHandler: Invoke middleware handler
MiddlewareHandler-->>Worker: Return response (optional)
end
rect rgba(255, 13, 104, .1)
Worker->>RouteHandler: Invoke route handler
RouteHandler->>Worker: Return response
end
loop
Worker->>Middleware: @app.on_response
Middleware->>MiddlewareHandler: Invoke middleware handler
MiddlewareHandler-->>Worker: Return response (optional)
end
Note over Worker: Deliver response
## Attaching middleware
.. column::
This should probably look familiar by now. All you need to do is declare when you would like the middleware to execute: on the `request` or on the `response`.
.. column::
```python
async def extract_user(request):
request.ctx.user = await extract_user_from_request(request)
app.register_middleware(extract_user, "request")
```
.. column::
Again, the `Sanic` app instance also has a convenience decorator.
.. column::
```python
@app.middleware("request")
async def extract_user(request):
request.ctx.user = await extract_user_from_request(request)
```
.. column::
Response middleware receives both the `request` and `response` arguments.
.. column::
```python
@app.middleware('response')
async def prevent_xss(request, response):
response.headers["x-xss-protection"] = "1; mode=block"
```
.. column::
You can shorten the decorator even further. This is helpful if you have an IDE with autocomplete.
This is the preferred usage, and is what we will use going forward.
.. column::
```python
@app.on_request
async def extract_user(request):
...
@app.on_response
async def prevent_xss(request, response):
...
```
## Modification
Middleware can modify the request or response parameter it is given, _as long as it does not return it_.
.. column::
#### Order of execution
1. Request middleware: `add_key`
2. Route handler: `index`
3. Response middleware: `prevent_xss`
4. Response middleware: `custom_banner`
.. column::
```python
@app.on_request
async def add_key(request):
# Arbitrary data may be stored in request context:
request.ctx.foo = "bar"
@app.on_response
async def custom_banner(request, response):
response.headers["Server"] = "Fake-Server"
@app.on_response
async def prevent_xss(request, response):
response.headers["x-xss-protection"] = "1; mode=block"
@app.get("/")
async def index(request):
return text(request.ctx.foo)
```
.. column::
You can modify the `request.match_info`. A useful feature that could be used, for example, in middleware to convert `a-slug` to `a_slug`.
.. column::
```python
@app.on_request
def convert_slug_to_underscore(request: Request):
request.match_info["slug"] = request.match_info["slug"].replace("-", "_")
@app.get("/<slug:slug>")
async def handler(request, slug):
return text(slug)
```
```
$ curl localhost:9999/foo-bar-baz
foo_bar_baz
```
## Responding early
.. column::
If middleware returns an `HTTPResponse` object, the request will stop processing and the response will be returned. If this occurs to a request before the route handler is reached, the handler will **not** be called. Returning a response will also prevent any further middleware from running.
.. tip::
You can return `None` to stop the execution of the middleware handler and allow the request to be processed as normal. This can be useful when using early return to avoid processing requests inside of that middleware handler.
.. column::
```python
@app.on_request
async def halt_request(request):
return text("I halted the request")
@app.on_response
async def halt_response(request, response):
return text("I halted the response")
```
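And a sketch of the `None` pattern described in the tip above (the header check is purely illustrative):
```python
from sanic.response import text

@app.on_request
async def block_banned_clients(request):
    if request.headers.get("x-banned"):
        # returning a response halts processing for this request
        return text("Forbidden", status=403)
    # returning None lets the request continue as normal
    return None
```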
## Order of execution
Request middleware is executed in the order declared. Response middleware is executed in **reverse order**.
Given the following setup, we should expect to see this in the console.
.. column::
```python
@app.on_request
async def middleware_1(request):
print("middleware_1")
@app.on_request
async def middleware_2(request):
print("middleware_2")
@app.on_response
async def middleware_3(request, response):
print("middleware_3")
@app.on_response
async def middleware_4(request, response):
print("middleware_4")
@app.get("/handler")
async def handler(request):
print("~ handler ~")
return text("Done.")
```
.. column::
```bash
middleware_1
middleware_2
~ handler ~
middleware_4
middleware_3
[INFO][127.0.0.1:44788]: GET http://localhost:8000/handler 200 5
```
### Middleware priority
.. column::
You can modify the order of execution of middleware by assigning it a higher priority. This happens inside of the middleware definition. The higher the value, the earlier it will execute relative to other middleware. The default priority for middleware is `0`.
.. column::
```python
@app.on_request
async def low_priority(request):
...
@app.on_request(priority=99)
async def high_priority(request):
...
```
*Added in v22.9*
View File
@ -0,0 +1,327 @@
# Request
The `Request` instance contains **a lot** of helpful information available on its parameters. Refer to the [API documentation](https://sanic.readthedocs.io/) for full details.
## Body
The `Request` object allows you to access the content of the request body in a few different ways.
### JSON
.. column::
**Parameter**: `request.json`
**Description**: The parsed JSON object
.. column::
```bash
$ curl localhost:8000 -d '{"foo": "bar"}'
```
```python
>>> print(request.json)
{'foo': 'bar'}
```
### Raw
.. column::
**Parameter**: `request.body`
**Description**: The raw bytes from the request body
.. column::
```bash
$ curl localhost:8000 -d '{"foo": "bar"}'
```
```python
>>> print(request.body)
b'{"foo": "bar"}'
```
### Form
.. column::
**Parameter**: `request.form`
**Description**: The form data
.. tip:: FYI
The `request.form` object is one of a few types that is a dictionary with each value being a list. This is because HTTP allows a single key to be reused to send multiple values.
Most of the time you will want to use the `.get()` method to access the first element and not a list. If you do want a list of all items, you can use `.getlist()`.
.. column::
```bash
$ curl localhost:8000 -d 'foo=bar'
```
```python
>>> print(request.body)
b'foo=bar'
>>> print(request.form)
{'foo': ['bar']}
>>> print(request.form.get("foo"))
bar
>>> print(request.form.getlist("foo"))
['bar']
```
### Uploaded
.. column::
**Parameter**: `request.files`
**Description**: The files uploaded to the server
.. tip:: FYI
The `request.files` object is one of a few types that is a dictionary with each value being a list. This is because HTTP allows a single key to be reused to send multiple values.
Most of the time you will want to use the `.get()` method to access the first element and not a list. If you do want a list of all items, you can use `.getlist()`.
.. column::
```bash
$ curl -F 'my_file=@/path/to/TEST' http://localhost:8000
```
```python
>>> print(request.body)
b'--------------------------cb566ad845ad02d3\r\nContent-Disposition: form-data; name="my_file"; filename="TEST"\r\nContent-Type: application/octet-stream\r\n\r\nhello\n\r\n--------------------------cb566ad845ad02d3--\r\n'
>>> print(request.files)
{'my_file': [File(type='application/octet-stream', body=b'hello\n', name='TEST')]}
>>> print(request.files.get("my_file"))
File(type='application/octet-stream', body=b'hello\n', name='TEST')
>>> print(request.files.getlist("my_file"))
[File(type='application/octet-stream', body=b'hello\n', name='TEST')]
```
## Context
### Request context
The `request.ctx` object is your playground to store whatever information you need to about the request.
This is often used to store items like authenticated user details. We will get more into [middleware](./middleware.md) later, but here is a simple example.
```python
@app.on_request
async def run_before_handler(request):
request.ctx.user = await fetch_user_by_token(request.token)
@app.route('/hi')
async def hi_my_name_is(request):
return text("Hi, my name is {}".format(request.ctx.user.name))
```
A typical use case would be to store the user object acquired from the database in an authentication middleware. Keys added are accessible to all later middleware as well as the handler over the duration of the request.
Custom context is reserved for applications and extensions. Sanic itself makes no use of it.
### Connection context
.. column::
Oftentimes, your API will need to serve multiple concurrent (or consecutive) requests to the same client. This happens, for example, very often with progressive web apps that need to query multiple endpoints to get data.
The HTTP protocol calls for an easing of overhead time caused by the connection with the use of [keep alive headers](../deployment/configuration.md#keep-alive-timeout).
When multiple requests share a single connection, Sanic provides a context object to allow those requests to share state.
.. column::
```python
@app.on_request
async def increment_foo(request):
if not hasattr(request.conn_info.ctx, "foo"):
request.conn_info.ctx.foo = 0
request.conn_info.ctx.foo += 1
@app.get("/")
async def count_foo(request):
return text(f"request.conn_info.ctx.foo={request.conn_info.ctx.foo}")
```
```bash
$ curl localhost:8000 localhost:8000 localhost:8000
request.conn_info.ctx.foo=1
request.conn_info.ctx.foo=2
request.conn_info.ctx.foo=3
```
### Custom Request Objects
As discussed in [application customization](./app.md#custom-requests), you can create a subclass of `sanic.Request` to add additional functionality to the request object. This is useful for adding additional attributes or methods that are specific to your application.
.. column::
For example, imagine your application receives a custom header that contains a user ID. You can create a custom request object that will parse that header and store the user ID for you.
.. column::
```python
from sanic import Sanic, Request
class CustomRequest(Request):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.user_id = self.headers.get("X-User-ID")
app = Sanic("Example", request_class=CustomRequest)
```
.. column::
Now, in your handlers, you can access the `user_id` attribute.
.. column::
```python
@app.route("/")
async def handler(request: CustomRequest):
return text(f"User ID: {request.user_id}")
```
.. new:: NEW in v23.6
### Custom Request Context
By default, the request context (`request.ctx`) is a `SimpleNamespace` object allowing you to set arbitrary attributes on it. While this is super helpful for reusing logic across your application, it can make for a difficult development experience since the IDE will not know what attributes are available.
To help with this, you can create a custom request context object that will be used instead of the default `SimpleNamespace`. This allows you to add type hints to the context object and have them be available in your IDE.
.. column::
Start by subclassing the `sanic.Request` class to create a custom request type. Then, you will need to add a `make_context()` method that returns an instance of your custom context object. *NOTE: the `make_context` method should be a static method.*
.. column::
```python
from dataclasses import dataclass

from sanic import Request, Sanic


@dataclass
class CustomContext:
    user_id: str = None


class CustomRequest(Request):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.ctx.user_id = self.headers.get("X-User-ID")

    @staticmethod
    def make_context() -> CustomContext:
        return CustomContext()


app = Sanic("Example", request_class=CustomRequest)
```
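With that in place, handlers can read the typed attribute from the context as usual (a small sketch reusing the classes above):

```python
@app.get("/")
async def handler(request: CustomRequest):
    return text(f"User ID: {request.ctx.user_id}")
```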
*Added in v23.6*
## Parameters
.. column::
Values that are extracted from the path are injected into the handler as parameters, or more specifically as keyword arguments. There is much more detail about this in the [Routing section](./routing.md).
.. column::
```python
@app.route('/tag/<tag>')
async def tag_handler(request, tag):
return text("Tag - {}".format(tag))
```
## Arguments
There are two attributes on the `request` instance to get query parameters:
- `request.args`
- `request.query_args`
```bash
$ curl http://localhost:8000\?key1\=val1\&key2\=val2\&key1\=val3
```
```python
>>> print(request.args)
{'key1': ['val1', 'val3'], 'key2': ['val2']}
>>> print(request.args.get("key1"))
val1
>>> print(request.args.getlist("key1"))
['val1', 'val3']
>>> print(request.query_args)
[('key1', 'val1'), ('key2', 'val2'), ('key1', 'val3')]
>>> print(request.query_string)
key1=val1&key2=val2&key1=val3
```
.. tip:: FYI
The `request.args` object is one of a few types that is a dictionary with each value being a list. This is because HTTP allows a single key to be reused to send multiple values.
Most of the time you will want to use the `.get()` method to access the first element and not a list. If you do want a list of all items, you can use `.getlist()`.
## Current request getter
Sometimes you may find that you need access to the current request in your application in a location where it is not accessible. A typical example might be in a `logging` format. You can use `Request.get_current()` to fetch the current request (if any).
```python
import logging
from sanic import Request, Sanic, json
from sanic.exceptions import SanicException
from sanic.log import LOGGING_CONFIG_DEFAULTS
LOGGING_FORMAT = (
"%(asctime)s - (%(name)s)[%(levelname)s][%(host)s]: "
"%(request_id)s %(request)s %(message)s %(status)d %(byte)d"
)
old_factory = logging.getLogRecordFactory()
def record_factory(*args, **kwargs):
record = old_factory(*args, **kwargs)
record.request_id = ""
try:
request = Request.get_current()
except SanicException:
...
else:
record.request_id = str(request.id)
return record
logging.setLogRecordFactory(record_factory)
LOGGING_CONFIG_DEFAULTS["formatters"]["access"]["format"] = LOGGING_FORMAT
app = Sanic("Example", log_config=LOGGING_CONFIG_DEFAULTS)
```
In this example, we are adding the `request.id` to every access log message.
*Added in v22.6*
# Response
All [handlers](./handlers.md) *usually* return a response object, and [middleware](./middleware.md) may optionally return a response object.
To clarify that statement:
- unless the handler is a streaming endpoint handling its own pattern for sending bytes to the client, the return value must be an instance of `sanic.HTTPResponse` (to learn more about this exception see [streaming responses](../advanced/streaming.md#response-streaming))
- if a middleware returns a response object, that will be used instead of whatever the handler would do (see [middleware](./middleware.md) to learn more)
The most basic handler would look like the following. The `HTTPResponse` object will allow you to set the status, body, and headers to be returned to the client.
```python
from sanic import HTTPResponse, Sanic
app = Sanic("TestApp")
@app.route("")
def handler(_):
return HTTPResponse()
```
However, usually it is easier to use one of the convenience methods discussed below.
## Methods
The easiest way to generate a response object is to use one of the nine (9) convenience methods.
### Text
.. column::
**Default Content-Type**: `text/plain; charset=utf-8`
**Description**: Returns plain text
.. column::
```python
from sanic import text
@app.route("/")
async def handler(request):
return text("Hi 😎")
```
### HTML
.. column::
**Default Content-Type**: `text/html; charset=utf-8`
**Description**: Returns an HTML document
.. column::
```python
from sanic import html
@app.route("/")
async def handler(request):
return html('<!DOCTYPE html><html lang="en"><meta charset="UTF-8"><div>Hi 😎</div>')
```
### JSON
.. column::
**Default Content-Type**: `application/json`
**Description**: Returns a JSON document
.. column::
```python
from sanic import json
@app.route("/")
async def handler(request):
return json({"foo": "bar"})
```
By default, Sanic ships with [`ujson`](https://github.com/ultrajson/ultrajson) as its JSON encoder of choice. It is super simple to change this if you want.
```python
from orjson import dumps
json({"foo": "bar"}, dumps=dumps)
```
If `ujson` is not installed, it will fall back to the standard library `json` module.
You may additionally declare which implementation to use globally across your application at initialization:
```python
from orjson import dumps
app = Sanic(..., dumps=dumps)
```
### File
.. column::
**Default Content-Type**: N/A
**Description**: Returns a file
.. column::
```python
from sanic import file
@app.route("/")
async def handler(request):
return await file("/path/to/whatever.png")
```
Sanic will examine the file, try to guess its mime type, and use an appropriate value for the content type. You can be explicit, if you would like:
```python
file("/path/to/whatever.png", mime_type="image/png")
```
You can also choose to override the file name:
```python
file("/path/to/whatever.png", filename="super-awesome-incredible.png")
```
### File Streaming
.. column::
**Default Content-Type**: N/A
**Description**: Streams a file to a client, useful when streaming large files, like a video
.. column::
```python
from sanic.response import file_stream
@app.route("/")
async def handler(request):
return await file_stream("/path/to/whatever.mp4")
```
Like the `file()` method, `file_stream()` will attempt to determine the mime type of the file.
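Like `file()`, `file_stream()` also accepts an explicit `mime_type` if you would rather not rely on the guess:

```python
file_stream("/path/to/whatever.mp4", mime_type="video/mp4")
```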
### Raw
.. column::
**Default Content-Type**: `application/octet-stream`
**Description**: Send raw bytes without encoding the body
.. column::
```python
from sanic import raw
@app.route("/")
async def handler(request):
return raw(b"raw bytes")
```
### Redirect
.. column::
**Default Content-Type**: `text/html; charset=utf-8`
**Description**: Send a `302` response to redirect the client to a different path
.. column::
```python
from sanic import redirect
@app.route("/")
async def handler(request):
return redirect("/login")
```
### Empty
.. column::
**Default Content-Type**: N/A
**Description**: For responding with an empty message as defined by [RFC 2616](https://tools.ietf.org/search/rfc2616#section-7.2.1)
.. column::
```python
from sanic import empty
@app.route("/")
async def handler(request):
return empty()
```
Defaults to a `204` status.
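If you want an empty body with a different status code, you can pass it explicitly:

```python
empty(status=202)
```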
## Default status
The default HTTP status code for the response is `200`. If you need to change it, it can be done by the response method.
```python
@app.post("/")
async def create_new(request):
new_thing = await do_create(request)
return json({"created": True, "id": new_thing.thing_id}, status=201)
```
## Returning JSON data
Starting in v22.12, when you use the `sanic.json` convenience method, it will return a subclass of `HTTPResponse` called `JSONResponse`. This object has several convenient methods available for modifying a common JSON body.
```python
from sanic import json
resp = json(...)
```
- `resp.set_body(<raw_body>)` - Set the body of the JSON object to the value passed
- `resp.append(<value>)` - Append a value to the body like `list.append` (only works if the root JSON is an array)
- `resp.extend(<value>)` - Extend a value to the body like `list.extend` (only works if the root JSON is an array)
- `resp.update(<value>)` - Update the body with a value like `dict.update` (only works if the root JSON is an object)
- `resp.pop()` - Pop a value like `list.pop` or `dict.pop` (only works if the root JSON is an array or an object)
.. warning::
The raw Python object is stored on the `JSONResponse` object as `raw_body`. While it is safe to overwrite this value with a new one, you should **not** attempt to mutate it. You should instead use the methods listed above.
```python
resp = json({"foo": "bar"})
# This is OKAY
resp.raw_body = {"foo": "bar", "something": "else"}
# This is better
resp.set_body({"foo": "bar", "something": "else"})
# This also works well
resp.update({"something": "else"})
# This is NOT OKAY
resp.raw_body.update({"something": "else"})
```
```python
# Or, even treat it like a list
resp = json(["foo", "bar"])
# This is OKAY
resp.raw_body = ["foo", "bar", "something", "else"]
# This is better
resp.extend(["something", "else"])
# This also works well
resp.append("something")
resp.append("else")
# This is NOT OKAY
resp.raw_body.append("something")
```
*Added in v22.9*
# Routing
.. column::
So far we have seen a lot of this decorator in different forms.
But what is it? And how do we use it?
.. column::
```python
@app.route("/stairway")
...
@app.get("/to")
...
@app.post("/heaven")
...
```
## Adding a route
.. column::
The most basic way to wire up a handler to an endpoint is with `app.add_route()`.
See [API docs](https://sanic.readthedocs.io/en/stable/sanic/api_reference.html#sanic.app.Sanic.add_route) for more details.
.. column::
```python
async def handler(request):
return text("OK")
app.add_route(handler, "/test")
```
.. column::
By default, routes are available as an HTTP `GET` call. You can change a handler to respond to one or more HTTP methods.
.. column::
```python
app.add_route(
handler,
'/test',
methods=["POST", "PUT"],
)
```
.. column::
Using the decorator syntax, the previous example is identical to this.
.. column::
```python
@app.route('/test', methods=["POST", "PUT"])
async def handler(request):
return text('OK')
```
## HTTP methods
Each of the standard HTTP methods has a convenience decorator.
### GET
```python
@app.get('/test')
async def handler(request):
return text('OK')
```
[MDN Docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/GET)
### POST
```python
@app.post('/test')
async def handler(request):
return text('OK')
```
[MDN Docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/POST)
### PUT
```python
@app.put('/test')
async def handler(request):
return text('OK')
```
[MDN Docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/PUT)
### PATCH
```python
@app.patch('/test')
async def handler(request):
return text('OK')
```
[MDN Docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/PATCH)
### DELETE
```python
@app.delete('/test')
async def handler(request):
return text('OK')
```
[MDN Docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/DELETE)
### HEAD
```python
@app.head('/test')
async def handler(request):
return empty()
```
[MDN Docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/HEAD)
### OPTIONS
```python
@app.options('/test')
async def handler(request):
return empty()
```
[MDN Docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/OPTIONS)
.. warning::
By default, Sanic will **only** consume the incoming request body on non-safe HTTP methods (`POST`, `PUT`, `PATCH`, `DELETE`). If you want to receive data in the HTTP request on any other method, you will need to do one of the following two options:
**Option #1 - Tell Sanic to consume the body using `ignore_body`**
```python
@app.request("/path", ignore_body=False)
async def handler(_):
...
```
**Option #2 - Manually consume the body in the handler using `receive_body`**
```python
@app.get("/path")
async def handler(request: Request):
await request.receive_body()
```
## Path parameters
.. column::
Sanic allows for pattern matching, and for extracting values from URL paths. These parameters are then injected as keyword arguments in the route handler.
.. column::
```python
@app.get("/tag/<tag>")
async def tag_handler(request, tag):
return text("Tag - {}".format(tag))
```
.. column::
You can declare a type for the parameter. This will be enforced when matching, and also will type cast the variable.
.. column::
```python
@app.get("/foo/<foo_id:uuid>")
async def uuid_handler(request, foo_id: UUID):
return text("UUID - {}".format(foo_id))
```
### Supported types
### `str`
.. column::
**Regular expression applied**: `r"[^/]+"`
**Cast type**: `str`
**Example matches**:
- `/path/to/Bob`
- `/path/to/Python%203`
Beginning in v22.3 `str` will *not* match on empty strings. See `strorempty` for this behavior.
.. column::
```python
@app.route("/path/to/<foo:str>")
async def handler(request, foo: str):
...
```
### `strorempty`
.. column::
**Regular expression applied**: `r"[^/]*"`
**Cast type**: `str`
**Example matches**:
- `/path/to/Bob`
- `/path/to/Python%203`
- `/path/to/`
Unlike the `str` path parameter type, `strorempty` can also match on an empty string path segment.
*Added in v22.3*
.. column::
```python
@app.route("/path/to/<foo:strorempty>")
async def handler(request, foo: str):
...
```
### `int`
.. column::
**Regular expression applied**: `r"-?\d+"`
**Cast type**: `int`
**Example matches**:
- `/path/to/10`
- `/path/to/-10`
_Does not match float, hex, octal, etc_
.. column::
```python
@app.route("/path/to/<foo:int>")
async def handler(request, foo: int):
...
```
### `float`
.. column::
**Regular expression applied**: `r"-?(?:\d+(?:\.\d*)?|\.\d+)"`
**Cast type**: `float`
**Example matches**:
- `/path/to/10`
- `/path/to/-10`
- `/path/to/1.5`
.. column::
```python
@app.route("/path/to/<foo:float>")
async def handler(request, foo: float):
...
```
### `alpha`
.. column::
**Regular expression applied**: `r"[A-Za-z]+"`
**Cast type**: `str`
**Example matches**:
- `/path/to/Bob`
- `/path/to/Python`
_Does not match a digit, or a space or other special character_
.. column::
```python
@app.route("/path/to/<foo:alpha>")
async def handler(request, foo: str):
...
```
### `slug`
.. column::
**Regular expression applied**: `r"[a-z0-9]+(?:-[a-z0-9]+)*"`
**Cast type**: `str`
**Example matches**:
- `/path/to/some-news-story`
- `/path/to/or-has-digits-123`
*Added in v21.6*
.. column::
```python
@app.route("/path/to/<article:slug>")
async def handler(request, article: str):
...
```
### `path`
.. column::
**Regular expression applied**: `r"[^/].*?"`
**Cast type**: `str`
**Example matches**:
- `/path/to/hello`
- `/path/to/hello.txt`
- `/path/to/hello/world.txt`
.. column::
```python
@app.route("/path/to/<foo:path>")
async def handler(request, foo: str):
...
```
.. warning::
Because this will match on `/`, you should be careful and thoroughly test your patterns that use `path` so they do not capture traffic intended for another endpoint. Additionally, depending on how you use this type, you may be creating a path traversal vulnerability in your application. It is your job to protect your endpoint against this, but feel free to ask in our community channels for help if you need it :)
### `ymd`
.. column::
**Regular expression applied**: `r"^([12]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01]))"`
**Cast type**: `datetime.date`
**Example matches**:
- `/path/to/2021-03-28`
.. column::
```python
@app.route("/path/to/<foo:ymd>")
async def handler(request, foo: datetime.date):
...
```
### `uuid`
.. column::
**Regular expression applied**: `r"[A-Fa-f0-9]{8}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12}"`
**Cast type**: `UUID`
**Example matches**:
- `/path/to/123a123a-a12a-1a1a-a1a1-1a12a1a12345`
.. column::
```python
@app.route("/path/to/<foo:uuid>")
async def handler(request, foo: UUID):
...
```
### `ext`
.. column::
**Regular expression applied**: n/a
**Cast type**: *varies*
**Example matches**:
.. column::
```python
@app.route("/path/to/<foo:ext>")
async def handler(request, foo: str, ext: str):
...
```
| definition                         | example     | filename | extension  |
| ---------------------------------- | ----------- | -------- | ---------- |
| \<file:ext>                        | page.txt    | `"page"` | `"txt"`    |
| \<file:ext=jpg>                    | cat.jpg     | `"cat"`  | `"jpg"`    |
| \<file:ext=jpg\|png\|gif\|svg>     | cat.jpg     | `"cat"`  | `"jpg"`    |
| \<file=int:ext>                    | 123.txt     | `123`    | `"txt"`    |
| \<file=int:ext=jpg\|png\|gif\|svg> | 123.svg     | `123`    | `"svg"`    |
| \<file=float:ext=tar.gz>           | 3.14.tar.gz | `3.14`   | `"tar.gz"` |
File extensions can be matched using the special `ext` parameter type. It uses a special format that allows you to specify other parameter types for the file name, and one or more specific extensions, as shown in the example table above.
It does *not* support the `path` parameter type.
*Added in v22.3*
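For example, a route that restricts the allowed extensions might look like the following sketch (the path and handler name are just illustrative):

```python
@app.get("/image/<filename:ext=jpg|png|gif|svg>")
async def image_handler(request, filename: str, ext: str):
    # e.g. /image/cat.jpg -> filename="cat", ext="jpg"
    return text(f"{filename}.{ext}")
```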
### `regex`
.. column::
**Regular expression applied**: _whatever you insert_
**Cast type**: `str`
**Example matches**:
- `/path/to/2021-01-01`
This gives you the freedom to define specific matching patterns for your use case.
In the example shown, we are looking for a date that is in `YYYY-MM-DD` format.
.. column::
```python
@app.route(r"/path/to/<foo:([12]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01]))>")
async def handler(request, foo: str):
...
```
### Regex Matching
More often than not, real-world routing is more complex than the above example, and a completely different matching pattern may be needed, so here we will explain the advanced usage of regex matching in detail.
Sometimes, you want to match a part of a route:
```text
/image/123456789.jpg
```
If you wanted to match the file pattern, but only capture the numeric portion, you need to do some regex fun 😄:
```python
app.route(r"/image/<img_id:(?P<img_id>\d+)\.jpg>")
```
Further, these should all be acceptable:
```python
@app.get(r"/<foo:[a-z]{3}.txt>") # matching on the full pattern
@app.get(r"/<foo:([a-z]{3}).txt>") # defining a single matching group
@app.get(r"/<foo:(?P<foo>[a-z]{3}).txt>") # defining a single named matching group
@app.get(r"/<foo:(?P<foo>[a-z]{3}).(?:txt)>") # defining a single named matching group, with one or more non-matching groups
```
Also, if using a named matching group, it must be the same as the segment label.
```python
@app.get(r"/<foo:(?P<foo>\d+).jpg>") # OK
@app.get(r"/<foo:(?P<bar>\d+).jpg>") # NOT OK
```
For more information on regular expression usage, please refer to [Regular expression operations](https://docs.python.org/3/library/re.html).
## Generating a URL
.. column::
Sanic provides a method to generate URLs based on the handler method name: `app.url_for()`. This is useful if you want to avoid hardcoding url paths into your app; instead, you can just reference the handler name.
.. column::
```python
@app.route('/')
async def index(request):
# generate a URL for the endpoint `post_handler`
url = app.url_for('post_handler', post_id=5)
# Redirect to `/posts/5`
return redirect(url)
@app.route('/posts/<post_id>')
async def post_handler(request, post_id):
...
```
.. column::
You can pass any arbitrary number of keyword arguments. Anything that is _not_ a request parameter will be implemented as a part of the query string.
.. column::
```python
assert app.url_for(
"post_handler",
post_id=5,
arg_one="one",
arg_two="two",
) == "/posts/5?arg_one=one&arg_two=two"
```
.. column::
Also supported is passing multiple values for a single query key.
.. column::
```python
assert app.url_for(
"post_handler",
post_id=5,
arg_one=["one", "two"],
) == "/posts/5?arg_one=one&arg_one=two"
```
### Special keyword arguments
See the API docs for more details.
```python
app.url_for("post_handler", post_id=5, arg_one="one", _anchor="anchor")
# '/posts/5?arg_one=one#anchor'
# _external requires you to pass an argument _server or set SERVER_NAME in app.config; if not, the url will be the same as without _external
app.url_for("post_handler", post_id=5, arg_one="one", _external=True)
# '//server/posts/5?arg_one=one'
# when specifying _scheme, _external must be True
app.url_for("post_handler", post_id=5, arg_one="one", _scheme="http", _external=True)
# 'http://server/posts/5?arg_one=one'
# you can pass all special arguments at once
app.url_for("post_handler", post_id=5, arg_one=["one", "two"], arg_two=2, _anchor="anchor", _scheme="http", _external=True, _server="another_server:8888")
# 'http://another_server:8888/posts/5?arg_one=one&arg_one=two&arg_two=2#anchor'
```
### Customizing a route name
.. column::
A custom route name can be used by passing a `name` argument while registering the route.
.. column::
```python
@app.get("/get", name="get_handler")
def handler(request):
return text("OK")
```
.. column::
Now, use this custom name to retrieve the URL
.. column::
```python
assert app.url_for("get_handler", foo="bar") == "/get?foo=bar"
```
## Websockets routes
.. column::
Websocket routing works similarly to HTTP methods.
.. column::
```python
async def handler(request, ws):
message = "Start"
while True:
await ws.send(message)
message = await ws.recv()
app.add_websocket_route(handler, "/test")
```
.. column::
It also has a convenience decorator.
.. column::
```python
@app.websocket("/test")
async def handler(request, ws):
message = "Start"
while True:
await ws.send(message)
message = await ws.recv()
```
Read the [websockets section](/guide/advanced/websockets.md) to learn more about how they work.
## Strict slashes
.. column::
Sanic routes can be configured to strictly match on whether or not there is a trailing slash: `/`. This can be configured at a few levels and follows this order of precedence:
1. Route
2. Blueprint
3. BlueprintGroup
4. Application
.. column::
```python
# provide default strict_slashes value for all routes
app = Sanic(__file__, strict_slashes=True)
```
```python
# overwrite strict_slashes value for specific route
@app.get("/get", strict_slashes=False)
def handler(request):
return text("OK")
```
```python
# it also works for blueprints
bp = Blueprint(__file__, strict_slashes=True)
@bp.get("/bp/get", strict_slashes=False)
def handler(request):
return text("OK")
```
```python
bp1 = Blueprint(name="bp1", url_prefix="/bp1")
bp2 = Blueprint(
name="bp2",
url_prefix="/bp2",
strict_slashes=False,
)
# This will enforce the strict slashes check on the routes
# under bp1, but ignore bp2 as it has explicitly
# set the strict slashes check to False
group = Blueprint.group([bp1, bp2], strict_slashes=True)
```
## Static files
.. column::
In order to serve static files from Sanic, use `app.static()`.
The order of arguments is important:
1. Route the files will be served from
2. Path to the files on the server
See [API docs](https://sanic.readthedocs.io/en/stable/sanic/api/app.html#sanic.app.Sanic.static) for more details.
.. column::
```python
app.static("/static/", "/path/to/directory/")
```
.. tip::
It is generally best practice to end your directory paths with a trailing slash (`/this/is/a/directory/`). This removes ambiguity by being more explicit.
.. column::
You can also serve individual files.
.. column::
```python
app.static("/", "/path/to/index.html")
```
.. column::
It is also sometimes helpful to name your endpoint
.. column::
```python
app.static(
"/user/uploads/",
"/path/to/uploads/",
name="uploads",
)
```
.. column::
Retrieving the URLs works similar to handlers. But, we can also add the `filename` argument when we need a specific file inside a directory.
.. column::
```python
assert app.url_for(
"static",
name="static",
filename="file.txt",
) == "/static/file.txt"
```
```python
assert app.url_for(
"static",
name="uploads",
filename="image.png",
) == "/user/uploads/image.png"
```
.. tip::
If you are going to have multiple `static()` routes, then it is *highly* suggested that you manually name them. This will almost certainly alleviate potentially hard-to-discover bugs.
```python
app.static("/user/uploads/", "/path/to/uploads/", name="uploads")
app.static("/user/profile/", "/path/to/profile/", name="profile_pics")
```
#### Auto index serving
.. column::
If you have a directory of static files that should be served by an index page, you can provide the filename of the index. Now, when reaching that directory URL, the index page will be served.
.. column::
```python
app.static("/foo/", "/path/to/foo/", index="index.html")
```
*Added in v23.3*
#### File browser
.. column::
When serving a directory from a static handler, Sanic can be configured to show a basic file browser instead using `directory_view=True`.
.. column::
```python
app.static("/uploads/", "/path/to/dir", directory_view=True)
```
You now have a browsable directory in your web browser:
![image](/assets/images/directory-view.png)
*Added in v23.3*
## Route context
.. column::
When a route is defined, you can add any number of keyword arguments with a `ctx_` prefix. These values will be injected into the route `ctx` object.
.. column::
```python
@app.get("/1", ctx_label="something")
async def handler1(request):
...
@app.get("/2", ctx_label="something")
async def handler2(request):
...
@app.get("/99")
async def handler99(request):
...
@app.on_request
async def do_something(request):
if request.route.ctx.label == "something":
...
```
*Added in v21.12*
# Background tasks
## Creating Tasks
It is often desirable and very convenient to make use of [tasks](https://docs.python.org/3/library/asyncio-task.html#asyncio.create_task) in async Python. Sanic provides a convenient method to add tasks to the currently **running** loop. It is somewhat similar to `asyncio.create_task`. For adding tasks before the 'App' loop is running, see the next section.
```python
async def notify_server_started_after_five_seconds():
await asyncio.sleep(5)
print('Server successfully started!')
app.add_task(notify_server_started_after_five_seconds())
```
.. column::
Sanic will attempt to automatically inject the app, passing it as an argument to the task.
.. column::
```python
async def auto_inject(app):
await asyncio.sleep(5)
print(app.name)
app.add_task(auto_inject)
```
.. column::
Or you can pass the `app` argument explicitly.
.. column::
```python
async def explicit_inject(app):
await asyncio.sleep(5)
print(app.name)
app.add_task(explicit_inject(app))
```
## Adding tasks before `app.run`
It is possible to add background tasks before the App is run, i.e. before `app.run`. To add a task before the App is run, it is recommended not to pass the coroutine object (i.e. one created by calling the `async` callable), but instead to just pass the callable; Sanic will then create the coroutine object on **each worker**. Note: tasks added this way are run as `before_server_start` jobs and thus run on every worker (and not in the main process). This has certain consequences, please read [this comment](https://github.com/sanic-org/sanic/issues/2139#issuecomment-868993668) on [this issue](https://github.com/sanic-org/sanic/issues/2139) for further details.
To add work on the main process, consider adding work to [`@app.main_process_start`](./listeners.md). Note: the workers won't start until this work is completed.
.. column::
Example to add a task before `app.run`
.. column::
```python
async def slow_work():
...
async def even_slower(num):
...
app = Sanic(...)
app.add_task(slow_work) # Note: we are passing the callable and not coroutine object ...
app.add_task(even_slower(10)) # ... or we can call the function and pass the coroutine.
app.run(...)
```
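As noted above, work that belongs in the main process can instead be attached to the `main_process_start` listener. A minimal sketch (the function name is illustrative):

```python
@app.main_process_start
async def main_process_work(app, loop):
    # Runs once in the main process; the workers start only after this completes
    ...
```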
## Named tasks
.. column::
When creating a task, you can ask Sanic to keep track of it for you by providing a `name`.
.. column::
```python
app.add_task(slow_work, name="slow_task")
```
.. column::
You can now retrieve that task instance from anywhere in your application using `get_task`.
.. column::
```python
task = app.get_task("slow_task")
```
.. column::
If that task needs to be cancelled, you can do that with `cancel_task`. Make sure that you `await` it.
.. column::
```python
await app.cancel_task("slow_task")
```
.. column::
All registered tasks can be found in the `app.tasks` property. To prevent cancelled tasks from filling up, you may want to run `app.purge_tasks` that will clear out any completed or cancelled tasks.
.. column::
```python
app.purge_tasks()
```
This pattern can be particularly useful with `websockets`:
```python
async def receiver(ws):
while True:
message = await ws.recv()
if not message:
break
print(f"Received: {message}")
@app.websocket("/feed")
async def feed(request, ws):
task_name = f"receiver:{request.id}"
request.app.add_task(receiver(ws), name=task_name)
try:
while True:
await request.app.event("my.custom.event")
await ws.send("A message")
finally:
# When the websocket closes, let's cleanup the task
await request.app.cancel_task(task_name)
request.app.purge_tasks()
```
*Added in v21.12*
# Blueprints
## Overview
Blueprints are objects that can be used for sub-routing within an application. Instead of adding routes to the application instance, blueprints define similar methods for adding routes, which are then registered with the application in a flexible and pluggable manner.
Blueprints are especially useful for larger applications, where your application logic can be broken down into several groups or areas of responsibility.
## Creating and registering
.. column::
First, you must create a blueprint. It has a very similar API to the `Sanic()` app instance with many of the same decorators.
.. column::
```python
# ./my_blueprint.py
from sanic.response import json
from sanic import Blueprint
bp = Blueprint("my_blueprint")
@bp.route("/")
async def bp_root(request):
return json({"my": "blueprint"})
```
.. column::
Next, you register it with the app instance.
.. column::
```python
from sanic import Sanic
from my_blueprint import bp
app = Sanic(__name__)
app.blueprint(bp)
```
Blueprints also have the same `websocket()` decorator and `add_websocket_route` method for implementing websockets.
.. column::
Beginning in v21.12, a Blueprint may be registered before or after adding objects to it. Previously, only objects attached to the Blueprint at the time of registration would be loaded into the application instance.
.. column::
```python
app.blueprint(bp)
@bp.route("/")
async def bp_root(request):
...
```
## Copying
.. column::
Blueprints along with everything that is attached to them can be copied to new instances using the `copy()` method. The only required argument is to pass it a new `name`. However, you could also use this to override any of the values from the old blueprint.
.. column::
```python
v1 = Blueprint("Version1", version=1)
@v1.route("/something")
def something(request):
pass
v2 = v1.copy("Version2", version=2)
app.blueprint(v1)
app.blueprint(v2)
```
```
Available routes:
/v1/something
/v2/something
```
*Added in v21.9*
## Blueprint groups
Blueprints may also be registered as part of a list or tuple, where the registrar will recursively cycle through any sub-sequences of blueprints and register them accordingly. The `Blueprint.group` method is provided to simplify this process, allowing a mock backend directory structure mimicking what's seen from the front end. Consider this (quite contrived) example:
```text
api/
├──content/
│ ├──authors.py
│ ├──static.py
│ └──__init__.py
├──info.py
└──__init__.py
app.py
```
.. column::
#### First blueprint
.. column::
```python
# api/content/authors.py
from sanic import Blueprint
authors = Blueprint("content_authors", url_prefix="/authors")
```
.. column::
#### Second blueprint
.. column::
```python
# api/content/static.py
from sanic import Blueprint
static = Blueprint("content_static", url_prefix="/static")
```
.. column::
#### Blueprint group
.. column::
```python
# api/content/__init__.py
from sanic import Blueprint
from .static import static
from .authors import authors
content = Blueprint.group(static, authors, url_prefix="/content")
```
.. column::
#### Third blueprint
.. column::
```python
# api/info.py
from sanic import Blueprint
info = Blueprint("info", url_prefix="/info")
```
.. column::
#### Another blueprint group
.. column::
```python
# api/__init__.py
from sanic import Blueprint
from .content import content
from .info import info
api = Blueprint.group(content, info, url_prefix="/api")
```
.. column::
#### Main server
All blueprints are now registered
.. column::
```python
# app.py
from sanic import Sanic
from .api import api
app = Sanic(__name__)
app.blueprint(api)
```
### Blueprint group prefixes and composability
As shown in the code above, when you create a group of blueprints you can extend the URL prefix of all the blueprints in the group by passing the `url_prefix` argument to the `Blueprint.group` method. This is useful for creating a mock directory structure for your API.
.. new:: NEW in v23.6
In addition, there is a `name_prefix` argument that can be used to make blueprints reusable and composable. This is specifically necessary when applying a single blueprint to multiple groups. By doing this, the blueprint will be registered with a unique name for each group, which allows the blueprint to be registered multiple times and have its routes each properly named with a unique identifier.
.. column::
Consider this example. The routes built will be named as follows:
- `TestApp.group-a_bp1.route1`
- `TestApp.group-a_bp2.route2`
- `TestApp.group-b_bp1.route1`
- `TestApp.group-b_bp2.route2`
.. column::
```python
bp1 = Blueprint("bp1", url_prefix="/bp1")
bp2 = Blueprint("bp2", url_prefix="/bp2")
bp1.add_route(lambda _: ..., "/", name="route1")
bp2.add_route(lambda _: ..., "/", name="route2")
group_a = Blueprint.group(
bp1, bp2, url_prefix="/group-a", name_prefix="group-a"
)
group_b = Blueprint.group(
bp1, bp2, url_prefix="/group-b", name_prefix="group-b"
)
app = Sanic("TestApp")
app.blueprint(group_a)
app.blueprint(group_b)
```
*Name prefixing added in v23.6*
## Middleware
.. column::
Blueprints can also have middleware that is registered specifically for their endpoints only.
.. column::
```python
@bp.middleware
async def print_on_request(request):
print("I am a spy")
@bp.middleware("request")
async def halt_request(request):
return text("I halted the request")
@bp.middleware("response")
async def halt_response(request, response):
return text("I halted the response")
```
.. column::
Similarly, using blueprint groups, it is possible to apply middleware to an entire group of nested blueprints.
.. column::
```python
bp1 = Blueprint("bp1", url_prefix="/bp1")
bp2 = Blueprint("bp2", url_prefix="/bp2")
@bp1.middleware("request")
async def bp1_only_middleware(request):
print("applied on Blueprint : bp1 Only")
@bp1.route("/")
async def bp1_route(request):
return text("bp1")
@bp2.route("/<param>")
async def bp2_route(request, param):
return text(param)
group = Blueprint.group(bp1, bp2)
@group.middleware("request")
async def group_middleware(request):
print("common middleware applied for both bp1 and bp2")
# Register Blueprint group under the app
app.blueprint(group)
```
## Exceptions
.. column::
Just like other [exception handling](./exceptions.md), you can define blueprint specific handlers.
.. column::
```python
@bp.exception(NotFound)
def ignore_404s(request, exception):
return text("Yep, I totally found the page: {}".format(request.url))
```
## Static files
.. column::
Blueprints can also have their own static handlers
.. column::
```python
bp = Blueprint("bp", url_prefix="/bp")
bp.static("/web/path", "/folder/to/serve")
bp.static("/web/path", "/folder/to/server", name="uploads")
```
.. column::
Which can then be retrieved using `url_for()`. See [routing](/guide/basics/routing.md) for more information.
.. column::
```python
>>> print(app.url_for("static", name="bp.uploads", filename="file.txt"))
'/bp/web/path/file.txt'
```
## Listeners
.. column::
Blueprints can also implement [listeners](/guide/basics/listeners.md).
.. column::
```python
@bp.listener("before_server_start")
async def before_server_start(app, loop):
...
@bp.listener("after_server_stop")
async def after_server_stop(app, loop):
...
```
## Versioning
As discussed in the [versioning section](/guide/advanced/versioning.md), blueprints can be used to implement different versions of a web API.
.. column::
The `version` will be prepended to the routes as `/v1` or `/v2`, etc.
.. column::
```python
auth1 = Blueprint("auth", url_prefix="/auth", version=1)
auth2 = Blueprint("auth", url_prefix="/auth", version=2)
```
.. column::
When we register our blueprints on the app, the routes `/v1/auth` and `/v2/auth` will now point to the individual blueprints, which allows the creation of sub-sites for each API version.
.. column::
```python
from auth_blueprints import auth1, auth2
app = Sanic(__name__)
app.blueprint(auth1)
app.blueprint(auth2)
```
.. column::
It is also possible to group the blueprints under a `BlueprintGroup` entity and version multiple of them at the same time.
.. column::
```python
auth = Blueprint("auth", url_prefix="/auth")
metrics = Blueprint("metrics", url_prefix="/metrics")
group = Blueprint.group(auth, metrics, version="v1")
# This will provide APIs prefixed with the following URL path
# /v1/auth/ and /v1/metrics
```
## Composable
A `Blueprint` may be registered to multiple groups, and each `BlueprintGroup` itself could be registered and nested further. This creates limitless possibilities for `Blueprint` composition.
*Added in v21.6*
.. column::
Take a look at this example and see how the two handlers are actually mounted as five (5) distinct routes.
.. column::
```python
app = Sanic(__name__)
blueprint_1 = Blueprint("blueprint_1", url_prefix="/bp1")
blueprint_2 = Blueprint("blueprint_2", url_prefix="/bp2")
group = Blueprint.group(
blueprint_1,
blueprint_2,
version=1,
version_prefix="/api/v",
url_prefix="/grouped",
strict_slashes=True,
)
primary = Blueprint.group(group, url_prefix="/primary")
@blueprint_1.route("/")
def blueprint_1_default_route(request):
return text("BP1_OK")
@blueprint_2.route("/")
def blueprint_2_default_route(request):
return text("BP2_OK")
app.blueprint(group)
app.blueprint(primary)
app.blueprint(blueprint_1)
# The mounted paths:
# /api/v1/grouped/bp1/
# /api/v1/grouped/bp2/
# /api/v1/primary/grouped/bp1
# /api/v1/primary/grouped/bp2
# /bp1
```
## Generating a URL
When generating a url with `url_for()`, the endpoint name will be in the form:
```text
{blueprint_name}.{handler_name}
```
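For example, using the `my_blueprint` blueprint and `bp_root` handler created at the top of this page, the lookup would look something like this:

```python
app.url_for("my_blueprint.bp_root")
# '/'
```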

# Decorators
One of the best ways to create a consistent and DRY web API is to make use of decorators to remove functionality from the handlers, and make it repeatable across your views.
.. column::
Therefore, it is very common to see a Sanic view handler with several decorators on it.
.. column::
```python
@app.get("/orders")
@authorized("view_order")
@validate_list_params()
@inject_user()
async def get_order_details(request, params, user):
...
```
## Example
Here is a starter template to help you create decorators.
In this example, lets say you want to check that a user is authorized to access a particular endpoint. You can create a decorator that wraps a handler function, checks a request if the client is authorized to access a resource, and sends the appropriate response.
```python
from functools import wraps
from sanic.response import json
def authorized():
def decorator(f):
@wraps(f)
async def decorated_function(request, *args, **kwargs):
# run some method that checks the request
# for the client's authorization status
is_authorized = await check_request_for_authorization_status(request)
if is_authorized:
# the user is authorized.
# run the handler method and return the response
response = await f(request, *args, **kwargs)
return response
else:
# the user is not authorized.
return json({"status": "not_authorized"}, 403)
return decorated_function
return decorator
@app.route("/")
@authorized()
async def test(request):
return json({"status": "authorized"})
```
## Templates
Decorators are **fundamental** to building applications with Sanic. They increase the portability and maintainability of your code.
To paraphrase the Zen of Python: "[decorators] are one honking great idea -- let's do more of those!"
To make it easier to implement them, here are three examples of copy/pastable code to get you started.
.. column::
Don't forget to add these import statements. Although it is *not* necessary, using `@wraps` helps keep some of the metadata of your function intact. [See docs](https://docs.python.org/3/library/functools.html#functools.wraps). Also, we use the `isawaitable` pattern here to allow the route handlers to be regular or asynchronous functions.
.. column::
```python
from inspect import isawaitable
from functools import wraps
```
### With args
.. column::
Often, you will want a decorator that will *always* need arguments. Therefore, when it is implemented you will always be calling it.
```python
@app.get("/")
@foobar(1, 2)
async def handler(request: Request):
return text("hi")
```
.. column::
```python
def foobar(arg1, arg2):
def decorator(f):
@wraps(f)
async def decorated_function(request, *args, **kwargs):
response = f(request, *args, **kwargs)
if isawaitable(response):
response = await response
return response
return decorated_function
return decorator
```
### Without args
.. column::
Sometimes you want a decorator that will not take arguments. When this is the case, it is a nice convenience not to have to call it
```python
@app.get("/")
@foobar
async def handler(request: Request):
return text("hi")
```
.. column::
```python
def foobar(func):
def decorator(f):
@wraps(f)
async def decorated_function(request, *args, **kwargs):
response = f(request, *args, **kwargs)
if isawaitable(response):
response = await response
return response
return decorated_function
return decorator(func)
```
### With or Without args
.. column::
If you want a decorator with the ability to be called or not, you can follow this pattern. Using keyword only arguments is not necessary, but might make implementation simpler.
```python
@app.get("/")
@foobar(arg1=1, arg2=2)
async def handler(request: Request):
return text("hi")
```
```python
@app.get("/")
@foobar
async def handler(request: Request):
return text("hi")
```
.. column::
```python
def foobar(maybe_func=None, *, arg1=None, arg2=None):
def decorator(f):
@wraps(f)
async def decorated_function(request, *args, **kwargs):
response = f(request, *args, **kwargs)
if isawaitable(response):
response = await response
return response
return decorated_function
return decorator(maybe_func) if maybe_func else decorator
```
# Exceptions
## Using Sanic exceptions
Sometimes you just need to tell Sanic to halt execution of a handler and send back a status code response. You can raise a `SanicException` for this and Sanic will do the rest for you.
You can pass an optional `status_code` argument. By default, a SanicException will return an internal server error 500 response.
```python
from sanic.exceptions import SanicException
@app.route("/youshallnotpass")
async def no_no(request):
raise SanicException("Something went wrong.", status_code=501)
```
Sanic provides a number of standard exceptions. They each automatically will raise the appropriate HTTP status code in your response. [Check the API reference](https://sanic.readthedocs.io/en/latest/sanic/api_reference.html#module-sanic.exceptions) for more details.
.. column::
The more common exceptions you _should_ implement yourself include:
- `BadRequest` (400)
- `Unauthorized` (401)
- `Forbidden` (403)
- `NotFound` (404)
- `ServerError` (500)
.. column::
```python
from sanic import exceptions
@app.route("/login")
async def login(request):
user = await some_login_func(request)
if not user:
raise exceptions.NotFound(
f"Could not find user with username={request.json.username}"
)
```
## Exception properties
All exceptions in Sanic derive from `SanicException`. That class has a few properties on it that assist the developer in consistently reporting their exceptions across an application.
- `message`
- `status_code`
- `quiet`
- `headers`
- `context`
- `extra`
All of these properties can be passed to the exception when it is created, but the first three can also be used as class variables as we will see.
.. column::
### `message`
The `message` property obviously controls the message that will be displayed, as with any other exception in Python. What is particularly useful is that you can set the `message` property on the class definition, allowing for easy standardization of language across an application.
.. column::
```python
class CustomError(SanicException):
message = "Something bad happened"
raise CustomError
# or
raise CustomError("Override the default message with something else")
```
.. column::
### `status_code`
This property is used to set the response code when the exception is raised. This can be particularly useful when creating custom 400 series exceptions that are usually in response to bad information coming from the client.
.. column::
```python
class TeapotError(SanicException):
status_code = 418
message = "Sorry, I cannot brew coffee"
raise TeapotError
# or
raise TeapotError(status_code=400)
```
.. column::
### `quiet`
By default, exceptions will be output by Sanic to the `error_logger`. Sometimes this may not be desirable, especially if you are using exceptions to trigger events in exception handlers (see [the following section](./exceptions.md#handling)). You can suppress the log output using `quiet=True`.
.. column::
```python
class SilentError(SanicException):
message = "Something happened, but not shown in logs"
quiet = True
raise SilentError
# or
raise InvalidUsage("blah blah", quiet=True)
```
.. column::
Sometimes while debugging you may want to globally ignore the `quiet=True` property. You can force Sanic to log out all exceptions regardless of this property using `NOISY_EXCEPTIONS`
*Added in v21.12*
.. column::
```python
app.config.NOISY_EXCEPTIONS = True
```
.. column::
### `headers`
Using `SanicException` as a tool for creating responses is super powerful. This is in part because not only can you control the `status_code`, but you can also control response headers directly from the exception.
.. column::
```python
class MyException(SanicException):
headers = {
"X-Foo": "bar"
}
raise MyException
# or
raise InvalidUsage("blah blah", headers={
"X-Foo": "bar"
})
```
.. column::
### `extra`
See [contextual exceptions](./exceptions.md#contextual-exceptions)
*Added in v21.12*
.. column::
```python
raise SanicException(..., extra={"name": "Adam"})
```
.. column::
### `context`
See [contextual exceptions](./exceptions.md#contextual-exceptions)
*Added in v21.12*
.. column::
```python
raise SanicException(..., context={"foo": "bar"})
```
## Handling
Sanic handles exceptions automatically by rendering an error page, so in many cases you don't need to handle them yourself. However, if you would like more control on what to do when an exception is raised, you can implement a handler yourself.
Sanic provides a decorator for this, which applies to not only the Sanic standard exceptions, but **any** exception that your application might throw.
.. column::
The easiest method to add a handler is to use `@app.exception()` and pass it one or more exceptions.
.. column::
```python
from sanic.exceptions import NotFound
@app.exception(NotFound, SomeCustomException)
async def ignore_404s(request, exception):
return text("Yep, I totally found the page: {}".format(request.url))
```
.. column::
You can also create a catchall handler by catching `Exception`.
.. column::
```python
@app.exception(Exception)
async def catch_anything(request, exception):
...
```
.. column::
You can also use `app.error_handler.add()` to add error handlers.
.. column::
```python
async def server_error_handler(request, exception):
return text("Oops, server error", status=500)
app.error_handler.add(Exception, server_error_handler)
```
## Built-in error handling
Sanic ships with three formats for exceptions: HTML, JSON, and text. You can see examples of them below in the [Fallback handler](#fallback-handler) section.
.. column::
You can control _per route_ which format to use with the `error_format` keyword argument.
*Added in v21.9*
.. column::
```python
@app.request("/", error_format="text")
async def handler(request):
...
```
## Custom error handling
In some cases, you might want to add some more error handling functionality to what is provided by default. In that case, you can subclass Sanic's default error handler as such:
```python
from sanic.handlers import ErrorHandler
class CustomErrorHandler(ErrorHandler):
def default(self, request: Request, exception: Exception) -> HTTPResponse:
''' handles errors that have no error handlers assigned '''
# Your custom error handling logic...
status_code = getattr(exception, "status_code", 500)
return json({
"error": str(exception),
"foo": "bar"
}, status=status_code)
app.error_handler = CustomErrorHandler()
```
## Fallback handler
Sanic comes with three fallback exception handlers:
1. HTML
2. Text
3. JSON
These handlers present differing levels of detail depending upon whether your application is in [debug mode](/guide/deployment/development.md) or not.
By default, Sanic will be in "auto" mode, which means that it will use the incoming request and potential matching handler to choose the appropriate response format. For example, when in a browser it should always provide an HTML error page. When using curl, you might see JSON or plain text.
### HTML
```python
app.config.FALLBACK_ERROR_FORMAT = "html"
```
.. column::
```python
app.config.DEBUG = True
```
![Error](/assets/images/error-display-html-debug.png)
.. column::
```python
app.config.DEBUG = False
```
![Error](/assets/images/error-display-html-prod.png)
### Text
```python
app.config.FALLBACK_ERROR_FORMAT = "text"
```
.. column::
```python
app.config.DEBUG = True
```
```sh
curl localhost:8000/exc -i
HTTP/1.1 500 Internal Server Error
content-length: 620
connection: keep-alive
content-type: text/plain; charset=utf-8
⚠️ 500 — Internal Server Error
==============================
That time when that thing broke that other thing? That happened.
ServerError: That time when that thing broke that other thing? That happened. while handling path /exc
Traceback of TestApp (most recent call last):
ServerError: That time when that thing broke that other thing? That happened.
File /path/to/sanic/app.py, line 979, in handle_request
response = await response
File /path/to/server.py, line 16, in handler
do_something(cause_error=True)
File /path/to/something.py, line 9, in do_something
raise ServerError(
```
.. column::
```python
app.config.DEBUG = False
```
```sh
curl localhost:8000/exc -i
HTTP/1.1 500 Internal Server Error
content-length: 134
connection: keep-alive
content-type: text/plain; charset=utf-8
⚠️ 500 — Internal Server Error
==============================
That time when that thing broke that other thing? That happened.
```
### JSON
```python
app.config.FALLBACK_ERROR_FORMAT = "json"
```
.. column::
```python
app.config.DEBUG = True
```
```sh
curl localhost:8000/exc -i
HTTP/1.1 500 Internal Server Error
content-length: 572
connection: keep-alive
content-type: application/json
{
"description": "Internal Server Error",
"status": 500,
"message": "That time when that thing broke that other thing? That happened.",
"path": "/exc",
"args": {},
"exceptions": [
{
"type": "ServerError",
"exception": "That time when that thing broke that other thing? That happened.",
"frames": [
{
"file": "/path/to/sanic/app.py",
"line": 979,
"name": "handle_request",
"src": "response = await response"
},
{
"file": "/path/to/server.py",
"line": 16,
"name": "handler",
"src": "do_something(cause_error=True)"
},
{
"file": "/path/to/something.py",
"line": 9,
"name": "do_something",
"src": "raise ServerError("
}
]
}
]
}
```
.. column::
```python
app.config.DEBUG = False
```
```sh
curl localhost:8000/exc -i
HTTP/1.1 500 Internal Server Error
content-length: 129
connection: keep-alive
content-type: application/json
{
"description": "Internal Server Error",
"status": 500,
"message": "That time when that thing broke that other thing? That happened."
}
```
### Auto
Sanic also provides an option for guessing which fallback option to use.
```python
app.config.FALLBACK_ERROR_FORMAT = "auto"
```
## Contextual Exceptions
Default exception messages simplify the ability to consistently raise exceptions throughout your application.
```python
class TeapotError(SanicException):
status_code = 418
message = "Sorry, I cannot brew coffee"
raise TeapotError
```
But this lacks two things:
1. A dynamic and predictable message format
2. The ability to add additional context to an error message (more on this in a moment)
*Added in v21.12*
Using one of Sanic's exceptions, you have two options to provide additional details at runtime:
```python
raise TeapotError(extra={"foo": "bar"}, context={"foo": "bar"})
```
What's the difference and when should you decide to use each?
- `extra` - The object itself will **never** be sent to a production client. It is meant for internal use only. What could it be used for?
- Generating (as we will see in a minute) a dynamic error message
- Providing runtime details to a logger
- Debug information (when in development mode, it is rendered)
- `context` - This object is **always** sent to production clients. It is generally meant to be used to send additional details about the context of what happened. What could it be used for?
- Providing alternative values on a `BadRequest` validation issue
- Responding with helpful details for your customers to open a support ticket
- Displaying state information like current logged in user info
### Dynamic and predictable message using `extra`
Sanic exceptions can be raised using `extra` keyword arguments to provide additional information to a raised exception instance.
```python
class TeapotError(SanicException):
status_code = 418
@property
def message(self):
return f"Sorry {self.extra['name']}, I cannot make you coffee"
raise TeapotError(extra={"name": "Adam"})
```
The new feature allows the passing of `extra` meta to the exception instance, which can be particularly useful as in the above example to pass dynamic data into the message text. This `extra` info object **will be suppressed** when in `PRODUCTION` mode, but displayed in `DEVELOPMENT` mode.
.. column::
**DEVELOPMENT**
![image](~@assets/images/error-extra-debug.png)
.. column::
**PRODUCTION**
![image](~@assets/images/error-extra-prod.png)
### Additional `context` to an error message
Sanic exceptions can also be raised with a `context` argument to pass intended information along to the user about what happened. This is particularly useful when creating microservices or an API intended to pass error messages in JSON format. In this use case, we want to have some context around the exception beyond just a parseable error message to return details to the client.
```python
raise TeapotError(context={"foo": "bar"})
```
This is information **that we want** to always be passed in the error (when it is available). Here is what it should look like:
.. column::
**PRODUCTION**
```json
{
"description": "I'm a teapot",
"status": 418,
"message": "Sorry Adam, I cannot make you coffee",
"context": {
"foo": "bar"
}
}
```
.. column::
**DEVELOPMENT**
```json
{
"description": "I'm a teapot",
"status": 418,
"message": "Sorry Adam, I cannot make you coffee",
"context": {
"foo": "bar"
},
"path": "/",
"args": {},
"exceptions": [
{
"type": "TeapotError",
"exception": "Sorry Adam, I cannot make you coffee",
"frames": [
{
"file": "handle_request",
"line": 83,
"name": "handle_request",
"src": ""
},
{
"file": "/tmp/p.py",
"line": 17,
"name": "handler",
"src": "raise TeapotError("
}
]
}
]
}
```
.. new:: NEW in v23.6
## Error reporting
Sanic has a [signal](../advanced/signals.md#built-in-signals) that allows you to hook into the exception reporting process. This is useful if you want to send exception information to a third party service like Sentry or Rollbar. This can be conveniently accomplished by attaching an error reporting handler as shown below:
```python
@app.report_exception
async def catch_any_exception(app: Sanic, exception: Exception):
print("Caught exception:", exception)
```
.. note::
This handler will be dispatched into a background task and **IS NOT** intended for use to manipulate any response data. It is intended to be used for logging or reporting purposes only, and should not impact the ability of your application to return the error response to the client.
*Added in v23.6*

View File

@ -0,0 +1,97 @@
# Logging
Sanic allows you to do different types of logging (access log, error log) on the requests based on the [Python logging API](https://docs.python.org/3/howto/logging.html). You should have some basic knowledge of Python logging if you want to create a new configuration.
## Quick Start
.. column::
A simple example using default settings would be like this:
.. column::
```python
from sanic import Sanic
from sanic.log import logger
from sanic.response import text
app = Sanic('logging_example')
@app.route('/')
async def test(request):
logger.info('Here is your log')
return text('Hello World!')
if __name__ == "__main__":
app.run(debug=True, access_log=True)
```
After the server is running, you should see logs like this.
```text
[2021-01-04 15:26:26 +0200] [1929659] [INFO] Goin' Fast @ http://127.0.0.1:8000
[2021-01-04 15:26:26 +0200] [1929659] [INFO] Starting worker [1929659]
```
You can send a request to the server and it will print the log messages.
```text
[2021-01-04 15:26:28 +0200] [1929659] [INFO] Here is your log
[2021-01-04 15:26:28 +0200] - (sanic.access)[INFO][127.0.0.1:44228]: GET http://localhost:8000/ 200 -1
```
## Changing Sanic loggers
To use your own logging config, simply use `logging.config.dictConfig`, or pass `log_config` when you initialize the Sanic app.
```python
app = Sanic('logging_example', log_config=LOGGING_CONFIG)
if __name__ == "__main__":
app.run(access_log=False)
```
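If you prefer to build on Sanic's defaults rather than write a configuration from scratch, one approach is to copy `sanic.log.LOGGING_CONFIG_DEFAULTS` and adjust only the pieces you need. A minimal sketch (the formatter tweak shown is purely illustrative, and it assumes the `generic` formatter key present in the defaults):
```python
import logging.config
from copy import deepcopy

from sanic.log import LOGGING_CONFIG_DEFAULTS

# Copy the defaults and adjust only what you need
LOGGING_CONFIG = deepcopy(LOGGING_CONFIG_DEFAULTS)
LOGGING_CONFIG["formatters"]["generic"]["format"] = (
    "%(asctime)s [%(levelname)s] %(message)s"
)

logging.config.dictConfig(LOGGING_CONFIG)
# ...or: app = Sanic('logging_example', log_config=LOGGING_CONFIG)
```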
.. tip:: FYI
Logging in Python is a relatively cheap operation. However, if you are serving a high number of requests and performance is a concern, all of that time logging out access logs adds up and becomes quite expensive.
This is a good opportunity to place Sanic behind a proxy (like nginx) and to do your access logging there. You will see a *significant* increase in overall performance by disabling the `access_log`.
For optimal production performance, it is advised to run Sanic with `debug` and `access_log` disabled: `app.run(debug=False, access_log=False)`
## Configuration
Sanic's default logging configuration is: `sanic.log.LOGGING_CONFIG_DEFAULTS`.
.. column::
There are three loggers used in Sanic, and they must be defined if you want to create your own logging configuration:
| **Logger Name** | **Use Case** |
|-----------------|-------------------------------|
| `sanic.root` | Used to log internal messages. |
| `sanic.error` | Used to log error logs. |
| `sanic.access` | Used to log access logs. |
.. column::
### Log format
In addition to the default parameters provided by Python (`asctime`, `levelname`, `message`), Sanic provides additional parameters for the access logger:
| Log Context Parameter | Parameter Value | Datatype |
|-----------------------|---------------------------------------|----------|
| `host` | `request.ip` | `str` |
| `request` | `request.method + " " + request.url` | `str` |
| `status` | `response` | `int` |
| `byte` | `len(response.body)` | `int` |
The default access log format is:
```text
%(asctime)s - (%(name)s)[%(levelname)s][%(host)s]: %(request)s %(message)s %(status)d %(byte)d
```
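The three loggers above are also exposed as ready-made handles in `sanic.log`, so you can emit your own records through them without touching the configuration. A small sketch, assuming the `logger`, `error_logger`, and `access_logger` names exported by `sanic.log`:
```python
from sanic.log import access_logger, error_logger, logger

logger.info("An internal message")          # sanic.root
error_logger.error("Something went wrong")  # sanic.error
access_logger.info("A custom access line")  # sanic.access
```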

View File

@ -0,0 +1,3 @@
# Testing
See [sanic-testing](../../plugins/sanic-testing/getting-started.md)

View File

@ -0,0 +1,74 @@
# Caddy Deployment
## Introduction
Caddy is a state-of-the-art web server and proxy that supports up to HTTP/3. Its simplicity lies in its minimalistic configuration and the inbuilt ability to automatically procure TLS certificates for your domains from Let's Encrypt. In this setup, we will configure the Sanic application to serve locally at 127.0.0.1:8001, with Caddy playing the role of the public-facing server for the domain example.com.
You may install Caddy from your favorite package manager on Windows, Linux, and Mac. The package is named `caddy`.
## Proxied Sanic app
```python
from sanic import Sanic
from sanic.response import text
app = Sanic("proxied_example")
@app.get("/")
def index(request):
# This should display external (public) addresses:
return text(
f"{request.remote_addr} connected to {request.url_for('index')}\n"
f"Forwarded: {request.forwarded}\n"
)
```
To run this application, save as `proxied_example.py`, and use the sanic command-line interface as follows:
```bash
SANIC_PROXIES_COUNT=1 sanic proxied_example --port 8001
```
Setting the SANIC_PROXIES_COUNT environment variable instructs Sanic to trust the X-Forwarded-* headers sent by Caddy, allowing it to correctly identify the client's IP address and other information.
## Caddy is simple
If you have no other web servers running, you can simply run Caddy CLI (needs `sudo` on Linux):
```bash
caddy reverse-proxy --from example.com --to :8001
```
This is a complete server that includes a certificate for your domain, http-to-https redirect, proxy headers, streaming and WebSockets. Your Sanic application should now be available on the domain you specified over HTTP versions 1, 2, and 3. Remember to open up UDP/443 on your firewall to enable H3 communications.
All done?
Soon enough you'll be needing more than one server, or more control over details, which is where the configuration files come in. The above command is equivalent to this `Caddyfile`, serving as a good starting point for your install:
```
example.com {
reverse_proxy localhost:8001
}
```
Some Linux distributions install Caddy such that it reads its configuration from `/etc/caddy/Caddyfile`, which in turn imports `/etc/caddy/conf.d/*` for each site you are running. If not, you'll need to manually run `caddy run` as a system service, pointing it at the proper config file. Alternatively, use Caddy API mode with `caddy run --resume` for persistent config changes. Note that any Caddyfile loading will replace all prior configuration, and thus `caddy-api` is not configurable in this traditional manner.
## Advanced configuration
At times, you might need to mix static files and handlers at the site root for cleaner URLs. In Sanic, you'd use `app.static("/", "static", index="index.html")` to achieve this. However, for improved performance, you can offload serving static files to Caddy:
```
app.example.com {
# Look for static files first, proxy to Sanic if not found
route {
file_server {
root /srv/sanicexample/static
precompress br # brotli your large scripts and styles
pass_thru
}
reverse_proxy unix//tmp/sanic.socket # sanic --unix /tmp/sanic.socket
}
}
```
Please refer to [Caddy documentation](https://caddyserver.com/docs/) for more options.

View File

@ -0,0 +1,175 @@
# Docker Deployment
## Introduction
For a long time, the environment has always been a difficult problem for deployment. If there are conflicting configurations in your project, you have to spend a lot of time resolving them. Fortunately, virtualization provides us with a good solution. Docker is one of them. If you don't know Docker, you can visit [Docker official website](https://www.docker.com/) to learn more.
## Build Image
Let's start with a simple project. We will use a Sanic project as an example. Assume the project path is `/path/to/SanicDocker`.
.. column::
The directory structure looks like this:
.. column::
```text
# /path/to/SanicDocker
SanicDocker
├── requirements.txt
├── dockerfile
└── server.py
```
.. column::
And the `server.py` code looks like this:
.. column::
```python
from sanic import Sanic
from sanic.response import text

app = Sanic("MySanicApp")
@app.get('/')
async def hello(request):
return text("OK!")
if __name__ == '__main__':
app.run(host='0.0.0.0', port=8000)
```
.. note::
Please note that the host cannot be 127.0.0.1. In a Docker container, 127.0.0.1 is the container's default network interface, so only the container itself can communicate over it. For more information, please visit [Docker network](https://docs.docker.com/engine/reference/commandline/network/)
Code is ready, let's write the `Dockerfile`:
```Dockerfile
FROM sanicframework/sanic:3.8-latest
WORKDIR /sanic
COPY . .
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["python", "server.py"]
```
Run the following command to build the image:
```shell
docker build -t my-sanic-image .
```
## Start Container
.. column::
After the image is built, we can start a container using `my-sanic-image`:
.. column::
```shell
docker run --name mysanic -p 8000:8000 -d my-sanic-image
```
.. column::
Now we can visit `http://localhost:8000` to see the result:
.. column::
```text
OK!
```
## Use docker-compose
If your project consists of multiple services, you can use [docker-compose](https://docs.docker.com/compose/) to manage them.
For example, we will deploy `my-sanic-image` and `nginx`, accessing the Sanic server through nginx.
.. column::
First of all, we need to prepare the nginx configuration file. Create a file named `mysanic.conf`:
.. column::
```nginx
server {
listen 80;
listen [::]:80;
location / {
proxy_pass http://mysanic:8000/;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection upgrade;
proxy_set_header Accept-Encoding gzip;
}
}
```
.. column::
Then, we need to prepare the `docker-compose.yml` file. The content follows:
.. column::
```yaml
version: "3"
services:
mysanic:
image: my-sanic-image
ports:
- "8000:8000"
restart: always
mynginx:
image: nginx:1.13.6-alpine
ports:
- "80:80"
depends_on:
- mysanic
volumes:
- ./mysanic.conf:/etc/nginx/conf.d/mysanic.conf
restart: always
networks:
default:
driver: bridge
```
.. column::
After that, we can start them:
.. column::
```shell
docker-compose up -d
```
.. column::
Now, we can visit `http://localhost:80` to see the result:
.. column::
```text
OK!
```

View File

@ -0,0 +1 @@
# Kubernetes

View File

@ -0,0 +1,172 @@
# Nginx Deployment
## Introduction
Although Sanic can be run directly on the Internet, it may be useful to use a proxy
server such as Nginx in front of it. This is particularly useful for running
multiple virtual hosts on the same IP, serving NodeJS or other services alongside
a single Sanic app, and it also allows for efficient serving of static files.
TLS and HTTP/2 are also easily implemented on such a proxy.
We are setting the Sanic app to serve only locally at 127.0.0.1:8001, while the
Nginx installation is responsible for providing the service to the public Internet
on the domain example.com. Static files will be served by Nginx for maximal
performance.
## Proxied Sanic app
```python
from sanic import Sanic
from sanic.response import text
app = Sanic("proxied_example")
@app.get("/")
def index(request):
# This should display external (public) addresses:
return text(
f"{request.remote_addr} connected to {request.url_for('index')}\n"
f"Forwarded: {request.forwarded}\n"
)
```
Since this is going to be a system service, save your code to
`/srv/sanicservice/proxied_example.py`.
For testing, run your app in a terminal using the `sanic` CLI in the folder where you saved the file.
```bash
SANIC_FORWARDED_SECRET=_hostname sanic proxied_example --port 8001
```
We provide Sanic config `FORWARDED_SECRET` to identify which proxy it gets
the remote addresses from. Note the `_` in front of the local hostname.
This gives basic protection against users spoofing these headers and faking
their IP addresses and more.
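The same config value can also be set in code instead of through the environment variable; a minimal sketch, mirroring the CLI example above:
```python
app.config.FORWARDED_SECRET = "_hostname"  # equivalent to SANIC_FORWARDED_SECRET=_hostname
```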
## SSL certificates
Install Certbot and obtain a certificate for all your domains. This will spin up its own webserver on port 80 for a moment to verify you control the given domain names.
```bash
certbot -d example.com -d www.example.com
```
## Nginx configuration
Quite a lot of configuration is required to allow fast transparent proxying, but
for the most part it does not need to be modified, so bear with me.
.. tip:: Note
A separate upstream section, rather than simply adding the IP after `proxy_pass`
as in most tutorials, is needed for HTTP keep-alive. We also enable streaming,
WebSockets, and Nginx serving static files.
The following config goes inside the `http` section of `nginx.conf` or if your
system uses multiple config files, `/etc/nginx/sites-available/default` or
your own files (be sure to symlink them to `sites-enabled`):
```nginx
# Files managed by Certbot
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
# Sanic service
upstream example.com {
keepalive 100;
server 127.0.0.1:8001;
#server unix:/tmp/sanic.sock;
}
server {
server_name example.com;
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
# Serve static files if found, otherwise proxy to Sanic
location / {
root /srv/sanicexample/static;
try_files $uri @sanic;
}
location @sanic {
proxy_pass http://$server_name;
# Allow fast streaming HTTP/1.1 pipes (keep-alive, unbuffered)
proxy_http_version 1.1;
proxy_request_buffering off;
proxy_buffering off;
proxy_set_header forwarded by=\"_$hostname\";$for_addr;proto=$scheme;host=\"$http_host\";
# Allow websockets and keep-alive (avoid connection: close)
proxy_set_header connection "upgrade";
proxy_set_header upgrade $http_upgrade;
}
}
# Redirect WWW to no-WWW
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name ~^www\.(.*)$;
return 308 $scheme://$1$request_uri;
}
# Redirect all HTTP to HTTPS with no-WWW
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name ~^(?:www\.)?(.*)$;
return 308 https://$1$request_uri;
}
# Forwarded for= client IP address formatting
map $remote_addr $for_addr {
~^[0-9.]+$ "for=$remote_addr"; # IPv4 client address
~^[0-9A-Fa-f:.]+$ "for=\"[$remote_addr]\""; # IPv6 bracketed and quoted
default "for=unknown"; # Unix socket
}
```
Start or restart Nginx for changes to take effect. E.g.
```bash
systemctl restart nginx
```
You should be able to connect to your app on `https://example.com`. Any 404
errors and such will be handled by Sanic's error pages, and whenever a static
file is present at a given path, it will be served by Nginx.
## Running as a service
This part is for Linux distributions based on `systemd`. Create a unit file
`/etc/systemd/system/sanicexample.service`
```
[Unit]
Description=Sanic Example
[Service]
DynamicUser=Yes
WorkingDirectory=/srv/sanicservice
Environment=SANIC_FORWARDED_SECRET=_hostname
ExecStart=sanic proxied_example --port 8001 --fast
Restart=always
[Install]
WantedBy=multi-user.target
```
Then reload service files, start your service and enable it on boot:
```bash
systemctl daemon-reload
systemctl start sanicexample
systemctl enable sanicexample
```
.. tip:: Note
For brevity we skipped setting up a separate user account and a Python virtual environment or installing your app as a Python module. There are good tutorials on those topics elsewhere that easily apply to Sanic as well. The DynamicUser setting creates a strong sandbox which basically means your application cannot store its data in files, so you may consider setting `User=sanicexample` instead if you need that.

View File

@ -0,0 +1,93 @@
# Getting Started
Before we begin, make sure you are running Python 3.8 or higher. Currently, Sanic works with Python versions 3.8 – 3.11.
## Install
```sh
pip install sanic
```
## Hello, world application
.. column::
If you have ever used one of the many decorator based frameworks, this probably looks somewhat familiar to you.
.. note::
If you are coming from Flask or another framework, there are a few important things to point out. Remember, Sanic aims for performance, flexibility, and ease of use. These guiding principles have tangible impact on the API and how it works.
.. column::
```python
from sanic import Sanic
from sanic.response import text
app = Sanic("MyHelloWorldApp")
@app.get("/")
async def hello_world(request):
return text("Hello, world.")
```
### Important to note
- Every request handler can either be sync (`def hello_world`) or async (`async def hello_world`). Unless you have a clear reason for it, always go with `async`.
- The `request` object is always the first argument of your handler. Other frameworks pass this around in a context variable to be imported. In the `async` world, this would not work so well and it is far easier (not to mention cleaner and more performant) to be explicit about it.
- You **must** use a response type. MANY other frameworks allow you to have a return value like this: `return "Hello, world."` or this: `return {"foo": "bar"}`. But, in order to do this implicit calling, somewhere in the chain needs to spend valuable time trying to determine what you meant. So, at the expense of this ease, Sanic has decided to require an explicit call.
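To make that last point concrete, here is a minimal sketch of the explicit response calls (the app and handler names are illustrative):
```python
from sanic import Sanic
from sanic.response import json, text

app = Sanic("ExplicitResponseExample")

@app.get("/text")
async def hello(request):
    # An explicit helper instead of returning a bare string
    return text("Hello, world.")

@app.get("/json")
async def data(request):
    # An explicit helper instead of returning a bare dict
    return json({"foo": "bar"})
```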
### Running
.. column::
Let's save the above file as `server.py`. And launch it.
.. column::
```sh
sanic server
```
.. note::
This is **another** important distinction. Other frameworks come with a built-in development server and explicitly say that it is _only_ intended for development use. The opposite is true with Sanic.
**The packaged server is production ready.**
## Sanic Extensions
Sanic intentionally aims for a clean and unopinionated feature list. The project does not want to require you to build your application in a certain way, and tries to avoid prescribing specific development patterns. There are a number of third-party plugins that are built and maintained by the community to add additional features that do not otherwise meet the requirements of the core repository.
However, in order **to help API developers**, the Sanic organization maintains an official plugin called [Sanic Extensions](../plugins/sanic-ext/getting-started.md) to provide all sorts of goodies, including:
- **OpenAPI** documentation with Redoc and/or Swagger
- **CORS** protection
- **Dependency injection** into route handlers
- Request query arguments and body input **validation**
- Auto create `HEAD`, `OPTIONS`, and `TRACE` endpoints
- Predefined, endpoint-specific response serializers
The preferred method to set it up is to install it along with Sanic, but you can also install the packages on their own.
.. column::
```sh
pip install sanic[ext]
```
.. column::
```sh
pip install sanic sanic-ext
```
Starting in v21.12, Sanic will automatically set up Sanic Extensions if it is in the same environment. You will also have access to two additional application properties:
- `app.extend()` - used to configure Sanic Extensions
- `app.ext` - the `Extend` instance attached to the application
See [the plugin documentation](../plugins/sanic-ext/getting-started.md) for more information about how to use and work with the plugin.

View File

@ -0,0 +1 @@
# How to ...

View File

@ -0,0 +1,117 @@
# Authentication
> How do I control authentication and authorization?
This is an _extremely_ complicated subject to cram into a few snippets. But, this should provide you with an idea on ways to tackle this problem. This example uses [JWTs](https://jwt.io/), but the concepts should be equally applicable to sessions or some other scheme.
## `server.py`
```python
from sanic import Sanic, text
from auth import protected
from login import login
app = Sanic("AuthApp")
app.config.SECRET = "KEEP_IT_SECRET_KEEP_IT_SAFE"
app.blueprint(login)
@app.get("/secret")
@protected
async def secret(request):
return text("To go fast, you must be fast.")
```
## `login.py`
```python
import jwt
from sanic import Blueprint, text
login = Blueprint("login", url_prefix="/login")
@login.post("/")
async def do_login(request):
token = jwt.encode({}, request.app.config.SECRET)
return text(token)
```
## `auth.py`
```python
from functools import wraps
import jwt
from sanic import text
def check_token(request):
if not request.token:
return False
try:
jwt.decode(
request.token, request.app.config.SECRET, algorithms=["HS256"]
)
except jwt.exceptions.InvalidTokenError:
return False
else:
return True
def protected(wrapped):
def decorator(f):
@wraps(f)
async def decorated_function(request, *args, **kwargs):
is_authenticated = check_token(request)
if is_authenticated:
response = await f(request, *args, **kwargs)
return response
else:
return text("You are unauthorized.", 401)
return decorated_function
return decorator(wrapped)
```
This decorator pattern is taken from the [decorators page](/en/guide/best-practices/decorators.md).
---
```bash
$ curl localhost:9999/secret -i
HTTP/1.1 401 Unauthorized
content-length: 21
connection: keep-alive
content-type: text/plain; charset=utf-8
You are unauthorized.
$ curl localhost:9999/login -X POST
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.e30.rjxS7ztIGt5tpiRWS8BGLUqjQFca4QOetHcZTi061DE
$ curl localhost:9999/secret -i -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.e30.rjxS7ztIGt5tpiRWS8BGLUqjQFca4QOetHcZTi061DE"
HTTP/1.1 200 OK
content-length: 29
connection: keep-alive
content-type: text/plain; charset=utf-8
To go fast, you must be fast.
$ curl localhost:9999/secret -i -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.e30.BAD"
HTTP/1.1 401 Unauthorized
content-length: 21
connection: keep-alive
content-type: text/plain; charset=utf-8
You are unauthorized.
```
Also, checkout some resources from the community:
- Awesome Sanic - [Authorization](https://github.com/mekicha/awesome-sanic/blob/master/README.md#authentication) & [Session](https://github.com/mekicha/awesome-sanic/blob/master/README.md#session)
- [EuroPython 2020 - Overcoming access control in web APIs](https://www.youtube.com/watch?v=Uqgoj43ky6A)

View File

@ -0,0 +1,197 @@
---
title: Autodiscovery
---
# Autodiscovery of Blueprints, Middleware, and Listeners
> How do I autodiscover the components I am using to build my application?
One of the first problems someone faces when building an application is *how* to structure the project. Sanic makes heavy use of decorators to register route handlers, middleware, and listeners. And, after creating blueprints, they need to be mounted to the application.
A possible solution is a single file in which **everything** is imported and applied to the Sanic instance. Another is passing around the Sanic instance as a global variable. Both of these solutions have their drawbacks.
An alternative is autodiscovery. You point your application at modules (already imported, or strings), and let it wire everything up.
## `server.py`
```python
from sanic import Sanic
from sanic.response import empty
import blueprints
from utility import autodiscover
app = Sanic("auto", register=True)
autodiscover(
app,
blueprints,
"parent.child",
"listeners.something",
recursive=True,
)
app.route("/")(lambda _: empty())
```
```bash
[2021-03-02 21:37:02 +0200] [880451] [INFO] Goin' Fast @ http://127.0.0.1:9999
[2021-03-02 21:37:02 +0200] [880451] [DEBUG] something
[2021-03-02 21:37:02 +0200] [880451] [DEBUG] something @ nested
[2021-03-02 21:37:02 +0200] [880451] [DEBUG] something @ level1
[2021-03-02 21:37:02 +0200] [880451] [DEBUG] something @ level3
[2021-03-02 21:37:02 +0200] [880451] [DEBUG] something inside __init__.py
[2021-03-02 21:37:02 +0200] [880451] [INFO] Starting worker [880451]
```
## `utility.py`
```python
from glob import glob
from importlib import import_module, util
from inspect import getmembers
from pathlib import Path
from types import ModuleType
from typing import Union
from sanic.blueprints import Blueprint
def autodiscover(
app, *module_names: Union[str, ModuleType], recursive: bool = False
):
mod = app.__module__
blueprints = set()
_imported = set()
def _find_bps(module):
nonlocal blueprints
for _, member in getmembers(module):
if isinstance(member, Blueprint):
blueprints.add(member)
for module in module_names:
if isinstance(module, str):
module = import_module(module, mod)
_imported.add(module.__file__)
_find_bps(module)
if recursive:
base = Path(module.__file__).parent
for path in glob(f"{base}/**/*.py", recursive=True):
if path not in _imported:
name = "module"
if "__init__" in path:
*_, name, __ = path.split("/")
spec = util.spec_from_file_location(name, path)
specmod = util.module_from_spec(spec)
_imported.add(path)
spec.loader.exec_module(specmod)
_find_bps(specmod)
for bp in blueprints:
app.blueprint(bp)
```
## `blueprints/level1.py`
```python
from sanic import Blueprint
from sanic.log import logger
level1 = Blueprint("level1")
@level1.after_server_start
def print_something(app, loop):
logger.debug("something @ level1")
```
## `blueprints/one/two/level3.py`
```python
from sanic import Blueprint
from sanic.log import logger
level3 = Blueprint("level3")
@level3.after_server_start
def print_something(app, loop):
logger.debug("something @ level3")
```
## `listeners/something.py`
```python
from sanic import Sanic
from sanic.log import logger
app = Sanic.get_app("auto")
@app.after_server_start
def print_something(app, loop):
logger.debug("something")
```
## `parent/child/__init__.py`
```python
from sanic import Blueprint
from sanic.log import logger
bp = Blueprint("__init__")
@bp.after_server_start
def print_something(app, loop):
logger.debug("something inside __init__.py")
```
## `parent/child/nested.py`
```python
from sanic import Blueprint
from sanic.log import logger
nested = Blueprint("nested")
@nested.after_server_start
def print_something(app, loop):
logger.debug("something @ nested")
```
---
```text
here is the dir tree
generated with 'find . -type d -name "__pycache__" -exec rm -rf {} +; tree'
. # run 'sanic server -d' here
├── blueprints
│ ├── __init__.py # you need add this file, just empty
│ ├── level1.py
│ └── one
│ └── two
│ └── level3.py
├── listeners
│ └── something.py
├── parent
│ └── child
│ ├── __init__.py
│ └── nested.py
├── server.py
└── utility.py
```
```sh
source ./.venv/bin/activate # activate the python venv which sanic is installed in
sanic server -d # run this in the directory containing server.py
```
```text
you will see "something ***" like this:
[2023-07-12 11:23:36 +0000] [113704] [DEBUG] something
[2023-07-12 11:23:36 +0000] [113704] [DEBUG] something inside __init__.py
[2023-07-12 11:23:36 +0000] [113704] [DEBUG] something @ level3
[2023-07-12 11:23:36 +0000] [113704] [DEBUG] something @ level1
[2023-07-12 11:23:36 +0000] [113704] [DEBUG] something @ nested
```

View File

@ -0,0 +1,135 @@
---
title: CORS
---
# Cross-origin resource sharing (CORS)
> How do I configure my application for CORS?
.. note::
🏆 The best solution is to use [Sanic Extensions](../../plugins/sanic-ext/http/cors.md).
However, if you would like to build your own version, you could use this limited example as a starting point.
### `server.py`
```python
from sanic import Sanic, text
from cors import add_cors_headers
from options import setup_options
app = Sanic("app")
@app.route("/", methods=["GET", "POST"])
async def do_stuff(request):
return text("...")
# Add OPTIONS handlers to any route that is missing it
app.register_listener(setup_options, "before_server_start")
# Fill in CORS headers
app.register_middleware(add_cors_headers, "response")
```
## `cors.py`
```python
from typing import Iterable
def _add_cors_headers(response, methods: Iterable[str]) -> None:
allow_methods = list(set(methods))
if "OPTIONS" not in allow_methods:
allow_methods.append("OPTIONS")
headers = {
"Access-Control-Allow-Methods": ",".join(allow_methods),
"Access-Control-Allow-Origin": "mydomain.com",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Allow-Headers": (
"origin, content-type, accept, "
"authorization, x-xsrf-token, x-request-id"
),
}
response.headers.extend(headers)
def add_cors_headers(request, response):
if request.method != "OPTIONS":
methods = [method for method in request.route.methods]
_add_cors_headers(response, methods)
```
## `options.py`
```python
from collections import defaultdict
from typing import Dict, FrozenSet
from sanic import Sanic, response
from sanic.router import Route
from cors import _add_cors_headers
def _compile_routes_needing_options(
routes: Dict[str, Route]
) -> Dict[str, FrozenSet]:
needs_options = defaultdict(list)
# This is 21.12 and later. You will need to change this for older versions.
for route in routes.values():
if "OPTIONS" not in route.methods:
needs_options[route.uri].extend(route.methods)
return {
uri: frozenset(methods) for uri, methods in dict(needs_options).items()
}
def _options_wrapper(handler, methods):
def wrapped_handler(request, *args, **kwargs):
nonlocal methods
return handler(request, methods)
return wrapped_handler
async def options_handler(request, methods) -> response.HTTPResponse:
resp = response.empty()
_add_cors_headers(resp, methods)
return resp
def setup_options(app: Sanic, _):
app.router.reset()
needs_options = _compile_routes_needing_options(app.router.routes_all)
for uri, methods in needs_options.items():
app.add_route(
_options_wrapper(options_handler, methods),
uri,
methods=["OPTIONS"],
)
app.router.finalize()
```
---
```
$ curl localhost:9999/ -i
HTTP/1.1 200 OK
Access-Control-Allow-Methods: OPTIONS,POST,GET
Access-Control-Allow-Origin: mydomain.com
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: origin, content-type, accept, authorization, x-xsrf-token, x-request-id
content-length: 3
connection: keep-alive
content-type: text/plain; charset=utf-8
...
$ curl localhost:9999/ -i -X OPTIONS
HTTP/1.1 204 No Content
Access-Control-Allow-Methods: GET,POST,OPTIONS
Access-Control-Allow-Origin: mydomain.com
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: origin, content-type, accept, authorization, x-xsrf-token, x-request-id
connection: keep-alive
```
Also, checkout some resources from the community:
- [Awesome Sanic](https://github.com/mekicha/awesome-sanic/blob/master/README.md#frontend)

View File

@ -0,0 +1 @@
csrf

View File

@ -0,0 +1 @@
connecting to data sources

View File

@ -0,0 +1 @@
decorators

View File

View File

@ -0,0 +1,51 @@
# Application Mounting
> How do I mount my application at some path above the root?
```python
# server.py
from sanic import Sanic, text
app = Sanic("app")
app.config.SERVER_NAME = "example.com/api"
@app.route("/foo")
def handler(request):
url = app.url_for("handler", _external=True)
return text(f"URL: {url}")
```
```yaml
# docker-compose.yml
version: "3.7"
services:
app:
image: nginx:alpine
ports:
- 80:80
volumes:
- type: bind
source: ./conf
target: /etc/nginx/conf.d/default.conf
```
```nginx
# conf
server {
listen 80;
# Computed data service
location /api/ {
proxy_pass http://<YOUR IP ADDRESS>:9999/;
proxy_set_header Host example.com;
}
}
```
```bash
$ docker-compose up -d
$ sanic server.app --port=9999 --host=0.0.0.0
```
```bash
$ curl localhost/api/foo
URL: http://example.com/api/foo
```

View File

@ -0,0 +1,418 @@
# ORM
> How do I use SQLAlchemy with Sanic?
All ORM tools can work with Sanic, but non-async ORM tools have an impact on Sanic's performance.
At present, there are many ORMs that support Python's `async`/`await` keywords. Some possible choices include:
- [Mayim](https://ahopkins.github.io/mayim/)
- [SQLAlchemy 1.4](https://docs.sqlalchemy.org/en/14/changelog/changelog_14.html)
- [tortoise-orm](https://github.com/tortoise/tortoise-orm)
Integration into your Sanic application is fairly simple:
## Mayim
Mayim ships with [an extension for Sanic Extensions](https://ahopkins.github.io/mayim/guide/extensions.html#sanic), which makes it super simple to get started with Sanic. It is certainly possible to run Mayim with Sanic without the extension, but it is recommended because it handles all of the [lifecycle events](https://sanic.dev/en/guide/basics/listeners.html) and [dependency injections](https://sanic.dev/en/plugins/sanic-ext/injection.html).
.. column::
### Dependencies
First, we need to install the required dependencies. See [Mayim docs](https://ahopkins.github.io/mayim/guide/install.html#postgres) for the installation needed for your DB driver.
.. column::
```shell
pip install sanic-ext
pip install mayim[postgres]
```
.. column::
### Define ORM Model
Mayim allows you to use whatever you want for models. Whether it is [dataclasses](https://docs.python.org/3/library/dataclasses.html), [pydantic](https://pydantic-docs.helpmanual.io/), [attrs](https://www.attrs.org/en/stable/), or even just plain `dict` objects. Since it works very nicely [out of the box with Pydantic](https://ahopkins.github.io/mayim/guide/pydantic.html), that is what we will use here.
.. column::
```python
# ./models.py
from pydantic import BaseModel
class City(BaseModel):
id: int
name: str
district: str
population: int
class Country(BaseModel):
code: str
name: str
continent: str
region: str
capital: City
```
.. column::
### Define SQL
If you are unfamiliar, Mayim is different from other ORMs in that it is one-way, SQL-first. This means you define your own queries either inline, or in a separate `.sql` file, which is what we will do here.
.. column::
```sql
-- ./queries/select_all_countries.sql
SELECT country.code,
country.name,
country.continent,
country.region,
(
SELECT row_to_json(q)
FROM (
SELECT city.id,
city.name,
city.district,
city.population
) q
) capital
FROM country
JOIN city ON country.capital = city.id
ORDER BY country.name ASC
LIMIT $limit OFFSET $offset;
```
.. column::
### Create Sanic App and Async Engine
We need to create the app instance and attach the `SanicMayimExtension` with any executors.
.. column::
```python
# ./server.py
from sanic import Sanic, Request, json
from sanic_ext import Extend
from mayim.executor import PostgresExecutor
from mayim.extensions import SanicMayimExtension
from models import Country
class CountryExecutor(PostgresExecutor):
async def select_all_countries(
self, limit: int = 4, offset: int = 0
) -> list[Country]:
...
app = Sanic("Test")
Extend.register(
SanicMayimExtension(
executors=[CountryExecutor],
dsn="postgres://...",
)
)
```
.. column::
### Register Routes
Because we are using Mayim's extension for Sanic, we have the automatic `CountryExecutor` injection into the route handler. It makes for an easy, type-annotated development experience.
.. column::
```python
@app.get("/")
async def handler(request: Request, executor: CountryExecutor):
countries = await executor.select_all_countries()
return json({"countries": [country.dict() for country in countries]})
```
.. column::
### Send Requests
.. column::
```sh
curl 'http://127.0.0.1:8000'
{"countries":[{"code":"AFG","name":"Afghanistan","continent":"Asia","region":"Southern and Central Asia","capital":{"id":1,"name":"Kabul","district":"Kabol","population":1780000}},{"code":"ALB","name":"Albania","continent":"Europe","region":"Southern Europe","capital":{"id":34,"name":"Tirana","district":"Tirana","population":270000}},{"code":"DZA","name":"Algeria","continent":"Africa","region":"Northern Africa","capital":{"id":35,"name":"Alger","district":"Alger","population":2168000}},{"code":"ASM","name":"American Samoa","continent":"Oceania","region":"Polynesia","capital":{"id":54,"name":"Fagatogo","district":"Tutuila","population":2323}}]}
```
## SQLAlchemy
Because [SQLAlchemy 1.4](https://docs.sqlalchemy.org/en/14/changelog/changelog_14.html) has added native support for `asyncio`, Sanic can finally work well with SQLAlchemy. Be aware that this functionality is still considered *beta* by the SQLAlchemy project.
.. column::
### Dependencies
First, we need to install the required dependencies. In the past, the dependencies installed were `sqlalchemy` and `pymysql`, but now `sqlalchemy` and `aiomysql` are needed.
.. column::
```shell
pip install -U sqlalchemy
pip install -U aiomysql
```
.. column::
### Define ORM Model
ORM model creation remains the same.
.. column::
```python
# ./models.py
from sqlalchemy import INTEGER, Column, ForeignKey, String
from sqlalchemy.orm import declarative_base, relationship
Base = declarative_base()
class BaseModel(Base):
__abstract__ = True
id = Column(INTEGER(), primary_key=True)
class Person(BaseModel):
__tablename__ = "person"
name = Column(String())
cars = relationship("Car")
def to_dict(self):
return {"name": self.name, "cars": [{"brand": car.brand} for car in self.cars]}
class Car(BaseModel):
__tablename__ = "car"
brand = Column(String())
user_id = Column(ForeignKey("person.id"))
user = relationship("Person", back_populates="cars")
```
.. column::
### Create Sanic App and Async Engine
Here we use mysql as the database, and you can also choose PostgreSQL/SQLite. Pay attention to changing the driver from `aiomysql` to `asyncpg`/`aiosqlite`.
.. column::
```python
# ./server.py
from sanic import Sanic
from sqlalchemy.ext.asyncio import create_async_engine
app = Sanic("my_app")
bind = create_async_engine("mysql+aiomysql://root:root@localhost/test", echo=True)
```
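For example, the connection strings for the other drivers look like this (hypothetical DSNs; adjust credentials, host, and database name to your setup):
```python
# PostgreSQL via asyncpg
bind = create_async_engine("postgresql+asyncpg://root:root@localhost/test")

# SQLite via aiosqlite
bind = create_async_engine("sqlite+aiosqlite:///./test.db")
```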
.. column::
### Register Middlewares
The request middleware creates a usable `AsyncSession` object and sets it on `request.ctx` and `_base_model_session_ctx`.
The thread-safe variable `_base_model_session_ctx` helps you use the session object without fetching it from `request.ctx`.
.. column::
```python
# ./server.py
from contextvars import ContextVar
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import sessionmaker
_sessionmaker = sessionmaker(bind, AsyncSession, expire_on_commit=False)
_base_model_session_ctx = ContextVar("session")
@app.middleware("request")
async def inject_session(request):
request.ctx.session = _sessionmaker()
request.ctx.session_ctx_token = _base_model_session_ctx.set(request.ctx.session)
@app.middleware("response")
async def close_session(request, response):
if hasattr(request.ctx, "session_ctx_token"):
_base_model_session_ctx.reset(request.ctx.session_ctx_token)
await request.ctx.session.close()
```
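To sketch why the context variable is handy, a helper can pick up the current session without the request being passed in. This assumes the request middleware above has already run for the current request; `count_people` is a hypothetical helper, not part of the original example:
```python
# ./server.py
from sqlalchemy import func, select

from models import Person

async def count_people() -> int:
    # Fetch the session that the request middleware stored for this request
    session = _base_model_session_ctx.get()
    result = await session.execute(select(func.count(Person.id)))
    return result.scalar_one()
```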
.. column::
### Register Routes
According to the SQLAlchemy official docs, `session.query` will be legacy in 2.0, and the 2.0 way to query an ORM object is using `select`.
.. column::
```python
# ./server.py
from sqlalchemy import select
from sqlalchemy.orm import selectinload
from sanic.response import json
from models import Car, Person
@app.post("/user")
async def create_user(request):
session = request.ctx.session
async with session.begin():
car = Car(brand="Tesla")
person = Person(name="foo", cars=[car])
session.add_all([person])
return json(person.to_dict())
@app.get("/user/<pk:int>")
async def get_user(request, pk):
session = request.ctx.session
async with session.begin():
stmt = select(Person).where(Person.id == pk).options(selectinload(Person.cars))
result = await session.execute(stmt)
person = result.scalar()
if not person:
return json({})
return json(person.to_dict())
```
.. column::
### Send Requests
.. column::
```sh
curl --location --request POST 'http://127.0.0.1:8000/user'
{"name":"foo","cars":[{"brand":"Tesla"}]}
```
```sh
curl --location --request GET 'http://127.0.0.1:8000/user/1'
{"name":"foo","cars":[{"brand":"Tesla"}]}
```
## Tortoise-ORM
.. column::
### Dependencies
Tortoise-ORM's dependencies are very simple; you just need to install tortoise-orm.
.. column::
```shell
pip install -U tortoise-orm
```
.. column::
### Define ORM Model
If you are familiar with Django, you should find this part very familiar.
.. column::
```python
# ./models.py
from tortoise import Model, fields
class Users(Model):
id = fields.IntField(pk=True)
name = fields.CharField(50)
def __str__(self):
return f"I am {self.name}"
```
.. column::
### Create Sanic App and Async Engine
Tortoise-ORM provides a registration interface that is convenient for users, and you can use it to create a database connection easily.
.. column::
```python
# ./main.py
from models import Users
from tortoise.contrib.sanic import register_tortoise
from sanic import Sanic

app = Sanic(__name__)
register_tortoise(
app, db_url="mysql://root:root@localhost/test", modules={"models": ["models"]}, generate_schemas=True
)
```
.. column::
### Register Routes
.. column::
```python
# ./main.py
from models import Users
from sanic import Sanic, response
@app.route("/user")
async def list_all(request):
users = await Users.all()
return response.json({"users": [str(user) for user in users]})
@app.route("/user/<pk:int>")
async def get_user(request, pk):
user = await Users.get(pk=pk)
return response.json({"user": str(user)})
if __name__ == "__main__":
app.run(port=8000)
```
.. column::
### Send Requests
.. column::
```sh
curl --location --request GET 'http://127.0.0.1:8000/user'
{"users":["I am foo", "I am bar"]}
```
```sh
curl --location --request GET 'http://127.0.0.1:8000/user/1'
{"user": "I am foo"}
```

View File

@ -0,0 +1 @@
# Serialization

View File

@ -0,0 +1 @@
sse

View File

@ -0,0 +1,112 @@
# "Static" Redirects
> How do I configure static redirects?
## `app.py`
```python
### SETUP ###
import typing
import sanic, sanic.response
# Create the Sanic app
app = sanic.Sanic(__name__)
# This dictionary represents your "static"
# redirects. For example, these values
# could be pulled from a configuration file.
REDIRECTS = {
'/':'/hello_world', # Redirect '/' to '/hello_world'
'/hello_world':'/hello_world.html' # Redirect '/hello_world' to 'hello_world.html'
}
# This function will return another function
# that will return the configured value
# regardless of the arguments passed to it.
def get_static_function(value:typing.Any) -> typing.Callable[..., typing.Any]:
return lambda *_, **__: value
### ROUTING ###
# Iterate through the redirects
for src, dest in REDIRECTS.items():
# Create the redirect response object
response:sanic.HTTPResponse = sanic.response.redirect(dest)
# Create the handler function. Typically,
# only a sanic.Request object is passed
# to the function. This object will be
# ignored.
handler = get_static_function(response)
# Route the src path to the handler
app.route(src)(handler)
# Route some file and client resources
app.static('/files/', 'files')
app.static('/', 'client')
### RUN ###
if __name__ == '__main__':
app.run(
'127.0.0.1',
10000
)
```
## `client/hello_world.html`
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Hello World</title>
</head>
<link rel="stylesheet" href="/hello_world.css">
<body>
<div id='hello_world'>
Hello world!
</div>
</body>
</html>
```
## `client/hello_world.css`
```css
#hello_world {
width: 1000px;
margin-left: auto;
margin-right: auto;
margin-top: 100px;
padding: 100px;
color: aqua;
text-align: center;
font-size: 100px;
font-family: monospace;
background-color: rgba(0, 0, 0, 0.75);
border-radius: 10px;
box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.75);
}
body {
background-image: url("/files/grottoes.jpg");
background-repeat: no-repeat;
background-size: cover;
}
```
## `files/grottoes.jpg`
![lake](/assets/images/grottoes.jpg)
---
Also, checkout some resources from the community:
- [Static Routing Example](https://github.com/Perzan/sanic-static-routing-example)

View File

@ -0,0 +1,13 @@
# Table of Contents
We have compiled fully working examples to answer common questions and use cases. For the most part, the examples are as minimal as possible, but should be complete and runnable solutions.
| Page | How do I ... |
|:-----|:------------|
| [Application mounting](./mounting.md) | ... mount my application at some path above the root? |
| [Authentication](./authentication.md) | ... control authentication and authorization? |
| [Autodiscovery](./autodiscovery.md) | ... autodiscover the components I am using to build my application? |
| [CORS](./cors.md) | ... configure my application for CORS? |
| [ORM](./orm) | ... use an ORM with Sanic? |
| ["Static" Redirects](./static-redirects.md) | ... configure static redirects? |
| [TLS/SSL/HTTPS](./tls.md) | ... run Sanic via HTTPS?<br> ... redirect HTTP to HTTPS? |

View File

@ -0,0 +1 @@
task queue

View File

@ -0,0 +1,176 @@
# TLS/SSL/HTTPS
> How do I run Sanic via HTTPS?
If you do not have TLS certificates yet, [see the end of this page](./tls.md#get-certificates-for-your-domain-names).
## Single domain and single certificate
.. column::
Let Sanic automatically load your certificate files, which need to be named `fullchain.pem` and `privkey.pem` in the given folder:
.. column::
```sh
sudo sanic myserver:app -H :: -p 443 \
--tls /etc/letsencrypt/live/example.com/
```
```python
app.run("::", 443, ssl="/etc/letsencrypt/live/example.com/")
```
.. column::
Or, you can pass cert and key filenames separately as a dictionary:
Additionally, `password` may be added if the key is encrypted. All fields except for the password are passed to `request.conn_info.cert`.
.. column::
```python
ssl = {
"cert": "/path/to/fullchain.pem",
"key": "/path/to/privkey.pem",
"password": "for encrypted privkey file", # Optional
}
app.run(host="0.0.0.0", port=8443, ssl=ssl)
```
.. column::
Alternatively, [`ssl.SSLContext`](https://docs.python.org/3/library/ssl.html) may be passed, if you need full control over details such as which crypto algorithms are permitted. By default Sanic only allows secure algorithms, which may restrict access from very old devices.
.. column::
```python
import ssl
context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
context.load_cert_chain("certs/fullchain.pem", "certs/privkey.pem")
app.run(host="0.0.0.0", port=8443, ssl=context)
```
## Multiple domains with separate certificates
.. column::
A list of multiple certificates may be provided, in which case Sanic chooses the one matching the hostname the user is connecting to. This occurs so early in the TLS handshake that Sanic has not sent any packets to the client yet.
If the client sends no SNI (Server Name Indication), the first certificate on the list will be used even though on the client browser it will likely fail with a TLS error due to name mismatch. To prevent this fallback and to cause immediate disconnection of clients without a known hostname, add `None` as the first entry on the list. `--tls-strict-host` is the equivalent CLI option.
.. column::
```python
ssl = ["certs/example.com/", "certs/bigcorp.test/"]
app.run(host="0.0.0.0", port=8443, ssl=ssl)
```
```sh
sanic myserver:app
--tls certs/example.com/
--tls certs/bigcorp.test/
--tls-strict-host
```
.. tip::
You may also use `None` in front of a single certificate if you do not wish to reveal your certificate, true hostname or site content to anyone connecting to the IP address instead of the proper DNS name.
.. column::
Dictionaries can be used in the list. This also allows specifying which domains a certificate matches, although the names present on the certificate itself cannot be controlled from here. If names are not specified, the names from the certificate itself are used.
To only allow connections to the main domain **example.com** and only to subdomains of **bigcorp.test**:
.. column::
```python
ssl = [
None, # No fallback if names do not match!
{
"cert": "certs/example.com/fullchain.pem",
"key": "certs/example.com/privkey.pem",
"names": ["example.com", "*.bigcorp.test"],
}
]
app.run(host="0.0.0.0", port=8443, ssl=ssl)
```
## Accessing TLS information in handlers via `request.conn_info` fields
* `.ssl` - is the connection secure (bool)
* `.cert` - certificate info and dict fields of the currently active cert (dict)
* `.server_name` - the SNI sent by the client (str, may be empty)
Do note that all `conn_info` fields are per connection, where there may be many requests over time. If a proxy is used in front of your server, these requests on the same pipe may even come from different users.
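A minimal sketch of a handler that echoes these fields back (the app and route names are illustrative):
```python
from sanic import Sanic, json

app = Sanic("TLSInfoExample")

@app.get("/tls-info")
async def tls_info(request):
    info = request.conn_info
    return json({
        "secure": info.ssl,        # bool
        "sni": info.server_name,   # str, may be empty
        "cert": info.cert,         # dict for the currently active cert
    })
```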
## Redirect HTTP to HTTPS, with certificate requests still over HTTP
In addition to your normal server(s) running HTTPS, run another server for redirection, `http_redir.py`:
```python
from sanic import Sanic, exceptions, response
app = Sanic("http_redir")
# Serve ACME/certbot files without HTTPS, for certificate renewals
app.static("/.well-known", "/var/www/.well-known", resource_type="dir")
@app.exception(exceptions.NotFound, exceptions.MethodNotSupported)
def redirect_everything_else(request, exception):
server, path = request.server_name, request.path
if server and path.startswith("/"):
return response.redirect(f"https://{server}{path}", status=308)
return response.text("Bad Request. Please use HTTPS!", status=400)
```
It is best to set this up as a systemd unit separate from your HTTPS servers. You may need to run HTTP while initially requesting your certificates, while you cannot run the HTTPS server yet. Start for IPv4 and IPv6:
```
sanic http_redir:app -H 0.0.0.0 -p 80
sanic http_redir:app -H :: -p 80
```
Alternatively, it is possible to run the HTTP redirect application from the main application:
```python
# app == Your main application
# redirect == Your http_redir application
@app.before_server_start
async def start(app, _):
app.ctx.redirect = await redirect.create_server(
port=80, return_asyncio_server=True
)
app.add_task(runner(redirect, app.ctx.redirect))
@app.before_server_stop
async def stop(app, _):
await app.ctx.redirect.close()
async def runner(app, app_server):
app.is_running = True
try:
app.signalize()
app.finalize()
app.state.is_started = True
await app_server.serve_forever()
finally:
app.is_running = False
app.is_stopping = True
```
## Get certificates for your domain names
You can get free certificates from [Let's Encrypt](https://letsencrypt.org/). Install [certbot](https://certbot.eff.org/) via your package manager, and request a certificate:
```sh
sudo certbot certonly --key-type ecdsa --preferred-chain "ISRG Root X1" -d example.com -d www.example.com
```
Multiple domain names may be added by further `-d` arguments, all stored into a single certificate which gets saved to `/etc/letsencrypt/live/example.com/` as per **the first domain** that you list here.
The key type and preferred chain options are necessary for getting a minimal size certificate file, essential for making your server run as *fast* as possible. The chain will still contain one RSA certificate until Let's Encrypt gets their new EC chain trusted in all major browsers.

View File

@ -0,0 +1 @@
validation

View File

@ -0,0 +1 @@
websocket feed

View File

@ -0,0 +1,67 @@
# Introduction
Sanic is a Python 3.8+ web server and web framework that's written to go fast. It allows the usage of the async/await syntax added in Python 3.5, which makes your code non-blocking and speedy.
.. attrs::
:class: introduction-table
| | |
|--|--|
| Build | [![Tests](https://github.com/sanic-org/sanic/actions/workflows/tests.yml/badge.svg?branch=main)](https://github.com/sanic-org/sanic/actions/workflows/tests.yml) |
| Docs | [![User Guide](https://img.shields.io/badge/user%20guide-sanic-ff0068)](https://sanicframework.org/) [![Documentation](https://readthedocs.org/projects/sanic/badge/?version=latest)](http://sanic.readthedocs.io/en/latest/?badge=latest) |
| Package | [![PyPI](https://img.shields.io/pypi/v/sanic.svg)](https://pypi.python.org/pypi/sanic/) [![PyPI version](https://img.shields.io/pypi/pyversions/sanic.svg)](https://pypi.python.org/pypi/sanic/) [![Wheel](https://img.shields.io/pypi/wheel/sanic.svg)](https://pypi.python.org/pypi/sanic) [![Supported implementations](https://img.shields.io/pypi/implementation/sanic.svg)](https://pypi.python.org/pypi/sanic) [![Code style black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/ambv/black) |
| Support | [![Forums](https://img.shields.io/badge/forums-community-ff0068.svg)](https://community.sanicframework.org/) [![Discord](https://img.shields.io/discord/812221182594121728?logo=discord)](https://discord.gg/FARQzAEMAA) [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/mekicha/awesome-sanic) |
| Stats | [![Monthly Downloads](https://img.shields.io/pypi/dm/sanic.svg)](https://pepy.tech/project/sanic) [![Weekly Downloads](https://img.shields.io/pypi/dw/sanic.svg)](https://pepy.tech/project/sanic) [![Conda downloads](https://img.shields.io/conda/dn/conda-forge/sanic.svg)](https://anaconda.org/conda-forge/sanic) |
## What is it?
First things first, before you jump in the water, you should know that Sanic is different than other frameworks.
Right there in that first sentence there is a huge mistake because Sanic is _both_ a **framework** and a **web server**. In the deployment section we will talk a little bit more about this.
But, remember, out of the box Sanic comes with everything you need to write, deploy, and scale a production grade web application. :rocket:
## Goal
> To provide a simple way to get up and running a highly performant HTTP server that is easy to build, to expand, and ultimately to scale.
## Features
.. column::
### Core
- Built in, **_fast_** web server
- Production ready
- Highly scalable
- ASGI compliant
- Simple and intuitive API design
- By the community, for the community
.. column::
### Sanic Extensions [[learn more](../plugins/sanic-ext/getting-started.md)]
- CORS protection
- Template rendering with Jinja
- Dependency injection into route handlers
- OpenAPI documentation with Redoc and/or Swagger
- Predefined, endpoint-specific response serializers
- Request query arguments and body input validation
- Auto create `HEAD`, `OPTIONS`, and `TRACE` endpoints
## Sponsor
Check out [open collective](https://opencollective.com/sanic-org) to learn more about helping to fund Sanic.
## Join the Community
The main channel for discussion is at the [community forums](https://community.sanicframework.org/). There also is a [Discord Server](https://discord.gg/FARQzAEMAA) for live discussion and chat.
The Stackoverflow `[sanic]` tag is [actively monitored](https://stackoverflow.com/questions/tagged/sanic) by project maintainers.
## Contribution
We are always happy to have new contributions. We have [marked issues good for anyone looking to get started](https://github.com/sanic-org/sanic/issues?q=is%3Aopen+is%3Aissue+label%3Abeginner), and welcome [questions/answers/discussion on the forums](https://community.sanicframework.org/). Please take a look at our [Contribution guidelines](https://github.com/sanic-org/sanic/blob/master/CONTRIBUTING.rst).

View File

@ -0,0 +1,85 @@
# Dynamic Applications
Running Sanic has been optimized to work with the CLI. If you have not read it yet, you should read [Running Sanic](./running.md#sanic-server) to become familiar with the options.
.. column::
This includes running it as a global scope object...
.. column::
```sh
sanic path.to.server:app
```
```python
# server.py
app = Sanic("TestApp")
@app.get("/")
async def handler(request: Request):
return json({"foo": "bar"})
```
.. column::
...or, a factory function that creates the `Sanic` application object.
.. column::
```sh
sanic path.to.server:create_app --factory
```
```python
# server.py
def create_app():
app = Sanic("TestApp")
@app.get("/")
async def handler(request: Request):
return json({"foo": "bar"})
return app
```
**Sometimes, this is not enough ... 🤔**
Introduced in [v22.9](../release-notes/v22.9.md), Sanic has an `AppLoader` object that is responsible for creating an application in the various [worker processes](./manager.md#how-sanic-server-starts-processes). You can take advantage of this if you need to create a more dynamic startup experience for your application.
.. column::
An `AppLoader` can be passed a callable that returns a `Sanic` instance. That `AppLoader` could be used with the low-level application running API.
.. column::
```python
import sys
from functools import partial
from sanic import Request, Sanic, json
from sanic.worker.loader import AppLoader
def attach_endpoints(app: Sanic):
@app.get("/")
async def handler(request: Request):
return json({"app_name": request.app.name})
def create_app(app_name: str) -> Sanic:
app = Sanic(app_name)
attach_endpoints(app)
return app
if __name__ == "__main__":
app_name = sys.argv[-1]
loader = AppLoader(factory=partial(create_app, app_name))
app = loader.load()
app.prepare(port=9999, dev=True)
Sanic.serve(primary=app, app_loader=loader)
```
```sh
python path/to/server.py MyTestAppName
```
In the above example, the `AppLoader` is created with a `factory` that can be used to create copies of the same application across processes. When doing this, you should explicitly use the `Sanic.serve` pattern shown above so that the `AppLoader` that you create is not replaced.


@ -0,0 +1,291 @@
# Configuration
## Basics
.. column::
Sanic holds the configuration in the config attribute of the application object. The configuration object is merely an object that can be modified either using dot-notation or like a dictionary.
.. column::
```python
app = Sanic("myapp")
app.config.DB_NAME = "appdb"
app.config["DB_USER"] = "appuser"
```
.. column::
You can also use the `update()` method like on regular dictionaries.
.. column::
```python
db_settings = {
'DB_HOST': 'localhost',
'DB_NAME': 'appdb',
'DB_USER': 'appuser'
}
app.config.update(db_settings)
```
.. note::
It is standard practice in Sanic to name your config values in **uppercase letters**. Indeed, you may experience weird behaviors if you start mixing uppercase and lowercase names.
## Loading
### Environment variables
.. column::
Any environment variables defined with the `SANIC_` prefix will be applied to the Sanic config. For example, setting `SANIC_REQUEST_TIMEOUT` will be loaded by the application automatically and fed into the `REQUEST_TIMEOUT` config variable.
.. column::
```bash
$ export SANIC_REQUEST_TIMEOUT=10
```
```python
>>> print(app.config.REQUEST_TIMEOUT)
10
```
.. column::
You can change the prefix that Sanic is expecting at startup.
.. column::
```bash
$ export MYAPP_REQUEST_TIMEOUT=10
```
```python
>>> app = Sanic(__name__, env_prefix='MYAPP_')
>>> print(app.config.REQUEST_TIMEOUT)
10
```
.. column::
You can also disable environment variable loading completely.
.. column::
```python
app = Sanic(__name__, load_env=False)
```
### Using Sanic.update_config
The `Sanic` instance has a _very_ versatile method for loading config: `app.update_config`. You can feed it a path to a file, a dictionary, a class, or just about any other sort of object.
#### From a file
.. column::
Let's say you have `my_config.py` file that looks like this.
.. column::
```python
# my_config.py
A = 1
B = 2
```
.. column::
You can load this as config values by passing its path to `app.update_config`.
.. column::
```python
>>> app.update_config("/path/to/my_config.py")
>>> print(app.config.A)
1
```
.. column::
This path also accepts bash style environment variables.
.. column::
```bash
$ export my_path="/path/to"
```
```python
app.update_config("${my_path}/my_config.py")
```
.. note::
Just remember that environment variables must be given in the format `${environment_variable}`; the bare form `$environment_variable` is not expanded and is treated as plain text.
#### From a dict
.. column::
The `app.update_config` method also works on plain dictionaries.
.. column::
```python
app.update_config({"A": 1, "B": 2})
```
#### From a class or object
.. column::
You can define your own config class, and pass it to `app.update_config`
.. column::
```python
class MyConfig:
A = 1
B = 2
app.update_config(MyConfig)
```
.. column::
It can even be instantiated.
.. column::
```python
app.update_config(MyConfig())
```
### Type casting
When loading from environment variables, Sanic will attempt to cast the values to expected Python types. This particularly applies to:
- `int`
- `float`
- `bool`
In regards to `bool`, the following _case insensitive_ values are allowed:
- **`True`**: `y`, `yes`, `yep`, `yup`, `t`, `true`, `on`, `enable`, `enabled`, `1`
- **`False`**: `n`, `no`, `f`, `false`, `off`, `disable`, `disabled`, `0`
If a value cannot be cast, it will default to a `str`.
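For example, an illustrative sketch of this casting in action, using two of the builtin values listed further down:

```bash
$ export SANIC_KEEP_ALIVE=off
$ export SANIC_REQUEST_MAX_SIZE=200000000
```

```python
>>> print(type(app.config.KEEP_ALIVE), app.config.KEEP_ALIVE)
<class 'bool'> False
>>> print(type(app.config.REQUEST_MAX_SIZE), app.config.REQUEST_MAX_SIZE)
<class 'int'> 200000000
```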
.. column::
Additionally, Sanic can be configured to cast other types by supplying extra type converters. A converter should be any callable that returns the converted value or raises a `ValueError`.
*Added in v21.12*
.. column::
```python
app = Sanic(..., config=Config(converters=[UUID]))
```
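As a slightly fuller sketch (the `SERVICE_ID` variable name here is only illustrative), a converter lets an environment value arrive in config as a richer type:

```bash
$ export SANIC_SERVICE_ID=3fa85f64-5717-4562-b3fc-2c963f66afa6
```

```python
from uuid import UUID

from sanic import Sanic
from sanic.config import Config

# UUID is registered as an extra converter alongside the builtin
# int/float/bool casting; values it cannot parse fall back to str
app = Sanic("ConverterExample", config=Config(converters=[UUID]))

print(type(app.config.SERVICE_ID))  # expected: <class 'uuid.UUID'>
```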
## Builtin values
| **Variable** | **Default** | **Description** |
|---------------------------|------------------|---------------------------------------------------------------------------------------------------------------------------------------|
| ACCESS_LOG | True | Disable or enable access log |
| AUTO_EXTEND | True | Control whether [Sanic Extensions](../../plugins/sanic-ext/getting-started.md) will load if it is in the existing virtual environment |
| AUTO_RELOAD | True | Control whether the application will automatically reload when a file changes |
| EVENT_AUTOREGISTER | True | When `True` using the `app.event()` method on a non-existing signal will automatically create it and not raise an exception |
| FALLBACK_ERROR_FORMAT | html | Format of error response if an exception is not caught and handled |
| FORWARDED_FOR_HEADER | X-Forwarded-For | The name of "X-Forwarded-For" HTTP header that contains client and proxy ip |
| FORWARDED_SECRET | None | Used to securely identify a specific proxy server (see below) |
| GRACEFUL_SHUTDOWN_TIMEOUT | 15.0 | How long to wait to force close non-idle connection (sec) |
| INSPECTOR | False | Whether to enable the Inspector |
| INSPECTOR_HOST | localhost | The host for the Inspector |
| INSPECTOR_PORT | 6457 | The port for the Inspector |
| INSPECTOR_TLS_KEY | - | The TLS key for the Inspector |
| INSPECTOR_TLS_CERT | - | The TLS certificate for the Inspector |
| INSPECTOR_API_KEY | - | The API key for the Inspector |
| KEEP_ALIVE_TIMEOUT | 120 | How long to hold a TCP connection open (sec) |
| KEEP_ALIVE | True | Disables keep-alive when False |
| MOTD | True | Whether to display the MOTD (message of the day) at startup |
| MOTD_DISPLAY | {} | Key/value pairs to display additional, arbitrary data in the MOTD |
| NOISY_EXCEPTIONS | False | Force all `quiet` exceptions to be logged |
| PROXIES_COUNT | None | The number of proxy servers in front of the app (e.g. nginx; see below) |
| REAL_IP_HEADER | None | The name of "X-Real-IP" HTTP header that contains real client ip |
| REGISTER | True | Whether the app registry should be enabled |
| REQUEST_BUFFER_SIZE       | 65536            | Request buffer size before request is paused, default is 64 KiB                                                                        |
| REQUEST_ID_HEADER | X-Request-ID | The name of "X-Request-ID" HTTP header that contains request/correlation ID |
| REQUEST_MAX_SIZE | 100000000 | How big a request may be (bytes), default is 100 megabytes |
| REQUEST_MAX_HEADER_SIZE | 8192 | How big a request header may be (bytes), default is 8192 bytes |
| REQUEST_TIMEOUT | 60 | How long a request can take to arrive (sec) |
| RESPONSE_TIMEOUT | 60 | How long a response can take to process (sec) |
| USE_UVLOOP | True | Whether to override the loop policy to use `uvloop`. Supported only with `app.run`. |
| WEBSOCKET_MAX_SIZE | 2^20 | Maximum size for incoming messages (bytes) |
| WEBSOCKET_PING_INTERVAL | 20 | A Ping frame is sent every ping_interval seconds. |
| WEBSOCKET_PING_TIMEOUT | 20 | Connection is closed when Pong is not received after ping_timeout seconds |
.. tip:: FYI
- The `USE_UVLOOP` value will be ignored if running with Gunicorn. Defaults to `False` on non-supported platforms (Windows).
- The `WEBSOCKET_` values will be ignored if in ASGI mode.
- v21.12 added: `AUTO_EXTEND`, `MOTD`, `MOTD_DISPLAY`, `NOISY_EXCEPTIONS`
- v22.9 added: `INSPECTOR`
- v22.12 added: `INSPECTOR_HOST`, `INSPECTOR_PORT`, `INSPECTOR_TLS_KEY`, `INSPECTOR_TLS_CERT`, `INSPECTOR_API_KEY`
## Timeouts
### REQUEST_TIMEOUT
A request timeout measures the duration of time between the instant when a new open TCP connection is passed to the
Sanic backend server, and the instant when the whole HTTP request is received. If the time taken exceeds the
`REQUEST_TIMEOUT` value (in seconds), this is considered a Client Error so Sanic generates an `HTTP 408` response
and sends that to the client. Set this parameter's value higher if your clients routinely pass very large request payloads
or upload requests very slowly.
### RESPONSE_TIMEOUT
A response timeout measures the duration of time between the instant the Sanic server passes the HTTP request to the Sanic App, and the instant an HTTP response is sent to the client. If the time taken exceeds the `RESPONSE_TIMEOUT` value (in seconds), this is considered a Server Error so Sanic generates an `HTTP 503` response and sends that to the client. Set this parameter's value higher if your application is likely to have long-running processes that delay the generation of a response.
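For instance, a minimal sketch of loosening both timeouts for an application that accepts slow uploads and runs long handlers (the values are purely illustrative):

```python
from sanic import Sanic

app = Sanic("TimeoutExample")
app.config.REQUEST_TIMEOUT = 120   # allow slow clients up to 2 minutes to send the request
app.config.RESPONSE_TIMEOUT = 180  # allow handlers up to 3 minutes to produce a response
```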
### KEEP_ALIVE_TIMEOUT
#### What is Keep Alive? And what does the Keep Alive Timeout value do?
`Keep-Alive` is an HTTP feature introduced in `HTTP 1.1`. When sending an HTTP request, the client (usually a web browser application) can set a `Keep-Alive` header to indicate to the HTTP server (Sanic) that it should not close the TCP connection after it has sent the response. This allows the client to reuse the existing TCP connection to send subsequent HTTP requests, and ensures more efficient network traffic for both the client and the server.
The `KEEP_ALIVE` config variable is set to `True` in Sanic by default. If you don't need this feature in your application, set it to `False` to cause all client connections to close immediately after a response is sent, regardless of the `Keep-Alive` header on the request.
The amount of time the server holds the TCP connection open is decided by the server itself. In Sanic, that value is configured using the `KEEP_ALIVE_TIMEOUT` value. By default, **it is set to 120 seconds**. This means that if the client sends a `Keep-Alive` header, the server will hold the TCP connection open for 120 seconds after sending the response, and the client can reuse the connection to send another HTTP request within that time.
For reference:
* Apache httpd server default keepalive timeout = 5 seconds
* Nginx server default keepalive timeout = 75 seconds
* Nginx performance tuning guidelines uses keepalive = 15 seconds
* Caddy server default keepalive timeout = 120 seconds
* IE (5-9) client hard keepalive limit = 60 seconds
* Firefox client hard keepalive limit = 115 seconds
* Opera 11 client hard keepalive limit = 120 seconds
* Chrome 13+ client keepalive limit > 300+ seconds
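To adjust either behavior described above, a short sketch (values are illustrative):

```python
from sanic import Sanic

app = Sanic("KeepAliveExample")

# Hold idle connections open a little longer than the Nginx default
app.config.KEEP_ALIVE_TIMEOUT = 75

# ...or opt out of keep-alive entirely
# app.config.KEEP_ALIVE = False
```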
## Proxy configuration
See [proxy configuration section](/guide/advanced/proxy-headers.md)


@ -0,0 +1,126 @@
# Development
The first thing that should be mentioned is that the webserver that is integrated into Sanic is **not** just a development server.
It is production ready out-of-the-box, *unless you enable debug mode*.
## Debug mode
When debug mode is enabled, Sanic will be more verbose in its output and will disable several run-time optimizations.
```python
# server.py
from sanic import Sanic
from sanic.response import json
app = Sanic(__name__)
@app.route("/")
async def hello_world(request):
return json({"hello": "world"})
```
```sh
sanic server:app --host=0.0.0.0 --port=1234 --debug
```
.. danger::
Sanic's debug mode will slow down the server's performance, and is **NOT** intended for production environments.
**DO NOT** enable debug mode in production.
## Automatic Reloader
.. column::
Sanic offers a way to enable or disable the Automatic Reloader. The easiest way to enable it is using the CLI's `--reload` argument. Every time a Python file is changed, the reloader will restart your application automatically. This is very convenient while developing.
.. note::
The reloader is only available when using Sanic's [worker manager](./manager.md). If you have disabled it using `--single-process` then the reloader will not be available to you.
.. column::
```sh
sanic path.to:app --reload
```
You can also use the shorthand property
```sh
sanic path.to:app -r
```
.. column::
If you have additional directories that you would like to automatically reload on file save (for example, a directory of HTML templates), you can add that using `--reload-dir`.
.. column::
```sh
sanic path.to:app --reload --reload-dir=/path/to/templates
```
Or multiple directories, shown here using the shorthand properties
```sh
sanic path.to:app -r -R /path/to/one -R /path/to/two
```
## Best of both worlds
.. column::
If you would like to be in debug mode **and** have the Automatic Reloader running, you can pass `dev=True`. This is equivalent to **debug + auto reload**.
*Added in v22.3*
.. column::
```sh
sanic path.to:app --dev
```
You can also use the shorthand property
```sh
sanic path.to:app -d
```
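The same can be done in code when using the low-level `app.run`, since `dev` is also accepted there as a keyword argument (a minimal sketch):

```python
# server.py
from sanic import Sanic

app = Sanic("DevExample")

if __name__ == "__main__":
    app.run(dev=True)  # debug + auto reload
```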
## Automatic TLS certificate
When running in `DEBUG` mode, you can ask Sanic to handle setting up localhost temporary TLS certificates. This is helpful if you want to access your local development environment with `https://`.
This functionality is provided by either [mkcert](https://github.com/FiloSottile/mkcert) or [trustme](https://github.com/python-trio/trustme). Both are good choices, but there are some differences. `trustme` is a Python library and can be installed into your environment with `pip`. This makes for easy environment handling, but it is not compatible when running an HTTP/3 server. `mkcert` may be a more involved installation process, but it can install a local CA and make the certificates easier to use.
.. column::
You can choose which platform to use by setting `config.LOCAL_CERT_CREATOR`. When set to `"auto"`, it will select either option, preferring `mkcert` if possible.
.. column::
```python
app.config.LOCAL_CERT_CREATOR = "auto"
app.config.LOCAL_CERT_CREATOR = "mkcert"
app.config.LOCAL_CERT_CREATOR = "trustme"
```
.. column::
Automatic TLS can be enabled at Sanic server run time:
.. column::
```sh
sanic path.to.server:app --auto-tls --debug
```
.. warning::
Localhost TLS certificates (like those generated by both `mkcert` and `trustme`) are **NOT** suitable for production environments. If you are not familiar with how to obtain a *real* TLS certificate, checkout the [How to...](../how-to/tls.md) section.
*Added in v22.6*


@ -0,0 +1,214 @@
# Inspector
The Sanic Inspector is a feature of Sanic Server. It is *only* available when running Sanic with the built-in [worker manager](./manager.md).
It is an HTTP application that *optionally* runs in the background of your application to allow you to interact with the running instance of your application.
.. tip:: INFO
The Inspector was introduced in limited capacity in v22.9, but the documentation on this page assumes you are using v22.12 or higher.
## Getting Started
The inspector is disabled by default. To enable it, you have two options.
.. column::
Set a flag when creating your application instance.
.. column::
```python
app = Sanic("TestApp", inspector=True)
```
.. column::
Or, set a configuration value.
.. column::
```python
app = Sanic("TestApp")
app.config.INSPECTOR = True
```
.. warning::
If you are using the configuration value, it *must* be done early and before the main worker process starts. This means that it should either be an environment variable, or it should be set shortly after creating the application instance as shown above.
## Using the Inspector
Once the inspector is running, you will have access to it via the CLI or by directly accessing its web API via HTTP.
.. column::
**Via CLI**
```sh
sanic inspect
```
.. column::
**Via HTTP**
```sh
curl http://localhost:6457
```
.. note::
Remember, the Inspector is not running on your Sanic application. It is a separate process, with a separate application, and exposed on a separate socket.
## Built-in Commands
The Inspector comes with the following built-in commands.
| CLI Command | HTTP Action | Description |
|--------------------|------------------------------------|--------------------------------------------------------------------------|
| `inspect` | `GET /` | Display basic details about the running application. |
| `inspect reload` | `POST /reload` | Trigger a reload of all server workers. |
| `inspect shutdown` | `POST /shutdown` | Trigger a shutdown of all processes. |
| `inspect scale N` | `POST /scale`<br>`{"replicas": N}` | Scale the number of workers. Where `N` is the target number of replicas. |
## Custom Commands
The Inspector is easily extendable to add custom commands (and endpoints).
.. column::
Subclass the `Inspector` class and create arbitrary methods. As long as the method name is not prefixed with an underscore (`_`), the name of the method will become a new subcommand on the inspector.
.. column::
```python
from sanic import Sanic, json
from sanic.worker.inspector import Inspector
class MyInspector(Inspector):
async def something(self, *args, **kwargs):
print(args)
print(kwargs)
app = Sanic("TestApp", inspector_class=MyInspector, inspector=True)
```
This will expose custom methods in the general pattern:
- CLI: `sanic inspect <method_name>`
- HTTP: `POST /<method_name>`
It is important to note that the arguments that the new method accepts are derived from how you intend to use the command. For example, the above `something` method accepts all positional and keyword based parameters.
.. column::
In the CLI, the positional and keyword parameters are passed as either positional or keyword arguments to your method. All values will be a `str` with the following exceptions:
- A keyword parameter with no assigned value will be: `True`
- Unless the parameter is prefixed with `no-`, then it will be: `False`
.. column::
```sh
sanic inspect something one two three --four --no-five --six=6
```
In your application log console, you will see:
```
('one', 'two', 'three')
{'four': True, 'five': False, 'six': '6'}
```
.. column::
The same can be achieved by hitting the API directly. You can pass arguments to the method by exposing them in a JSON payload. The only thing to note is that the positional arguments should be exposed as `{"args": [...]}`.
.. column::
```sh
curl http://localhost:6457/something \
--json '{"args":["one", "two", "three"], "four":true, "five":false, "six":6}'
```
In your application log console, you will see:
```
('one', 'two', 'three')
{'four': True, 'five': False, 'six': 6}
```
## Using in production
.. danger::
Before exposing the Inspector in production, please consider all of the options in this section carefully.
When running Inspector on a remote production instance, you can protect the endpoints by requiring TLS encryption, and requiring API key authentication.
### TLS encryption
.. column::
To serve the Inspector HTTP instance over TLS, pass the paths to your certificate and key.
.. column::
```python
app.config.INSPECTOR_TLS_CERT = "/path/to/cert.pem"
app.config.INSPECTOR_TLS_KEY = "/path/to/key.pem"
```
.. column::
This will require use of the `--secure` flag, or `https://`.
.. column::
```sh
sanic inspect --secure --host=<somewhere>
```
```sh
curl https://<somewhere>:6457
```
### API Key Authentication
.. column::
You can secure the API with bearer token authentication.
.. column::
```python
app.config.INSPECTOR_API_KEY = "Super-Secret-200"
```
.. column::
This will require the `--api-key` parameter, or bearer token authorization header.
.. column::
```sh
sanic inspect --api-key=Super-Secret-200
```
```sh
curl http://localhost:6457 -H "Authorization: Bearer Super-Secret-200"
```
## Configuration
See [configuration](./configuration.md)


@ -0,0 +1,343 @@
# Worker Manager
The worker manager and its functionality were introduced in version 22.9.
*The details of this section are intended for more advanced usages and **not** necessary to get started.*
The purpose of the manager is to create consistency and flexibility between development and production environments. Whether you intend to run a single worker, or multiple workers, whether with, or without auto-reload: the experience will be the same.
In general it looks like this:
![](https://user-images.githubusercontent.com/166269/178677618-3b4089c3-6c6a-4ecc-8d7a-7eba2a7f29b0.png)
When you run Sanic, the main process instantiates a `WorkerManager`. That manager is in charge of running one or more `WorkerProcess`. There generally are two kinds of processes:
- server processes, and
- non-server processes.
For the sake of ease, the User Guide generally will use the term "worker" or "worker process" to mean a server process, and "Manager" to mean the single worker manager running in your main process.
## How Sanic Server starts processes
Sanic will start processes using the [spawn](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods) start method. This means that for every process/worker, the global scope of your application will be run again in that process. The practical impact of this is that *if* you do not run Sanic with the CLI, you will need to nest the execution code inside a block to make sure it only runs on `__main__`.
```python
if __name__ == "__main__":
app.run()
```
If you do not, you are likely to see an error message like this:
```
sanic.exceptions.ServerError: Sanic server could not start: [Errno 98] Address already in use.
This may have happened if you are running Sanic in the global scope and not inside of a `if __name__ == "__main__"` block.
See more information: https://sanic.dev/en/guide/deployment/manager.html#how-sanic-server-starts-processes
```
The likely fix for this problem is nesting your Sanic run call inside of the `__name__ == "__main__"` block. If you continue to receive this message after nesting, or if you see this while using the CLI, then it means the port you are trying to use is not available on your machine and you must select another port.
### Starting a worker
All worker processes *must* send an acknowledgement when starting. This happens under the hood, and you as a developer do not need to do anything. However, the Manager will exit with a status code `1` if one or more workers do not send that `ack` message, or a worker process throws an exception while trying to start. If no exceptions are encountered, the Manager will wait for up to thirty (30) seconds for the acknowledgement.
.. column::
In the situation when you know that you will need more time to start, you can monkeypatch the Manager. The threshold does not include anything inside of a listener, and is limited to the execution time of everything in the global scope of your application.
If you run into this issue, it may indicate a need to look deeper into what is causing the slow startup.
.. column::
```python
from sanic.worker.manager import WorkerManager
WorkerManager.THRESHOLD = 100 # Value is in 0.1s
```
See [worker ack](#worker-ack) for more information.
.. column::
As stated above, Sanic will use [spawn](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods) to start worker processes. If you would like to change this behavior and are aware of the implications of using different start methods, you can modify it as shown here.
.. column::
```python
from sanic import Sanic
Sanic.start_method = "fork"
```
### Worker ack
When all of your workers are running in a subprocess, a potential problem is created: deadlock. This can occur when the child processes cease to function, but the main process is unaware that this happened. Therefore, Sanic servers will automatically send an `ack` message (short for acknowledge) to the main process after startup.
In version 22.9, the `ack` timeout was short and limited to `5s`. In version 22.12, the timeout was lengthened to `30s`. If your application is being shut down after thirty seconds because a worker has not yet acknowledged, then it might be necessary to manually increase this threshold.
.. column::
The value of `WorkerManager.THRESHOLD` is in `0.1s` increments. Therefore, to set it to one minute, you should set the value to `600`.
This value should be set as early as possible in your application, and should ideally happen in the global scope. Setting it after the main process has started will not work.
.. column::
```python
from sanic.worker.manager import WorkerManager
WorkerManager.THRESHOLD = 600
```
### Zero downtime restarts
By default, when restarting workers, Sanic will tear down the existing process first before starting a new one.
If you are intending to use the restart functionality in production, then you may be interested in having zero-downtime reloading. This can be accomplished by forcing the reloader to change the order: start a new process, wait for it to [ack](#worker-ack), and then tear down the old process.
.. column::
From the multiplexer, use the `zero_downtime` argument
.. column::
```python
app.m.restart(zero_downtime=True)
```
*Added in v22.12*
## Using shared context between worker processes
Python provides a few methods for [exchanging objects](https://docs.python.org/3/library/multiprocessing.html#exchanging-objects-between-processes), [synchronizing](https://docs.python.org/3/library/multiprocessing.html#synchronization-between-processes), and [sharing state](https://docs.python.org/3/library/multiprocessing.html#sharing-state-between-processes) between processes. This usually involves objects from the `multiprocessing` and `ctypes` modules.
If you are familiar with these objects and how to work with them, you will be happy to know that Sanic provides an API for sharing these objects between your worker processes. If you are not familiar, you are encouraged to read through the Python documentation linked above and try some of the examples before proceeding with implementing shared context.
Similar to how [application context](../basics/app.md#application-context) allows an application to share state across the lifetime of the application with `app.ctx`, shared context provides the same for the special objects mentioned above. This context is available as `app.shared_ctx` and should **ONLY** be used to share objects intended for this purpose.
The `shared_ctx` will:
- *NOT* share regular objects like `int`, `dict`, or `list`
- *NOT* share state between Sanic instances running on different machines
- *NOT* share state to non-worker processes
- **only** share state between server workers managed by the same Manager
Attaching an inappropriate object to `shared_ctx` will likely result in a warning, not an error. Be careful not to accidentally add an unsafe object to `shared_ctx`, as it may not work as expected. If you were directed here because of one of those warnings, you may have accidentally used an unsafe object in `shared_ctx`.
.. column::
In order to create a shared object you **must** create it in the main process and attach it inside of the `main_process_start` listener.
.. column::
```python
from multiprocessing import Queue
@app.main_process_start
async def main_process_start(app):
app.shared_ctx.queue = Queue()
```
Trying to attach to the `shared_ctx` object outside of this listener may result in a `RuntimeError`.
.. column::
After creating the objects in the `main_process_start` listener and attaching to the `shared_ctx`, they will be available in your workers wherever the application instance is available (example: listeners, middleware, request handlers).
.. column::
```python
from multiprocessing import Queue
@app.get("")
async def handler(request):
request.app.shared_ctx.queue.put(1)
...
```
## Access to the multiplexer
The application instance has access to an object that provides access to interacting with the Manager and other worker processes. The object is attached as the `app.multiplexer` property, but it is more easily accessed by its alias: `app.m`.
.. column::
For example, you can get access to the current worker state.
.. column::
```python
@app.on_request
async def print_state(request: Request):
print(request.app.m.name)
print(request.app.m.pid)
print(request.app.m.state)
```
```
Sanic-Server-0-0
99999
{'server': True, 'state': 'ACKED', 'pid': 99999, 'start_at': datetime.datetime(2022, 10, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc), 'starts': 2, 'restart_at': datetime.datetime(2022, 10, 1, 0, 0, 12, 861332, tzinfo=datetime.timezone.utc)}
```
.. column::
The `multiplexer` also has access to terminate the Manager, or restart worker processes.
.. column::
```python
# shutdown the entire application and all processes
app.m.name.terminate()
# restart the current worker only
app.m.name.restart()
# restart specific workers only (comma delimited)
app.m.name.restart("Sanic-Server-4-0,Sanic-Server-7-0")
# restart ALL workers
app.m.name.restart(all_workers=True) # Available v22.12+
```
## Worker state
.. column::
As shown above, the `multiplexer` has access to report upon the state of the current running worker. However, it also contains the state for ALL processes running.
.. column::
```python
@app.on_request
async def print_state(request: Request):
print(request.app.m.workers)
```
```
{
'Sanic-Main': {'pid': 99997},
'Sanic-Server-0-0': {
'server': True,
'state': 'ACKED',
'pid': 9999,
'start_at': datetime.datetime(2022, 10, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc),
'starts': 2,
'restart_at': datetime.datetime(2022, 10, 1, 0, 0, 12, 861332, tzinfo=datetime.timezone.utc)
},
'Sanic-Reloader-0': {
'server': False,
'state': 'STARTED',
'pid': 99998,
'start_at': datetime.datetime(2022, 10, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc),
'starts': 1
}
}
```
## Built-in non-server processes
As mentioned, the Manager also has the ability to run non-server processes. Sanic comes with two built-in types of non-server processes, and allows for [creating custom processes](#running-custom-processes).
The two built-in processes are
- the [auto-reloader](./development.md#automatic-reloader), optionally enabled to watch the file system for changes and trigger a restart
- [inspector](#inspector), optionally enabled to provide external access to the state of the running instance
## Inspector
Sanic has the ability to expose the state and the functionality of the `multiplexer` to the CLI. Currently, this requires the CLI command to be run on the same machine as the running Sanic instance. By default the inspector is disabled.
.. column::
To enable it, set the config value to `True`.
.. column::
```python
app.config.INSPECTOR = True
```
You will now have access to execute any of these CLI commands:
```
sanic inspect reload Trigger a reload of the server workers
sanic inspect shutdown Shutdown the application and all processes
sanic inspect scale N Scale the number of workers to N
sanic inspect <custom> Run a custom command
```
![](https://user-images.githubusercontent.com/166269/190099384-2f2f3fae-22d5-4529-b279-8446f6b5f9bd.png)
.. column::
This works by exposing a small HTTP service on your machine. You can control the location using configuration values:
.. column::
```python
app.config.INSPECTOR_HOST = "localhost"
app.config.INSPECTOR_PORT = 6457
```
[Learn more](./inspector.md) to find out what is possible with the Inspector.
## Running custom processes
To run a managed custom process on Sanic, you must create a callable. If that process is meant to be long-running, then it should handle shutdown when it receives a `SIGINT` or `SIGTERM` signal.
.. column::
The simplest method for doing that in Python is to wrap your loop in a `try`/`except` block that catches `KeyboardInterrupt`.
If you intend to run another application, like a bot, then it is likely that it already has capability to handle this signal and you likely do not need to do anything.
.. column::
```python
from time import sleep
def my_process(foo):
try:
while True:
sleep(1)
except KeyboardInterrupt:
print("done")
```
.. column::
That callable must be registered in the `main_process_ready` listener. It is important to note that this is **NOT** the same location where you should register [shared context](#using-shared-context-between-worker-processes) objects.
.. column::
```python
@app.main_process_ready
async def ready(app: Sanic, _):
# app.manager.manage(<name>, <callable>, <kwargs>)
app.manager.manage("MyProcess", my_process, {"foo": "bar"})
```
## Single process mode
.. column::
If you would like to opt out of running multiple processes, you can run Sanic in a single process only. In this case, the Manager will not run. You will also not have access to any features that require processes (auto-reload, the inspector, etc).
.. column::
```sh
sanic path.to.server:app --single-process
```
```python
if __name__ == "__main__":
app.run(single_process=True)
```
```python
if __name__ == "__main__":
app.prepare(single_process=True)
Sanic.serve_single()
```


@ -0,0 +1,521 @@
# Running Sanic
Sanic ships with its own internal web server. Under most circumstances, this is the preferred method for deployment. In addition, you can also deploy Sanic as an ASGI app bundled with an ASGI-able web server.
## Sanic Server
The main way to run Sanic is to use the included [CLI](#sanic-cli).
```sh
sanic path.to.server:app
```
In this example, Sanic is instructed to look for a python module called `path.to.server`. Inside of that module, it will look for a global variable called `app`, which should be an instance of `Sanic(...)`.
```python
# ./path/to/server.py
from sanic import Sanic, Request, json
app = Sanic("TestApp")
@app.get("/")
async def handler(request: Request):
return json({"foo": "bar"})
```
You may also drop down to the [lower level API](#low-level-apprun) to call `app.run` as a script. However, if you choose this option you should be more comfortable handling issues that may arise with `multiprocessing`.
### Workers
.. column::
By default, Sanic runs a main process and a single worker process (see [worker manager](./manager.md) for more details).
To crank up the juice, just specify the number of workers in the run arguments.
.. column::
```sh
sanic server:app --host=0.0.0.0 --port=1337 --workers=4
```
Sanic will automatically spin up multiple processes and route traffic between them. We recommend as many workers as you have available processors.
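As an illustrative sketch of that recommendation using the low-level API (the `workers` keyword argument is covered in the run table further down):

```python
# server.py
import os

from sanic import Sanic

app = Sanic("WorkerExample")

if __name__ == "__main__":
    # One worker per available processor, falling back to a single worker
    app.run(host="0.0.0.0", port=1337, workers=os.cpu_count() or 1)
```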
.. column::
The easiest way to get the maximum CPU performance is to use the `--fast` option. This will automatically run the maximum number of workers given the system constraints.
*Added in v21.12*
.. column::
```sh
sanic server:app --host=0.0.0.0 --port=1337 --fast
```
In version 22.9, Sanic introduced a new worker manager to provide more consistency and flexibility between development and production servers. Read [about the manager](./manager.md) for more details about workers.
.. column::
If you only want to run Sanic with a single process, specify `single_process` in the run arguments. This means that auto-reload, and the worker manager will be unavailable.
*Added in v22.9*
.. column::
```sh
sanic server:app --host=0.0.0.0 --port=1337 --single-process
```
### Running via command
#### Sanic CLI
Use `sanic --help` to see all the options.
.. attrs::
:title: Sanic CLI help output
:class: details
```text
$ sanic --help
▄███ █████ ██ ▄█▄ ██ █ █ ▄██████████
██ █ █ █ ██ █ █ ██
▀███████ ███▄ ▀ █ █ ██ ▄ █ ██
██ █████████ █ ██ █ █ ▄▄
████ ████████▀ █ █ █ ██ █ ▀██ ███████
To start running a Sanic application, provide a path to the module, where
app is a Sanic() instance:
$ sanic path.to.server:app
Or, a path to a callable that returns a Sanic() instance:
$ sanic path.to.factory:create_app --factory
Or, a path to a directory to run as a simple HTTP server:
$ sanic ./path/to/static --simple
Required
========
Positional:
module Path to your Sanic app. Example: path.to.server:app
If running a Simple Server, path to directory to serve. Example: ./
Optional
========
General:
-h, --help show this help message and exit
--version show program's version number and exit
Application:
--factory Treat app as an application factory, i.e. a () -> <Sanic app> callable
-s, --simple Run Sanic as a Simple Server, and serve the contents of a directory
(module arg should be a path)
--inspect Inspect the state of a running instance, human readable
--inspect-raw Inspect the state of a running instance, JSON output
--trigger-reload Trigger worker processes to reload
--trigger-shutdown Trigger all processes to shutdown
HTTP version:
--http {1,3} Which HTTP version to use: HTTP/1.1 or HTTP/3. Value should
be either 1, or 3. [default 1]
-1 Run Sanic server using HTTP/1.1
-3 Run Sanic server using HTTP/3
Socket binding:
-H HOST, --host HOST
Host address [default 127.0.0.1]
-p PORT, --port PORT
Port to serve on [default 8000]
-u UNIX, --unix UNIX
location of unix socket
TLS certificate:
--cert CERT Location of fullchain.pem, bundle.crt or equivalent
--key KEY Location of privkey.pem or equivalent .key file
--tls DIR TLS certificate folder with fullchain.pem and privkey.pem
May be specified multiple times to choose multiple certificates
--tls-strict-host Only allow clients that send an SNI matching server certs
Worker:
-w WORKERS, --workers WORKERS
Number of worker processes [default 1]
--fast Set the number of workers to max allowed
--single-process Do not use multiprocessing, run server in a single process
--legacy Use the legacy server manager
--access-logs Display access logs
--no-access-logs No display access logs
Development:
--debug Run the server in debug mode
-r, --reload, --auto-reload
Watch source directory for file changes and reload on changes
-R PATH, --reload-dir PATH
Extra directories to watch and reload on changes
-d, --dev debug + auto reload
--auto-tls Create a temporary TLS certificate for local development (requires mkcert or trustme)
Output:
--coffee Uhm, coffee?
--no-coffee No uhm, coffee?
--motd Show the startup display
--no-motd No show the startup display
-v, --verbosity Control logging noise, eg. -vv or --verbosity=2 [default 0]
--noisy-exceptions Output stack traces for all exceptions
--no-noisy-exceptions
No output stack traces for all exceptions
```
#### As a module
.. column::
Sanic applications can also be called directly as a module.
.. column::
```bash
python -m sanic server.app --host=0.0.0.0 --port=1337 --workers=4
```
#### Using a factory
A very common solution is to develop your application *not* as a global variable, but instead using the factory pattern. In this context, "factory" means a function that returns an instance of `Sanic(...)`.
.. column::
Suppose that you have this in your `server.py`
.. column::
```python
from sanic import Sanic
def create_app() -> Sanic:
app = Sanic("MyApp")
return app
```
.. column::
You can run this application now by referencing it in the CLI explicitly as a factory:
.. column::
```sh
sanic server:create_app --factory
```
Or, explicitly like this:
```sh
sanic "server:create_app()"
```
Or, implicitly like this:
```sh
sanic server:create_app
```
*Implicit command added in v23.3*
### Low level `app.run`
When using `app.run` you will just call your Python file like any other script.
.. column::
`app.run` must be properly nested inside of a name-main block.
.. column::
```python
# server.py
from sanic import Sanic

app = Sanic("MyApp")
if __name__ == "__main__":
app.run()
```
.. danger::
Be *careful* when using this pattern. A very common mistake is to put too much logic inside of the `if __name__ == "__main__":` block.
🚫 This is a mistake
```python
from sanic import Sanic
from my.other.module import bp
app = Sanic("MyApp")
if __name__ == "__main__":
app.blueprint(bp)
app.run()
```
If you do this, your [blueprint](../best-practices/blueprints.md) will not be attached to your application. This is because the `__main__` block will only run on Sanic's main worker process, **NOT** any of its [worker processes](../deployment/manager.md). This goes for anything else that might impact your application (like attaching listeners, signals, middleware, etc). The only safe operations are anything that is meant for the main process, like the `app.main_*` listeners.
Perhaps something like this is more appropriate:
```python
from sanic import Sanic
from my.other.module import bp
app = Sanic("MyApp")
if __name__ == "__mp_main__":
app.blueprint(bp)
elif __name__ == "__main__":
app.run()
```
To use the low-level `run` API, after defining an instance of `sanic.Sanic`, we can call the run method with the following keyword arguments:
| Parameter | Default | Description |
| :-------------------: | :--------------: | :---------------------------------------------------------------------------------------- |
| **host** | `"127.0.0.1"` | Address to host the server on. |
| **port** | `8000` | Port to host the server on. |
| **unix** | `None` | Unix socket name to host the server on (instead of TCP). |
| **dev** | `False` | Equivalent to `debug=True` and `auto_reload=True`. |
| **debug** | `False` | Enables debug output (slows server). |
| **ssl** | `None` | SSLContext for SSL encryption of worker(s). |
| **sock** | `None` | Socket for the server to accept connections from. |
| **workers** | `1` | Number of worker processes to spawn. Cannot be used with fast. |
| **loop** | `None` | An asyncio-compatible event loop. If none is specified, Sanic creates its own event loop. |
| **protocol** | `HttpProtocol` | Subclass of asyncio.protocol. |
| **version** | `HTTP.VERSION_1` | The HTTP version to use (`HTTP.VERSION_1` or `HTTP.VERSION_3`). |
| **access_log** | `True` | Enables log on handling requests (significantly slows server). |
| **auto_reload** | `None` | Enables auto-reload on the source directory. |
| **reload_dir** | `None` | A path or list of paths to directories the auto-reloader should watch. |
| **noisy_exceptions** | `None` | Whether to set noisy exceptions globally. None means leave as default. |
| **motd** | `True` | Whether to display the startup message. |
| **motd_display** | `None` | A dict with extra key/value information to display in the startup message |
| **fast** | `False` | Whether to maximize worker processes. Cannot be used with workers. |
| **verbosity** | `0` | Level of logging detail. Max is 2. |
| **auto_tls** | `False` | Whether to auto-create a TLS certificate for local development. Not for production. |
| **single_process** | `False` | Whether to run Sanic in a single process. |
.. column::
For example, we can turn off the access log in order to increase performance, and bind to a custom host and port.
.. column::
```python
# server.py
from sanic import Sanic

app = Sanic("MyApp")
if __name__ == "__main__":
app.run(host='0.0.0.0', port=1337, access_log=False)
```
.. column::
Now, just execute the python script that has `app.run(...)`
.. column::
```sh
python server.py
```
For a slightly more advanced implementation, it is good to know that `app.run` will call `app.prepare` and `Sanic.serve` under the hood.
.. column::
Therefore, these are equivalent:
.. column::
```python
if __name__ == "__main__":
app.run(host='0.0.0.0', port=1337, access_log=False)
```
```python
if __name__ == "__main__":
app.prepare(host='0.0.0.0', port=1337, access_log=False)
Sanic.serve()
```
.. column::
This can be useful if you need to bind your application(s) to multiple ports.
.. column::
```python
if __name__ == "__main__":
app1.prepare(host='0.0.0.0', port=9990)
app1.prepare(host='0.0.0.0', port=9991)
app2.prepare(host='0.0.0.0', port=5555)
Sanic.serve()
```
### Sanic Simple Server
.. column::
Sometimes you just have a directory of static files that need to be served. This can be especially handy for quickly standing up a localhost server. Sanic ships with a Simple Server, where you only need to point it at a directory.
.. column::
```sh
sanic ./path/to/dir --simple
```
.. column::
This could also be paired with auto-reloading.
.. column::
```sh
sanic ./path/to/dir --simple --reload --reload-dir=./path/to/dir
```
*Added in v21.6*
### HTTP/3
Sanic server offers HTTP/3 support using [aioquic](https://github.com/aiortc/aioquic). This **must** be installed to use HTTP/3:
```sh
pip install sanic aioquic
```
```sh
pip install sanic[http3]
```
To start HTTP/3, you must explicitly request it when running your application.
.. column::
```sh
sanic path.to.server:app --http=3
```
```sh
sanic path.to.server:app -3
```
.. column::
```python
app.run(version=3)
```
To run both an HTTP/3 and HTTP/1.1 server simultaneously, you can use [application multi-serve](../release-notes/v22.3.html#application-multi-serve) introduced in v22.3. This will automatically add an [Alt-Svc](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Alt-Svc) header to your HTTP/1.1 requests to let the client know that it is also available as HTTP/3.
.. column::
```sh
sanic path.to.server:app --http=3 --http=1
```
```sh
sanic path.to.server:app -3 -1
```
.. column::
```python
app.prepare(version=3)
app.prepare(version=1)
Sanic.serve()
```
Because HTTP/3 requires TLS, you cannot start an HTTP/3 server without a TLS certificate. You should [set it up yourself](../how-to/tls.html) or use `mkcert` if in `DEBUG` mode. Currently, automatic TLS setup for HTTP/3 is not compatible with `trustme`. See [development](./development.md) for more details.
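For example, a sketch of launching HTTP/3 with an existing certificate, using the `--cert`/`--key` options shown in the CLI help above (the paths are placeholders):

```sh
sanic path.to.server:app --http=3 \
    --cert=/path/to/fullchain.pem \
    --key=/path/to/privkey.pem
```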
*Added in v22.6*
## ASGI
Sanic is also ASGI-compliant. This means you can use your preferred ASGI webserver to run Sanic. The three main implementations of ASGI are [Daphne](http://github.com/django/daphne), [Uvicorn](https://www.uvicorn.org/), and [Hypercorn](https://pgjones.gitlab.io/hypercorn/index.html).
.. warning::
Daphne does not support the ASGI `lifespan` protocol, and therefore cannot be used to run Sanic. See [Issue #264](https://github.com/django/daphne/issues/264) for more details.
Follow their documentation for the proper way to run them, but it should look something like:
```sh
uvicorn myapp:app
```
```sh
hypercorn myapp:app
```
A couple things to note when using ASGI:
1. When using the Sanic webserver, websockets will run using the `websockets` package. In ASGI mode, there is no need for this package since websockets are managed in the ASGI server.
2. The [ASGI lifespan protocol](https://asgi.readthedocs.io/en/latest/specs/lifespan.html) supports only two server events: startup and shutdown. Sanic has four: before startup, after startup, before shutdown, and after shutdown. Therefore, in ASGI mode, the startup and shutdown events will run consecutively and not actually around the server process beginning and ending (since that is now controlled by the ASGI server). It is therefore best to use `after_server_start` and `before_server_stop`.
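For example, a minimal sketch of attaching startup and shutdown logic to those two listeners (the pool helpers here are hypothetical placeholders):

```python
from sanic import Sanic

app = Sanic("ASGIExample")

@app.after_server_start
async def open_resources(app, loop):
    app.ctx.pool = await create_pool()  # hypothetical helper

@app.before_server_stop
async def close_resources(app, loop):
    await app.ctx.pool.close()  # hypothetical helper
```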
### Trio
Sanic has experimental support for running on Trio with:
```sh
hypercorn -k trio myapp:app
```
## Gunicorn
[Gunicorn](http://gunicorn.org/) ("Green Unicorn") is a WSGI HTTP Server for UNIX-based operating systems. It is a pre-fork worker model ported from Ruby's Unicorn project.
In order to run a Sanic application with Gunicorn, you need to use it with the adapter from [uvicorn](https://www.uvicorn.org/). Make sure uvicorn is installed and run it with `uvicorn.workers.UvicornWorker` for Gunicorn's `worker-class` argument:
```sh
gunicorn myapp:app --bind 0.0.0.0:1337 --worker-class uvicorn.workers.UvicornWorker
```
See the [Gunicorn Docs](http://docs.gunicorn.org/en/latest/settings.html#max-requests) for more information.
.. warning::
It is generally advised to not use `gunicorn` unless you need it. The Sanic Server is primed for running Sanic in production. Weigh your considerations carefully before making this choice. Gunicorn does provide a lot of configuration options, but it is not the best choice for getting Sanic to run at its fastest.
## Performance considerations
.. column::
When running in production, make sure you turn off `debug`.
.. column::
```sh
sanic path.to.server:app
```
.. column::
Sanic will also perform fastest if you turn off `access_log`.
If you still require access logs, but want to enjoy this performance boost, consider using [Nginx as a proxy](./nginx.md), and letting that handle your access logging. It will be much faster than anything Python can handle.
.. column::
```sh
sanic path.to.server:app --no-access-logs
```

27
guide/content/en/help.md Normal file

@ -0,0 +1,27 @@
---
layout: main
---
# Need some help?
As an active community of developers, we try to support each other. If you need some help, try one of the following:
.. column::
### Discord 💬
Best place to turn for quick answers and live chat
`#sanic-support` channel on the [Discord server](https://discord.gg/FARQzAEMAA)
.. column::
### Community Forums 👥
Good for sharing snippets of code and longer support queries
`Questions and Help` category on the [Forums](https://community.sanicframework.org/c/questions-and-help/6)
---
We also actively monitor the `[sanic]` tag on [Stack Overflow](https://stackoverflow.com/questions/tagged/sanic).

311
guide/content/en/index.md Normal file

@ -0,0 +1,311 @@
---
layout: home
features:
- title: Simple and lightweight
details: Intuitive API with smart defaults and no bloat allows you to get straight to work building your app.
- title: Unopinionated and flexible
details: Build the way you want to build without letting your tooling constrain you.
- title: Performant and scalable
details: Built from the ground up with speed and scalability as a main concern. It is ready to power web applications big and small.
- title: Production ready
details: Out of the box, it comes bundled with a web server ready to power your web applications.
- title: Trusted by millions
details: Sanic is one of the overall most popular frameworks on PyPI, and the top async enabled framework
- title: Community driven
details: The project is maintained and run by the community for the community.
---
## ⚡ The lightning-fast asynchronous Python web framework
.. attrs::
:class: columns is-multiline mt-6
.. attrs::
:class: column is-4
#### Simple and lightweight
Intuitive API with smart defaults and no bloat allows you to get straight to work building your app.
.. attrs::
:class: column is-4
#### Unopinionated and flexible
Build the way you want to build without letting your tooling constrain you.
.. attrs::
:class: column is-4
#### Performant and scalable
Built from the ground up with speed and scalability as a main concern. It is ready to power web applications big and small.
.. attrs::
:class: column is-4
#### Production ready
Out of the box, it comes bundled with a web server ready to power your web applications.
.. attrs::
:class: column is-4
#### Trusted by millions
Sanic is one of the overall most popular frameworks on PyPI, and the top async enabled framework
.. attrs::
:class: column is-4
#### Community driven
The project is maintained and run by the community for the community.
.. attrs::
:class: is-size-3 mt-6
**With the features and tools you'd expect.**
.. attrs::
:class: is-size-3 ml-6
**And some {span:has-text-primary:you wouldn't believe}.**
.. tab:: Production-grade
After installing, Sanic has all the tools you need for a scalable, production-grade server—out of the box!
Including [full TLS support](/en/guide/how-to/tls).
```python
from sanic import Sanic
from sanic.response import text
app = Sanic("MyHelloWorldApp")
@app.get("/")
async def hello_world(request):
return text("Hello, world.")
```
```sh
sanic path.to.server:app
[2023-01-31 12:34:56 +0000] [999996] [INFO] Sanic v22.12.0
[2023-01-31 12:34:56 +0000] [999996] [INFO] Goin' Fast @ http://127.0.0.1:8000
[2023-01-31 12:34:56 +0000] [999996] [INFO] mode: production, single worker
[2023-01-31 12:34:56 +0000] [999996] [INFO] server: sanic, HTTP/1.1
[2023-01-31 12:34:56 +0000] [999996] [INFO] python: 3.10.9
[2023-01-31 12:34:56 +0000] [999996] [INFO] platform: SomeOS-9.8.7
[2023-01-31 12:34:56 +0000] [999996] [INFO] packages: sanic-routing==22.8.0
[2023-01-31 12:34:56 +0000] [999997] [INFO] Starting worker [999997]
```
.. tab:: TLS server
Running Sanic with TLS enabled is as simple as passing it the file paths...
```sh
sanic path.to.server:app --cert=/path/to/bundle.crt --key=/path/to/privkey.pem
```
... or a directory containing `fullchain.pem` and `privkey.pem`
```sh
sanic path.to.server:app --tls=/path/to/certs
```
**Even better**, while you are developing, let Sanic handle setting up local TLS certificates so you can access your site over TLS at [https://localhost:8443](https://localhost:8443)
```sh
sanic path.to.server:app --dev --auto-tls
```
.. tab:: Websockets
Up and running with websockets in no time using the [websockets](https://websockets.readthedocs.io) package.
```python
from sanic import Request, Websocket
@app.websocket("/feed")
async def feed(request: Request, ws: Websocket):
async for msg in ws:
await ws.send(msg)
```
.. tab:: Static files
Serving static files is of course intuitive and easy. Just name an endpoint and either a file or directory that should be served.
```python
app.static("/", "/path/to/index.html")
app.static("/uploads/", "/path/to/uploads/")
```
Moreover, serving a directory has two additional features: automatically serving an index, and automatically serving a file browser.
Sanic can automatically serve `index.html` (or any other named file) as an index page in a directory or its subdirectories.
```python
app.static(
"/uploads/",
"/path/to/uploads/",
index="index.html"
)
```
And/or, setup Sanic to display a file browser.
![image](/assets/images/directory-view.png)
```python
app.static(
"/uploads/",
"/path/to/uploads/",
directory_view=True
)
```
.. tab:: Lifecycle
Beginning or ending a route with functionality is as simple as adding a decorator.
```python
@app.on_request
async def add_key(request):
request.ctx.foo = "bar"
@app.on_response
async def custom_banner(request, response):
response.headers["X-Foo"] = request.ctx.foo
```
Same with server events.
```python
@app.before_server_start
async def setup_db(app):
app.ctx.db_pool = await db_setup()
@app.after_server_stop
async def setup_db(app):
await app.ctx.db_pool.shutdown()
```
But, Sanic also allows you to tie into a bunch of built-in events (called signals), or create and dispatch your own.
```python
@app.signal("http.lifecycle.complete") # built-in
async def my_signal_handler(conn_info):
print("Connection has been closed")
@app.signal("something.happened.ohmy") # custom
async def my_signal_handler():
print("something happened")
await app.dispatch("something.happened.ohmy")
```
.. tab:: Smart error handling
Raising errors will intuitively result in proper HTTP errors:
```python
raise sanic.exceptions.NotFound # Automatically responds with HTTP 404
```
Or, make your own:
```python
from sanic.exceptions import SanicException
class TeapotError(SanicException):
status_code = 418
message = "Sorry, I cannot brew coffee"
raise TeapotError
```
And, when an error does happen, Sanic's beautiful DEV mode error page will help you drill down to the bug quickly.
![image](../assets/images/error-div-by-zero.png)
Regardless, Sanic comes with an algorithm that attempts to respond with HTML, JSON, or text-based errors as appropriate. Don't worry, it is super easy to set up and customize your error handling to your exact needs.
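For instance, a short sketch of one way to customize handling with the `@app.exception` decorator (the handler body is illustrative):

```python
from sanic import Sanic, text
from sanic.exceptions import NotFound

app = Sanic("ErrorHandlingExample")

@app.exception(NotFound)
async def ignore_404s(request, exception):
    # Respond with a friendly message instead of the default 404 page
    return text(f"Could not find: {request.url}", status=404)
```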
.. tab:: App Inspector
Check in on your live, running applications (whether local or remote).
```sh
sanic inspect
┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Sanic │
│ Inspecting @ http://localhost:6457 │
├───────────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────┤
│ │ mode: production, single worker │
│ ▄███ █████ ██ │ server: unknown │
│ ██ │ python: 3.10.9 │
│ ▀███████ ███▄ │ platform: SomeOS-9.8.7
│ ██ │ packages: sanic==22.12.0, sanic-routing==22.8.0, sanic-testing==22.12.0, sanic-ext==22.12.0 │
│ ████ ████████▀ │ │
│ │ │
│ Build Fast. Run Fast. │ │
└───────────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────┘
Sanic-Main
pid: 999996
Sanic-Server-0-0
server: True
state: ACKED
pid: 999997
start_at: 2023-01-31T12:34:56.00000+00:00
starts: 1
Sanic-Inspector-0
server: False
state: STARTED
pid: 999998
start_at: 2023-01-31T12:34:56.00000+00:00
starts: 1
```
And, issue commands like `reload`, `shutdown`, `scale`...
```sh
sanic inspect scale 4
```
... or even create your own!
```sh
sanic inspect migrations
```
.. tab:: Extendable
In addition to the tools that Sanic comes with, the officially supported [Sanic Extensions](./plugins/sanic-ext/getting-started.md) provides lots of extra goodies to make development easier.
- **CORS** protection
- Template rendering with **Jinja**
- **Dependency injection** into route handlers
- OpenAPI documentation with **Redoc** and/or **Swagger**
- Predefined, endpoint-specific response **serializers**
- Request query arguments and body input **validation**
- **Auto create** HEAD, OPTIONS, and TRACE endpoints
- Live **health monitor**
.. tab:: Developer Experience
Sanic is **built for building**.
From the moment it is installed, Sanic includes helpful tools to help the developer get their job done.
- **One server** - Develop locally in DEV mode on the same server that will run your PRODUCTION application
- **Auto reload** - Reload running applications every time you save a Python file, but also auto-reload **on any arbitrary directory** like HTML template directories
- **Debugging tools** - Super helpful (and beautiful) [error pages](/en/guide/best-practices/exceptions) that help you traverse the trace stack easily
- **Auto TLS** - Running a localhost website with `https` can be difficult, [Sanic makes it easy](/en/guide/how-to/tls)
- **Streamlined testing** - Built-in testing capabilities, making it easier for developers to create and run tests, ensuring the quality and reliability of their services
- **Modern Python** - Thoughtful use of type hints to help the developer IDE experience
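As a quick illustration of the DEV-mode workflow, running the built-in CLI might look like this sketch (the module path and template directory are placeholders, and the extra-directory flag shown is an assumption worth checking against `sanic --help`):
```sh
# Debug + auto-reload on the same server used in production
sanic path.to.server:app --dev

# Also watch an arbitrary directory, e.g. HTML templates
sanic path.to.server:app --dev --reload-dir=./templates
```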

119
guide/content/en/migrate.py Normal file
View File

@ -0,0 +1,119 @@
import re
from pathlib import Path
from textwrap import indent
from emoji import EMOJI
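# Legacy-syntax patterns handled by this conversion script: column layouts,
# highlighted code fences, notification blocks, and emoji shortcodes are all
# rewritten below into the format used by the new guide.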
COLUMN_PATTERN = re.compile(r"---:1\s*(.*?)\s*:--:1\s*(.*?)\s*:---", re.DOTALL)
PYTHON_HIGHLIGHT_PATTERN = re.compile(r"```python\{+.*?\}", re.DOTALL)
BASH_HIGHLIGHT_PATTERN = re.compile(r"```bash\{+.*?\}", re.DOTALL)
NOTIFICATION_PATTERN = re.compile(
r":::\s*(\w+)\s*(.*?)\n([\s\S]*?):::", re.MULTILINE
)
EMOJI_PATTERN = re.compile(r":(\w+):")
CURRENT_DIR = Path(__file__).parent
SOURCE_DIR = (
CURRENT_DIR.parent.parent.parent.parent / "sanic-guide" / "src" / "en"
)
def convert_columns(content: str):
def replacer(match: re.Match):
left, right = match.groups()
left = indent(left.strip(), " " * 4)
right = indent(right.strip(), " " * 4)
return f"""
.. column::
{left}
.. column::
{right}
"""
return COLUMN_PATTERN.sub(replacer, content)
def cleanup_highlights(content: str):
content = PYTHON_HIGHLIGHT_PATTERN.sub("```python", content)
content = BASH_HIGHLIGHT_PATTERN.sub("```bash", content)
return content
def convert_notifications(content: str):
def replacer(match: re.Match):
type_, title, body = match.groups()
body = indent(body.strip(), " " * 4)
return f"""
.. {type_}:: {title}
{body}
"""
return NOTIFICATION_PATTERN.sub(replacer, content)
def convert_emoji(content: str):
def replace(match):
return EMOJI.get(match.group(1), match.group(0))
return EMOJI_PATTERN.sub(replace, content)
def convert_code_blocks(content: str):
for src, dest in (
("yml", "yaml"),
("caddy", ""),
("systemd", ""),
("mermaid", "\nmermaid"),
):
content = content.replace(f"```{src}", f"```{dest}")
return content
def cleanup_multibreaks(content: str):
return content.replace("\n\n\n", "\n\n")
def convert(content: str):
content = convert_emoji(content)
content = convert_columns(content)
content = cleanup_highlights(content)
content = convert_code_blocks(content)
content = convert_notifications(content)
content = cleanup_multibreaks(content)
return content
def convert_file(src: Path, dest: Path):
short_src = src.relative_to(SOURCE_DIR)
short_dest = dest.relative_to(CURRENT_DIR)
print(f"Converting {short_src} -> {short_dest}")
content = src.read_text()
new_content = convert(content)
dest.parent.mkdir(parents=True, exist_ok=True)
dest.touch()
dest.write_text(new_content)
def translate_path(source_dir: Path, source_path: Path, dest_dir: Path):
rel_path = source_path.relative_to(source_dir)
dest_path = dest_dir / rel_path
return dest_path
def main():
print(f"Source: {SOURCE_DIR}")
for path in SOURCE_DIR.glob("**/*.md"):
if path.name in ("index.md", "README.md"):
continue
dest_path = translate_path(SOURCE_DIR, path, CURRENT_DIR)
convert_file(path, dest_path)
if __name__ == "__main__":
main()

View File

@ -0,0 +1 @@
# Project

View File

@ -0,0 +1,9 @@
# Feature Requests
[Create new feature request](https://github.com/sanic-org/sanic/issues/new?assignees=&labels=feature+request&template=feature_request.md)
To vote on a feature request, visit the [GitHub Issues](https://github.com/sanic-org/sanic/issues?q=is%3Aissue+is%3Aopen+label%3A%22feature+request%22%2CRFC+sort%3Areactions-%2B1-desc) and add a reaction.
---
<FeatureRequests />

View File

@ -0,0 +1,65 @@
# Policies
## Versioning
Sanic uses [calendar versioning](https://calver.org/), aka "calver". To be more specific, the pattern follows:
```
YY.MM.MICRO
```
Generally, versions are referred to in their ``YY.MM`` form. The `MICRO` number indicates an incremental patch version, starting at `0`.
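For example, the June 2023 release and a follow-up patch to it would be numbered like this:
```
23.6.0  ->  scheduled June 2023 release
23.6.1  ->  first patch to that release
```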
## Release Schedule
There are four (4) scheduled releases per year: March, June, September, and December. Therefore, there are four (4) released versions per year: `YY.3`, `YY.6`, `YY.9`, and `YY.12`.
This release schedule provides:
- a predictable release cadence,
- relatively short development windows allowing features to be regularly released,
- controlled [deprecations](#deprecation), and
- consistent stability with a yearly LTS.
We also use the yearly release cycle in conjunction with our governance model, covered by the [S.C.O.P.E.](./scope.md)
### Long term support vs. interim releases
Sanic releases a long term support release (aka "LTS") once a year in December. The LTS releases receive bug fixes and security updates for **24 months**. Interim releases throughout the year occur every three months, and are supported until the subsequent release.
| Version | LTS | Supported |
| ------- | ------------- | ----------------------- |
| 22.12 | until 2024-12 | :white_check_mark: |
| 22.9 | | :x: |
| 22.6 | | :x: |
| 22.3 | | :x: |
| 21.12 | until 2023-12 | :ballot_box_with_check: |
| 21.9 | | :x: |
| 21.6 | | :x: |
| 21.3 | | :x: |
| 20.12 | | :x: |
| 20.9 | | :x: |
| 20.6 | | :x: |
| 20.3 | | :x: |
| 19.12 | | :x: |
| 19.9 | | :x: |
| 19.6 | | :x: |
| 19.3 | | :x: |
| 18.12 | | :x: |
| 0.8.3 | | :x: |
| 0.7.0 | | :x: |
| 0.6.0 | | :x: |
| 0.5.4 | | :x: |
| 0.4.1 | | :x: |
| 0.3.1 | | :x: |
| 0.2.0 | | :x: |
| 0.1.9 | | :x: |
:ballot_box_with_check: = security fixes
:white_check_mark: = full support
## Deprecation
Before a feature is deprecated, or breaking changes are introduced into the API, it shall be publicized and shall appear with deprecation warnings through two release cycles. No deprecations shall be made in an LTS release.
Breaking changes or feature removal may happen outside of these guidelines when absolutely warranted. These circumstances should be rare. For example, it might happen when no alternative is available to curtail a major security issue.

View File

@ -0,0 +1,263 @@
---
title: S.C.O.P.E
---
Sanic Community Organization Policy E-manual
============================================
December 2019, version 1
Goals
-----
To create a sustainable, community-driven organization around the Sanic projects that promote: (1) stability and predictability, (2) quick iteration and enhancement cycles, (3) engagement from outside contributors, (4) overall reliable software, and (5) a safe, rewarding environment for the community members.
Overview
--------
This Policy is the governance model for the Sanic Community Organization (“SCO”). The SCO is a meritocratic, consensus-based community organization responsible for all projects adopted by it. Anyone with an interest in one of the projects can join the community, contribute to the community or projects, and participate in the decision making process. This document describes how that participation takes place and how to set about earning merit within the project community.
Structure
---------
The SCO has multiple **projects**. Each project is represented by a single GitHub repository under the Sanic community umbrella. These projects are used by **users**, developed by **contributors**, governed by **core developers**, released by **release managers**, and ultimately overseen by a **steering council**. If this sounds similar to the Python project and PEP 8016 that is because it is intentionally designed that way.
Roles and responsibilities
--------------------------
### Users
Users are community members who have a need for the projects. They are the developers and personnel that download and install the packages. Users are the **most important** members of the community and without them the projects would have no purpose. Anyone can be a user and the licenses adopted by the projects shall be appropriate open source licenses.
_The SCO asks its users to participate in the project and community as much as possible._
User contributions enable the project team to ensure that they are satisfying the needs of those users. Common user contributions include (but are not limited to):
* evangelizing about the project (e.g. a link on a website and word-of-mouth awareness raising)
* informing developers of strengths and weaknesses from a new user perspective
* providing moral support (a thank you goes a long way)
* providing financial support (the software is open source, but its developers need to eat)
Users who continue to engage with the SCO, its projects, and its community will often become more and more involved. Such users may find themselves becoming contributors, as described in the next section.
### Contributors
Contributors are community members who contribute in concrete ways to one or more of the projects. Anyone can become a contributor and contributions can take many forms. Contributions and requirements are governed by each project separately by a contribution policy.
There is **no expectation** of commitment to the project, **no specific skill requirements** and **no selection process**.
In addition to their actions as users, contributors may also find themselves doing one or more of the following:
* supporting new users (existing users are often the best people to support new users)
* reporting bugs
* identifying requirements
* providing graphics and web design
* programming
* example use cases
* assisting with project infrastructure
* writing documentation
* fixing bugs
* adding features
* providing constructive opinions and engaging in community discourse
Contributors engage with the projects through GitHub and the Community Forums. They submit changes to the projects themselves via pull requests, which will be considered for inclusion in the project by the community at large. The Community Forums are the most appropriate place to ask for help when making that first contribution.
Indeed one of the most important roles of a contributor may be to **simply engage in the community conversation**. Most decisions about the direction of a project are made by consensus. This is discussed in more detail below. In general, however, it is helpful for the health and direction of the projects for the contributors to **speak freely** (within the confines of the code of conduct) and **express their opinions and experiences** to help drive the consensus building.
As contributors gain experience and familiarity with a project, their profile within, and commitment to, the community will increase. At some stage, they may find themselves being nominated for a core developer team.
### Core Developer
Each project under the SCO umbrella has its own team of core developers. They are the people in charge of that project.
_What is a core developer?_
Core developers are community members who have shown that they are committed to the continued development of the project through ongoing engagement with the community. Being a core developer allows contributors to more easily carry on with their project-related activities by giving them direct access to the project's resources. They can make changes directly to the project repository without having to submit changes via pull requests from a fork.
This does not mean that a core developer is free to do what they want. In fact, core developers have no more direct authority over the final release of a package than do contributors. While this honor does indicate a valued member of the community who has demonstrated a healthy respect for the project's aims and objectives, their work continues to be reviewed by the community before acceptance in an official release.
_What can a core developer do on a project?_
Each project might define this role slightly differently. However, the general usage of this designation is that an individual has risen to a level of trust within the community such that they now are given some control. This comes in the form of push rights to non-protected branches, and the ability to have a voice in the approval of pull requests.
The projects employ various communication mechanisms to ensure that all contributions are reviewed by the community as a whole. This includes tools provided by GitHub, as well as the Community Forums. By the time a contributor is invited to become a core developer, they should be familiar with the various tools and workflows as a user and then as a contributor.
_How to become a core developer?_
Anyone can become a core developer; there are no special requirements, other than to have shown a willingness and ability to positively participate in the project as a team player.
Typically, a potential core developer will need to show that they have an understanding of the project, its objectives and its strategy. They will also have provided valuable contributions to the project over a period of time. However, there is **no technical or other skill** requirement for eligibility.
New core developers can be **nominated by any existing core developer** at any time. At least twice a year (April and October) there will be a ballot process run by the Steering Council. Voting should be done by secret ballot. Each existing core developer for that project receives a number of votes equivalent to the number of nominees on the ballot. For example, if there are four nominees, then each existing core developer has four votes. The core developer may cast those votes however they choose, but may not vote for a single nominee more than once. A nominee must receive two-thirds approval from the number of cast ballots (not the number of eligible ballots). Once accepted by the core developers, it is the responsibility of the Steering Council to approve and finalize the nomination. The Steering Council does not have the right to determine whether a nominee is meritorious enough to receive the core developer title. However, they do retain the right to override a vote in cases where the health of the community would so require.
Once the vote has been held, the aggregated voting results are published on the Community Forums. The nominee is entitled to request an explanation of any override against them. A nominee that fails to be admitted as a core developer may be nominated again in the future.
It is important to recognize that being a core developer is a privilege, not a right. That privilege must be earned and once earned it can be removed by the Steering Council (see next section) in extreme circumstances. However, under normal circumstances the core developer title exists for as long as the individual wishes to continue engaging with the project and community.
A committer who shows an above-average level of contribution to the project, particularly with respect to its strategic direction and long-term health, may be nominated to become a member of the Steering Council, or a Release Manager. This role is described below.
_What are the rights and responsibilities of core developers?_
As discussed, the majority of decisions to be made are by consensus building. In certain circumstances where an issue has become more contentious, or a major decision needs to be made, the Release Manager or Steering Council may decide (or be required) to implement the RFC process, which is outlined in more detail below.
It is also incumbent upon core developers to have a voice in the governance of the community. All core developers for all of the projects have the ability to be nominated to be on the Steering Council and vote in their elections.
This Policy (the “SCOPE”) may only be changed under the authority of two-thirds of active core developers, except that in the first six (6) months after adoption, the core developers reserve the right to make changes under the authority of a simple majority of active core developers.
_What if a core developer becomes inactive?_
It is hoped that all core developers participate and remain active on a regular basis in their projects. However, it is also understood that such commitments may not be realistic or possible from time to time.
Therefore, the Steering Council has the duty to encourage participation and the responsibility to place core developers into an inactive status if they are no longer willing or capable to participate. The main purpose of this is **not to punish** a person for behavior, but to help the development process to continue for those that do remain active.
To this end, a core developer that becomes “inactive” shall not have commit rights to a repository, and shall not participate in any votes. To be eligible to vote in an election, a core developer **must have been active** at the time of the previous scheduled project release.
Inactive members may ask the Steering Council to reinstate their status at any time, and upon such request the Steering Council shall make the core developer active again.
Individuals that know they will be unable to maintain their active status for a period are asked to be in communication with the Steering Council and declare themselves inactive if necessary.
An “active” core developer is an individual that has participated in a meaningful way during the previous six months. Any further definition is within the discretion of the Steering Council.
### Release Manager
Core developers shall have access only to make commits and merges on non-protected branches. The “master” branch and other protected branches are controlled by the release management team for that project. Release managers shall be elected from the core development team by the core development team, and shall serve for a full release cycle.
Each core developer team may decide how many release managers to have for each release cycle. It is highly encouraged that there be at least two release managers for a release cycle to help divide the responsibilities and not force too much effort upon a single person. However, there also should not be so many managers that their efforts are impeded.
The main responsibilities of the release management team include:
* push the development cycle forward by monitoring and facilitating technical discussions
* establish a release calendar and perform actions required to release packages
* approve pull requests to the master branch and other protected branches
* merge pull requests to the master branch and other protected branches
The release managers **do not have the authority to veto or withhold a merge** of a pull request that otherwise meets contribution criteria and has been accepted by the community. It is not their responsibility to decide what should be developed, but rather that the decisions of the community are carried out and that the project is being moved forward.
From time to time, a decision may need to be made that cannot be achieved through consensus. In that case, the release managers have the authority to call upon the removal of the decision to the RFC process. This should not occur regularly (unless required as discussed below), and its use should be discouraged in favor of the more communal consensus building strategy.
Since not all projects have the same requirements, the specifics governing release managers on a project shall be set forth in an Appendix to this Policy, or in the project's contribution guidelines.
If necessary, the Steering Council has the right to remove a release manager that is derelict in their duties, or for other good cause.
### Steering Council
The Steering Council is the governing body consisting of those individuals identified as the “project owner” and having control of the resources and assets of the SCO. Their ultimate goal is to ensure the smooth operation of the projects by removing impediments, and assisting the members as needed. It is expected that they will be regular voices in the community.
_What can the Steering Council do?_
The members of the Steering Council **do not individually have any more authority than any other core developer**, and shall not have any additional rights to make decisions, commits, merges, or the like on a project.
However, as a body, the Steering Council has the following capacity:
* accept, remand, and reject all RFCs
* enforce the community code of conduct
* administer community assets such as repositories, servers, forums, integration services, and the like (or, to delegate such authority to someone else)
* place core developers into inactive status where appropriate
* take any other enforcement measures afforded to it in this Policy, including, in extreme cases, removing core developers
* adopt or remove projects from the community umbrella
It is highly encouraged that the Steering Council delegate its authority as much as possible, and where appropriate, to other willing community members.
The Steering Council **does not have the authority** to change this Policy.
_How many members are on the Steering Council?_
Four.
While it seems like a committee with four votes may potentially end in a deadlock with no way to break a majority vote, the Steering Council is discouraged from voting as much as possible. Instead, it should try to work by consensus, and requires three consenting votes when it is necessary to vote on a matter.
_How long do members serve on the Steering Council?_
A single term shall be for two calendar years starting in January. Terms shall be staggered so that each year there are two members continuing from the previous year's council.
Therefore, the inaugural vote shall have two positions available for a two year term, and two positions available for a one year term.
There are no limits to the number of terms that can be served, and it is possible for an individual to serve consecutive terms.
_Who runs the Steering Council?_
After the Steering Council is elected, the group shall collectively decide upon one person to act as the Chair. The Chair does not have any additional rights or authority over any other member of the Steering Council.
The role of the Chair is merely as a coordinator and facilitator. The Chair is expected to ensure that all governance processes are adhered to. The position is more administrative and clerical, and it is expected that the Chair sets agendas and coordinates discussion of the group.
_How are council members elected?_
Once a year, **all eligible core developers** for each of the projects shall have the right to elect members to the Steering Council.
Nominations shall be open from September 1 and shall close on September 30. After that, voting shall begin on October 1 and shall close on October 31. Every core developer active on the date of the June release of the Sanic Framework for that year shall be eligible to receive one vote per vacant seat on the Steering Council. For the sake of clarity, to be eligible to vote, a core developer **does not** need to be a core developer on Sanic Framework, but rather just have been active within their respective project on that date.
The top recipients of votes shall be declared the winners. If there is any tie, it is highly encouraged that the tied nominees themselves resolve the dispute before a decision is made at random.
In regards to the inaugural vote of the Steering Council, the top two vote-recipients shall serve for two years, and the next two vote-recipients shall assume the one-year seats.
To be an eligible candidate for the Steering Council, the individual must have been a core developer in active status on at least one project for the previous twelve months.
_What if there is a vacancy?_
If a vacancy on the Steering Council exists during a term, then the next highest vote-recipient in the previous election shall be offered to complete the remainder of the term. If one cannot be found this way, the Steering Council may decide the most appropriate course of action to fill the seat (whether by appointment, vote, or other means).
If a member of the Steering Council becomes inactive, then that individual shall be removed from the Steering Council immediately and the seat shall become vacant.
In extreme cases, the body of all core developers has the right to bring a vote to remove a member of the Steering Council for cause by a two-thirds majority of all eligible voting core developers.
_How shall the Steering Council conduct its business?_
As much as possible, the Steering Council shall conduct its business and discussions in the open. Any member of the community should be allowed to enter the conversation with them. However, at times it may be necessary or appropriate for discussions to be held privately. Selecting the proper venue for conversations is part of the administrative duties of the Chair.
While the specifics of how to operate are beyond the scope of the Policy, it is encouraged that the Steering Council attempt to meet at least one time per quarter in a “real-time” discussion. This could be achieved via video conferencing, live chatting, or other appropriate means.
Support
-------
All participants in the community are encouraged to provide support for users within the project management infrastructure. This support is provided as a way of growing the community. Those seeking support should recognize that all support activity within the project is voluntary and is therefore provided as and when time allows. A user requiring guaranteed response times or results should therefore seek to purchase a support contract from a community member. However, for those willing to engage with the project on its own terms, and willing to help support other users, the community support channels are ideal.
Decision making process
-----------------------
Decisions about the future of the projects are made through discussion with all members of the community, from the newest user to the most experienced member. Everyone has a voice.
All non-sensitive project management discussion takes place on the community forums, or other designated channels. Occasionally, sensitive discussions may occur in private.
In order to ensure that the project is not bogged down by endless discussion and continual voting, the project operates a policy of **lazy consensus**. This allows the majority of decisions to be made without resorting to a formal vote. For any **major decision** (as defined below), there is a separate Request for Comment (RFC) process.
### Technical decisions
Pull requests and technical decisions should generally fall into the following categories.
* **Routine**: Documentation fixes, code changes that are for cleanup or additional testing. No functionality changes.
* **Minor**: Changes to the code base that either fix a bug, or introduce a trivial feature. No breaking changes.
* **Major**: Any change to the code base that breaks or deprecates existing API, alters operation in a non-trivial manner, or adds a significant feature.
It is generally the responsibility of the release managers to make sure that changes to the repositories receive the proper authorization before merge.
The release managers retain the authority to individually review and accept routine decisions that meet standards for code quality without additional input.
### Lazy consensus
Decision making (whether by the community or Steering Council) typically involves the following steps:
* proposal
* discussion
* vote (if consensus is not reached through discussion)
* decision
Any community member can make a proposal for consideration by the community. In order to initiate a discussion about a new idea, they should post a message on the appropriate channel on the Community forums, or submit a pull request implementing the idea on GitHub. This will prompt a review and, if necessary, a discussion of the idea.
The goal of this review and discussion is to gain approval for the contribution. Since most people in the project community have a shared vision, there is often little need for discussion in order to reach consensus.
In general, as long as nobody explicitly opposes a proposal or patch, it is recognized as having the support of the community. This is called lazy consensus; that is, those who have not stated their opinion explicitly have implicitly agreed to the implementation of the proposal.
Lazy consensus is a very important concept within the SCO. It is this process that allows a large group of people to efficiently reach consensus, as someone with no objections to a proposal need not spend time stating their position, and others need not spend time reading such messages.
For lazy consensus to be effective, it is necessary to allow an appropriate amount of time before assuming that there are no objections to the proposal. This is somewhat dependent upon the circumstances, but it is generally assumed that 72 hours is reasonable. This requirement ensures that everyone is given enough time to read, digest and respond to the proposal. This time period is chosen so as to be as inclusive as possible of all participants, regardless of their location and time commitments. The facilitators of discussion (whether it be the Chair or the Release Managers, where applicable) shall be charged with determining the proper length of time for such consensus to be reached.
As discussed above regarding so-called routine decisions, the release managers have the right to make decisions within a shorter period of time. In such cases, lazy consensus shall be implied.
### Request for Comment (RFC)
The Steering Council shall be in charge of overseeing the RFC process. It shall be a process that remains open to debate to all members of the community, and shall allow for ample time to consider a proposal and for members to respond and engage in meaningful discussion.
The final decision is vested with the Steering Council. However, it is strongly discouraged that the Steering Council adopt a decision that is contrary to any consensus that may exist in the community. From time to time this may happen if there is a conflict between consensus and the overall project and community goals.
An RFC shall be initiated by submission to the Steering Council in the public manner as set forth by the Steering Council. Debate shall continue and be facilitated by the Steering Council in general, and the Chair specifically.
In circumstances that the Steering Council feels it is appropriate, the RFC process may be waived in favor of lazy consensus.

View File

@ -0,0 +1,74 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at adam@sanicframework.org. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

View File

@ -0,0 +1,166 @@
# Contributing
Thank you for your interest! Sanic is always looking for contributors. If you don't feel comfortable contributing code, then adding docstrings to the source files or helping with the [Sanic User Guide](https://github.com/sanic-org/sanic-guide) by providing documentation or implementation examples would be appreciated!
We are committed to providing a friendly, safe and welcoming environment for all, regardless of gender, sexual orientation, disability, ethnicity, religion, or similar personal characteristic. Our [code of conduct](https://github.com/sanic-org/sanic/blob/master/CONDUCT.md) sets the standards for behavior.
## Installation
To develop on Sanic (and mainly to just run the tests) it is highly recommended to install from source.
So, assuming you have already cloned the repo and are in the working directory with a virtual environment already set up, run:
```sh
pip install -e ".[dev]"
```
## Dependency Changes
`Sanic` doesn't use `requirements*.txt` files to manage its dependencies; they are declared in `setup.py` instead, which simplifies dependency management. Please make sure you have read and understood the following section of the document, which explains the way `sanic` manages dependencies inside the `setup.py` file.
| Dependency Type | Usage | Installation |
| ------------------------------- | ------------------------------------------------- | ---------------------------- |
| requirements | Bare minimum dependencies required for sanic to function | `pip3 install -e .` |
| tests_require / extras_require['test'] | Dependencies required to run the Unit Tests for `sanic` | `pip3 install -e '.[test]'` |
| extras_require['dev'] | Additional development requirements for contributing | `pip3 install -e '.[dev]'` |
| extras_require['docs'] | Dependencies required to enable building and enhancing sanic documentation | `pip3 install -e '.[docs]'` |
## Running all tests
To run the tests for Sanic it is recommended to use tox like so:
```sh
tox
```
See, it's that simple!
`tox.ini` contains different environments. Running `tox` without any arguments will
run all unittests, perform lint and other checks.
## Run unittests
`tox` environment -> `[testenv]`
To execute only unittests, run `tox` with an environment like so:
```sh
tox -e py38 -v -- tests/test_config.py
# or
tox -e py310 -v -- tests/test_config.py
```
## Run lint checks
`tox` environment -> `[testenv:lint]`
Perform `flake8`, `black`, and `isort` checks.
```sh
tox -e lint
```
## Run type annotation checks
`tox` environment -> `[testenv:type-checking]`
Perform `mypy` checks.
```sh
tox -e type-checking
```
## Run other checks
`tox` environment -> `[testenv:check]`
Perform other checks.
```sh
tox -e check
```
## Run Static Analysis
`tox` environment -> `[testenv:security]`
Perform static analysis security scan
```sh
tox -e security
```
## Run Documentation sanity check
`tox` environment -> `[testenv:docs]`
Perform sanity check on documentation
```sh
tox -e docs
```
## Code Style
To maintain the code consistency, Sanic uses the following tools:
1. [isort](https://github.com/timothycrosley/isort)
2. [black](https://github.com/python/black)
3. [flake8](https://github.com/PyCQA/flake8)
4. [slotscheck](https://github.com/ariebovenberg/slotscheck)
### isort
`isort` sorts Python imports. It divides imports into three categories, each sorted alphabetically:
1. built-in
2. third-party
3. project-specific
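For instance (using `pytest` purely as an example of a third-party package), a module's imports end up grouped roughly like this:
```python
# built-in
import os
import re

# third-party
import pytest

# project-specific
from sanic import Sanic
from sanic.response import json
```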
### black
`black` is a Python code formatter.
### flake8
`flake8` is a Python style checker that wraps the following tools into one:
1. PyFlakes
2. pycodestyle
3. Ned Batchelder's McCabe script
### slotscheck
`slotscheck` ensures that there are no problems with `__slots__` (e.g., overlaps, or missing slots in base classes).
`isort`, `black`, `flake8`, and `slotscheck` checks are performed during `tox` lint checks.
The **easiest** way to make your code conform is to run the following before committing:
```bash
make pretty
```
Refer to [tox documentation](https://tox.readthedocs.io/en/latest/index.html) for more details.
## Pull requests
So the pull request approval rules are pretty simple:
1. All pull requests must pass unit tests.
2. All pull requests must be reviewed and approved by at least one current member of the Core Developer team.
3. All pull requests must pass flake8 checks.
4. All pull requests must match `isort` and `black` requirements.
5. All pull requests must be **PROPERLY** type annotated, unless exemption is given.
6. All pull requests must be consistent with the existing code.
7. If you decide to remove/change anything from any common interface a deprecation message should accompany it in accordance with our [deprecation policy](https://sanicframework.org/en/guide/project/policies.html#deprecation).
8. If you implement a new feature you should have at least one unit test to accompany it.
9. An example must be one of the following:
* Example of how to use Sanic
* Example of how to use Sanic extensions
* Example of how to use Sanic and asynchronous library
## Documentation
_Check back. We are reworking our documentation so this will change._

View File

@ -0,0 +1,68 @@
# Policies
## Versioning
Sanic uses [calendar versioning](https://calver.org/), aka "calver". To be more specific, the pattern follows:
```
YY.MM.MICRO
```
Generally, versions are referred to in their ``YY.MM`` form. The `MICRO` number indicates an incremental patch version, starting at `0`.
## Release Schedule
There are four (4) scheduled releases per year: March, June, September, and December. Therefore, there are four (4) released versions per year: `YY.3`, `YY.6`, `YY.9`, and `YY.12`.
This release schedule provides:
- a predictable release cadence,
- relatively short development windows allowing features to be regularly released,
- controlled [deprecations](#deprecation), and
- consistent stability with a yearly LTS.
We also use the yearly release cycle in conjunction with our governance model, covered by the [S.C.O.P.E.](./scope.md)
### Long term support vs. interim releases
Sanic releases a long term support release (aka "LTS") once a year in December. The LTS releases receive bug fixes and security updates for **24 months**. Interim releases throughout the year occur every three months, and are supported until the subsequent release.
| Version | LTS | Supported |
| ------- | ------------- | --------- |
| 23.6 | | ✅ |
| 23.3 | | ⚪ |
| 22.12 | until 2024-12 | ☑️ |
| 22.9 | | ⚪ |
| 22.6 | | ⚪ |
| 22.3 | | ⚪ |
| 21.12 | until 2023-12 | ☑️ |
| 21.9 | | ⚪ |
| 21.6 | | ⚪ |
| 21.3 | | ⚪ |
| 20.12 | | ⚪ |
| 20.9 | | ⚪ |
| 20.6 | | ⚪ |
| 20.3 | | ⚪ |
| 19.12 | | ⚪ |
| 19.9 | | ⚪ |
| 19.6 | | ⚪ |
| 19.3 | | ⚪ |
| 18.12 | | ⚪ |
| 0.8.3 | | ⚪ |
| 0.7.0 | | ⚪ |
| 0.6.0 | | ⚪ |
| 0.5.4 | | ⚪ |
| 0.4.1 | | ⚪ |
| 0.3.1 | | ⚪ |
| 0.2.0 | | ⚪ |
| 0.1.9 | | ⚪ |
☑️ = security fixes
✅ = full support
⚪ = no support
## Deprecation
Before a feature is deprecated, or breaking changes are introduced into the API, it shall be publicized and shall appear with deprecation warnings through two release cycles. No deprecations shall be made in an LTS release.
Breaking changes or feature removal may happen outside of these guidelines when absolutely warranted. These circumstances should be rare. For example, it might happen when no alternative is available to curtail a major security issue.

View File

@ -0,0 +1,258 @@
# Sanic Community Organization Policy E-manual (SCOPE)
.. attrs::
:class: is-size-7
_December 2019, version 1_
## Goals
To create a sustainable, community-driven organization around the Sanic projects that promote: (1) stability and predictability, (2) quick iteration and enhancement cycles, (3) engagement from outside contributors, (4) overall reliable software, and (5) a safe, rewarding environment for the community members.
## Overview
This Policy is the governance model for the Sanic Community Organization (“SCO”). The SCO is a meritocratic, consensus-based community organization responsible for all projects adopted by it. Anyone with an interest in one of the projects can join the community, contribute to the community or projects, and participate in the decision making process. This document describes how that participation takes place and how to set about earning merit within the project community.
## Structure
The SCO has multiple **projects**. Each project is represented by a single GitHub repository under the Sanic community umbrella. These projects are used by **users**, developed by **contributors**, governed by **core developers**, released by **release managers**, and ultimately overseen by a **steering council**. If this sounds similar to the Python project and PEP 8016 that is because it is intentionally designed that way.
## Roles and responsibilities
### Users
Users are community members who have a need for the projects. They are the developers and personnel that download and install the packages. Users are the **most important** members of the community and without them the projects would have no purpose. Anyone can be a user and the licenses adopted by the projects shall be appropriate open source licenses.
_The SCO asks its users to participate in the project and community as much as possible._
User contributions enable the project team to ensure that they are satisfying the needs of those users. Common user contributions include (but are not limited to):
* evangelizing about the project (e.g. a link on a website and word-of-mouth awareness raising)
* informing developers of strengths and weaknesses from a new user perspective
* providing moral support (a thank you goes a long way)
* providing financial support (the software is open source, but its developers need to eat)
Users who continue to engage with the SCO, its projects, and its community will often become more and more involved. Such users may find themselves becoming contributors, as described in the next section.
### Contributors
Contributors are community members who contribute in concrete ways to one or more of the projects. Anyone can become a contributor and contributions can take many forms. Contributions and requirements are governed by each project separately by a contribution policy.
There is **no expectation** of commitment to the project, **no specific skill requirements** and **no selection process**.
In addition to their actions as users, contributors may also find themselves doing one or more of the following:
* supporting new users (existing users are often the best people to support new users)
* reporting bugs
* identifying requirements
* providing graphics and web design
* programming
* example use cases
* assisting with project infrastructure
* writing documentation
* fixing bugs
* adding features
* providing constructive opinions and engaging in community discourse
Contributors engage with the projects through GitHub and the Community Forums. They submit changes to the projects themselves via pull requests, which will be considered for inclusion in the project by the community at large. The Community Forums are the most appropriate place to ask for help when making that first contribution.
Indeed one of the most important roles of a contributor may be to **simply engage in the community conversation**. Most decisions about the direction of a project are made by consensus. This is discussed in more detail below. In general, however, it is helpful for the health and direction of the projects for the contributors to **speak freely** (within the confines of the code of conduct) and **express their opinions and experiences** to help drive the consensus building.
As contributors gain experience and familiarity with a project, their profile within, and commitment to, the community will increase. At some stage, they may find themselves being nominated for a core developer team.
### Core Developer
Each project under the SCO umbrella has its own team of core developers. They are the people in charge of that project.
_What is a core developer?_
Core developers are community members who have shown that they are committed to the continued development of the project through ongoing engagement with the community. Being a core developer allows contributors to more easily carry on with their project-related activities by giving them direct access to the project's resources. They can make changes directly to the project repository without having to submit changes via pull requests from a fork.
This does not mean that a core developer is free to do what they want. In fact, core developers have no more direct authority over the final release of a package than do contributors. While this honor does indicate a valued member of the community who has demonstrated a healthy respect for the project's aims and objectives, their work continues to be reviewed by the community before acceptance in an official release.
_What can a core developer do on a project?_
Each project might define this role slightly differently. However, the general usage of this designation is that an individual has risen to a level of trust within the community such that they now are given some control. This comes in the form of push rights to non-protected branches, and the ability to have a voice in the approval of pull requests.
The projects employ various communication mechanisms to ensure that all contributions are reviewed by the community as a whole. This includes tools provided by GitHub, as well as the Community Forums. By the time a contributor is invited to become a core developer, they should be familiar with the various tools and workflows as a user and then as a contributor.
_How to become a core developer?_
Anyone can become a core developer; there are no special requirements, other than to have shown a willingness and ability to positively participate in the project as a team player.
Typically, a potential core developer will need to show that they have an understanding of the project, its objectives and its strategy. They will also have provided valuable contributions to the project over a period of time. However, there is **no technical or other skill** requirement for eligibility.
New core developers can be **nominated by any existing core developer** at any time. At least twice a year (April and October) there will be a ballot process run by the Steering Council. Voting should be done by secret ballot. Each existing core developer for that project receives a number of votes equivalent to the number of nominees on the ballot. For example, if there are four nominees, then each existing core developer has four votes. The core developer may cast those votes however they choose, but may not vote for a single nominee more than once. A nominee must receive two-thirds approval from the number of cast ballots (not the number of eligible ballots). Once accepted by the core developers, it is the responsibility of the Steering Council to approve and finalize the nomination. The Steering Council does not have the right to determine whether a nominee is meritorious enough to receive the core developer title. However, they do retain the right to override a vote in cases where the health of the community would so require.
Once the vote has been held, the aggregated voting results are published on the Community Forums. The nominee is entitled to request an explanation of any override against them. A nominee that fails to be admitted as a core developer may be nominated again in the future.
It is important to recognize that being a core developer is a privilege, not a right. That privilege must be earned and once earned it can be removed by the Steering Council (see next section) in extreme circumstances. However, under normal circumstances the core developer title exists for as long as the individual wishes to continue engaging with the project and community.
A committer who shows an above-average level of contribution to the project, particularly with respect to its strategic direction and long-term health, may be nominated to become a member of the Steering Council, or a Release Manager. This role is described below.
_What are the rights and responsibilities of core developers?_
As discussed, the majority of decisions to be made are by consensus building. In certain circumstances where an issue has become more contentious, or a major decision needs to be made, the Release Manager or Steering Council may decide (or be required) to implement the RFC process, which is outlined in more detail below.
It is also incumbent upon core developers to have a voice in the governance of the community. All core developers for all of the projects have the ability to be nominated to be on the Steering Council and vote in their elections.
This Policy (the “SCOPE”) may only be changed under the authority of two-thirds of active core developers, except that in the first six (6) months after adoption, the core developers reserve the right to make changes under the authority of a simple majority of active core developers.
_What if a core developer becomes inactive?_
It is hoped that all core developers participate and remain active on a regular basis in their projects. However, it is also understood that such commitments may not be realistic or possible from time to time.
Therefore, the Steering Council has the duty to encourage participation and the responsibility to place core developers into an inactive status if they are no longer willing or capable to participate. The main purpose of this is **not to punish** a person for behavior, but to help the development process to continue for those that do remain active.
To this end, a core developer that becomes “inactive” shall not have commit rights to a repository, and shall not participate in any votes. To be eligible to vote in an election, a core developer **must have been active** at the time of the previous scheduled project release.
Inactive members may ask the Steering Council to reinstate their status at any time, and upon such request the Steering Council shall make the core developer active again.
Individuals that know they will be unable to maintain their active status for a period are asked to be in communication with the Steering Council and declare themselves inactive if necessary.
An “active” core developer is an individual that has participated in a meaningful way during the previous six months. Any further definition is within the discretion of the Steering Council.
### Release Manager
Core developers shall have access only to make commits and merges on non-protected branches. The “master” branch and other protected branches are controlled by the release management team for that project. Release managers shall be elected from the core development team by the core development team, and shall serve for a full release cycle.
Each core developer team may decide how many release managers to have for each release cycle. It is highly encouraged that there be at least two release managers for a release cycle to help divide the responsibilities and not force too much effort upon a single person. However, there also should not be so many managers that their efforts are impeded.
The main responsibilities of the release management team include:
* push the development cycle forward by monitoring and facilitating technical discussions
* establish a release calendar and perform actions required to release packages
* approve pull requests to the master branch and other protected branches
* merge pull requests to the master branch and other protected branches
The release managers **do not have the authority to veto or withhold a merge** of a pull request that otherwise meets contribution criteria and has been accepted by the community. It is not their responsibility to decide what should be developed, but rather that the decisions of the community are carried out and that the project is being moved forward.
From time to time, a decision may need to be made that cannot be achieved through consensus. In that case, the release managers have the authority to call upon the removal of the decision to the RFC process. This should not occur regularly (unless required as discussed below), and its use should be discouraged in favor of the more communal consensus building strategy.
Since not all projects have the same requirements, the specifics governing release managers on a project shall be set forth in an Appendix to this Policy, or in the project's contribution guidelines.
If necessary, the Steering Council has the right to remove a release manager that is derelict in their duties, or for other good cause.
### Steering Council
The Steering Council is the governing body consisting of those individuals identified as the “project owner” and having control of the resources and assets of the SCO. Their ultimate goal is to ensure the smooth operation of the projects by removing impediments, and assisting the members as needed. It is expected that they will be regular voices in the community.
_What can the Steering Council do?_
The members of the Steering Council **do not individually have any more authority than any other core developer**, and shall not have any additional rights to make decisions, commits, merges, or the like on a project.
However, as a body, the Steering Council has the following capacity:
* accept, remand, and reject all RFCs
* enforce the community code of conduct
* administer community assets such as repositories, servers, forums, integration services, and the like (or, to delegate such authority to someone else)
* place core developers into inactive status where appropriate
* take any other enforcement measures afforded to it in this Policy, including, in extreme cases, removing core developers
* adopt or remove projects from the community umbrella
It is highly encouraged that the Steering Council delegate its authority as much as possible, and where appropriate, to other willing community members.
The Steering Council **does not have the authority** to change this Policy.
_How many members are on the Steering Council?_
Four.
While it seems like a committee with four votes may potentially end in a deadlock with no way to break a majority vote, the Steering Council is discouraged from voting as much as possible. Instead, it should try to work by consensus, and requires three consenting votes when it is necessary to vote on a matter.
_How long do members serve on the Steering Council?_
A single term shall be for two calendar years starting in January. Terms shall be staggered so that each year there are two members continuing from the previous year's council.
Therefore, the inaugural vote shall have two positions available for a two year term, and two positions available for a one year term.
There are no limits to the number of terms that can be served, and it is possible for an individual to serve consecutive terms.
_Who runs the Steering Council?_
After the Steering Council is elected, the group shall collectively decide upon one person to act as the Chair. The Chair does not have any additional rights or authority over any other member of the Steering Council.
The role of the Chair is merely as a coordinator and facilitator. The Chair is expected to ensure that all governance processes are adhered to. The position is more administrative and clerical, and it is expected that the Chair sets agendas and coordinates discussion of the group.
_How are council members elected?_
Once a year, **all eligible core developers** for each of the projects shall have the right to elect members to the Steering Council.
Nominations shall be open from September 1 and shall close on September 30. After that, voting shall begin on October 1 and shall close on October 31. Every core developer active on the date of the June release of the Sanic Framework for that year shall be eligible to receive one vote per vacant seat on the Steering Council. For the sake of clarity, to be eligible to vote, a core developer **does not** need to be a core developer on Sanic Framework, but rather just needs to have been active within their respective project on that date.
The top recipients of votes shall be declared the winners. If there is any tie, it is highly encouraged that the tied nominees themselves resolve the dispute before a decision is made at random.
In regards to the inaugural vote of the Steering Council, the top two vote-recipients shall serve for two years, and the next two vote-recipients shall assume the one-year seats.
To be an eligible candidate for the Steering Council, the individual must have been a core developer in active status on at least one project for the previous twelve months.
_What if there is a vacancy?_
If a vacancy on the Steering Council exists during a term, then the next highest vote-recipient in the previous election shall be offered the opportunity to complete the remainder of the term. If one cannot be found this way, the Steering Council may decide the most appropriate course of action to fill the seat (whether by appointment, vote, or other means).
If a member of the Steering Council becomes inactive, then that individual shall be removed from the Steering Council immediately and the seat shall become vacant.
In extreme cases, the body of all core developers has the right to bring a vote to remove a member of the Steering Council for cause by a two-thirds majority of all eligible voting core developers.
_How shall the Steering Council conduct its business?_
As much as possible, the Steering Council shall conduct its business and discussions in the open. Any member of the community should be allowed to enter the conversation with them. However, at times it may be necessary or appropriate for discussions to be held privately. Selecting the proper venue for conversations is part of the administrative duties of the Chair.
While the specifics of how to operate are beyond the scope of the Policy, it is encouraged that the Steering Council attempt to meet at least one time per quarter in a “real-time” discussion. This could be achieved via video conferencing, live chatting, or other appropriate means.
Support
-------
All participants in the community are encouraged to provide support for users within the project management infrastructure. This support is provided as a way of growing the community. Those seeking support should recognize that all support activity within the project is voluntary and is therefore provided as and when time allows. A user requiring guaranteed response times or results should therefore seek to purchase a support contract from a community member. However, for those willing to engage with the project on its own terms, and willing to help support other users, the community support channels are ideal.
Decision making process
-----------------------
Decisions about the future of the projects are made through discussion with all members of the community, from the newest user to the most experienced member. Everyone has a voice.
All non-sensitive project management discussion takes place on the community forums, or other designated channels. Occasionally, sensitive discussions may occur in private.
In order to ensure that the project is not bogged down by endless discussion and continual voting, the project operates a policy of **lazy consensus**. This allows the majority of decisions to be made without resorting to a formal vote. For any **major decision** (as defined below), there is a separate Request for Comment (RFC) process.
### Technical decisions
Pull requests and technical decisions should generally fall into the following categories.
* **Routine**: Documentation fixes, code changes that are for cleanup or additional testing. No functionality changes.
* **Minor**: Changes to the code base that either fix a bug, or introduce a trivial feature. No breaking changes.
* **Major**: Any change to the code base that breaks or deprecates existing API, alters operation in a non-trivial manner, or adds a significant feature.
It is generally the responsibility of the release managers to make sure that changes to the repositories receive the proper authorization before merge.
The release managers retain the authority to individually review and accept routine decisions that meet standards for code quality without additional input.
### Lazy consensus
Decision making (whether by the community or Steering Council) typically involves the following steps:
* proposal
* discussion
* vote (if consensus is not reached through discussion)
* decision
Any community member can make a proposal for consideration by the community. In order to initiate a discussion about a new idea, they should post a message on the appropriate channel on the Community forums, or submit a pull request implementing the idea on GitHub. This will prompt a review and, if necessary, a discussion of the idea.
The goal of this review and discussion is to gain approval for the contribution. Since most people in the project community have a shared vision, there is often little need for discussion in order to reach consensus.
In general, as long as nobody explicitly opposes a proposal or patch, it is recognized as having the support of the community. This is called lazy consensus; that is, those who have not stated their opinion explicitly have implicitly agreed to the implementation of the proposal.
Lazy consensus is a very important concept within the SCO. It is this process that allows a large group of people to efficiently reach consensus, as someone with no objections to a proposal need not spend time stating their position, and others need not spend time reading such messages.
For lazy consensus to be effective, it is necessary to allow an appropriate amount of time before assuming that there are no objections to the proposal. This is somewhat dependent upon the circumstances, but it is generally assumed that 72 hours is reasonable. This requirement ensures that everyone is given enough time to read, digest and respond to the proposal. This time period is chosen so as to be as inclusive as possible of all participants, regardless of their location and time commitments. The facilitators of discussion (whether it be the Chair or the Release Managers, where applicable) shall be charged with determining the proper length of time for such consensus to be reached.
As discussed above regarding so-called routine decisions, the release managers have the right to make decisions within a shorter period of time. In such cases, lazy consensus shall be implied.
### Request for Comment (RFC)
The Steering Council shall be in charge of overseeing the RFC process. It shall be a process that remains open to debate to all members of the community, and shall allow for ample time to consider a proposal and for members to respond and engage in meaningful discussion.
The final decision is vested with the Steering Council. However, it is strongly discouraged that the Steering Council adopt a decision that is contrary to any consensus that may exist in the community. From time to time this may happen if there is a conflict between consensus and the overall project and community goals.
An RFC shall be initiated by submission to the Steering Council in the public manner as set forth by the Steering Council. Debate shall continue and be facilitated by the Steering Council in general, and the Chair specifically.
In circumstances that the Steering Council feels it is appropriate, the RFC process may be waived in favor of lazy consensus.

# Configuration
Sanic Extensions can be configured in all of the same ways that [you can configure Sanic](../../guide/deployment/configuration.md). That makes configuring Sanic Extensions very easy.
```python
app = Sanic("MyApp")
app.config.OAS_URL_PREFIX = "/apidocs"
```
However, there are a few more configuration options that should be considered.
## Manual `extend`
.. column::
Even though Sanic Extensions will automatically attach to your application, you can manually call `extend` yourself. When you do, you can pass all of the configuration values as keyword arguments (lowercase).
.. column::
```python
app = Sanic("MyApp")
app.extend(oas_url_prefix="/apidocs")
```
.. column::
Or, alternatively, they can all be passed at once as a single `dict`.
.. column::
```python
app = Sanic("MyApp")
app.extend(config={"oas_url_prefix": "/apidocs"})
```
.. column::
Both of these solutions suffer from the fact that the names of the configuration settings are not discoverable by an IDE. Therefore, there is also a type-annotated object that you can use, which should help the development experience.
.. column::
```python
from sanic_ext import Config
app = Sanic("MyApp")
app.extend(config=Config(oas_url_prefix="/apidocs"))
```
## Settings
.. note::
Often, the easiest way to change these for an application (since they are not likely to change from one environment to another) is to set them directly on the `app.config` object.
Simply use the capitalized version of the configuration key as shown here:
```python
app = Sanic("MyApp")
app.config.OAS_URL_PREFIX = "/apidocs"
```
### `cors`
- **Type**: `bool`
- **Default**: `True`
- **Description**: Whether to enable CORS protection
### `cors_allow_headers`
- **Type**: `str`
- **Default**: `"*"`
- **Description**: Value of the header: `access-control-allow-headers`
### `cors_always_send`
- **Type**: `bool`
- **Default**: `True`
- **Description**: Whether to always send the header: `access-control-allow-origin`
### `cors_automatic_options`
- **Type**: `bool`
- **Default**: `True`
- **Description**: Whether to automatically generate `OPTIONS` endpoints for routes that do *not* already have one defined
### `cors_expose_headers`
- **Type**: `str`
- **Default**: `""`
- **Description**: Value of the header: `access-control-expose-headers`
### `cors_max_age`
- **Type**: `int`
- **Default**: `5`
- **Description**: Value of the header: `access-control-max-age`
### `cors_methods`
- **Type**: `str`
- **Default**: `""`
- **Description**: Value of the header: `access-control-allow-methods`
### `cors_origins`
- **Type**: `str`
- **Default**: `""`
- **Description**: Value of the header: `access-control-allow-origin`
.. warning::
Be very careful if you place `*` here. Do not do this unless you know what you are doing as it can be a security issue.
### `cors_send_wildcard`
- **Type**: `bool`
- **Default**: `False`
- **Description**: Whether to send a wildcard origin instead of the incoming request origin
### `cors_supports_credentials`
- **Type**: `bool`
- **Default**: `False`
- **Description**: Value of the header: `access-control-allow-credentials`
### `cors_vary_header`
- **Type**: `bool`
- **Default**: `True`
- **Description**: Whether to add the `vary` header
### `http_all_methods`
- **Type**: `bool`
- **Default**: `True`
- **Description**: Adds the HTTP `CONNECT` and `TRACE` methods as allowable
### `http_auto_head`
- **Type**: `bool`
- **Default**: `True`
- **Description**: Automatically adds `HEAD` handlers to any `GET` routes
### `http_auto_options`
- **Type**: `bool`
- **Default**: `True`
- **Description**: Automatically adds `OPTIONS` handlers to any routes without one
### `http_auto_trace`
- **Type**: `bool`
- **Default**: `False`
- **Description**: Automatically adds `TRACE` handlers to any routes without one
### `oas`
- **Type**: `bool`
- **Default**: `True`
- **Description**: Whether to enable OpenAPI specification generation
### `oas_autodoc`
- **Type**: `bool`
- **Default**: `True`
- **Description**: Whether to automatically extract OpenAPI details from the docstring of a route function
### `oas_ignore_head`
- **Type**: `bool`
- **Default**: `True`
- **Description**: When `True`, it will not add `HEAD` endpoints into the OpenAPI specification
### `oas_ignore_options`
- **Type**: `bool`
- **Default**: `True`
- **Description**: When `True`, it will not add `OPTIONS` endpoints into the OpenAPI specification
### `oas_path_to_redoc_html`
- **Type**: `Optional[str]`
- **Default**: `None`
- **Description**: Path to HTML file to override the existing Redoc HTML
### `oas_path_to_swagger_html`
- **Type**: `Optional[str]`
- **Default**: `None`
- **Description**: Path to HTML file to override the existing Swagger HTML
### `oas_ui_default`
- **Type**: `Optional[str]`
- **Default**: `"redoc"`
- **Description**: Which OAS documentation to serve on the bare `oas_url_prefix` endpoint; when `None` there will be no documentation at that location
### `oas_ui_redoc`
- **Type**: `bool`
- **Default**: `True`
- **Description**: Whether to enable the Redoc UI
### `oas_ui_swagger`
- **Type**: `bool`
- **Default**: `True`
- **Description**: Whether to enable the Swagger UI
### `oas_ui_swagger_version`
- **Type**: `str`
- **Default**: `"4.1.0"`
- **Description**: Which Swagger version to use
### `oas_uri_to_config`
- **Type**: `str`
- **Default**: `"/swagger-config"`
- **Description**: Path to serve the Swagger configuration
### `oas_uri_to_json`
- **Type**: `str`
- **Default**: `"/openapi.json"`
- **Description**: Path to serve the OpenAPI JSON
### `oas_uri_to_redoc`
- **Type**: `str`
- **Default**: `"/redoc"`
- **Description**: Path to Redoc
### `oas_uri_to_swagger`
- **Type**: `str`
- **Default**: `"/swagger"`
- **Description**: Path to Swagger
### `oas_url_prefix`
- **Type**: `str`
- **Default**: `"/docs"`
- **Description**: URL prefix for the Blueprint that all of the OAS documentation will attach to
### `swagger_ui_configuration`
- **Type**: `Dict[str, Any]`
- **Default**: `{"apisSorter": "alpha", "operationsSorter": "alpha", "docExpansion": "full"}`
- **Description**: The Swagger documentation to be served to the frontend
### `templating_enable_async`
- **Type**: `bool`
- **Default**: `True`
- **Description**: Whether to set `enable_async` on the Jinja `Environment`
### `templating_path_to_templates`
- **Type**: `Union[str, os.PathLike, Sequence[Union[str, os.PathLike]]]`
- **Default**: `templates`
- **Description**: A single path, or multiple paths to where your template files are located
### `trace_excluded_headers`
- **Type**: `Sequence[str]`
- **Default**: `("authorization", "cookie")`
- **Description**: Which headers should be suppressed from responses to `TRACE` requests
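Since all of these settings can also be passed through `extend`, related overrides can be grouped in a single place. The snippet below is only a sketch with illustrative values; every key used in it is documented above.

```python
from sanic import Sanic
from sanic_ext import Config

app = Sanic("MyApp")

# Illustrative values only; each lowercase keyword mirrors one of the
# uppercase settings documented above.
app.extend(
    config=Config(
        cors_origins="https://portal.myapp.com",
        oas_ui_default="swagger",
        oas_url_prefix="/apidocs",
        http_auto_trace=False,
    )
)
```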

# Convenience
## Fixed serializer
.. column::
Often when developing an application, there will be certain routes that always return the same sort of response. When this is the case, you can predefine the return serializer on the endpoint, and then all that needs to be returned is the content.
.. column::
```python
from sanic import text
from sanic_ext import serializer
@app.get("/<name>")
@serializer(text)
async def hello_world(request, name: str):
if name.isnumeric():
return "hello " * int(name)
return f"Hello, {name}"
```
.. column::
The `serializer` decorator can also add status codes.
.. column::
```python
from sanic import text
from sanic_ext import serializer
@app.post("/")
@serializer(text, status=202)
async def create_something(request):
...
```
## Custom serializer
.. column::
Using the `@serializer` decorator, you can also pass your own custom function, as long as it also returns a valid type (`HTTPResponse`).
.. column::
```python
from sanic import json
from sanic_ext import serializer

def message(retval, request, action, status):
return json(
{
"request_id": str(request.id),
"action": action,
"message": retval,
},
status=status,
)
@app.post("/<action>")
@serializer(message)
async def do_action(request, action: str):
return "This is a message"
```
.. column::
Now, returning just a string should return a nice serialized output.
.. column::
```
$ curl localhost:8000/eat_cookies -X POST
{
"request_id": "ef81c45b-235c-46dd-9dbd-b550f8fa77f9",
"action": "eat_cookies",
"message": "This is a message"
}
```
## Request counter
.. column::
Sanic Extensions comes with a subclass of `Request` that can be set up to automatically keep track of the number of requests processed per worker process. To enable this, you should pass the `CountedRequest` class to your application constructor.
.. column::
```python
from sanic import Sanic
from sanic_ext import CountedRequest
app = Sanic(..., request_class=CountedRequest)
```
.. column::
You will now have access to the number of requests served during the lifetime of the worker process.
.. column::
```python
@app.get("/")
async def handler(request: CountedRequest):
return json({"count": request.count})
```
If possible, the request count will also be added to the [worker state](../../guide/deployment/manager.md#worker-state).
![](https://user-images.githubusercontent.com/166269/190922460-43bd2cfc-f81a-443b-b84f-07b6ce475cbf.png)

# Custom extensions
It is possible to create your own custom extensions.
Version 22.9 added the `Extend.register` [method](#extension-preregistration). This makes it extremely easy to add custom extensions to an application.
## Anatomy of an extension
All extensions must subclass `Extension`.
### Required
- `name`: By convention, the name is an all-lowercase string
- `startup`: A method that runs when the extension is added
### Optional
- `label`: A method that returns additional information about the extension in the MOTD
- `included`: A method that returns a boolean indicating whether the extension should be enabled (this could be used, for example, to check config state)
### Example
```python
from sanic import Request, Sanic, json
from sanic_ext import Extend, Extension
app = Sanic(__name__)
app.config.MONITOR = True
class AutoMonitor(Extension):
name = "automonitor"
def startup(self, bootstrap) -> None:
if self.included():
self.app.before_server_start(self.ensure_monitor_set)
self.app.on_request(self.monitor)
@staticmethod
async def monitor(request: Request):
if request.route and request.route.ctx.monitor:
print("....")
@staticmethod
async def ensure_monitor_set(app: Sanic):
for route in app.router.routes:
if not hasattr(route.ctx, "monitor"):
route.ctx.monitor = False
def label(self):
has_monitor = [
route
for route in self.app.router.routes
if getattr(route.ctx, "monitor", None)
]
return f"{len(has_monitor)} endpoint(s)"
def included(self):
return self.app.config.MONITOR
Extend.register(AutoMonitor)
@app.get("/", ctx_monitor=True)
async def handler(request: Request):
return json({"foo": "bar"})
```
## Extension preregistration
.. column::
`Extend.register` simplifies the addition of custom extensions.
.. column::
```python
from sanic_ext import Extend, Extension
class MyCustomExtension(Extension):
...
Extend.register(MyCustomExtension())
```
*Added in v22.9*

# Getting Started
Sanic Extensions is an *officially supported* plugin developed and maintained by the SCO. The primary goal of this project is to add additional features to make Web API and Web application development easier.
## Features
- CORS protection
- Template rendering with Jinja
- Dependency injection into route handlers
- OpenAPI documentation with Redoc and/or Swagger
- Predefined, endpoint-specific response serializers
- Request query arguments and body input validation
- Auto create `HEAD`, `OPTIONS`, and `TRACE` endpoints
## Minimum requirements
- **Python**: 3.8+
- **Sanic**: 21.9+
## Install
The best method is to just install Sanic Extensions along with Sanic itself:
```bash
pip install sanic[ext]
```
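Depending on your shell, the square brackets may need quoting (zsh, for example, treats them as glob patterns):

```bash
pip install "sanic[ext]"
```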
You can of course also just install it by itself.
```bash
pip install sanic-ext
```
## Extend your application
Out of the box, Sanic Extensions will enable a bunch of features for you.
.. column::
To set up Sanic Extensions (v21.12+), you need to do: **nothing**. If it is installed in the environment, it is set up and ready to go.
This code is the Hello, world app in the [Sanic Getting Started page](../../guide/getting-started.md) _without any changes_, but using Sanic Extensions with `sanic-ext` installed in the background.
.. column::
```python
from sanic import Sanic
from sanic.response import text
app = Sanic("MyHelloWorldApp")
@app.get("/")
async def hello_world(request):
return text("Hello, world.")
```
.. column::
**_OLD DEPRECATED SETUP_**
In v21.9, the easiest way to get started is to instantiate it with `Extend`.
If you look back at the Hello, world app in the [Sanic Getting Started page](../../guide/getting-started.md), you will see the only additions here are the two highlighted lines.
.. column::
```python
from sanic import Sanic
from sanic.response import text
from sanic_ext import Extend
app = Sanic("MyHelloWorldApp")
Extend(app)
@app.get("/")
async def hello_world(request):
return text("Hello, world.")
```
Regardless of how it is set up, you should now be able to view the OpenAPI documentation and see some of the functionality in action: [http://localhost:8000/docs](http://localhost:8000/docs).
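As a quick sanity check, you can run the app with the Sanic CLI and open the docs in a browser. The module name `server` below is only an assumption about what the file was saved as.

```bash
# assuming the snippet above was saved as server.py
sanic server:app

# then browse to http://localhost:8000/docs
```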

# Health monitor
The health monitor requires both `sanic>=22.9` and `sanic-ext>=22.9`.
You can set up Sanic Extensions to monitor the health of your worker processes. This requires that you not be in [single process mode](../../guide/deployment/manager.md#single-process-mode).
## Setup
.. column::
Out of the box, the health monitor is disabled. You will need to opt in if you would like to use it.
.. column::
```python
app.config.HEALTH = True
```
## How does it work
The monitor sets up a new background process that will periodically receive acknowledgements of liveness from each worker process. If a worker process misses too many reports, the monitor will restart that worker.
## Diagnostics endpoint
.. column::
The health monitor will also enable a diagnostics endpoint that outputs the [worker state](../../guide/deployment/manager.md#worker-state). By default it is disabled.
.. danger::
The diagnostics endpoint is not secured. If you are deploying it in a production environment, you should take steps to protect it with a proxy server if you are using one. If not, you may want to consider disabling this feature in production since it will leak details about your server state.
.. column::
```
$ curl http://localhost:8000/__health__
{
'Sanic-Main': {'pid': 99997},
'Sanic-Server-0-0': {
'server': True,
'state': 'ACKED',
'pid': 9999,
'start_at': datetime.datetime(2022, 10, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc),
'starts': 2,
'restart_at': datetime.datetime(2022, 10, 1, 0, 0, 12, 861332, tzinfo=datetime.timezone.utc)
},
'Sanic-Reloader-0': {
'server': False,
'state': 'STARTED',
'pid': 99998,
'start_at': datetime.datetime(2022, 10, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc),
'starts': 1
}
}
```
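Since the diagnostics endpoint is also opt-in, a minimal sketch that turns on both the monitor and its endpoint might look like the following (every key used here appears in the configuration table below; the custom prefix is only an illustration):

```python
app.config.HEALTH = True
app.config.HEALTH_ENDPOINT = True

# optionally move the blueprint off of the default /__health__ prefix
app.config.HEALTH_URL_PREFIX = "/__status__"
```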
## Configuration
| Key | Type | Default| Description |
|--|--|--|--|
| HEALTH | `bool` | `False` | Whether to enable this extension. |
| HEALTH_ENDPOINT | `bool` | `False` | Whether to enable the diagnostics endpoint. |
| HEALTH_MAX_MISSES | `int` | `3` | The number of consecutive misses before a worker process is restarted. |
| HEALTH_MISSED_THRESHHOLD | `int` | `10` | The number of seconds the monitor checks for worker process health. |
| HEALTH_MONITOR | `bool` | `True` | Whether to enable the health monitor. |
| HEALTH_REPORT_INTERVAL | `int` | `5` | The number of seconds between reporting each acknowledgement of liveness. |
| HEALTH_URI_TO_INFO | `str` | `""` | The URI path of the diagnostics endpoint. |
| HEALTH_URL_PREFIX | `str` | `"/__health__"` | The URI prefix of the diagnostics blueprint. |

# CORS protection
Cross-Origin Resource Sharing (aka CORS) is a *huge* topic by itself. The documentation here cannot go into enough detail about *what* it is. You are highly encouraged to do some research on your own to understand the security problem presented by it, and the theory behind the solutions. [MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) are a great first step.
In super brief terms, CORS protection is a framework that browsers use to facilitate how and when a web page can access information from another domain. It is extremely relevant to anyone building a single-page application. Oftentimes your frontend might be on a domain like `https://portal.myapp.com`, but it needs to access the backend from `https://api.myapp.com`.
The implementation here is heavily inspired by [`sanic-cors`](https://github.com/ashleysommer/sanic-cors), which is in turn based upon [`flask-cors`](https://github.com/corydolphin/flask-cors). It is therefore very likely that you can achieve a near drop-in replacement of `sanic-cors` with `sanic-ext`.
## Basic implementation
.. column::
As shown in the [auto-endpoints example](methods.md#options), Sanic Extensions will automatically enable CORS protection without further action. But, it does not offer too much out of the box.
At a *bare minimum*, it is **highly** recommended that you set `config.CORS_ORIGINS` to the intended origin(s) that will be accessing the application.
.. column::
```python
from sanic import Sanic, text
from sanic_ext import Extend
app = Sanic(__name__)
app.config.CORS_ORIGINS = "http://foobar.com,http://bar.com"
Extend(app)
@app.get("/")
async def hello_world(request):
return text("Hello, world.")
```
```
$ curl localhost:8000 -X OPTIONS -i
HTTP/1.1 204 No Content
allow: GET,HEAD,OPTIONS
access-control-allow-origin: http://foobar.com
connection: keep-alive
```
## Configuration
The true power of CORS protection, however, comes into play once you start configuring it. Here is a table of all of the options.
| Key | Type | Default| Description |
|--|--|--|--|
| `CORS_ALLOW_HEADERS` | `str` or `List[str]` | `"*"` | The list of headers that will appear in `access-control-allow-headers`. |
| `CORS_ALWAYS_SEND` | `bool` | `True` | When `True`, will always set a value for `access-control-allow-origin`. When `False`, will only set it if there is an `Origin` header. |
| `CORS_AUTOMATIC_OPTIONS` | `bool` | `True` | When the incoming preflight request is received, whether to automatically set values for `access-control-allow-headers`, `access-control-max-age`, and `access-control-allow-methods` headers. If `False` these values will only be set on routes that are decorated with the `@cors` decorator. |
| `CORS_EXPOSE_HEADERS` | `str` or `List[str]` | `""` | Specific list of headers to be set in `access-control-expose-headers` header. |
| `CORS_MAX_AGE` | `str`, `int`, `timedelta` | `0` | The maximum number of seconds the preflight response may be cached using the `access-control-max-age` header. A falsy value will cause the header to not be set. |
| `CORS_METHODS` | `str` or `List[str]` | `""` | The HTTP methods that the allowed origins can access, as set on the `access-control-allow-methods` header. |
| `CORS_ORIGINS` | `str`, `List[str]`, `re.Pattern` | `"*"` | The origins that are allowed to access the resource, as set on the `access-control-allow-origin` header. |
| `CORS_SEND_WILDCARD` | `bool` | `False` | If `True`, will send the wildcard `*` origin instead of the `origin` request header. |
| `CORS_SUPPORTS_CREDENTIALS` | `bool` | `False` | Whether to set the `access-control-allow-credentials` header. |
| `CORS_VARY_HEADER` | `bool` | `True` | Whether to add `vary` header, when appropriate. |
*For the sake of brevity, where the above says `List[str]` any instance of a `list`, `set`, `frozenset`, or `tuple` will be acceptable. Alternatively, if the value is a `str`, it can be a comma delimited list.*
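For instance, because `CORS_ORIGINS` accepts a `str`, a `List[str]`, or a compiled `re.Pattern`, any of the following (with purely illustrative domains) would be a valid way to restrict origins:

```python
import re

from sanic import Sanic

app = Sanic("MyApp")

# a comma-delimited string
app.config.CORS_ORIGINS = "https://portal.myapp.com,https://admin.myapp.com"

# or an explicit list
app.config.CORS_ORIGINS = ["https://portal.myapp.com", "https://admin.myapp.com"]

# or a compiled pattern, e.g. to allow any subdomain
app.config.CORS_ORIGINS = re.compile(r"https://.*\.myapp\.com")
```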
## Route level overrides
.. column::
It may sometimes be necessary to override app-wide settings for a specific route. To allow for this, you can use the `@sanic_ext.cors()` decorator to set different route-specific values.
The values that can be overridden with this decorator are:
- `origins`
- `expose_headers`
- `allow_headers`
- `allow_methods`
- `supports_credentials`
- `max_age`
.. column::
```python
from sanic_ext import cors
app.config.CORS_ORIGINS = "https://foo.com"
@app.get("/", host="bar.com")
@cors(origins="https://bar.com")
async def hello_world(request):
return text("Hello, world.")
```

Some files were not shown because too many files have changed in this diff.