Compare commits

..

16 Commits

Author SHA1 Message Date
23595f2d60 asd
Some checks failed
Run Helm tests / Execute helm lint (push) Successful in 17s
Run Helm tests / Execute helm template (push) Successful in 9s
Markdown linter / Execute npm run readme:link (push) Successful in 20s
Markdown linter / Execute npm run readme:lint (push) Successful in 7s
Markdown linter / Execute npm run readme:parameters (push) Successful in 10s
Run Helm tests / Execute helm unittest (push) Successful in 1m9s
Release / publish-chart (push) Failing after 32s
Release / publish-release-notes (push) Has been skipped
2026-02-15 20:02:54 +01:00
eb80fdee50 asd
Some checks failed
Run Helm tests / Execute helm template (push) Successful in 8s
Run Helm tests / Execute helm lint (push) Successful in 17s
Run Helm tests / Execute helm unittest (push) Successful in 26s
Markdown linter / Execute npm run readme:lint (push) Successful in 8s
Markdown linter / Execute npm run readme:link (push) Successful in 34s
Markdown linter / Execute npm run readme:parameters (push) Successful in 10s
Release / publish-chart (push) Failing after 1m1s
Release / publish-release-notes (push) Has been skipped
2026-02-15 19:57:29 +01:00
2b37bdfa32 asd
Some checks failed
Run Helm tests / Execute helm template (push) Successful in 9s
Run Helm tests / Execute helm lint (push) Successful in 17s
Run Helm tests / Execute helm unittest (push) Successful in 28s
Markdown linter / Execute npm run readme:lint (push) Successful in 8s
Markdown linter / Execute npm run readme:link (push) Successful in 35s
Markdown linter / Execute npm run readme:parameters (push) Successful in 9s
Release / publish-chart (push) Failing after 57s
Release / publish-release-notes (push) Has been skipped
2026-02-15 19:53:58 +01:00
146e2cf1a5 asd
Some checks failed
Run Helm tests / Execute helm lint (push) Successful in 10s
Run Helm tests / Execute helm template (push) Successful in 9s
Markdown linter / Execute npm run readme:link (push) Successful in 19s
Markdown linter / Execute npm run readme:lint (push) Successful in 8s
Markdown linter / Execute npm run readme:parameters (push) Successful in 9s
Run Helm tests / Execute helm unittest (push) Successful in 1m9s
Release / publish-chart (push) Failing after 29s
Release / publish-release-notes (push) Has been skipped
2026-02-15 19:49:28 +01:00
a78b5d6172 fix(ci): asd
Some checks failed
Run Helm tests / Execute helm lint (push) Successful in 8s
Run Helm tests / Execute helm template (push) Successful in 17s
Run Helm tests / Execute helm unittest (push) Successful in 26s
Markdown linter / Execute npm run readme:lint (push) Successful in 7s
Markdown linter / Execute npm run readme:parameters (push) Successful in 9s
Markdown linter / Execute npm run readme:link (push) Successful in 36s
Release / publish-chart (push) Failing after 29s
Release / publish-release-notes (push) Has been skipped
2026-02-15 19:43:33 +01:00
3219f22a68 asd
Some checks failed
Run Helm tests / Execute helm lint (push) Successful in 9s
Run Helm tests / Execute helm template (push) Successful in 17s
Run Helm tests / Execute helm unittest (push) Successful in 26s
Markdown linter / Execute npm run readme:lint (push) Successful in 8s
Markdown linter / Execute npm run readme:link (push) Successful in 35s
Markdown linter / Execute npm run readme:parameters (push) Successful in 9s
Release / publish-chart (push) Failing after 54s
Release / publish-release-notes (push) Has been skipped
2026-02-15 19:33:48 +01:00
cdd75f2e77 fix(ci): adapt release workflow
Some checks failed
Run Helm tests / Execute helm lint (push) Successful in 14s
Run Helm tests / Execute helm unittest (push) Successful in 27s
Run Helm tests / Execute helm template (push) Successful in 50s
Markdown linter / Execute npm run readme:link (push) Successful in 27s
Markdown linter / Execute npm run readme:parameters (push) Successful in 12s
Markdown linter / Execute npm run readme:lint (push) Successful in 36s
Release / publish-chart (push) Failing after 24s
Release / publish-release-notes (push) Has been skipped
2026-02-15 18:45:42 +01:00
c96824da7f fix(ci): adapt release workflow 2026-02-15 18:43:33 +01:00
5851fe7c4c fix(scripts): support pre-releases 2026-02-15 16:52:47 +01:00
5c39511d9a fix(deployment): adapt nodeSelector test 2025-12-18 20:11:38 +01:00
935b82ab0e fix(Makefile): add yamllint as dedicated target 2025-11-05 19:11:28 +01:00
1b22954570 fix(deployment): avoid duplicated nodeSelector #980 2025-11-05 19:11:28 +01:00
3da31782dd fix(Chart): add annotation 'artifacthub.io/links' 2025-10-12 12:15:58 +02:00
4d6db83c28 fix(ci): improve workflows (#959)
Some checks failed
Run Helm tests / Execute helm lint (push) Successful in 11s
Run Helm tests / Execute helm template (push) Failing after 11s
Run Helm tests / Execute helm unittest (push) Successful in 28s
Markdown linter / Execute npm run readme:link (push) Successful in 36s
Markdown linter / Execute npm run readme:lint (push) Successful in 8s
Markdown linter / Execute npm run readme:parameters (push) Successful in 27s
🤖 Split up helm chart workflows

The following patch adapts the CI workflows. The workflows have been split into
dedicated parts. For example, the `helm template` and `helm unittest` commands
are now separate steps, so it is possible to notice that a change affects the
template mechanism but not the unit tests. This was previously not possible,
because both commands were part of one step.

🤖 Changelog Issue

Additionally, the changelog workflow has been improved. The shell commands have
been migrated to a dedicated file named `.gitea/scripts/changelog.sh`. This has
the advantage that the shellcheck plugin of an IDE can support developers while
writing such shell scripts. Furthermore, the previously used container image
`docker.io/thegeeklab/git-sv:2.0.5` has been replaced by the ubuntu:latest
image of the act_runner. This makes using `curl` or `jq` more convenient,
because the complete set of features and flags is available. A final note on
the shell script `changelog.sh`: it can now be executed locally as well as on
ARM-based act_runners, which helps to test the helm chart in one's own Gitea
environment beforehand.
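The parsing that `changelog.sh` performs on commit subjects can be illustrated with a small POSIX-sh sketch (a simplification: the real script uses a bash regex and a larger type-to-kind mapping):

```shell
# Split a conventional-commit subject such as 'fix(ci): adapt release
# workflow' into its type and description. Simplified sketch only; the
# actual changelog.sh uses a bash regex for this.
parse_title() {
  line="$1"
  case "$line" in
    *": "*) ;;           # must look like 'type: description'
    *) return 1 ;;
  esac
  prefix="${line%%: *}"  # e.g. 'fix(ci)'
  desc="${line#*: }"     # e.g. 'adapt release workflow'
  type="${prefix%%(*}"   # drop the optional '(scope)' part
  printf '%s|%s\n' "$type" "$desc"
}

parse_title "fix(ci): adapt release workflow"
```

Subjects without a `type: ` prefix are rejected, mirroring how the script skips non-conventional commits.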

🤖 Markdown linter

In addition, a new workflow for markdown files has been introduced. It checks
the `README.md` file for broken links, ensures that it is properly formatted,
and verifies that the documented parameters match those in `values.yaml`. Here,
too, the commands have been split into separate jobs so that a failure can be
pinpointed more precisely.
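A link check of the kind this workflow performs boils down to extracting every URL from the README and probing it. The extraction half can be sketched in shell (illustrative only; the workflow actually runs `npm run readme:link`):

```shell
# Extract http(s) links from a markdown file the way a link checker
# would before probing each one. Illustration only, not the npm tooling.
tmp_readme="$(mktemp)"
cat > "${tmp_readme}" <<'EOF'
See [the charts repo](https://dl.gitea.com/charts) and
[localhost](http://localhost:3000) for details.
EOF

# -E: extended regex, -o: print only the matched URL
links="$(grep -Eo 'https?://[^) ]+' "${tmp_readme}")"
echo "${links}"
```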

⚠️ Warning

This patch also requires an adjustment to the branch protection settings: the
workflows that must succeed before a merge have to be redefined there.

Reviewed-on: https://gitea.com/gitea/helm-gitea/pulls/959
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Markus Pesch <markus.pesch@cryptic.systems>
Co-committed-by: Markus Pesch <markus.pesch@cryptic.systems>
2025-10-03 07:38:26 +00:00
72606192a6 refactor(structure): remove leading gitea directory (#958)
The following pull request removes the `gitea` directory. With regard to maintaining act_runners in a separate git repository or helm chart, this additional directory becomes redundant.

Reviewed-on: https://gitea.com/gitea/helm-gitea/pulls/958
Reviewed-by: DaanSelen <daanselen@noreply.gitea.com>
Co-authored-by: Markus Pesch <markus.pesch@cryptic.systems>
Co-committed-by: Markus Pesch <markus.pesch@cryptic.systems>
2025-10-02 11:36:47 +00:00
fb407618dc feat: support network policies (#952)
The following patch adds support for network policies.

The patch does not contain any specific network policies, as it is uncertain in which environment and with which access rights Gitea will be deployed.

With regard to third-party components such as PostgreSQL or Valkey, the network policy may need to be adjusted. Whether this happens directly in the helm chart or is left to the user is open to discussion.

During testing, I defined a few sample network policies to get Gitea up and running. These are only examples.

Reviewed-on: https://gitea.com/gitea/helm-gitea/pulls/952
Reviewed-by: DaanSelen <daanselen@noreply.gitea.com>
Co-authored-by: Markus Pesch <markus.pesch@cryptic.systems>
Co-committed-by: Markus Pesch <markus.pesch@cryptic.systems>
2025-09-22 07:05:21 +00:00
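Since the network-policy commit above explicitly ships no concrete policies, anything restricting traffic has to be supplied by the user. A minimal, hypothetical example of such a user-defined policy (names, labels, and port are assumptions, not chart content) could be generated like this:

```shell
# Write a minimal, hypothetical NetworkPolicy allowing ingress to
# Gitea's HTTP port only. Example material, not shipped with the chart.
policy_file="$(mktemp)"
cat > "${policy_file}" <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: gitea-allow-http
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: gitea
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - port: 3000
          protocol: TCP
EOF
cat "${policy_file}"
```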
82 changed files with 2338 additions and 738 deletions


@@ -1,61 +1,65 @@
 #!/bin/bash
-set -e
+set -e -o pipefail
-CHART_FILE="Chart.yaml"
-if [ ! -f "${CHART_FILE}" ]; then
-  echo "ERROR: ${CHART_FILE} not found!" 1>&2
+chart_file="Chart.yaml"
+if [ ! -f "${chart_file}" ]; then
+  echo "ERROR: ${chart_file} not found!" 1>&2
   exit 1
 fi
-DEFAULT_NEW_TAG="$(git tag --sort=-version:refname | head -n 1)"
-DEFAULT_OLD_TAG="$(git tag --sort=-version:refname | head -n 2 | tail -n 1)"
+default_new_tag="$(git tag --sort=-version:refname | head -n 1)"
+default_old_tag="$(git tag --sort=-version:refname | head -n 2 | tail -n 1)"
 if [ -z "${1}" ]; then
-  read -p "Enter start tag [${DEFAULT_OLD_TAG}]: " OLD_TAG
-  if [ -z "${OLD_TAG}" ]; then
-    OLD_TAG="${DEFAULT_OLD_TAG}"
+  echo "Enter start tag [${default_old_tag}]:"
+  read -r old_tag
+  if [ -z "${old_tag}" ]; then
+    old_tag="${default_old_tag}"
   fi
-  while [ -z "$(git tag --list "${OLD_TAG}")" ]; do
-    echo "ERROR: Tag '${OLD_TAG}' not found!" 1>&2
-    read -p "Enter start tag [${DEFAULT_OLD_TAG}]: " OLD_TAG
-    if [ -z "${OLD_TAG}" ]; then
-      OLD_TAG="${DEFAULT_OLD_TAG}"
+  while [ -z "$(git tag --list "${old_tag}")" ]; do
+    echo "ERROR: Tag '${old_tag}' not found!" 1>&2
+    echo "Enter start tag [${default_old_tag}]:"
+    read -r old_tag
+    if [ -z "${old_tag}" ]; then
+      old_tag="${default_old_tag}"
     fi
   done
 else
-  OLD_TAG=${1}
-  if [ -z "$(git tag --list "${OLD_TAG}")" ]; then
-    echo "ERROR: Tag '${OLD_TAG}' not found!" 1>&2
+  old_tag=${1}
+  if [ -z "$(git tag --list "${old_tag}")" ]; then
+    echo "ERROR: Tag '${old_tag}' not found!" 1>&2
     exit 1
   fi
 fi
 if [ -z "${2}" ]; then
-  read -p "Enter end tag [${DEFAULT_NEW_TAG}]: " NEW_TAG
-  if [ -z "${NEW_TAG}" ]; then
-    NEW_TAG="${DEFAULT_NEW_TAG}"
+  echo "Enter end tag [${default_new_tag}]:"
+  read -r new_tag
+  if [ -z "${new_tag}" ]; then
+    new_tag="${default_new_tag}"
   fi
-  while [ -z "$(git tag --list "${NEW_TAG}")" ]; do
-    echo "ERROR: Tag '${NEW_TAG}' not found!" 1>&2
-    read -p "Enter end tag [${DEFAULT_NEW_TAG}]: " NEW_TAG
-    if [ -z "${NEW_TAG}" ]; then
-      NEW_TAG="${DEFAULT_NEW_TAG}"
+  while [ -z "$(git tag --list "${new_tag}")" ]; do
+    echo "ERROR: Tag '${new_tag}' not found!" 1>&2
+    echo "Enter end tag [${default_new_tag}]:"
+    read -r new_tag
+    if [ -z "${new_tag}" ]; then
+      new_tag="${default_new_tag}"
    fi
   done
 else
-  NEW_TAG=${2}
+  new_tag=${2}
-  if [ -z "$(git tag --list "${NEW_TAG}")" ]; then
-    echo "ERROR: Tag '${NEW_TAG}' not found!" 1>&2
+  if [ -z "$(git tag --list "${new_tag}")" ]; then
+    echo "ERROR: Tag '${new_tag}' not found!" 1>&2
     exit 1
   fi
 fi
-CHANGE_LOG_YAML=$(mktemp)
-echo "[]" > "${CHANGE_LOG_YAML}"
+change_log_yaml=$(mktemp)
+echo "[]" > "${change_log_yaml}"
 function map_type_to_kind() {
   case "${1}" in
@@ -80,35 +84,42 @@ function map_type_to_kind() {
   esac
 }
-COMMIT_TITLES="$(git log --pretty=format:"%s" "${OLD_TAG}..${NEW_TAG}")"
+commit_titles="$(git log --pretty=format:"%s" "${old_tag}..${new_tag}")"
-echo "INFO: Generate change log entries from ${OLD_TAG} until ${NEW_TAG}"
+echo "INFO: Generate change log entries from ${old_tag} until ${new_tag}"
 while IFS= read -r line; do
   if [[ "${line}" =~ ^([a-zA-Z]+)(\([^\)]+\))?\:\ (.+)$ ]]; then
-    TYPE="${BASH_REMATCH[1]}"
-    KIND=$(map_type_to_kind "${TYPE}")
+    type="${BASH_REMATCH[1]}"
+    kind=$(map_type_to_kind "${type}")
-    if [ "${KIND}" == "skip" ]; then
+    if [ "${kind}" == "skip" ]; then
       continue
     fi
-    DESC="${BASH_REMATCH[3]}"
+    desc="${BASH_REMATCH[3]}"
-    echo "- ${KIND}: ${DESC}"
+    echo "- ${kind}: ${desc}"
-    jq --arg kind "${KIND}" --arg description "${DESC}" '. += [ $ARGS.named ]' < "${CHANGE_LOG_YAML}" > "${CHANGE_LOG_YAML}.new"
-    mv "${CHANGE_LOG_YAML}.new" "${CHANGE_LOG_YAML}"
+    jq --arg kind "${kind}" --arg description "${desc}" '. += [ $ARGS.named ]' < "${change_log_yaml}" > "${change_log_yaml}.new"
+    mv "${change_log_yaml}.new" "${change_log_yaml}"
   fi
-done <<< "${COMMIT_TITLES}"
+done <<< "${commit_titles}"
-if [ -s "${CHANGE_LOG_YAML}" ]; then
-  yq --inplace --input-format json --output-format yml "${CHANGE_LOG_YAML}"
-  yq --no-colors --inplace ".annotations.\"artifacthub.io/changes\" |= loadstr(\"${CHANGE_LOG_YAML}\") | sort_keys(.)" "${CHART_FILE}"
+if [ -s "${change_log_yaml}" ]; then
+  yq --inplace --input-format json --output-format yml "${change_log_yaml}"
+  yq --no-colors --inplace ".annotations.\"artifacthub.io/changes\" |= loadstr(\"${change_log_yaml}\") | sort_keys(.)" "${chart_file}"
 else
-  echo "ERROR: Changelog file is empty: ${CHANGE_LOG_YAML}" 1>&2
+  echo "ERROR: Changelog file is empty: ${change_log_yaml}" 1>&2
   exit 1
 fi
-rm "${CHANGE_LOG_YAML}"
+rm "${change_log_yaml}"
+regexp=".*-alpha-[0-9]+(\.[0-9]+){,2}$"
+if [[ "${new_tag}" =~ $regexp ]]; then
+  yq --inplace '.annotations."artifacthub.io/prerelease" = "true"' "${chart_file}"
+else
+  yq --inplace '.annotations."artifacthub.io/prerelease" = "false"' "${chart_file}"
+fi
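The pre-release detection appended to the script hinges on whether the tag carries an `-alpha-` suffix. The check can be exercised in isolation; note that `{0,2}` is written out explicitly here, because the `{,2}` shorthand used in the script is not accepted by every regex engine:

```shell
# Return success when a tag denotes a pre-release, mirroring the
# regexp check at the end of add-annotations.sh.
is_prerelease() {
  echo "$1" | grep -Eq -- '-alpha-[0-9]+(\.[0-9]+){0,2}$'
}
```

A tag such as `v12.0.0-alpha-1` would get `artifacthub.io/prerelease: "true"`; a plain `v12.0.0` would not.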


@@ -0,0 +1,86 @@
#!/bin/bash
DEFAULT_GITEA_SERVER_URL="${GITHUB_SERVER_URL:-"https://gitea.com"}"
DEFAULT_GITEA_REPOSITORY="${GITHUB_REPOSITORY:-"gitea/helm-gitea"}"
DEFAULT_GITEA_TOKEN="${ISSUE_RW_TOKEN:-""}"
if [ -z "${1}" ]; then
  read -p "Enter hostname of the Gitea instance [${DEFAULT_GITEA_SERVER_URL}]: " CURRENT_GITEA_SERVER_URL
  if [ -z "${CURRENT_GITEA_SERVER_URL}" ]; then
    CURRENT_GITEA_SERVER_URL="${DEFAULT_GITEA_SERVER_URL}"
  fi
else
  CURRENT_GITEA_SERVER_URL=$1
fi
if [ -z "${2}" ]; then
  read -p "Enter name of the git repository [${DEFAULT_GITEA_REPOSITORY}]: " CURRENT_GITEA_REPOSITORY
  if [ -z "${CURRENT_GITEA_REPOSITORY}" ]; then
    CURRENT_GITEA_REPOSITORY="${DEFAULT_GITEA_REPOSITORY}"
  fi
else
  CURRENT_GITEA_REPOSITORY=$2
fi
if [ -z "${3}" ]; then
  read -p "Enter token to access the Gitea instance [${DEFAULT_GITEA_TOKEN}]: " CURRENT_GITEA_TOKEN
  if [ -z "${CURRENT_GITEA_TOKEN}" ]; then
    CURRENT_GITEA_TOKEN="${DEFAULT_GITEA_TOKEN}"
  fi
else
  CURRENT_GITEA_TOKEN=$3
fi
if ! git sv rn -o /tmp/changelog.md; then
  echo "ERROR: Failed to generate /tmp/changelog.md" 1>&2
  exit 1
fi
CURL_ARGS=(
  "--data-urlencode" "q=Changelog for upcoming version"
  # "--data-urlencode=\"q=Changelog for upcoming version\""
  "--data-urlencode" "state=open"
  "--fail"
  "--header" "Accept: application/json"
  "--header" "Authorization: token ${CURRENT_GITEA_TOKEN}"
  "--request" "GET"
  "--silent"
)
if ! ISSUE_NUMBER="$(curl "${CURL_ARGS[@]}" "${CURRENT_GITEA_SERVER_URL}/api/v1/repos/${CURRENT_GITEA_REPOSITORY}/issues" | jq '.[].number')"; then
  echo "ERROR: Failed query issue number" 1>&2
  exit 1
fi
export ISSUE_NUMBER
if ! echo "" | jq --raw-input --slurp --arg title "Changelog for upcoming version" --arg body "$(cat /tmp/changelog.md)" '{title: $title, body: $body}' 1> /tmp/payload.json; then
  echo "ERROR: Failed to create JSON payload file" 1>&2
  exit 1
fi
CURL_ARGS=(
  "--data" "@/tmp/payload.json"
  "--fail"
  "--header" "Authorization: token ${CURRENT_GITEA_TOKEN}"
  "--header" "Content-Type: application/json"
  "--location"
  "--silent"
  "--output" "/dev/null"
)
if [ -z "${ISSUE_NUMBER}" ]; then
  if ! curl "${CURL_ARGS[@]}" --request POST "${CURRENT_GITEA_SERVER_URL}/api/v1/repos/${CURRENT_GITEA_REPOSITORY}/issues"; then
    echo "ERROR: Failed to create new issue!" 1>&2
    exit 1
  else
    echo "INFO: Successfully created new issue!"
  fi
else
  if ! curl "${CURL_ARGS[@]}" --request PATCH "${CURRENT_GITEA_SERVER_URL}/api/v1/repos/${CURRENT_GITEA_REPOSITORY}/issues/${ISSUE_NUMBER}"; then
    echo "ERROR: Failed to update issue with ID ${ISSUE_NUMBER}!" 1>&2
    exit 1
  else
    echo "INFO: Successfully updated existing issue with ID ${ISSUE_NUMBER}!"
    echo "INFO: ${CURRENT_GITEA_SERVER_URL}/${CURRENT_GITEA_REPOSITORY}/issues/${ISSUE_NUMBER}"
  fi
fi
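The create-or-update branching at the end of `update-changelog.sh` reduces to a small decision: POST a new issue when no open changelog issue exists, otherwise PATCH the one found. A sketch of that decision (the real script wraps it in `curl` calls):

```shell
# Decide which HTTP request the script would issue, based on whether an
# open 'Changelog for upcoming version' issue was found.
issue_request() {
  issue_number="$1"
  if [ -z "${issue_number}" ]; then
    echo "POST /api/v1/repos/gitea/helm-gitea/issues"
  else
    echo "PATCH /api/v1/repos/gitea/helm-gitea/issues/${issue_number}"
  fi
}
```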


@@ -1,32 +0,0 @@
name: changelog
on:
  push:
    branches:
      - main
jobs:
  changelog:
    runs-on: ubuntu-latest
    container: docker.io/thegeeklab/git-sv:2.0.9
    steps:
      - name: install tools
        run: |
          apk add -q --update --no-cache nodejs curl jq sed
      - uses: actions/checkout@v6
        with:
          fetch-depth: 0
      - name: Generate upcoming changelog
        run: |
          git sv rn -o changelog.md
          export RELEASE_NOTES=$(cat changelog.md)
          export ISSUE_NUMBER=$(curl -s "https://gitea.com/api/v1/repos/gitea/helm-gitea/issues?state=open&q=Changelog%20for%20upcoming%20version" | jq '.[].number')
          echo $RELEASE_NOTES
          JSON_DATA=$(echo "" | jq -Rs --arg title 'Changelog for upcoming version' --arg body "$(cat changelog.md)" '{title: $title, body: $body}')
          if [ -z "$ISSUE_NUMBER" ]; then
            curl -s -X POST "https://gitea.com/api/v1/repos/gitea/helm-gitea/issues" -H "Authorization: token ${{ secrets.ISSUE_RW_TOKEN }}" -H "Content-Type: application/json" -d "$JSON_DATA"
          else
            curl -s -X PATCH "https://gitea.com/api/v1/repos/gitea/helm-gitea/issues/$ISSUE_NUMBER" -H "Authorization: token ${{ secrets.ISSUE_RW_TOKEN }}" -H "Content-Type: application/json" -d "$JSON_DATA"
          fi


@@ -1,19 +1,17 @@
-name: commitlint
+name: Run commitlint
 on:
   pull_request:
-    branches:
-      - "*"
-    types:
-      - opened
-      - edited
+    branches: [ '**' ]
+    types: [ "opened", "edited" ]
 jobs:
   check-and-test:
-    container: docker.io/commitlint/commitlint:19.9.1
+    name: Execute commitlint
     runs-on: ubuntu-latest
+    container: commitlint/commitlint:20.4.0
     steps:
-      - uses: actions/checkout@v6
-      - name: check PR title
+      - uses: actions/checkout@v5.0.0
+      - name: Check PR title
         run: |
           echo "${{ gitea.event.pull_request.title }}" | commitlint --config .commitlintrc.json
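The shape that commitlint enforces on the PR title can be approximated with a single extended regex (an approximation only; the binding rules live in `.commitlintrc.json`):

```shell
# Approximate conventional-commit validation of a PR title. commitlint
# applies the project's real rules; this only checks the basic
# 'type(scope)?: description' shape.
valid_title() {
  echo "$1" | grep -Eq '^[a-z]+(\([^)]+\))?: .+'
}
```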

.gitea/workflows/helm.yml (new file)

@@ -0,0 +1,75 @@
name: Run Helm tests
on:
  pull_request:
    branches: [ '**' ]
  push:
    branches: [ '**' ]
    tags-ignore: [ '**' ]
  workflow_call: {}
env:
  # renovate: datasource=github-releases depName=helm-unittest/helm-unittest
  HELM_UNITTEST_VERSION: "v1.0.1"
jobs:
  helm-lint:
    container: docker.io/alpine/helm:3.18.6
    name: Execute helm lint
    runs-on: ubuntu-latest
    steps:
      - name: Install additional tools
        run: |
          apk update
          apk add --update bash make nodejs
      - uses: actions/checkout@v5.0.0
      - name: Install helm chart dependencies
        run: helm dependency build
      - name: Execute helm lint
        run: helm lint
  helm-template:
    container: docker.io/alpine/helm:3.18.6
    name: Execute helm template
    runs-on: ubuntu-latest
    steps:
      - name: Install additional tools
        run: |
          apk update
          apk add --update bash make nodejs
      - uses: actions/checkout@v5.0.0
      - name: Install helm chart dependencies
        run: helm dependency build
      - name: Execute helm template
        run: helm template --debug gitea-helm .
  helm-unittest:
    container: docker.io/alpine/helm:3.18.6
    name: Execute helm unittest
    runs-on: ubuntu-latest
    steps:
      - name: Install additional tools
        run: |
          apk update
          apk add --update bash make nodejs npm yamllint ncurses
      - uses: actions/checkout@v5.0.0
      - name: Install helm chart dependencies
        run: helm dependency build
      - name: Install helm plugin 'unittest'
        run: |
          helm plugin install --version ${{ env.HELM_UNITTEST_VERSION }} https://github.com/helm-unittest/helm-unittest
          git submodule update --init --recursive
      - name: Execute helm unittest
        env:
          TERM: xterm
        run: make unittests
      # - name: verify readme
      #   run: |
      #     make readme
      #     git diff --exit-code --name-only README.md
      # - name: yaml lint
      #   uses: https://github.com/ibiqlik/action-yamllint@v3


@@ -0,0 +1,52 @@
name: Markdown linter
on:
  pull_request:
    types: [ "opened", "reopened", "synchronize" ]
  push:
    branches: [ '**' ]
    tags-ignore: [ '**' ]
  workflow_dispatch: {}
jobs:
  readme-link:
    container:
      image: docker.io/library/node:24.9.0-alpine
    name: Execute npm run readme:link
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5.0.0
      - name: Execute npm run readme:link
        run: |
          npm install
          npm run readme:link
  readme-lint:
    container:
      image: docker.io/library/node:24.9.0-alpine
    name: Execute npm run readme:lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5.0.0
      - name: Execute npm run readme:lint
        run: |
          npm install
          npm run readme:lint
  readme-parameters:
    container:
      image: docker.io/library/node:24.9.0-alpine
    name: Execute npm run readme:parameters
    runs-on: ubuntu-latest
    steps:
      - name: Install tooling
        run: |
          apk update
          apk add git
      - uses: actions/checkout@v5.0.0
      - name: Execute npm run readme:parameters
        run: |
          npm install
          npm run readme:parameters
      - name: Compare diff
        run: git diff --exit-code --name-only README.md


@@ -1,110 +1,176 @@
name: generate-chart
name: Release
env:
GPG_PRIVATE_KEY_FILE: ${{ runner.temp }}/private.key
GPG_PRIVATE_KEY_FINGERPRINT: ${{ vars.GPG_PRIVATE_KEY_FINGERPRINT }}
GPG_PRIVATE_KEY_PASSPHRASE_FILE: ${{ runner.temp }}/passphrase.txt
on:
push:
tags:
- "*"
tags: [ '**' ]
jobs:
generate-chart-publish:
publish-chart:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: azure/setup-helm@v4.3.1
with:
version: "v4.0.1" # renovate: datasource=github-tags depName=helm/helm
- name: Install helm plugins
env:
HELM_SIGSTORE_VERSION: "0.3.0" # renovate: datasource=github-tags depName=sigstore/helm-sigstore extractVersion='^v(?<version>\d+\.\d+\.\d+)$'
HELM_SCHEMA_VALUES_VERSION: "2.3.1" # renovate: datasource=github-tags depName=losisin/helm-values-schema-json extractVersion='^v(?<version>\d+\.\d+\.\d+)$'
HELM_UNITTEST_VERSION: "1.0.3" # renovate: datasource=github-tags depName=helm-unittest/helm-unittest extractVersion='^v(?<version>\d+\.\d+\.\d+)$'
run: |
helm plugin install --verify=false https://github.com/sigstore/helm-sigstore.git --version "${HELM_SIGSTORE_VERSION}" 1> /dev/null
helm plugin install --verify=false https://github.com/losisin/helm-values-schema-json.git --version "${HELM_SCHEMA_VALUES_VERSION}" 1> /dev/null
helm plugin install --verify=false https://github.com/helm-unittest/helm-unittest.git --version "${HELM_UNITTEST_VERSION}" 1> /dev/null
helm plugin list
- name: GPG configuration
env:
GPG_PRIVATE_KEY_PASSPHRASE: ${{ secrets.GPGSIGN_PASSPHRASE }}
GPG_PRIVATE_KEY: ${{ secrets.GPGSIGN_KEY }}
run: |
# Configure GPG and GPG Agent
mkdir --parents "${HOME}/.gnupg"
chmod 0700 "${HOME}/.gnupg"
cat > "${HOME}/.gnupg/gpg.conf" <<EOF
use-agent
pinentry-mode loopback
EOF
cat > "${HOME}/.gnupg/gpg-agent.conf" <<EOF
allow-loopback-pinentry
max-cache-ttl 86400
default-cache-ttl 86400
EOF
gpgconf --kill gpg-agent
gpgconf --launch gpg-agent
# Import GPG private key
cat 1> "${GPG_PRIVATE_KEY_PASSPHRASE_FILE}" <<< "${GPG_PRIVATE_KEY_PASSPHRASE}"
cat 1> "${GPG_PRIVATE_KEY_FILE}" <<< "${GPG_PRIVATE_KEY}"
gpg --batch --yes --passphrase-fd 0 --import "${GPG_PRIVATE_KEY_FILE}" <<< "${GPG_PRIVATE_KEY_PASSPHRASE}"
# Export GPG keyring
gpg --batch --yes --export "${GPG_PRIVATE_KEY_FINGERPRINT}" 1> "${HOME}/.gnupg/pubring.gpg"
gpg --batch --yes --passphrase-fd 0 --export-secret-keys "${GPG_PRIVATE_KEY_FINGERPRINT}" 1> "${HOME}/.gnupg/secring.gpg" <<< "${GPG_PRIVATE_KEY_PASSPHRASE}"
- uses: actions/checkout@v6.0.2
with:
fetch-depth: 0
- name: Install packages via apt
run: |
apt update --yes
apt install --yes curl ca-certificates curl gnupg jq
- name: Install helm
env:
# renovate: datasource=docker depName=alpine/helm
HELM_VERSION: "4.1.0"
run: |
curl --fail --location --output /dev/stdout --silent --show-error https://get.helm.sh/helm-v${HELM_VERSION}-linux-$(dpkg --print-architecture).tar.gz | tar --extract --gzip --file /dev/stdin
mv linux-$(dpkg --print-architecture)/helm /usr/local/bin/
rm --force --recursive linux-$(dpkg --print-architecture) helm-v${HELM_VERSION}-linux-$(dpkg --print-architecture).tar.gz
helm version
- name: Install yq
env:
YQ_VERSION: v4.45.4 # renovate: datasource=github-releases depName=mikefarah/yq
run: |
curl --fail --location --output /dev/stdout --silent --show-error https://github.com/mikefarah/yq/releases/download/${YQ_VERSION}/yq_linux_$(dpkg --print-architecture).tar.gz | tar --extract --gzip --file /dev/stdin
mv yq_linux_$(dpkg --print-architecture) /usr/local/bin
rm --force --recursive yq_linux_$(dpkg --print-architecture) yq_linux_$(dpkg --print-architecture).tar.gz
yq --version
- name: Install docker-ce via apt
run: |
install -m 0755 -d /etc/apt/keyrings
curl --fail --location --silent --show-error https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update --yes
apt install --yes python3 python3-pip apt-transport-https docker-ce-cli
- name: Install awscli
run: |
pip install awscli --break-system-packages
aws --version
- name: Import GPG key
id: import_gpg
uses: https://github.com/crazy-max/ghaction-import-gpg@v6
with:
gpg_private_key: ${{ secrets.GPGSIGN_KEY }}
passphrase: ${{ secrets.GPGSIGN_PASSPHRASE }}
fingerprint: CC64B1DB67ABBEECAB24B6455FC346329753F4B0
- name: Add Artifacthub.io annotations
run: |
NEW_TAG="$(git tag --sort=-version:refname | head --lines 1)"
OLD_TAG="$(git tag --sort=-version:refname | head --lines 2 | tail --lines 1)"
.gitea/scripts/add-annotations.sh "${OLD_TAG}" "${NEW_TAG}"
- name: Print Chart.yaml
run: cat Chart.yaml
# Using helm gpg plugin as 'helm package --sign' has issues with gpg2: https://github.com/helm/helm/issues/2843
- name: package chart
- name: Extract meta information
run: |
echo "GITEA_SERVER_HOSTNAME=$(echo "${GITHUB_SERVER_URL}" | cut --delimiter '/' --fields 3)" >> $GITHUB_ENV
echo "PACKAGE_VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_ENV
echo "REPOSITORY_NAME=$(echo ${GITHUB_REPOSITORY} | cut --delimiter '/' --fields 2)" >> $GITHUB_ENV
echo "REPOSITORY_OWNER=$(echo ${GITHUB_REPOSITORY} | cut --delimiter '/' --fields 1)" >> $GITHUB_ENV
- name: Package chart
run: |
echo ${{ secrets.DOCKER_CHARTS_PASSWORD }} | docker login -u ${{ secrets.DOCKER_CHARTS_USERNAME }} --password-stdin
# FIXME: use upstream after https://github.com/technosophos/helm-gpg/issues/1 is solved
helm plugin install https://github.com/pat-s/helm-gpg
helm dependency build
helm package --version "${GITHUB_REF#refs/tags/v}" ./
mkdir gitea
mv gitea*.tgz gitea/
curl --fail --location --output gitea/index.yaml --silent --show-error https://dl.gitea.com/charts/index.yaml
helm repo index gitea/ --url https://dl.gitea.com/charts --merge gitea/index.yaml
# push to dockerhub
echo ${{ secrets.DOCKER_CHARTS_PASSWORD }} | helm registry login -u ${{ secrets.DOCKER_CHARTS_USERNAME }} registry-1.docker.io --password-stdin
helm push gitea/gitea-${GITHUB_REF#refs/tags/v}.tgz oci://registry-1.docker.io/giteacharts
helm registry logout registry-1.docker.io
helm package \
--sign \
--key "$(gpg --with-colons --list-keys "${GPG_PRIVATE_KEY_FINGERPRINT}" | grep uid | cut --delimiter ':' --fields 10)" \
--keyring "${HOME}/.gnupg/secring.gpg" \
--passphrase-file "${GPG_PRIVATE_KEY_PASSPHRASE_FILE}" \
--version "${PACKAGE_VERSION}" ./
- name: aws credential configure
uses: https://github.com/aws-actions/configure-aws-credentials@v5
- uses: docker/login-action@v3.7.0
with:
aws-access-key-id: ${{ secrets.AWS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
username: ${{ secrets.DOCKER_IO_USERNAME }}
password: ${{ secrets.DOCKER_IO_PASSWORD }}
- name: Copy files to S3 and clear cache
- name: Upload package as OCI artifact to docker.io
env:
DOCKER_IO_REPO_NAME: ${{ vars.DOCKER_IO_REPO_NAME }}
run: |
aws s3 sync gitea/ s3://${{ secrets.AWS_S3_BUCKET}}/charts/
helm push *-${PACKAGE_VERSION}.tgz "oci://registry-1.docker.io/${DOCKER_IO_REPO_NAME}"
release-gitea:
needs: generate-chart-publish
- uses: docker/login-action@v3.7.0
with:
registry: ${{ github.server_url }}
username: ${{ secrets.GT_PACKAGE_REGISTRY_USERNAME }}
password: ${{ secrets.GT_PACKAGE_REGISTRY_TOKEN }}
- name: Upload package as OCI artifact to Gitea
run: |
helm push *-${PACKAGE_VERSION}.tgz "oci://${GITEA_SERVER_HOSTNAME}/${REPOSITORY_OWNER}/${REPOSITORY_NAME}"
- name: Upload package as Helm chart to Gitea
env:
GITEA_REGISTRY_TOKEN: ${{ secrets.GT_PACKAGE_REGISTRY_TOKEN }}
run: |
for package in *"${PACKAGE_VERSION}.tgz"*; do
echo "Uploading ${package}..."
curl \
--fail \
--request POST \
--show-error \
--silent \
--upload-file "${package}" \
--user "${REPOSITORY_OWNER}:${GITEA_REGISTRY_TOKEN}" \
https://${GITEA_SERVER_HOSTNAME}/api/packages/${REPOSITORY_OWNER}/helm/api/charts
done
# - name: Build new index.yaml
# run: |
# mkdir gitea
# curl \
# --fail \
# --header \
# --location \
# --output gitea/index.yaml \
# --show-error \
# --silent \
# https://dl.gitea.com/charts/index.yaml
# helm repo index \
# --merge gitea/index.yaml \
# --url https://dl.gitea.com/charts \
# gitea/
# - uses: aws-actions/configure-aws-credentials@v6.0.0
# with:
# aws-access-key-id: ${{ secrets.AWS_KEY_ID }}
# aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
# aws-region: ${{ secrets.AWS_REGION }}
# - name: Upload package as Helm chart to AWS S3
# run: |
# aws s3 sync gitea/ s3://${{ secrets.AWS_S3_BUCKET }}/charts/
publish-release-notes:
needs: publish-chart
runs-on: ubuntu-latest
container: docker.io/thegeeklab/git-sv:2.0.9
steps:
- name: install tools
- name: Install gitsv
env:
GITSV_VERSION: v2.0.9 # renovate: datasource=github-releases depName=thegeeklab/git-sv
run: |
apk add -q --update --no-cache nodejs
- uses: actions/checkout@v6
curl \
--fail \
--location \
--output git-sv \
--output-dir /usr/local/bin \
--silent \
--show-error \
https://github.com/thegeeklab/git-sv/releases/download/${GITSV_VERSION}/git-sv-linux-$(dpkg --print-architecture)
git-sv --version
- uses: actions/checkout@v6.0.0
with:
fetch-tags: true
fetch-depth: 0
@@ -112,12 +178,12 @@ jobs:
- name: Create changelog
run: |
git sv current-version
git sv release-notes -t ${GITHUB_REF#refs/tags/} -o CHANGELOG.md
sed -i '1,2d' CHANGELOG.md # remove version
git sv release-notes -t "${PACKAGE_VERSION}" -o CHANGELOG.md
sed -i '1,2d' CHANGELOG.md
cat CHANGELOG.md
- name: Release
uses: https://github.com/akkuman/gitea-release-action@v1
uses: akkuman/gitea-release-action@v1.3.5
with:
body_path: CHANGELOG.md
token: "${{ secrets.RELEASE_TOKEN }}"
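The "Extract meta information" step of the release workflow relies only on shell parameter expansion and GNU `cut`. With sample values standing in for the CI-provided variables, the individual transformations can be exercised standalone:

```shell
# Reproduce the meta extraction from the release workflow, using sample
# values instead of the real CI environment variables.
GITHUB_SERVER_URL="https://gitea.com"
GITHUB_REPOSITORY="gitea/helm-gitea"
GITHUB_REF="refs/tags/v12.0.0"

GITEA_SERVER_HOSTNAME="$(echo "${GITHUB_SERVER_URL}" | cut --delimiter '/' --fields 3)"
PACKAGE_VERSION="${GITHUB_REF#refs/tags/}"  # strip the ref prefix
REPOSITORY_NAME="$(echo "${GITHUB_REPOSITORY}" | cut --delimiter '/' --fields 2)"
REPOSITORY_OWNER="$(echo "${GITHUB_REPOSITORY}" | cut --delimiter '/' --fields 1)"
echo "${GITEA_SERVER_HOSTNAME} ${PACKAGE_VERSION} ${REPOSITORY_OWNER}/${REPOSITORY_NAME}"
```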


@@ -1,45 +0,0 @@
name: check-and-test
on:
  pull_request:
    branches:
      - "*"
  push:
    branches:
      - main
env:
  # renovate: datasource=github-releases depName=helm-unittest/helm-unittest
  HELM_UNITTEST_VERSION: "v1.0.3"
jobs:
  check-and-test:
    runs-on: ubuntu-latest
    container: alpine/helm:4.1.0
    steps:
      - name: install tools
        run: |
          apk update
          apk add --update bash make nodejs npm yamllint ncurses
      - uses: actions/checkout@v6
      - name: install chart dependencies
        run: helm dependency build
      - name: lint
        run: helm lint
      - name: template
        run: helm template --debug gitea-helm .
      - name: prepare unit test environment
        run: |
          helm plugin install --version ${{ env.HELM_UNITTEST_VERSION }} https://github.com/helm-unittest/helm-unittest
          git submodule update --init --recursive
      - name: unit tests
        env:
          TERM: xterm
        run: |
          make unittests
      - name: verify readme
        run: |
          make readme
          git diff --exit-code --name-only README.md
      - name: yaml lint
        uses: https://github.com/ibiqlik/action-yamllint@v3


@@ -0,0 +1,29 @@
name: Update changelog
on:
  push:
    branches: [ "main" ]
  workflow_dispatch: {}
jobs:
  changelog:
    runs-on: ubuntu-latest
    steps:
      - name: Install packages via apt-get
        run: |
          apt-get update &&
          apt-get install --yes curl jq
      - uses: actions/checkout@v5.0.0
        with:
          fetch-depth: 0
      - name: Install git-sv
        env:
          GIT_SV_VERSION: v2.0.4 # renovate: datasource=github-releases depName=thegeeklab/git-sv
        run: |
          curl --fail --location --output /usr/local/bin/git-sv --silent --show-error https://github.com/thegeeklab/git-sv/releases/download/${GIT_SV_VERSION}/git-sv-linux-$(dpkg --print-architecture)
          chmod +x /usr/local/bin/git-sv
          git-sv --version
      - name: Update changelog issue
        env:
          ISSUE_RW_TOKEN: ${{ secrets.ISSUE_RW_TOKEN }}
        run: .gitea/scripts/update-changelog.sh

.markdownlink.json (new file)

@@ -0,0 +1,8 @@
{
  "projectBaseUrl": "${workspaceFolder}",
  "ignorePatterns": [
    {
      "pattern": "^http://localhost"
    }
  ]
}


@@ -1,6 +1,6 @@
 {
   "yaml.schemas": {
-    "https://raw.githubusercontent.com/helm-unittest/helm-unittest/v1.0.3/schema/helm-testsuite.json": [
+    "https://raw.githubusercontent.com/helm-unittest/helm-unittest/v1.0.1/schema/helm-testsuite.json": [
       "/unittests/**/*.yaml"
     ]
   },


@@ -44,8 +44,7 @@ be used:
`helm install --dependency-update gitea . -f values.yaml`.
1. Gitea is now deployed in `minikube`.
To access it, its port needs to be forwarded from `minikube` to localhost via `kubectl --namespace
default port-forward svc/gitea-http 3000:3000`.
Now Gitea is accessible at [http://localhost:3000](http://localhost:3000).
default port-forward svc/gitea-http 3000:3000`. Now Gitea is accessible at [http://localhost:3000](http://localhost:3000).
### Unit tests


@@ -4,7 +4,7 @@ description: Gitea Helm chart for Kubernetes
type: application
version: 0.0.0
# renovate datasource=github-releases depName=go-gitea/gitea extractVersion=^v(?<version>.*)$
appVersion: 1.25.4
appVersion: 1.24.6
icon: https://gitea.com/assets/img/logo.svg
annotations:

README.md

@@ -17,7 +17,7 @@
- [Rootless Defaults](#rootless-defaults)
- [Session, Cache and Queue](#session-cache-and-queue)
- [Single-Pod Configurations](#single-pod-configurations)
- [Additional _app.ini_ settings](#additional-appini-settings)
- [Additional app.ini settings](#additional-appini-settings)
- [User defined environment variables in app.ini](#user-defined-environment-variables-in-appini)
- [External Database](#external-database)
- [Ports and external url](#ports-and-external-url)
@@ -72,7 +72,7 @@ Additionally, this chart allows to provide LDAP and admin user configuration wit
## Update and versioning policy
The Gitea helm chart versioning does not follow Gitea's versioning.
The latest chart version can be looked up in [https://dl.gitea.com/charts](https://dl.gitea.com/charts) or in the [repository releases](https://gitea.com/gitea/helm-gitea/releases).
The latest chart version can be looked up in [https://dl.gitea.com/charts/](https://dl.gitea.com/charts/) or in the [repository releases](https://gitea.com/gitea/helm-gitea/releases).
The chart aims to follow Gitea's releases closely.
There might be times when the chart is behind the latest Gitea release.
@@ -266,7 +266,7 @@ If `.Values.image.rootless: true`, then the following will occur. In case you us
- `$HOME` becomes `/data/gitea/git`
[see deployment.yaml](./templates/gitea/deployment.yaml) template inside (init-)container "env" declarations
[see deployment.yaml](./templates/deployment.yaml) template inside (init-)container "env" declarations
- `START_SSH_SERVER: true` (Unless explicitly overwritten by `gitea.config.server.START_SSH_SERVER`)
@@ -278,7 +278,7 @@ If `.Values.image.rootless: true`, then the following will occur. In case you us
- `SSH_LOG_LEVEL` environment variable is not injected into the container
[see deployment.yaml](./templates/gitea/deployment.yaml) template inside container "env" declarations
[see deployment.yaml](./templates/deployment.yaml) template inside container "env" declarations
#### Session, Cache and Queue
@@ -360,7 +360,7 @@ If HA is not needed/desired, the following configurations can be used to deploy
</details>
### Additional _app.ini_ settings
### Additional app.ini settings
> **The [generic](https://docs.gitea.com/administration/config-cheat-sheet#overall-default)
> section cannot be defined that way.**
@@ -1158,89 +1158,68 @@ To comply with the Gitea helm chart definition of the digest parameter, a "custo
| `gitea.startupProbe.successThreshold` | Success threshold for startup probe | `1` |
| `gitea.startupProbe.failureThreshold` | Failure threshold for startup probe | `10` |
### Network Policy
| Name | Description | Value |
| --------------------------- | ------------------------------------------------------------------------- | ------- |
| `networkPolicy.enabled` | Enable network policies in general. | `false` |
| `networkPolicy.annotations` | Additional network policy annotations. | `{}` |
| `networkPolicy.labels` | Additional network policy labels. | `{}` |
| `networkPolicy.policyTypes` | List of policy types. Supported is ingress, egress or ingress and egress. | `[]` |
| `networkPolicy.egress` | Concrete egress network policy implementation. | `[]` |
| `networkPolicy.ingress` | Concrete ingress network policy implementation. | `[]` |
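The `policyTypes`, `egress`, and `ingress` parameters above are pass-through values rendered verbatim into the generated NetworkPolicy. As a minimal sketch, the following values enable a policy that only permits DNS egress; the namespace selector and port are illustrative assumptions, not chart defaults:

```yaml
networkPolicy:
  enabled: true
  policyTypes:
    - Egress
  egress:
    # illustrative rule: allow DNS lookups to any namespace
    - to:
        - namespaceSelector: {}
      ports:
        - protocol: UDP
          port: 53
```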
### valkey-cluster
Valkey cluster and [Valkey](#valkey) cannot be enabled at the same time.
| Name | Description | Value |
| --------------------------------------------------- | --------------------------------------------------------------------------- | ------------------------------ |
| `valkey-cluster.enabled` | Enable valkey cluster | `true` |
| `valkey-cluster.usePassword` | Whether to use password authentication. | `false` |
| `valkey-cluster.usePasswordFiles` | Whether to mount passwords as files instead of environment variables. | `false` |
| `valkey-cluster.image.repository` | Image repository, eg. `bitnamilegacy/valkey-cluster`. | `bitnamilegacy/valkey-cluster` |
| `valkey-cluster.cluster.nodes` | Number of valkey cluster master nodes | `3` |
| `valkey-cluster.cluster.replicas` | Number of valkey cluster master node replicas | `0` |
| `valkey-cluster.metrics.image.repository` | Image repository, eg. `bitnamilegacy/redis-exporter`. | `bitnamilegacy/redis-exporter` |
| `valkey-cluster.persistence.enabled` | Enable persistence on Valkey replicas nodes using Persistent Volume Claims. | `true` |
| `valkey-cluster.persistence.storageClass` | Persistent Volume storage class. | `""` |
| `valkey-cluster.persistence.size` | Persistent Volume size. | `8Gi` |
| `valkey-cluster.service.ports.valkey` | Port of Valkey service | `6379` |
| `valkey-cluster.sysctlImage.repository` | Image repository, eg. `bitnamilegacy/os-shell`. | `bitnamilegacy/os-shell` |
| `valkey-cluster.volumePermissions.image.repository` | Image repository, eg. `bitnamilegacy/os-shell`. | `bitnamilegacy/os-shell` |
| Name | Description | Value |
| ------------------------------------- | -------------------------------------------------------------------- | ------- |
| `valkey-cluster.enabled` | Enable valkey cluster | `true` |
| `valkey-cluster.usePassword` | Whether to use password authentication | `false` |
| `valkey-cluster.usePasswordFiles` | Whether to mount passwords as files instead of environment variables | `false` |
| `valkey-cluster.cluster.nodes` | Number of valkey cluster master nodes | `3` |
| `valkey-cluster.cluster.replicas` | Number of valkey cluster master node replicas | `0` |
| `valkey-cluster.service.ports.valkey` | Port of Valkey service | `6379` |
### valkey
Valkey and [Valkey cluster](#valkey-cluster) cannot be enabled at the same time.
| Name | Description | Value |
| ------------------------------------------- | --------------------------------------------------------------------------- | ------------------------------- |
| `valkey.enabled` | Enable valkey standalone or replicated | `false` |
| `valkey.architecture` | Whether to use standalone or replication | `standalone` |
| `valkey.kubectl.image.repository` | Image repository, eg. `bitnamilegacy/kubectl`. | `bitnamilegacy/kubectl` |
| `valkey.image.repository` | Image repository, eg. `bitnamilegacy/valkey`. | `bitnamilegacy/valkey` |
| `valkey.global.valkey.password` | Required password | `changeme` |
| `valkey.master.count` | Number of Valkey master instances to deploy | `1` |
| `valkey.master.service.ports.valkey` | Port of Valkey service | `6379` |
| `valkey.metrics.image.repository` | Image repository, eg. `bitnamilegacy/redis-exporter`. | `bitnamilegacy/redis-exporter` |
| `valkey.primary.persistence.enabled` | Enable persistence on Valkey replicas nodes using Persistent Volume Claims. | `true` |
| `valkey.primary.persistence.storageClass` | Persistent Volume storage class. | `""` |
| `valkey.primary.persistence.size` | Persistent Volume size. | `8Gi` |
| `valkey.replica.persistence.enabled` | Enable persistence on Valkey replicas nodes using Persistent Volume Claims. | `true` |
| `valkey.replica.persistence.storageClass` | Persistent Volume storage class. | `""` |
| `valkey.replica.persistence.size` | Persistent Volume size. | `8Gi` |
| `valkey.sentinel.image.repository` | Image repository, eg. `bitnamilegacy/sentinel`. | `bitnamilegacy/valkey-sentinel` |
| `valkey.volumePermissions.image.repository` | Image repository, eg. `bitnamilegacy/os-shell`. | `bitnamilegacy/os-shell` |
| Name | Description | Value |
| ------------------------------------ | ------------------------------------------- | ------------ |
| `valkey.enabled` | Enable valkey standalone or replicated | `false` |
| `valkey.architecture` | Whether to use standalone or replication | `standalone` |
| `valkey.global.valkey.password` | Required password | `changeme` |
| `valkey.master.count` | Number of Valkey master instances to deploy | `1` |
| `valkey.master.service.ports.valkey` | Port of Valkey service | `6379` |
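Because Valkey and Valkey cluster are mutually exclusive and the cluster variant is enabled by default, switching to standalone Valkey requires disabling the cluster explicitly. A minimal values sketch (the password is a placeholder, not a recommended value):

```yaml
valkey-cluster:
  enabled: false
valkey:
  enabled: true
  architecture: standalone
  global:
    valkey:
      password: "change-me"  # placeholder; supply a real secret
  master:
    count: 1
```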
### PostgreSQL HA
| Name | Description | Value |
| -------------------------------------------------- | ---------------------------------------------------------------- | --------------------------------- |
| `postgresql-ha.enabled` | Enable PostgreSQL HA | `true` |
| `postgresql-ha.global.postgresql.database` | Name for a custom database to create (overrides `auth.database`) | `gitea` |
| `postgresql-ha.global.postgresql.username` | Name for a custom user to create (overrides `auth.username`) | `gitea` |
| `postgresql-ha.global.postgresql.password` | Name for a custom password to create (overrides `auth.password`) | `gitea` |
| `postgresql-ha.metrics.image.repository` | Image repository, eg. `bitnamilegacy/postgres-exporter`. | `bitnamilegacy/postgres-exporter` |
| `postgresql-ha.postgresql.image.repository` | Image repository, eg. `bitnamilegacy/postgresql-repmgr`. | `bitnamilegacy/postgresql-repmgr` |
| `postgresql-ha.postgresql.repmgrPassword` | Repmgr Password | `changeme2` |
| `postgresql-ha.postgresql.postgresPassword` | postgres Password | `changeme1` |
| `postgresql-ha.postgresql.password` | Password for the `gitea` user (overrides `auth.password`) | `changeme4` |
| `postgresql-ha.pgpool.adminPassword` | pgpool adminPassword | `changeme3` |
| `postgresql-ha.pgpool.image.repository` | Image repository, eg. `bitnamilegacy/pgpool`. | `bitnamilegacy/pgpool` |
| `postgresql-ha.pgpool.srCheckPassword` | pgpool srCheckPassword | `changeme4` |
| `postgresql-ha.service.ports.postgresql` | PostgreSQL service port (overrides `service.ports.postgresql`) | `5432` |
| `postgresql-ha.persistence.enabled` | Enable persistence. | `true` |
| `postgresql-ha.persistence.storageClass` | Persistent Volume Storage Class. | `""` |
| `postgresql-ha.persistence.size` | PVC Storage Request for PostgreSQL HA volume | `10Gi` |
| `postgresql-ha.volumePermissions.image.repository` | Image repository, eg. `bitnamilegacy/os-shell`. | `bitnamilegacy/os-shell` |
| Name | Description | Value |
| ------------------------------------------- | ---------------------------------------------------------------- | ----------- |
| `postgresql-ha.enabled` | Enable PostgreSQL HA | `true` |
| `postgresql-ha.postgresql.password` | Password for the `gitea` user (overrides `auth.password`) | `changeme4` |
| `postgresql-ha.global.postgresql.database` | Name for a custom database to create (overrides `auth.database`) | `gitea` |
| `postgresql-ha.global.postgresql.username` | Name for a custom user to create (overrides `auth.username`) | `gitea` |
| `postgresql-ha.global.postgresql.password` | Name for a custom password to create (overrides `auth.password`) | `gitea` |
| `postgresql-ha.postgresql.repmgrPassword` | Repmgr Password | `changeme2` |
| `postgresql-ha.postgresql.postgresPassword` | postgres Password | `changeme1` |
| `postgresql-ha.pgpool.adminPassword` | pgpool adminPassword | `changeme3` |
| `postgresql-ha.pgpool.srCheckPassword` | pgpool srCheckPassword | `changeme4` |
| `postgresql-ha.service.ports.postgresql` | PostgreSQL service port (overrides `service.ports.postgresql`) | `5432` |
| `postgresql-ha.persistence.size` | PVC Storage Request for PostgreSQL HA volume | `10Gi` |
### PostgreSQL
| Name | Description | Value |
| ------------------------------------------------------- | ---------------------------------------------------------------- | --------------------------------- |
| `postgresql.enabled` | Enable PostgreSQL | `false` |
| `postgresql.global.postgresql.auth.password` | Password for the `gitea` user (overrides `auth.password`) | `gitea` |
| `postgresql.global.postgresql.auth.database` | Name for a custom database to create (overrides `auth.database`) | `gitea` |
| `postgresql.global.postgresql.auth.username` | Name for a custom user to create (overrides `auth.username`) | `gitea` |
| `postgresql.global.postgresql.service.ports.postgresql` | PostgreSQL service port (overrides `service.ports.postgresql`) | `5432` |
| `postgresql.image.repository` | Image repository, eg. `bitnamilegacy/postgresql`. | `bitnamilegacy/postgresql` |
| `postgresql.primary.persistence.enabled` | Enable persistence. | `true` |
| `postgresql.primary.persistence.storageClass` | Persistent Volume storage class. | `""` |
| `postgresql.primary.persistence.size` | PVC Storage Request for PostgreSQL volume. | `10Gi` |
| `postgresql.readReplicas.persistence.enabled` | Enable PostgreSQL read only data persistence using PVC. | `true` |
| `postgresql.readReplicas.persistence.storageClass` | Persistent Volume storage class. | `""` |
| `postgresql.readReplicas.persistence.size` | PVC Storage Request for PostgreSQL volume. | `""` |
| `postgresql.metrics.image.repository` | Image repository, eg. `bitnamilegacy/postgres-exporter`. | `bitnamilegacy/postgres-exporter` |
| `postgresql.volumePermissions.image.repository` | Image repository, eg. `bitnamilegacy/os-shell`. | `bitnamilegacy/os-shell` |
| Name | Description | Value |
| ------------------------------------------------------- | ---------------------------------------------------------------- | ------- |
| `postgresql.enabled` | Enable PostgreSQL | `false` |
| `postgresql.global.postgresql.auth.password` | Password for the `gitea` user (overrides `auth.password`) | `gitea` |
| `postgresql.global.postgresql.auth.database` | Name for a custom database to create (overrides `auth.database`) | `gitea` |
| `postgresql.global.postgresql.auth.username` | Name for a custom user to create (overrides `auth.username`) | `gitea` |
| `postgresql.global.postgresql.service.ports.postgresql` | PostgreSQL service port (overrides `service.ports.postgresql`) | `5432` |
| `postgresql.primary.persistence.size` | PVC Storage Request for PostgreSQL volume | `10Gi` |
### Advanced

package-lock.json generated

File diff suppressed because it is too large.


@@ -9,11 +9,13 @@
"npm": ">=8.0.0"
},
"scripts": {
"readme:link": "markdown-link-check --config .markdownlink.json *.md",
"readme:lint": "markdownlint *.md -f",
"readme:parameters": "readme-generator -v values.yaml -r README.md"
},
"devDependencies": {
"@bitnami/readme-generator-for-helm": "^2.5.0",
"markdownlint-cli": "^0.47.0"
"markdown-link-check": "^3.13.6",
"markdownlint-cli": "^0.45.0"
}
}


@@ -87,6 +87,12 @@ storageClassName: {{ $storageClass | quote }}
{{- end }}
{{- end -}}
{{/*
Common annotations
*/}}
{{- define "gitea.annotations" -}}
{{- end }}
{{/*
Common labels
*/}}


@@ -0,0 +1,19 @@
{{/* vim: set filetype=mustache: */}}
{{/* annotations */}}
{{- define "gitea.networkPolicy.annotations" -}}
{{ include "gitea.annotations" . }}
{{- if .Values.networkPolicy.annotations }}
{{ toYaml .Values.networkPolicy.annotations }}
{{- end }}
{{- end }}
{{/* labels */}}
{{- define "gitea.networkPolicy.labels" -}}
{{ include "gitea.labels" . }}
{{- if .Values.networkPolicy.labels }}
{{ toYaml .Values.networkPolicy.labels }}
{{- end }}
{{- end }}

templates/_pod.tpl Normal file

@@ -0,0 +1,17 @@
---
{{/* labels */}}
{{- define "gitea.pod.labels" -}}
{{- include "gitea.labels" . }}
{{- if .Values.deployment.labels }}
{{ toYaml .Values.deployment.labels }}
{{- end }}
{{- end }}
{{- define "gitea.pod.selectorLabels" -}}
{{- include "gitea.selectorLabels" . }}
{{- if .Values.deployment.labels }}
{{ toYaml .Values.deployment.labels }}
{{- end }}
{{- end }}


@@ -23,11 +23,11 @@ spec:
{{- end }}
selector:
matchLabels:
{{- include "gitea.selectorLabels" . | nindent 6 }}
{{- include "gitea.pod.selectorLabels" . | nindent 6 }}
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/gitea/config.yaml") . | sha256sum }}
checksum/config: {{ include (print $.Template.BasePath "/config.yaml") . | sha256sum }}
{{- range $idx, $value := .Values.gitea.ldap }}
checksum/ldap_{{ $idx }}: {{ include "gitea.ldap_settings" (list $idx $value) | sha256sum }}
{{- end }}
@@ -38,10 +38,7 @@ spec:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "gitea.labels" . | nindent 8 }}
{{- if .Values.deployment.labels }}
{{- toYaml .Values.deployment.labels | nindent 8 }}
{{- end }}
{{- include "gitea.pod.labels" . | nindent 8 }}
spec:
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"


@@ -1,8 +1,8 @@
{{- range .Values.extraDeploy }}
---
{{- if typeIs "string" . }}
{{ tpl . $ }}
{{- tpl . $ }}
{{- else }}
{{ tpl (. | toYaml) $ }}
{{- tpl (. | toYaml) $ }}
{{- end }}
{{- end }}


@@ -64,7 +64,7 @@ stringData:
echo 'Wait for valkey to become available...'
until [ "${RETRY}" -ge "${MAX}" ]; do
RES_OPTIONS="ndots:0" nc -vz -w2 {{ include "valkey.servicename" . }} {{ include "valkey.port" . }} && break
nc -vz -w2 {{ include "valkey.servicename" . }} {{ include "valkey.port" . }} && break
RETRY=$[${RETRY}+1]
echo "...not ready yet (${RETRY}/${MAX})"
done
@@ -225,4 +225,4 @@ stringData:
configure_oauth
echo '==== END GITEA CONFIGURATION ===='
echo '==== END GITEA CONFIGURATION ===='


@@ -0,0 +1,32 @@
{{- if .Values.networkPolicy.enabled }}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  {{- with (include "gitea.networkPolicy.annotations" . | fromYaml) }}
  annotations:
    {{- tpl (toYaml .) $ | nindent 4 }}
  {{- end }}
  {{- with (include "gitea.networkPolicy.labels" . | fromYaml) }}
  labels:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  name: {{ include "gitea.fullname" . }}
  namespace: {{ .Release.Namespace }}
spec:
  podSelector:
    matchLabels:
      {{- include "gitea.pod.selectorLabels" $ | nindent 6 }}
  {{- with .Values.networkPolicy.policyTypes }}
  policyTypes:
  {{- toYaml . | nindent 2 }}
  {{- end }}
  {{- with .Values.networkPolicy.egress }}
  egress:
  {{- toYaml . | nindent 2 }}
  {{- end }}
  {{- with .Values.networkPolicy.ingress }}
  ingress:
  {{- toYaml . | nindent 2 }}
  {{- end }}
{{- end }}


@@ -3,17 +3,17 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/config.yaml
- templates/config.yaml
tests:
- it: "actions are enabled by default (based on vanilla Gitea behavior)"
template: templates/gitea/config.yaml
template: templates/config.yaml
asserts:
- documentIndex: 0
notExists:
path: stringData.actions
- it: "actions can be disabled via inline config"
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
gitea.config.actions.ENABLED: false
asserts:


@@ -4,7 +4,7 @@ release:
namespace: testing
tests:
- it: "cache is configured correctly for valkey-cluster"
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
valkey-cluster:
enabled: true
@@ -19,7 +19,7 @@ tests:
HOST=redis+cluster://:@gitea-unittests-valkey-cluster-headless.testing.svc.cluster.local:6379/0?pool_size=100&idle_timeout=180s&
- it: "cache is configured correctly for valkey"
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
valkey-cluster:
enabled: false
@@ -34,7 +34,7 @@ tests:
HOST=redis://:changeme@gitea-unittests-valkey-headless.testing.svc.cluster.local:6379/0?pool_size=100&idle_timeout=180s&
- it: "cache is configured correctly for 'memory' when valkey (or valkey-cluster) is disabled"
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
valkey-cluster:
enabled: false
@@ -49,7 +49,7 @@ tests:
HOST=
- it: "cache can be customized when valkey (or valkey-cluster) is disabled"
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
valkey-cluster:
enabled: false


@@ -4,7 +4,7 @@ release:
namespace: testing
tests:
- it: metrics token is set
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
gitea:
metrics:
@@ -18,7 +18,7 @@ tests:
ENABLED=true
TOKEN=somepassword
- it: metrics token is empty
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
gitea:
metrics:
@@ -31,7 +31,7 @@ tests:
value: |-
ENABLED=true
- it: metrics token is nil
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
gitea:
metrics:
@@ -44,7 +44,7 @@ tests:
value: |-
ENABLED=true
- it: does not configure a token if metrics are disabled
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
gitea:
metrics:


@@ -4,7 +4,7 @@ release:
namespace: testing
tests:
- it: "queue is configured correctly for valkey-cluster"
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
valkey-cluster:
enabled: true
@@ -19,7 +19,7 @@ tests:
TYPE=redis
- it: "queue is configured correctly for valkey"
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
valkey-cluster:
enabled: false
@@ -34,7 +34,7 @@ tests:
TYPE=redis
- it: "queue is configured correctly for 'levelDB' when valkey (and valkey-cluster) is disabled"
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
valkey-cluster:
enabled: false
@@ -49,7 +49,7 @@ tests:
TYPE=level
- it: "queue can be customized when valkey (and valkey-cluster) are disabled"
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
valkey-cluster:
enabled: false


@@ -4,7 +4,7 @@ release:
namespace: testing
tests:
- it: "[default values] uses ingress host for DOMAIN|SSH_DOMAIN|ROOT_URL"
template: templates/gitea/config.yaml
template: templates/config.yaml
asserts:
- documentIndex: 0
matchRegex:
@@ -22,7 +22,7 @@ tests:
################################################
- it: "[no ingress hosts] uses gitea http service for DOMAIN|SSH_DOMAIN|ROOT_URL"
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
ingress:
hosts: []
@@ -43,7 +43,7 @@ tests:
################################################
- it: "[provided via values] uses that for DOMAIN|SSH_DOMAIN|ROOT_URL"
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
gitea.config.server.DOMAIN: provided.example.com
ingress:


@@ -4,7 +4,7 @@ release:
namespace: testing
tests:
- it: "session is configured correctly for valkey-cluster"
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
valkey-cluster:
enabled: true
@@ -19,7 +19,7 @@ tests:
PROVIDER_CONFIG=redis+cluster://:@gitea-unittests-valkey-cluster-headless.testing.svc.cluster.local:6379/0?pool_size=100&idle_timeout=180s&
- it: "session is configured correctly for valkey"
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
valkey-cluster:
enabled: false
@@ -34,7 +34,7 @@ tests:
PROVIDER_CONFIG=redis://:changeme@gitea-unittests-valkey-headless.testing.svc.cluster.local:6379/0?pool_size=100&idle_timeout=180s&
- it: "session is configured correctly for 'memory' when valkey (and valkey-cluster) is disabled"
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
valkey-cluster:
enabled: false
@@ -49,7 +49,7 @@ tests:
PROVIDER_CONFIG=
- it: "session can be customized when valkey (and valkey-cluster) is disabled"
template: templates/gitea/config.yaml
template: templates/config.yaml
set:
valkey-cluster:
enabled: false


@@ -106,14 +106,14 @@ tests:
name: gitea-unittests-postgresql-ha-pgpool
namespace: testing
- it: "[gitea] connects to pgpool service"
template: templates/gitea/config.yaml
template: templates/config.yaml
asserts:
- documentIndex: 0
matchRegex:
path: stringData.database
pattern: HOST=gitea-unittests-postgresql-ha-pgpool.testing.svc.cluster.local:1234
- it: "[gitea] connects to configured database"
template: templates/gitea/config.yaml
template: templates/config.yaml
asserts:
- documentIndex: 0
matchRegex:


@@ -65,14 +65,14 @@ tests:
name: gitea-unittests-postgresql
namespace: testing
- it: "[gitea] connects to postgresql service"
template: templates/gitea/config.yaml
template: templates/config.yaml
asserts:
- documentIndex: 0
matchRegex:
path: stringData.database
pattern: HOST=gitea-unittests-postgresql.testing.svc.cluster.local:1234
- it: "[gitea] connects to configured database"
template: templates/gitea/config.yaml
template: templates/config.yaml
asserts:
- documentIndex: 0
matchRegex:


@@ -82,7 +82,7 @@ tests:
port: 6379
targetPort: tcp-redis
- it: "[gitea] waits for valkey-cluster to be up and running"
template: templates/gitea/init.yaml
template: templates/init.yaml
asserts:
- documentIndex: 0
matchRegex:


@@ -44,7 +44,7 @@ tests:
port: 6379
targetPort: redis
- it: "[gitea] waits for valkey to be up and running"
template: templates/gitea/init.yaml
template: templates/init.yaml
asserts:
- documentIndex: 0
matchRegex:


@@ -15,7 +15,7 @@ tests:
matchRegex:
path: spec.template.spec.containers[0].image
# IN CASE OF AN INTENTIONAL MAJOR BUMP, ADJUST THIS TEST
pattern: bitnamilegacy/postgresql-repmgr:17.+$
pattern: bitnami/postgresql-repmgr:17.+$
- it: "[postgresql] ensures we detect major image version upgrades"
template: charts/postgresql/templates/primary/statefulset.yaml
set:
@@ -28,7 +28,7 @@ tests:
matchRegex:
path: spec.template.spec.containers[0].image
# IN CASE OF AN INTENTIONAL MAJOR BUMP, ADJUST THIS TEST
pattern: bitnamilegacy/postgresql:17.+$
pattern: bitnami/postgresql:17.+$
- it: "[valkey-cluster] ensures we detect major image version upgrades"
template: charts/valkey-cluster/templates/valkey-statefulset.yaml
set:
@@ -41,7 +41,7 @@ tests:
matchRegex:
path: spec.template.spec.containers[0].image
# IN CASE OF AN INTENTIONAL MAJOR BUMP, ADJUST THIS TEST
pattern: bitnamilegacy/valkey-cluster:8.+$
pattern: bitnami/valkey-cluster:8.+$
- it: "[valkey] ensures we detect major image version upgrades"
template: charts/valkey/templates/primary/application.yaml
set:
@@ -54,4 +54,4 @@ tests:
matchRegex:
path: spec.template.spec.containers[0].image
# IN CASE OF AN INTENTIONAL MAJOR BUMP, ADJUST THIS TEST
pattern: bitnamilegacy/valkey:8.+$
pattern: bitnami/valkey:8.+$


@@ -3,11 +3,11 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/deployment.yaml
- templates/gitea/config.yaml
- templates/deployment.yaml
- templates/config.yaml
tests:
- it: fails with multiple replicas and "GIT_GC_REPOS" enabled
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
replicaCount: 2
persistence:
@@ -22,14 +22,14 @@ tests:
- failedTemplate:
errorMessage: "Invoking the garbage collector via CRON is not yet supported when running with multiple replicas. Please set 'gitea.config.cron.GIT_GC_REPOS.enabled = false'."
- it: fails with multiple replicas and RWX file system not set
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
replicaCount: 2
asserts:
- failedTemplate:
errorMessage: "When using multiple replicas, a RWX file system is required and persistence.accessModes[0] must be set to ReadWriteMany."
- it: fails with multiple replicas and bleve issue indexer
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
replicaCount: 2
persistence:
@@ -43,7 +43,7 @@ tests:
- failedTemplate:
errorMessage: "When using multiple replicas, the issue indexer (gitea.config.indexer.ISSUE_INDEXER_TYPE) must be set to a HA-ready provider such as 'meilisearch', 'elasticsearch' or 'db' (if the DB is HA-ready)."
- it: fails with multiple replicas and bleve repo indexer
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
replicaCount: 2
persistence:


@@ -3,11 +3,11 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/deployment.yaml
- templates/gitea/config.yaml
- templates/deployment.yaml
- templates/config.yaml
tests:
- it: renders a deployment
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
asserts:
- hasDocuments:
count: 1
@@ -16,7 +16,7 @@ tests:
apiVersion: apps/v1
name: gitea-unittests
- it: deployment labels are set
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
deployment.labels:
hello: world
@@ -29,27 +29,11 @@ tests:
path: spec.template.metadata.labels
content:
hello: world
- isNotSubset:
path: spec.selector.matchLabels
content:
hello: world
- it: deployment labels are not in selector matchLabels
template: templates/gitea/deployment.yaml
set:
deployment.labels:
custom-label: custom-value
another-label: another-value
asserts:
- equal:
path: spec.selector.matchLabels
value:
app.kubernetes.io/name: gitea
app.kubernetes.io/instance: gitea-unittests
- it: nodeSelector is undefined
asserts:
- notExists:
path: spec.template.spec.nodeSelector
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
- it: nodeSelector is defined
set:
nodeSelector:
@@ -61,10 +45,10 @@ tests:
content:
foo: bar
bar: foo
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
- it: "injects TMP_EXISTING_ENVS_FILE as environment variable to 'init-app-ini' init container"
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
asserts:
- contains:
path: spec.template.spec.initContainers[1].env
@@ -72,7 +56,7 @@ tests:
name: TMP_EXISTING_ENVS_FILE
value: /tmp/existing-envs
- it: "injects ENV_TO_INI_MOUNT_POINT as environment variable to 'init-app-ini' init container"
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
asserts:
- contains:
path: spec.template.spec.initContainers[1].env
@@ -80,7 +64,7 @@ tests:
name: ENV_TO_INI_MOUNT_POINT
value: /env-to-ini-mounts
- it: CPU resources are defined as well as GOMAXPROCS
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
resources:
limits:
@@ -108,7 +92,7 @@ tests:
cpu: 100ms
memory: 100Mi
- it: Init containers have correct volumeMount path
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
initContainersScriptsVolumeMountPath: "/custom/init/path"
asserts:
@@ -119,7 +103,7 @@ tests:
path: spec.template.spec.initContainers[*].volumeMounts[?(@.name=="config")].mountPath
value: "/custom/init/path"
- it: Init containers have correct volumeMount path if there is no override
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
asserts:
- equal:
path: spec.template.spec.initContainers[*].volumeMounts[?(@.name=="init")].mountPath


@@ -3,11 +3,11 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/deployment.yaml
- templates/gitea/config.yaml
- templates/deployment.yaml
- templates/config.yaml
tests:
- it: Renders a deployment
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
asserts:
- hasDocuments:
count: 1
@@ -16,7 +16,7 @@ tests:
apiVersion: apps/v1
name: gitea-unittests
- it: Deployment with empty additionalConfigFromEnvs
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
gitea.additionalConfigFromEnvs: []
asserts:
@@ -44,7 +44,7 @@ tests:
- name: ENV_TO_INI_MOUNT_POINT
value: /env-to-ini-mounts
- it: Deployment with standard additionalConfigFromEnvs
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
gitea.additionalConfigFromEnvs: [{name: GITEA_database_HOST, value: my-db:123}, {name: GITEA_database_USER, value: my-user}]
asserts:
@@ -76,7 +76,7 @@ tests:
- name: GITEA_database_USER
value: my-user
- it: Deployment with templated additionalConfigFromEnvs
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
gitea.misc.host: my-db-host:321
gitea.misc.user: my-db-user
@@ -110,7 +110,7 @@ tests:
- name: GITEA_database_USER
value: my-db-user
- it: Deployment with additionalConfigFromEnvs templated secret name
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
gitea.misc.existingSecret: my-db-secret
gitea.additionalConfigFromEnvs[0]:


@@ -3,18 +3,18 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/deployment.yaml
- templates/gitea/config.yaml
- templates/deployment.yaml
- templates/config.yaml
tests:
- it: Render the deployment (default)
asserts:
- hasDocuments:
count: 1
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
- lengthEqual:
path: spec.template.spec.initContainers
count: 3
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
- it: Render the deployment (signing)
set:
@@ -22,11 +22,11 @@ tests:
asserts:
- hasDocuments:
count: 1
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
- lengthEqual:
path: spec.template.spec.initContainers
count: 4
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
- it: Render the deployment (extraInitContainers)
set:
@@ -40,20 +40,20 @@ tests:
asserts:
- hasDocuments:
count: 1
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
- lengthEqual:
path: spec.template.spec.initContainers
count: 6
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
- contains:
path: spec.template.spec.initContainers
content:
name: foo
image: docker.io/library/busybox:latest
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
- contains:
path: spec.template.spec.initContainers
content:
name: bar
image: docker.io/library/busybox:latest
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml


@@ -6,17 +6,17 @@ chart:
# Override appVersion to be consistent with used digest :)
appVersion: 1.19.3
templates:
- templates/gitea/deployment.yaml
- templates/gitea/config.yaml
- templates/deployment.yaml
- templates/config.yaml
tests:
- it: default values
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
asserts:
- equal:
path: spec.template.spec.containers[0].image
value: "docker.gitea.com/gitea:1.19.3-rootless"
- it: tag override
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
image.tag: "1.19.4"
asserts:
@@ -24,7 +24,7 @@ tests:
path: spec.template.spec.containers[0].image
value: "docker.gitea.com/gitea:1.19.4-rootless"
- it: root-based image
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
image.rootless: false
asserts:
@@ -32,7 +32,7 @@ tests:
path: spec.template.spec.containers[0].image
value: "docker.gitea.com/gitea:1.19.3"
- it: scoped registry
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
image.registry: "example.com"
asserts:
@@ -40,7 +40,7 @@ tests:
path: spec.template.spec.containers[0].image
value: "example.com/gitea:1.19.3-rootless"
- it: global registry
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
global.imageRegistry: "global.example.com"
asserts:
@@ -48,7 +48,7 @@ tests:
path: spec.template.spec.containers[0].image
value: "global.example.com/gitea:1.19.3-rootless"
- it: digest for rootless image
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
image:
rootless: true
@@ -58,7 +58,7 @@ tests:
path: spec.template.spec.containers[0].image
value: "docker.gitea.com/gitea:1.19.3-rootless@sha256:b28e8f3089b52ebe6693295df142f8c12eff354e9a4a5bfbb5c10f296c3a537a"
- it: image fullOverride (does not append rootless)
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
image:
fullOverride: docker.gitea.com/gitea:1.19.3
@@ -73,7 +73,7 @@ tests:
path: spec.template.spec.containers[0].image
value: "docker.gitea.com/gitea:1.19.3"
- it: digest for root-based image
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
image:
rootless: false
@@ -83,7 +83,7 @@ tests:
path: spec.template.spec.containers[0].image
value: "docker.gitea.com/gitea:1.19.3@sha256:b28e8f3089b52ebe6693295df142f8c12eff354e9a4a5bfbb5c10f296c3a537a"
- it: digest and global registry
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
global.imageRegistry: "global.example.com"
image.digest: "sha256:b28e8f3089b52ebe6693295df142f8c12eff354e9a4a5bfbb5c10f296c3a537a"
@@ -92,7 +92,7 @@ tests:
path: spec.template.spec.containers[0].image
value: "global.example.com/gitea:1.19.3-rootless@sha256:b28e8f3089b52ebe6693295df142f8c12eff354e9a4a5bfbb5c10f296c3a537a"
- it: correctly renders floating tag references
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
image.tag: 1.21 # use non-quoted value on purpose. See: https://gitea.com/gitea/helm-gitea/issues/631
asserts:


@@ -1,6 +1,6 @@
suite: Test ingress tpl use
templates:
- templates/gitea/ingress.yaml
- templates/ingress.yaml
tests:
- it: Ingress Class using TPL
set:


@@ -3,7 +3,7 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/config.yaml
- templates/config.yaml
tests:
- it: inline config stringData.server using TPL
set:


@@ -3,11 +3,11 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/deployment.yaml
- templates/gitea/config.yaml
- templates/deployment.yaml
- templates/config.yaml
tests:
- it: renders default liveness probe
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
asserts:
- notExists:
path: spec.template.spec.containers[0].livenessProbe.enabled
@@ -22,7 +22,7 @@ tests:
port: http
timeoutSeconds: 1
- it: renders default readiness probe
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
asserts:
- notExists:
path: spec.template.spec.containers[0].readinessProbe.enabled
@@ -37,12 +37,12 @@ tests:
port: http
timeoutSeconds: 1
- it: does not render a default startup probe
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
asserts:
- notExists:
path: spec.template.spec.containers[0].startupProbe
- it: allows enabling a startup probe
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
gitea.startupProbe.enabled: true
asserts:
@@ -60,7 +60,7 @@ tests:
timeoutSeconds: 1
- it: allows overwriting the default port of the liveness probe
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
gitea:
livenessProbe:
@@ -74,7 +74,7 @@ tests:
port: my-port
- it: allows overwriting the default port of the readiness probe
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
gitea:
readinessProbe:
@@ -88,7 +88,7 @@ tests:
port: my-port
- it: allows overwriting the default port of the startup probe
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
gitea:
startupProbe:
@@ -103,7 +103,7 @@ tests:
port: my-port
- it: allows using a non-default method as liveness probe
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
gitea:
livenessProbe:
@@ -131,7 +131,7 @@ tests:
timeoutSeconds: 13372
- it: allows using a non-default method as readiness probe
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
gitea:
readinessProbe:
@@ -159,7 +159,7 @@ tests:
timeoutSeconds: 13372
- it: allows using a non-default method as startup probe
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
gitea:
startupProbe:


@@ -3,11 +3,11 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/deployment.yaml
- templates/gitea/config.yaml
- templates/deployment.yaml
- templates/config.yaml
tests:
- it: supports adding a sidecar container
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
extraContainers:
- name: sidecar-bob


@@ -3,11 +3,11 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/deployment.yaml
- templates/gitea/config.yaml
- templates/deployment.yaml
- templates/config.yaml
tests:
- it: skips gpg init container
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
asserts:
- notContains:
path: spec.template.spec.initContainers
@@ -15,7 +15,7 @@ tests:
content:
name: configure-gpg
- it: skips gpg env in `init-directories` init container
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
signing.enabled: false
asserts:
@@ -25,14 +25,14 @@ tests:
name: GNUPGHOME
value: /data/git/.gnupg
- it: skips gpg env in runtime container
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
asserts:
- notContains:
path: spec.template.spec.containers[0].env
content:
name: GNUPGHOME
- it: skips gpg volume spec
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
asserts:
- notContains:
path: spec.template.spec.volumes


@@ -3,11 +3,11 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/deployment.yaml
- templates/gitea/config.yaml
- templates/deployment.yaml
- templates/config.yaml
tests:
- it: adds gpg init container
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
signing:
enabled: true
@@ -41,7 +41,7 @@ tests:
mountPath: /raw
readOnly: true
- it: adds gpg env in `init-directories` init container
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
signing.enabled: true
signing.existingSecret: "custom-gpg-secret"
@@ -52,7 +52,7 @@ tests:
name: GNUPGHOME
value: /data/git/.gnupg
- it: adds gpg env in runtime container
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
signing.enabled: true
signing.existingSecret: "custom-gpg-secret"
@@ -63,7 +63,7 @@ tests:
name: GNUPGHOME
value: /data/git/.gnupg
- it: adds gpg volume spec
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
signing:
enabled: true
@@ -80,7 +80,7 @@ tests:
path: private.asc
defaultMode: 0100
- it: supports gpg volume spec with external reference
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
signing:
enabled: true


@@ -3,11 +3,11 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/deployment.yaml
- templates/gitea/config.yaml
- templates/deployment.yaml
- templates/config.yaml
tests:
- it: supports defining SSH log level for root based image
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
image.rootless: false
asserts:
@@ -17,7 +17,7 @@ tests:
name: SSH_LOG_LEVEL
value: "INFO"
- it: supports overriding SSH log level
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
image.rootless: false
gitea.ssh.logLevel: "DEBUG"
@@ -28,7 +28,7 @@ tests:
name: SSH_LOG_LEVEL
value: "DEBUG"
- it: supports overriding SSH log level (even when image.fullOverride set)
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
image.fullOverride: docker.gitea.com/gitea:1.19.3
image.rootless: false
@@ -40,7 +40,7 @@ tests:
name: SSH_LOG_LEVEL
value: "DEBUG"
- it: skips SSH_LOG_LEVEL for rootless image
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
image.rootless: true
gitea.ssh.logLevel: "DEBUG" # explicitly defining a non-standard level here
@@ -51,7 +51,7 @@ tests:
content:
name: SSH_LOG_LEVEL
- it: skips SSH_LOG_LEVEL for rootless image (even when image.fullOverride set)
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
image.fullOverride: docker.gitea.com/gitea:1.19.3
image.rootless: true


@@ -7,11 +7,11 @@ release:
namespace: testing
templates:
- templates/gitea/pvc.yaml
- templates/pvc.yaml
tests:
- it: should set storageClassName when persistence.storageClass is defined
template: templates/gitea/pvc.yaml
template: templates/pvc.yaml
set:
persistence.storageClass: "my-storage-class"
asserts:
@@ -20,7 +20,7 @@ tests:
value: "my-storage-class"
- it: should set global.storageClass when persistence.storageClass is not defined
template: templates/gitea/pvc.yaml
template: templates/pvc.yaml
set:
global.storageClass: "default-storage-class"
asserts:
@@ -29,7 +29,7 @@ tests:
value: "default-storage-class"
- it: should set storageClassName when persistence.storageClass is defined and global.storageClass is defined
template: templates/gitea/pvc.yaml
template: templates/pvc.yaml
set:
global.storageClass: "default-storage-class"
persistence.storageClass: "my-storage-class"


@@ -3,11 +3,11 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/ssh-svc.yaml
- templates/gitea/http-svc.yaml
- templates/ssh-svc.yaml
- templates/http-svc.yaml
tests:
- it: supports adding custom labels to ssh-svc
template: templates/gitea/ssh-svc.yaml
template: templates/ssh-svc.yaml
set:
service:
ssh:
@@ -19,7 +19,7 @@ tests:
value: "testvalue"
- it: keeps existing labels (ssh)
template: templates/gitea/ssh-svc.yaml
template: templates/ssh-svc.yaml
set:
service:
ssh:
@@ -29,7 +29,7 @@ tests:
path: metadata.labels["app"]
- it: supports adding custom labels to http-svc
template: templates/gitea/http-svc.yaml
template: templates/http-svc.yaml
set:
service:
http:
@@ -41,7 +41,7 @@ tests:
value: "testvalue"
- it: keeps existing labels (http)
template: templates/gitea/http-svc.yaml
template: templates/http-svc.yaml
set:
service:
http:
@@ -51,7 +51,7 @@ tests:
path: metadata.labels["app"]
- it: render service.ssh.loadBalancerClass if set and type is LoadBalancer
template: templates/gitea/ssh-svc.yaml
template: templates/ssh-svc.yaml
set:
service:
ssh:
@@ -73,7 +73,7 @@ tests:
value: ["1.2.3.4/32", "5.6.7.8/32"]
- it: does not render when loadbalancer properties are set but type is not loadBalancerClass
template: templates/gitea/http-svc.yaml
template: templates/http-svc.yaml
set:
service:
http:
@@ -92,7 +92,7 @@ tests:
path: spec.loadBalancerSourceRanges
- it: does not render loadBalancerClass by default even when type is LoadBalancer
template: templates/gitea/http-svc.yaml
template: templates/http-svc.yaml
set:
service:
http:
@@ -107,8 +107,8 @@ tests:
- it: both ssh and http services exist
templates:
- templates/gitea/ssh-svc.yaml
- templates/gitea/http-svc.yaml
- templates/ssh-svc.yaml
- templates/http-svc.yaml
asserts:
- matchRegex:
path: metadata.name


@@ -3,7 +3,7 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/gpg-secret.yaml
- templates/gpg-secret.yaml
tests:
- it: renders nothing
set:


@@ -3,7 +3,7 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/gpg-secret.yaml
- templates/gpg-secret.yaml
tests:
- it: fails rendering when nothing is configured
set:


@@ -1,6 +1,6 @@
suite: Test ingress.yaml
templates:
- templates/gitea/ingress.yaml
- templates/ingress.yaml
tests:
- it: should enable ingress when ingress.enabled is true
set:


@@ -1,6 +1,6 @@
suite: Test ingress with implicit path defaults
templates:
- templates/gitea/ingress.yaml
- templates/ingress.yaml
tests:
- it: should use default path and pathType when no paths are specified
set:


@@ -1,6 +1,6 @@
suite: Test ingress tpl use
templates:
- templates/gitea/ingress.yaml
- templates/ingress.yaml
tests:
- it: Ingress Class using TPL
set:


@@ -1,6 +1,6 @@
suite: Test ingress with structured paths
templates:
- templates/gitea/ingress.yaml
- templates/ingress.yaml
tests:
- it: should work with structured path definitions
set:


@@ -3,7 +3,7 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/init.yaml
- templates/init.yaml
tests:
- it: renders a secret
asserts:


@@ -3,7 +3,7 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/init.yaml
- templates/init.yaml
tests:
- it: runs gpg in batch mode
set:
@@ -63,7 +63,7 @@ tests:
chown -v 1000:1000 "${GNUPGHOME}"
fi
- it: it does not chown /data even when image.fullOverride is set
template: templates/gitea/init.yaml
template: templates/init.yaml
set:
image.fullOverride: docker.gitea.com/gitea:1.20.5
asserts:


@@ -3,7 +3,7 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/init.yaml
- templates/init.yaml
tests:
- it: runs gpg in batch mode
set:


@@ -3,7 +3,7 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/metrics-secret.yaml
- templates/metrics-secret.yaml
tests:
- it: renders nothing if monitoring disabled and gitea.metrics.token empty
set:


@@ -3,7 +3,7 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/metrics-secret.yaml
- templates/metrics-secret.yaml
tests:
- it: renders nothing if monitoring enabled and gitea.metrics.token empty
set:


@@ -0,0 +1,100 @@
chart:
appVersion: 0.1.0
version: 0.1.0
suite: NetworkPolicy template
release:
name: gitea-unittest
namespace: testing
templates:
- templates/networkPolicy.yaml
tests:
- it: Skip rendering networkPolicy
set:
networkPolicy.enabled: false
asserts:
- hasDocuments:
count: 0
- it: Render default networkPolicy
set:
networkPolicy.enabled: true
asserts:
- hasDocuments:
count: 1
- containsDocument:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
name: gitea-unittest
namespace: testing
- notExists:
path: metadata.annotations
- equal:
path: metadata.labels
value:
app: gitea
app.kubernetes.io/instance: gitea-unittest
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: gitea
app.kubernetes.io/version: 0.1.0
helm.sh/chart: gitea-0.1.0
version: 0.1.0
- equal:
path: spec.podSelector.matchLabels
value:
app.kubernetes.io/instance: gitea-unittest
app.kubernetes.io/name: gitea
- notExists:
path: spec.policyTypes
- notExists:
path: spec.egress
- notExists:
path: spec.ingress
- it: Template networkPolicy with policyTypes, egress and ingress configuration
set:
networkPolicy.enabled: true
networkPolicy.policyTypes:
- Egress
- Ingress
networkPolicy.ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
networkPolicy.egress:
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: ingress-nginx
podSelector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
asserts:
- equal:
path: spec.policyTypes
value:
- Egress
- Ingress
- equal:
path: spec.egress
value:
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: ingress-nginx
podSelector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
- equal:
path: spec.ingress
value:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
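
For orientation, the default manifest that the `Render default networkPolicy` assertions above describe would look roughly as follows. This is a sketch reconstructed purely from the assertions (names, labels, and selectors come from the test; field order and the omission of unset fields are assumptions), not a copy of the chart template output:

```yaml
# Sketch of the default rendered NetworkPolicy, reconstructed from the
# unittest assertions above; not authoritative chart output.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: gitea-unittest
  namespace: testing
  labels:
    app: gitea
    app.kubernetes.io/instance: gitea-unittest
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: gitea
    app.kubernetes.io/version: 0.1.0
    helm.sh/chart: gitea-0.1.0
    version: 0.1.0
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: gitea-unittest
      app.kubernetes.io/name: gitea
  # policyTypes, ingress, and egress are omitted entirely when not
  # configured, matching the notExists assertions in the default test.
```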


@@ -3,7 +3,7 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/pvc.yaml
- templates/pvc.yaml
tests:
- it: Storage Class using TPL
set:


@@ -3,7 +3,7 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/serviceaccount.yaml
- templates/serviceaccount.yaml
tests:
- it: skips rendering by default
asserts:


@@ -3,17 +3,17 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/serviceaccount.yaml
- templates/gitea/deployment.yaml
- templates/gitea/config.yaml
- templates/serviceaccount.yaml
- templates/deployment.yaml
- templates/config.yaml
tests:
- it: does not modify the deployment by default
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
asserts:
- notExists:
path: spec.serviceAccountName
- it: adds the reference to the deployment with serviceAccount.create=true
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
serviceAccount.create: true
asserts:
@@ -21,7 +21,7 @@ tests:
path: spec.template.spec.serviceAccountName
value: gitea-unittests
- it: allows referencing an externally created ServiceAccount to the deployment
template: templates/gitea/deployment.yaml
template: templates/deployment.yaml
set:
serviceAccount:
create: false # explicitly set to define rendering behavior


@@ -3,7 +3,7 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/servicemonitor.yaml
- templates/servicemonitor.yaml
tests:
- it: skips rendering by default
asserts:


@@ -3,7 +3,7 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/servicemonitor.yaml
- templates/servicemonitor.yaml
tests:
- it: renders nothing if gitea.metrics.serviceMonitor disabled and gitea.metrics.token empty
set:


@@ -3,7 +3,7 @@ release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/servicemonitor.yaml
- templates/servicemonitor.yaml
tests:
- it: renders unsecure ServiceMonitor if gitea.metrics.token nil
set:


@@ -20,7 +20,7 @@ global:
# hostnames:
# - example.com
## @param namespace An explicit namespace to deploy gitea into. Defaults to the release namespace if not specified
## @param namespace An explicit namespace to deploy Gitea into. Defaults to the release namespace if not specified
namespace: ""
## @param replicaCount number of replicas for the deployment
@@ -281,13 +281,13 @@ extraContainers: []
# image: busybox
# command: [/bin/sh, -c, 'echo "Hello world"']
## @param preExtraInitContainers Additional init containers to run in the pod before gitea runs it owns init containers.
## @param preExtraInitContainers Additional init containers to run in the pod before Gitea runs its own init containers.
preExtraInitContainers: []
# - name: pre-init-container
# image: docker.io/library/busybox
# command: [ /bin/sh, -c, 'echo "Hello world! I am a pre init container."' ]
## @param postExtraInitContainers Additional init containers to run in the pod after gitea runs it owns init containers.
## @param postExtraInitContainers Additional init containers to run in the pod after Gitea runs its own init containers.
postExtraInitContainers: []
# - name: post-init-container
# image: docker.io/library/busybox
@@ -513,192 +513,189 @@ gitea:
successThreshold: 1
failureThreshold: 10
## @section Network Policy
networkPolicy:
## @param networkPolicy.enabled Enable network policies in general.
## @param networkPolicy.annotations Additional network policy annotations.
## @param networkPolicy.labels Additional network policy labels.
## @param networkPolicy.policyTypes List of policy types. Supported is ingress, egress or ingress and egress.
## @param networkPolicy.egress Concrete egress network policy implementation.
## @skip networkPolicy.egress Skip individual egress configuration.
## @param networkPolicy.ingress Concrete ingress network policy implementation.
## @skip networkPolicy.ingress Skip individual ingress configuration.
enabled: false
annotations: {}
labels: {}
policyTypes: []
# - Egress
# - Ingress
egress: []
# Allow outgoing DNS traffic to the internal running DNS-Server. For example core-dns.
#
# - to:
# - namespaceSelector:
# matchLabels:
# kubernetes.io/metadata.name: kube-system
# podSelector:
# matchLabels:
# k8s-app: kube-dns
# ports:
# - port: 53
# protocol: TCP
# - port: 53
# protocol: UDP
# Allow outgoing traffic via HTTPS. For example for oAuth2, Gravatar and other third party APIs.
#
# - to:
# ports:
# - port: 443
# protocol: TCP
# Allow outgoing traffic to PostgreSQL.
#
# - to:
# - podSelector:
# matchLabels:
# app.kubernetes.io/name: postgresql-ha
# ports: []
# # Avoid an explicit list of ports, because Gitea tries to ping the PostgreSQL database during the
# # initialization process. Kubernetes does not currently support ICMP in a NetworkPolicy protocol list,
# # so listing the ports explicitly would cause an issue. Therefore, please handle the database ports with care.
# #
# # - port: 5432
# # protocol: TCP
# Allow outgoing traffic to Valkey.
#
# - to:
# - podSelector:
# matchLabels:
# app.kubernetes.io/name: valkey-cluster
# ports:
# - port: 6379
# protocol: TCP
# - port: 16379
# protocol: TCP
ingress: []
# Allow incoming HTTP traffic from prometheus.
#
# - from:
# - namespaceSelector:
# matchLabels:
# kubernetes.io/metadata.name: monitoring
# podSelector:
# matchLabels:
# app.kubernetes.io/name: prometheus
# ports:
# - port: http
# protocol: TCP
# Allow incoming HTTP traffic from ingress-nginx.
#
# - from:
# - namespaceSelector:
# matchLabels:
# kubernetes.io/metadata.name: ingress-nginx
# podSelector:
# matchLabels:
# app.kubernetes.io/name: ingress-nginx
# ports:
# - port: http
# protocol: TCP
## @section valkey-cluster
## @param valkey-cluster.enabled Enable valkey cluster
# ⚠️ The valkey charts do not work well with special characters in the password (<https://gitea.com/gitea/helm-chart/issues/690>).
# Consider omitting such or open an issue in the Bitnami repo and let us know once this got fixed.
## @param valkey-cluster.usePassword Whether to use password authentication
## @param valkey-cluster.usePasswordFiles Whether to mount passwords as files instead of environment variables
## @param valkey-cluster.cluster.nodes Number of valkey cluster master nodes
## @param valkey-cluster.cluster.replicas Number of valkey cluster master node replicas
## @param valkey-cluster.service.ports.valkey Port of Valkey service
## @descriptionStart
## Valkey cluster and [Valkey](#valkey) cannot be enabled at the same time.
## @descriptionEnd
valkey-cluster:
## @param valkey-cluster.enabled Enable valkey cluster
# ⚠️ The valkey charts do not work well with special characters in the password (<https://gitea.com/gitea/helm-chart/issues/690>).
# Consider omitting such or open an issue in the Bitnami repo and let us know once this got fixed.
## @param valkey-cluster.usePassword Whether to use password authentication.
## @param valkey-cluster.usePasswordFiles Whether to mount passwords as files instead of environment variables.
enabled: true
usePassword: false
usePasswordFiles: false
## @param valkey-cluster.image.repository Image repository, eg. `bitnamilegacy/valkey-cluster`.
image:
repository: bitnamilegacy/valkey-cluster
## @param valkey-cluster.cluster.nodes Number of valkey cluster master nodes
## @param valkey-cluster.cluster.replicas Number of valkey cluster master node replicas
cluster:
nodes: 3 # default: 6
replicas: 0 # default: 1
## @param valkey-cluster.metrics.image.repository Image repository, eg. `bitnamilegacy/redis-exporter`.
metrics:
image:
repository: bitnamilegacy/redis-exporter
## @param valkey-cluster.persistence.enabled Enable persistence on Valkey replicas nodes using Persistent Volume Claims.
## @param valkey-cluster.persistence.storageClass Persistent Volume storage class.
## @param valkey-cluster.persistence.size Persistent Volume size.
persistence:
enabled: true
storageClass: ""
size: 8Gi
## @param valkey-cluster.service.ports.valkey Port of Valkey service
service:
ports:
valkey: 6379
## @param valkey-cluster.sysctlImage.repository Image repository, eg. `bitnamilegacy/os-shell`.
sysctlImage:
repository: bitnamilegacy/os-shell
## @param valkey-cluster.volumePermissions.image.repository Image repository, eg. `bitnamilegacy/os-shell`.
volumePermissions:
image:
repository: bitnamilegacy/os-shell
## @section valkey
## @param valkey.enabled Enable valkey standalone or replicated
## @param valkey.architecture Whether to use standalone or replication
# ⚠️ The valkey charts do not work well with special characters in the password (<https://gitea.com/gitea/helm-chart/issues/690>).
# Consider omitting such or open an issue in the Bitnami repo and let us know once this got fixed.
## @param valkey.global.valkey.password Required password
## @param valkey.master.count Number of Valkey master instances to deploy
## @param valkey.master.service.ports.valkey Port of Valkey service
## @descriptionStart
## Valkey and [Valkey cluster](#valkey-cluster) cannot be enabled at the same time.
## @descriptionEnd
valkey:
## @param valkey.enabled Enable valkey standalone or replicated
## @param valkey.architecture Whether to use standalone or replication
enabled: false
architecture: standalone
## @param valkey.kubectl.image.repository Image repository, eg. `bitnamilegacy/kubectl`.
kubectl:
image:
repository: bitnamilegacy/kubectl
## @param valkey.image.repository Image repository, eg. `bitnamilegacy/valkey`.
image:
repository: bitnamilegacy/valkey
# ⚠️ The valkey charts do not work well with special characters in the password (<https://gitea.com/gitea/helm-chart/issues/690>).
# Consider omitting such or open an issue in the Bitnami repo and let us know once this got fixed.
## @param valkey.global.valkey.password Required password
global:
valkey:
password: changeme
## @param valkey.master.count Number of Valkey master instances to deploy
## @param valkey.master.service.ports.valkey Port of Valkey service
master:
count: 1
service:
ports:
valkey: 6379
## @param valkey.metrics.image.repository Image repository, eg. `bitnamilegacy/redis-exporter`.
metrics:
image:
repository: bitnamilegacy/redis-exporter
primary:
## @param valkey.primary.persistence.enabled Enable persistence on Valkey replicas nodes using Persistent Volume Claims.
## @param valkey.primary.persistence.storageClass Persistent Volume storage class.
## @param valkey.primary.persistence.size Persistent Volume size.
persistence:
enabled: true
storageClass: ""
size: 8Gi
replica:
## @param valkey.replica.persistence.enabled Enable persistence on Valkey replicas nodes using Persistent Volume Claims.
## @param valkey.replica.persistence.storageClass Persistent Volume storage class.
## @param valkey.replica.persistence.size Persistent Volume size.
persistence:
enabled: true
storageClass: ""
size: 8Gi
## @param valkey.sentinel.image.repository Image repository, eg. `bitnamilegacy/sentinel`.
sentinel:
image:
repository: bitnamilegacy/valkey-sentinel
## @param valkey.volumePermissions.image.repository Image repository, eg. `bitnamilegacy/os-shell`.
volumePermissions:
image:
repository: bitnamilegacy/os-shell
## @section PostgreSQL HA
#
## @param postgresql-ha.enabled Enable PostgreSQL HA
## @param postgresql-ha.postgresql.password Password for the `gitea` user (overrides `auth.password`)
## @param postgresql-ha.global.postgresql.database Name for a custom database to create (overrides `auth.database`)
## @param postgresql-ha.global.postgresql.username Name for a custom user to create (overrides `auth.username`)
## @param postgresql-ha.global.postgresql.password Name for a custom password to create (overrides `auth.password`)
## @param postgresql-ha.postgresql.repmgrPassword Repmgr Password
## @param postgresql-ha.postgresql.postgresPassword postgres Password
## @param postgresql-ha.pgpool.adminPassword pgpool adminPassword
## @param postgresql-ha.pgpool.srCheckPassword pgpool srCheckPassword
## @param postgresql-ha.service.ports.postgresql PostgreSQL service port (overrides `service.ports.postgresql`)
## @param postgresql-ha.persistence.size PVC Storage Request for PostgreSQL HA volume
postgresql-ha:
## @param postgresql-ha.enabled Enable PostgreSQL HA
enabled: true
## @param postgresql-ha.global.postgresql.database Name for a custom database to create (overrides `auth.database`)
## @param postgresql-ha.global.postgresql.username Name for a custom user to create (overrides `auth.username`)
## @param postgresql-ha.global.postgresql.password Name for a custom password to create (overrides `auth.password`)
global:
postgresql:
database: gitea
password: gitea
username: gitea
## @param postgresql-ha.metrics.image.repository Image repository, eg. `bitnamilegacy/postgres-exporter`.
metrics:
image:
repository: bitnamilegacy/postgres-exporter
## @param postgresql-ha.postgresql.image.repository Image repository, eg. `bitnamilegacy/postgresql-repmgr`.
## @param postgresql-ha.postgresql.repmgrPassword Password for the repmgr user
## @param postgresql-ha.postgresql.postgresPassword Password for the `postgres` admin user
## @param postgresql-ha.postgresql.password Password for the `gitea` user (overrides `auth.password`)
postgresql:
image:
repository: bitnamilegacy/postgresql-repmgr
repmgrPassword: changeme2
postgresPassword: changeme1
password: changeme4
## @param postgresql-ha.pgpool.adminPassword Password for the pgpool admin user
## @param postgresql-ha.pgpool.image.repository Image repository, eg. `bitnamilegacy/pgpool`.
## @param postgresql-ha.pgpool.srCheckPassword Password used by pgpool for streaming replication checks
pgpool:
adminPassword: changeme3
image:
repository: bitnamilegacy/pgpool
srCheckPassword: changeme4
## @param postgresql-ha.service.ports.postgresql PostgreSQL service port (overrides `service.ports.postgresql`)
service:
ports:
postgresql: 5432
## @param postgresql-ha.persistence.enabled Enable persistence.
## @param postgresql-ha.persistence.storageClass Persistent Volume Storage Class.
## @param postgresql-ha.persistence.size PVC Storage Request for PostgreSQL HA volume
persistence:
enabled: true
storageClass: ""
size: 10Gi
## @param postgresql-ha.volumePermissions.image.repository Image repository, eg. `bitnamilegacy/os-shell`.
volumePermissions:
image:
repository: bitnamilegacy/os-shell
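## A minimal sketch (comment only, not active configuration) of overriding the
## PostgreSQL HA credentials above from a separate values file instead of
## editing this one. The key paths match the `@param` entries in this section;
## the placeholder values are assumptions, not chart defaults:
##
##   postgresql-ha:
##     postgresql:
##       password: <gitea-db-password>
##       repmgrPassword: <repmgr-password>
##       postgresPassword: <postgres-admin-password>
##     pgpool:
##       adminPassword: <pgpool-admin-password>
##       srCheckPassword: <sr-check-password>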
## @section PostgreSQL
#
postgresql:
## @param postgresql.enabled Enable PostgreSQL
enabled: false
## @param postgresql.global.postgresql.auth.password Password for the `gitea` user (overrides `auth.password`)
## @param postgresql.global.postgresql.auth.database Name for a custom database to create (overrides `auth.database`)
## @param postgresql.global.postgresql.auth.username Name for a custom user to create (overrides `auth.username`)
## @param postgresql.global.postgresql.service.ports.postgresql PostgreSQL service port (overrides `service.ports.postgresql`)
global:
postgresql:
auth:
password: gitea
database: gitea
username: gitea
service:
ports:
postgresql: 5432
## @param postgresql.image.repository Image repository, eg. `bitnamilegacy/postgresql`.
image:
repository: bitnamilegacy/postgresql
primary:
## @param postgresql.primary.persistence.enabled Enable persistence.
## @param postgresql.primary.persistence.storageClass Persistent Volume storage class.
## @param postgresql.primary.persistence.size PVC Storage Request for PostgreSQL volume.
persistence:
enabled: true
storageClass: ""
size: 10Gi
readReplicas:
## @param postgresql.readReplicas.persistence.enabled Enable PostgreSQL read only data persistence using PVC.
## @param postgresql.readReplicas.persistence.storageClass Persistent Volume storage class.
## @param postgresql.readReplicas.persistence.size PVC Storage Request for PostgreSQL volume.
persistence:
enabled: true
storageClass: ""
size: ""
## @param postgresql.metrics.image.repository Image repository, eg. `bitnamilegacy/postgres-exporter`.
metrics:
image:
repository: bitnamilegacy/postgres-exporter
## @param postgresql.volumePermissions.image.repository Image repository, eg. `bitnamilegacy/os-shell`.
volumePermissions:
image:
repository: bitnamilegacy/os-shell
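## To run the single-instance PostgreSQL instead of PostgreSQL HA, flip the two
## `enabled` toggles in an override values file (sketch only; both keys are
## defined in the sections above):
##
##   postgresql-ha:
##     enabled: false
##   postgresql:
##     enabled: true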
# By default, Helm will fail an install or upgrade if a user-defined values.yaml still contains settings that were removed or moved in the chart.
# Set it to false to skip this basic validation check.
## @section Advanced