
Prometheus Fail2Ban exporter


This helm chart deploys a Prometheus metrics exporter for Fail2Ban. It allows configuring additional containers/initContainers, mounting volumes, defining additional environment variables, applying a user-defined webConfig.yaml and much more.

Important

This helm chart contains neither a fail2ban daemon nor any jail configurations. The daemon's socket can be mounted into the filesystem of the exporter via a volume. By default, the hostPath /var/run/fail2ban is mounted into the pod.

The chapter Helm: configuration and installation describes the basics of how to configure helm and use it to deploy the exporter. It also contains further configuration examples.

Furthermore, this helm chart contains unit tests to detect regressions and stabilize the deployment. Additionally, this helm chart is tested for deployment scenarios with ArgoCD.
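The unit tests can also be executed locally. A minimal sketch, assuming the helm-unittest plugin and a checked-out copy of the chart repository:

```shell
# Install the helm-unittest plugin (skip if already installed)
helm plugin install https://github.com/helm-unittest/helm-unittest
# Run the chart's unit tests from the repository root
helm unittest .
```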

Helm: configuration and installation

  1. A helm chart repository must be configured to pull the helm charts from.
  2. All available parameters are documented in detail below. The parameters can be defined via the helm --set flag or directly in a values.yaml file. The following example adds the prometheus-exporters repository and uses the --set flag for a basic deployment.

Important

By default, neither a serviceMonitor nor a podMonitor is enabled. Use prometheus.metrics.serviceMonitor.enabled=true or prometheus.metrics.podMonitor.enabled=true to enable one of the monitor deployments. Deploying both monitors at the same time is not possible.

helm repo add prometheus-exporters https://charts.cryptic.systems/prometheus-exporters
helm repo update
helm install prometheus-fail2ban-exporter prometheus-exporters/prometheus-fail2ban-exporter \
  --set 'prometheus.metrics.enabled=true' \
  --set 'prometheus.metrics.serviceMonitor.enabled=true'

Instead of passing all parameters via the --set flag, it is also possible to define them in a values.yaml file. The following command downloads the values.yaml for a specific version of this chart. Please keep in mind that the version of the chart must be in sync with the values.yaml. Newer minor versions can introduce new features; new major versions can contain breaking changes!

CHART_VERSION=0.1.0
helm show values prometheus-exporters/prometheus-fail2ban-exporter --version "${CHART_VERSION}" > values.yaml

A complete list of available helm chart versions can be displayed via the following command:

helm search repo prometheus-fail2ban-exporter --versions

The helm chart also contains some prometheusRules. These are disabled by default and serve as examples/inspiration for customizations. They can be configured in more detail via the values.yaml.
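As a sketch, a custom rule can be passed via a separate values file. The alert name, expression and labels below are hypothetical and only illustrate the expected structure of the prometheus.rules array:

```shell
# Hypothetical rule definition; adapt alert name, expression and labels to your setup.
cat > rules-values.yaml <<'EOF'
prometheus:
  rules:
    - alert: Fail2BanExporterDown           # hypothetical alert name
      expr: up{job="prometheus-fail2ban-exporter"} == 0
      for: 5m
      labels:
        severity: critical
EOF

helm install prometheus-fail2ban-exporter prometheus-exporters/prometheus-fail2ban-exporter \
  --values rules-values.yaml
```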

Examples

The following examples serve as individual configurations and as inspiration for solving deployment problems.

Avoid CPU throttling by defining a CPU limit

If the application is deployed with a CPU resource limit, Prometheus may report a CPU throttling warning for the application. This is because the application detects the number of CPUs of the host, but cannot use all of the available CPU time to perform computing operations.

The application must be informed that, despite seeing several CPUs, only a part (the limit) of the available computing time may be used. As this is a Golang application, this can be implemented using GOMAXPROCS. The following example is one way of defining GOMAXPROCS automatically based on the defined CPU limit, such as 100m. Please keep in mind that the CFS period of 100ms, the default on each kubernetes node, is also very important to avoid CPU throttling.

Further information about this topic can be found here.

Note

The environment variable GOMAXPROCS is set automatically when a CPU limit is defined. An explicit configuration is no longer required.

helm install prometheus-fail2ban-exporter prometheus-exporters/prometheus-fail2ban-exporter \
  --set 'prometheus.metrics.enabled=true' \
  --set 'prometheus.metrics.serviceMonitor.enabled=true' \
  --set 'daemonSet.fail2banExporter.env.name=GOMAXPROCS' \
  --set 'daemonSet.fail2banExporter.env.valueFrom.resourceFieldRef.resource=limits.cpu' \
  --set 'daemonSet.fail2banExporter.resources.limits.cpu=100m'

Grafana dashboard

The helm chart includes Grafana dashboards. These can be deployed as a configMap by activating the Grafana integration. It is assumed that the dashboard is consumed by Grafana itself or by a sidecar container and stored in the Grafana container's file system, so that it is subsequently available to the user. The kube-prometheus-stack deployment makes this possible.

helm install prometheus-fail2ban-exporter prometheus-exporters/prometheus-fail2ban-exporter \
  --set 'grafana.enabled=true'

Parameters

Global

| Name | Description | Value |
| ---- | ----------- | ----- |
| `nameOverride` | Individual release name suffix. | `""` |
| `fullnameOverride` | Override the complete release name logic. | `""` |

Configuration

| Name | Description | Value |
| ---- | ----------- | ----- |
| `config.webConfig.existingSecret.enabled` | Mount an existing secret containing the key webConfig.yaml. | `false` |
| `config.webConfig.existingSecret.secretName` | Name of the existing secret containing the key webConfig.yaml. | `""` |
| `config.webConfig.secret.annotations` | Additional annotations of the secret containing the webConfig.yaml. | `{}` |
| `config.webConfig.secret.labels` | Additional labels of the secret containing the webConfig.yaml. | `{}` |
| `config.webConfig.secret.webConfig` | Content of the webConfig.yaml. | `{}` |
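For example, an existing secret containing a webConfig.yaml can be mounted instead of letting the chart render one. A sketch, assuming a local webConfig.yaml file and the hypothetical secret name fail2ban-web-config:

```shell
# Create a secret with the key webConfig.yaml (the secret name is hypothetical)
kubectl create secret generic fail2ban-web-config \
  --from-file=webConfig.yaml=./webConfig.yaml

helm install prometheus-fail2ban-exporter prometheus-exporters/prometheus-fail2ban-exporter \
  --set 'config.webConfig.existingSecret.enabled=true' \
  --set 'config.webConfig.existingSecret.secretName=fail2ban-web-config'
```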

Daemonset

| Name | Description | Value |
| ---- | ----------- | ----- |
| `daemonSet.annotations` | Additional deployment annotations. | `{}` |
| `daemonSet.labels` | Additional deployment labels. | `{}` |
| `daemonSet.additionalContainers` | List of additional containers. | `[]` |
| `daemonSet.affinity` | Affinity for the fail2ban-exporter daemonSet. | `{}` |
| `daemonSet.initContainers` | List of additional init containers. | `[]` |
| `daemonSet.dnsConfig` | dnsConfig of the fail2ban-exporter daemonSet. | `{}` |
| `daemonSet.dnsPolicy` | dnsPolicy of the fail2ban-exporter daemonSet. | `""` |
| `daemonSet.hostname` | Individual hostname of the pod. | `""` |
| `daemonSet.subdomain` | Individual domain of the pod. | `""` |
| `daemonSet.hostNetwork` | Use the kernel network namespace of the host system. | `false` |
| `daemonSet.imagePullSecrets` | Secret to use for pulling the image. | `[]` |
| `daemonSet.fail2banExporter.args` | Arguments passed to the fail2ban-exporter container. | `[]` |
| `daemonSet.fail2banExporter.env` | List of environment variables for the fail2ban-exporter container. | `[]` |
| `daemonSet.fail2banExporter.envFrom` | List of environment variables mounted from configMaps or secrets for the fail2ban-exporter container. | `[]` |
| `daemonSet.fail2banExporter.image.registry` | Image registry, eg. docker.io. | `git.cryptic.systems` |
| `daemonSet.fail2banExporter.image.repository` | Image repository, eg. library/busybox. | `volker.raschek/prometheus-fail2ban-exporter` |
| `daemonSet.fail2banExporter.image.tag` | Custom image tag, eg. 0.1.0. Defaults to appVersion. | `""` |
| `daemonSet.fail2banExporter.image.pullPolicy` | Image pull policy. | `IfNotPresent` |
| `daemonSet.fail2banExporter.resources` | CPU and memory resources of the pod. | `{}` |
| `daemonSet.fail2banExporter.securityContext` | Security context of the container of the daemonSet. | `{}` |
| `daemonSet.fail2banExporter.volumeMounts` | Additional volume mounts. | `undefined` |
| `daemonSet.nodeSelector` | NodeSelector of the fail2ban-exporter daemonSet. | `{}` |
| `daemonSet.priorityClassName` | PriorityClassName of the fail2ban-exporter daemonSet. | `""` |
| `daemonSet.restartPolicy` | Restart policy of the fail2ban-exporter daemonSet. | `""` |
| `daemonSet.securityContext` | Security context of the fail2ban-exporter daemonSet. | `{}` |
| `daemonSet.updateStrategy.rollingUpdate.maxSurge` | The maximum number of pods that can be scheduled above the desired number of pods during a rolling update. | `1` |
| `daemonSet.updateStrategy.rollingUpdate.maxUnavailable` | The maximum number of pods that can be unavailable during a rolling update. | `0` |
| `daemonSet.updateStrategy.type` | Strategy type - OnDelete or RollingUpdate. | `RollingUpdate` |
| `daemonSet.terminationGracePeriodSeconds` | How long to wait before forcefully killing the pod. | `60` |
| `daemonSet.tolerations` | Tolerations of the fail2ban-exporter daemonSet. | `[]` |
| `daemonSet.topologySpreadConstraints` | TopologySpreadConstraints of the fail2ban-exporter daemonSet. | `[]` |
| `daemonSet.volumes` | Additional volumes to mount into the pods of the prometheus-exporter daemonSet. | `undefined` |
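The volume-related parameters above can be combined, for example, to mount a non-default fail2ban socket path from the host. A sketch; the host path /run/fail2ban is an assumption and must match the host system:

```shell
# Mount a custom hostPath into the exporter container (the host path is an assumption)
helm install prometheus-fail2ban-exporter prometheus-exporters/prometheus-fail2ban-exporter \
  --set 'daemonSet.volumes[0].name=fail2ban-socket' \
  --set 'daemonSet.volumes[0].hostPath.path=/run/fail2ban' \
  --set 'daemonSet.fail2banExporter.volumeMounts[0].name=fail2ban-socket' \
  --set 'daemonSet.fail2banExporter.volumeMounts[0].mountPath=/var/run/fail2ban'
```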

Grafana

| Name | Description | Value |
| ---- | ----------- | ----- |
| `grafana.enabled` | Enable the integration into Grafana. Requires the Prometheus operator daemonSet. | `false` |
| `grafana.dashboardDiscoveryLabels` | Labels that Grafana uses to discover resources. The labels may vary depending on the Grafana daemonSet. | `undefined` |
| `grafana.dashboards.fail2banExporter.enabled` | Enable deployment of the Grafana dashboard fail2banExporter. | `true` |
| `grafana.dashboards.fail2banExporter.annotations` | Additional configmap annotations. | `{}` |
| `grafana.dashboards.fail2banExporter.labels` | Additional configmap labels. | `{}` |

Ingress

| Name | Description | Value |
| ---- | ----------- | ----- |
| `ingress.enabled` | Enable creation of an ingress resource. Requires that the http service is also enabled. | `false` |
| `ingress.className` | Ingress class. | `nginx` |
| `ingress.annotations` | Additional ingress annotations. | `{}` |
| `ingress.labels` | Additional ingress labels. | `{}` |
| `ingress.hosts` | Ingress specific configuration. Specification only required when another ingress controller is used instead of `t1k`. | `[]` |
| `ingress.tls` | Ingress TLS settings. Specification only required when another ingress controller is used instead of `t1k`. | `[]` |

Pod disruption

| Name | Description | Value |
| ---- | ----------- | ----- |
| `podDisruptionBudget` | Pod disruption budget. | `{}` |

Network

| Name | Description | Value |
| ---- | ----------- | ----- |
| `networkPolicies` | Deploy network policies based on the used container network interface (CNI) implementation - like calico or weave. | `{}` |

Prometheus

| Name | Description | Value |
| ---- | ----------- | ----- |
| `prometheus.metrics.enabled` | Enable scraping of metrics by Prometheus. | `true` |
| `prometheus.metrics.podMonitor.enabled` | Enable creation of a podMonitor. Excludes the existence of a serviceMonitor resource. | `false` |
| `prometheus.metrics.podMonitor.annotations` | Additional podMonitor annotations. | `{}` |
| `prometheus.metrics.podMonitor.enableHttp2` | Enable HTTP2. | `true` |
| `prometheus.metrics.podMonitor.followRedirects` | FollowRedirects configures whether scrape requests follow HTTP 3xx redirects. | `false` |
| `prometheus.metrics.podMonitor.honorLabels` | Honor labels. | `false` |
| `prometheus.metrics.podMonitor.labels` | Additional podMonitor labels. | `{}` |
| `prometheus.metrics.podMonitor.interval` | Interval at which metrics should be scraped. If not specified, Prometheus' global scrape interval is used. | `60s` |
| `prometheus.metrics.podMonitor.path` | HTTP path for scraping Prometheus metrics. | `/metrics` |
| `prometheus.metrics.podMonitor.relabelings` | RelabelConfigs to apply to samples before scraping. Prometheus Operator automatically adds relabelings for a few standard Kubernetes fields. | `[]` |
| `prometheus.metrics.podMonitor.scrapeTimeout` | Timeout after which the scrape is ended. If not specified, the global Prometheus scrape timeout is used. | `30s` |
| `prometheus.metrics.podMonitor.scheme` | HTTP scheme to use for scraping. For example http or https. | `http` |
| `prometheus.metrics.podMonitor.tlsConfig` | TLS configuration to use when scraping the metric endpoint by Prometheus. | `{}` |
| `prometheus.metrics.serviceMonitor.enabled` | Enable creation of a serviceMonitor. Excludes the existence of a podMonitor resource. | `false` |
| `prometheus.metrics.serviceMonitor.annotations` | Additional serviceMonitor annotations. | `{}` |
| `prometheus.metrics.serviceMonitor.labels` | Additional serviceMonitor labels. | `{}` |
| `prometheus.metrics.serviceMonitor.enableHttp2` | Enable HTTP2. | `true` |
| `prometheus.metrics.serviceMonitor.followRedirects` | FollowRedirects configures whether scrape requests follow HTTP 3xx redirects. | `false` |
| `prometheus.metrics.serviceMonitor.honorLabels` | Honor labels. | `false` |
| `prometheus.metrics.serviceMonitor.interval` | Interval at which metrics should be scraped. If not specified, Prometheus' global scrape interval is used. | `60s` |
| `prometheus.metrics.serviceMonitor.path` | HTTP path for scraping Prometheus metrics. | `/metrics` |
| `prometheus.metrics.serviceMonitor.relabelings` | RelabelConfigs to apply to samples before scraping. Prometheus Operator automatically adds relabelings for a few standard Kubernetes fields. | `[]` |
| `prometheus.metrics.serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended. If not specified, the global Prometheus scrape timeout is used. | `30s` |
| `prometheus.metrics.serviceMonitor.scheme` | HTTP scheme to use for scraping. For example http or https. | `http` |
| `prometheus.metrics.serviceMonitor.tlsConfig` | TLS configuration to use when scraping the metric endpoint by Prometheus. | `{}` |
| `prometheus.rules` | Array of Prometheus rules for monitoring the application and triggering alerts. | `[]` |
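Analogous to the serviceMonitor example in the installation chapter, a podMonitor can be deployed instead. Remember that only one of the two monitors can be enabled at a time:

```shell
helm install prometheus-fail2ban-exporter prometheus-exporters/prometheus-fail2ban-exporter \
  --set 'prometheus.metrics.enabled=true' \
  --set 'prometheus.metrics.podMonitor.enabled=true'
```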

Service

| Name | Description | Value |
| ---- | ----------- | ----- |
| `services.http.enabled` | Enable the service. | `true` |
| `services.http.annotations` | Additional service annotations. | `{}` |
| `services.http.externalIPs` | External IPs for the service. | `[]` |
| `services.http.externalTrafficPolicy` | If service.type is NodePort or LoadBalancer, set this to Local to tell kube-proxy to only use node local endpoints for cluster external traffic. Furthermore, this enables source IP preservation. | `Cluster` |
| `services.http.internalTrafficPolicy` | If service.type is NodePort or LoadBalancer, set this to Local to tell kube-proxy to only use node local endpoints for cluster internal traffic. | `Cluster` |
| `services.http.ipFamilies` | IPFamilies is the list of IP families (e.g. IPv4, IPv6) assigned to this service. This field is usually assigned automatically based on cluster configuration and only required for customization. | `[]` |
| `services.http.labels` | Additional service labels. | `{}` |
| `services.http.loadBalancerClass` | LoadBalancerClass is the class of the load balancer implementation this Service belongs to. Requires a service of type LoadBalancer. | `""` |
| `services.http.loadBalancerIP` | LoadBalancer will get created with the IP specified in this field. Requires a service of type LoadBalancer. | `""` |
| `services.http.loadBalancerSourceRanges` | Source range filter for LoadBalancer. Requires a service of type LoadBalancer. | `[]` |
| `services.http.port` | Port to forward the traffic to. | `9191` |
| `services.http.sessionAffinity` | Supports ClientIP and None. Enable client IP based session affinity via ClientIP. | `None` |
| `services.http.sessionAffinityConfig` | Contains the configuration of the session affinity. | `{}` |
| `services.http.type` | Kubernetes service type for the traffic. | `ClusterIP` |

ServiceAccount

| Name | Description | Value |
| ---- | ----------- | ----- |
| `serviceAccount.existing.enabled` | Use an existing service account instead of creating a new one. Assumes that the user has all the necessary kubernetes API authorizations. | `false` |
| `serviceAccount.existing.serviceAccountName` | Name of the existing service account. | `""` |
| `serviceAccount.new.annotations` | Additional service account annotations. | `{}` |
| `serviceAccount.new.labels` | Additional service account labels. | `{}` |
| `serviceAccount.new.automountServiceAccountToken` | Enable/disable auto mounting of the service account token. | `true` |
| `serviceAccount.new.imagePullSecrets` | ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this serviceAccount. | `[]` |
| `serviceAccount.new.secrets` | Secrets is the list of secrets allowed to be used by pods running using this ServiceAccount. | `[]` |
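For example, an already existing service account can be reused instead of creating a new one; the name my-service-account is hypothetical:

```shell
helm install prometheus-fail2ban-exporter prometheus-exporters/prometheus-fail2ban-exporter \
  --set 'serviceAccount.existing.enabled=true' \
  --set 'serviceAccount.existing.serviceAccountName=my-service-account'
```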