--- /dev/null
+CrowdSec for Debian
+===================
+
+# Local API and Central API
+
+There are multiple ways to configure `crowdsec`, leveraging a Local
+API (LAPI) and/or the Central API (CAPI).
+
+
+At the moment, the default configuration does the following:
+
+ 1. Sets up a Local API (LAPI) that doesn't listen on the
+    network. This can be adjusted by following the
+    [upstream local API documentation](https://doc.crowdsec.net/Crowdsec/v1/localAPI/).
+
+ 1. Registers with the Central API by default, to take part in the
+    collective effort. If that's not desired, it is possible to create
+    a `/etc/crowdsec/online_api_credentials.yaml` file before
+    installing the package, containing only a comment (e.g.
+    `# no thanks`). In this case, the registration is skipped, and
+    the file is also left behind if the package is purged, so as to
+    respect the admin's wishes should it later be reinstalled. If one
+    reconsiders, it's sufficient to empty this file and run the
+    following command manually:
+
+ cscli capi register
+
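+For instance, opting out ahead of the installation boils down to the
+following (simulated here in a throwaway directory: `$ROOT` stands in
+for the target filesystem, so drop it and run as root on a real
+system):
+
```shell
# Sketch: preseed the CAPI opt-out before installing the package.
# $ROOT simulates the target filesystem; use "" (and root privileges)
# on a real system.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/crowdsec"
printf '# no thanks\n' > "$ROOT/etc/crowdsec/online_api_credentials.yaml"
# The postinst skips CAPI registration when this file already exists.
cat "$ROOT/etc/crowdsec/online_api_credentials.yaml"
```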
+
+# Configuration management
+
+## Offline hub
+
+The `crowdsec` Debian package ships a copy of the collections
+(parsers, scenarios, and some other items) available from the online
+[hub](https://hub.crowdsec.net/) so that it can be configured out of
+the box, without having to download anything from the internet. For
+the purposes of this document, let's call this copy the “offline hub”.
+
+Those items will automatically be updated when the `crowdsec` package
+gets updated, without user intervention.
+
+During initial configuration, all available items are enabled. That is
+achieved by creating symlinks below the `/etc/crowdsec` directories,
+for collections, parsers, postoverflows, and scenarios.
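+
+As an illustration, an enabled item ends up as a symlink like the
+following (simulated in a temporary directory; the `sshd.yaml` file
+name is just an example, not a description of the actual hub
+contents):
+
```shell
# Simulate how an enabled collection shows up under /etc/crowdsec:
# a symlink pointing at the corresponding hub item.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/var/lib/crowdsec/hub/collections/crowdsecurity" \
         "$ROOT/etc/crowdsec/collections"
touch "$ROOT/var/lib/crowdsec/hub/collections/crowdsecurity/sshd.yaml"
ln -s "$ROOT/var/lib/crowdsec/hub/collections/crowdsecurity/sshd.yaml" \
      "$ROOT/etc/crowdsec/collections/sshd.yaml"
readlink "$ROOT/etc/crowdsec/collections/sshd.yaml"
```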
+
+
+## Online hub
+
+It is also possible to move away from the local, offline hub to the
+online hub, so as to benefit from new or updated items without having
+to wait for a package update. To do so, follow the upstream docs and
+run:
+
+ cscli hub update
+
+Once that has happened, the offline hub will no longer be considered
+and only items from the online hub will be used.
+
+If going back to the offline hub is desired, that can be achieved by
+running this command:
+
+ /var/lib/dpkg/info/crowdsec.postinst disable-online-hub
+
+This undoes the `enable-online-hub` action that happened automatically
+the first time `cscli hub update` was called. Note that it may remove
+items that were available on the online hub but are missing from the
+offline hub, so one might want to double-check the state of all
+configured collections afterward.
+
+Once that has happened, don't forget to restart the crowdsec unit:
+
+ systemctl restart crowdsec.service
+
+
+## Implementation details
+
+When configuring a collection, symlinks are created under
+`/etc/crowdsec`, pointing at items under `/var/lib/crowdsec/hub`.
+
+Initially, that directory points at items from the offline hub,
+shipped under `/usr/share/crowdsec/hub`.
+
+When switching to the online hub, the `/var/lib/crowdsec/hub`
+directory no longer points at the offline hub, and contains a copy of
+items downloaded from <https://hub.crowdsec.net/> instead.
+
+If switching back to the offline hub, `/var/lib/crowdsec/hub` is
+cleaned up (downloaded items are removed), and it starts pointing at
+the offline hub again.
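+
+The transitions described above can be sketched as follows. This is a
+simulation in a temporary directory, not the package's actual
+postinst code, and it assumes for simplicity that the live hub
+directory is a single symlink:
+
```shell
# Simulated state transitions for /var/lib/crowdsec/hub ($ROOT stands
# in for /; assuming the live hub is one symlink is a simplification).
ROOT=$(mktemp -d)
mkdir -p "$ROOT/usr/share/crowdsec/hub" "$ROOT/var/lib/crowdsec"
touch "$ROOT/usr/share/crowdsec/hub/.index.json"

# Offline state: the live directory points at the shipped copy.
ln -s "$ROOT/usr/share/crowdsec/hub" "$ROOT/var/lib/crowdsec/hub"

# enable-online-hub: replace the pointer with a real directory that
# will hold items downloaded from hub.crowdsec.net.
rm "$ROOT/var/lib/crowdsec/hub"
mkdir "$ROOT/var/lib/crowdsec/hub"
touch "$ROOT/var/lib/crowdsec/hub/.index.json"   # stands for a download

# disable-online-hub: remove downloaded items, restore the pointer.
rm -r "$ROOT/var/lib/crowdsec/hub"
ln -s "$ROOT/usr/share/crowdsec/hub" "$ROOT/var/lib/crowdsec/hub"
[ -e "$ROOT/var/lib/crowdsec/hub/.index.json" ] && echo "offline hub active"
```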
--- /dev/null
+crowdsec (1.0.9-3) unstable; urgency=medium
+
+ * Backport upstream patches to deal with missing MMDB files gracefully
+ (geolocation files aren't shipped by default):
+ - 5ae69aa293: fix stacktrace when mmdb files are not present (#935)
+ - 4dbbd4b3c4: automatically download files when needed (#895), so
+ that switching to the online hub doesn't require extra steps to
+ fetch files.
+
+ -- Cyril Brulebois <cyril@debamax.com> Sat, 04 Dec 2021 05:03:33 +0100
+
+crowdsec (1.0.9-2) unstable; urgency=medium
+
+ * Backport hub patch from upstream to fix false positives due to
+ substring matches (https://github.com/crowdsecurity/hub/pull/197):
+ + 0009-Improve-http-bad-user-agent-use-regexp-197.patch
+
+ -- Cyril Brulebois <cyril@debamax.com> Mon, 03 May 2021 07:29:06 +0000
+
+crowdsec (1.0.9-1) unstable; urgency=medium
+
+ * New upstream stable release:
+ + Improve documentation.
+ + Fix disabled Central API use case: without Central API credentials
+ in the relevant config file, crowdsec would still try and establish
+ a connection.
+ * Add patch to disable broken scenario (ban-report-ssh_bf_report, #181):
+ + 0008-hub-disable-broken-scenario.patch
+ * Add logrotate config for /var/log/crowdsec{,_api}.log (weekly, 4).
+
+ -- Cyril Brulebois <cyril@debamax.com> Mon, 15 Mar 2021 01:19:43 +0100
+
+crowdsec (1.0.8-2) unstable; urgency=medium
+
+ * Update postinst to also strip ltsich/ when installing symlinks
+ initially (new vendor in recent hub files, in addition to the usual
+ crowdsecurity/).
+
+ -- Cyril Brulebois <cyril@debamax.com> Tue, 02 Mar 2021 01:29:29 +0000
+
+crowdsec (1.0.8-1) unstable; urgency=medium
+
+ * New upstream stable release.
+ * Refresh patches:
+ + 0001-use-a-local-machineid-implementation.patch (unfuzzy)
+ + 0002-add-compatibility-for-older-sqlite-driver.patch
+ * Set cwversion variables through debian/rules (build metadata).
+ * Add patch so that upstream's crowdsec.service is correct on Debian:
+ + 0003-adjust-systemd-unit.patch
+ * Really add lintian overrides for hardening-no-pie warnings.
+ * Ship patterns below /etc/crowdsec/patterns: they're supposed to be
+ stable over time, and it's advised not to modify them, but let's allow
+ for some configurability.
+ * Include a snapshot of hub files from the master branch, at commit
+ d8a8509bdf: hub1. Further updates for a given crowdsec upstream
+ version will be numbered hubN. After a while, they will be generated
+ from a dedicated vX.Y.Z branch instead of from master.
+ * Implement a generate_hub_tarball target in debian/rules to automate
+ generating a tarball for hub files.
+ * Add patch to disable geoip-enrich in the hub files as it requires
+ downloading some files from the network that aren't under the usual
+ MIT license:
+ + 0004-disable-geoip-enrich.patch
+ * Ship a selection of hub files in /usr/share/crowdsec/hub so that
+ crowdsec can be set up without having to download data from the
+ collaborative hub (https://hub.crowdsec.net/).
+ * Ditto for some data files (in /usr/share/crowdsec/data).
+ * Use DH_GOLANG_EXCLUDES to avoid including extra Go files from the
+ hub into the build directory.
+ * Implement an extract_hub_tarball target in debian/rules to automate
+ extracting hub files from the tarball.
+ * Implement an extract_data_tarball target in debian/rules to automate
+ extracting data files from the tarball.
+ * Ship crowdsec-cli (automated Golang naming) as cscli (upstream's
+ preference).
+ * Add patch to adjust the default config:
+ + 0005-adjust-config.patch
+ * Ship config/config.yaml accordingly, along with the config files it
+ references.
+ * Also adjust the hub_branch variable in config.yaml, pointing to the
+ branch related to the current upstream release instead of master.
+ * Create /var/lib/crowdsec/{data,hub} directories.
+ * Implement configure in postinst to generate credentials files:
+ Implement a simple agent setup with a Local API (LAPI), and with an
+ automatic registration to the Central API (CAPI). The latter can be
+ disabled by creating a /etc/crowdsec/online_api_credentials.yaml file
+ containing a comment (e.g. “# no thanks”) before installing this
+ package.
+ * Implement purge in postrm. Drop all of /etc/crowdsec except
+ online_api_credentials.yaml if this file doesn't seem to have been
+ created during CAPI registration (likely because an admin created the
+ file in advance to prevent it). Also remove everything below
+ /var/lib/crowdsec/{data,hub}, along with log files.
+ * Implement custom enable-online-hub and disable-online-hub actions in
+ postinst. The latter is called once automatically to make sure the
+ offline hub is ready to use. See README.Debian for details.
+ * Also enable all items using the offline hub on fresh installation.
+ * Add patch advertising `systemctl restart crowdsec` when updating the
+ configuration: reload doesn't work at the moment (#656 upstream).
+ + 0006-prefer-systemctl-restart.patch
+ * Add patch automating switching from the offline hub to the online hub
+ when `cscli hub update` is called:
+ + 0007-automatically-enable-online-hub.patch
+ * Add lintian override accordingly: uses-dpkg-database-directly.
+ * Add ca-certificates to Depends for the CAPI registration.
+ * Create /etc/machine-id if it doesn't exist already (e.g. in piuparts
+ environments).
+
+ -- Cyril Brulebois <cyril@debamax.com> Tue, 02 Mar 2021 00:25:48 +0000
+
+crowdsec (1.0.4-1) unstable; urgency=medium
+
+ * New upstream release.
+ * Bump copyright years.
+ * Bump golang-github-facebook-ent-dev build-dep.
+ * Swap Maintainer/Uploaders: the current plan is for me to keep in touch
+ with upstream to coordinate packaging work in Debian. Help from fellow
+ members of the Debian Go Packaging Team is very welcome, though!
+ * Fix typos in the long description, and merge upstream's review.
+ * Refresh patch:
+ + 0001-use-a-local-machineid-implementation.patch
+ * Drop patch (merged upstream):
+ + 1001-fix-docker-container-creation-for-metabase-563.patch
+
+ -- Cyril Brulebois <cyril@debamax.com> Wed, 03 Feb 2021 08:54:24 +0000
+
+crowdsec (1.0.2-1) unstable; urgency=medium
+
+ * Initial release (Closes: #972573): start by shipping binaries,
+ while better integration is being worked on with upstream:
+ documentation and assisted configuration are coming up.
+ * Version some build-deps as earlier versions are known not to work.
+ * Use a local machineid implementation instead of depending on an
+ extra package:
+ + 0001-use-a-local-machineid-implementation.patch
+ * Use a syntax that's compatible with version 1.6.0 of the sqlite3
+ driver:
+ + 0002-add-compatibility-for-older-sqlite-driver.patch
+ * Backport upstream fix for golang-github-docker-docker-dev version
+ currently in unstable:
+ + 1001-fix-docker-container-creation-for-metabase-563.patch
+ * Install all files in the build directory so that the testsuite finds
+ required test data that's scattered all over the place.
+ * Add systemd to Build-Depends for the testsuite, so that it finds
+ the journalctl binary.
+ * Add lintian overrides for the hardening-no-pie warnings: PIE is not
+ relevant for Go packages.
+
+ -- Cyril Brulebois <cyril@debamax.com> Thu, 14 Jan 2021 02:46:18 +0000
--- /dev/null
+Source: crowdsec
+Maintainer: Cyril Brulebois <cyril@debamax.com>
+Uploaders: Debian Go Packaging Team <team+pkg-go@tracker.debian.org>
+Section: golang
+Testsuite: autopkgtest-pkg-go
+Priority: optional
+Build-Depends: debhelper-compat (= 13),
+ dh-golang,
+ golang-any,
+ golang-github-alecaivazis-survey-dev,
+ golang-github-antonmedv-expr-dev,
+ golang-github-appleboy-gin-jwt-dev,
+ golang-github-buger-jsonparser-dev,
+ golang-github-coreos-go-systemd-dev,
+ golang-github-davecgh-go-spew-dev,
+ golang-github-dghubble-sling-dev,
+ golang-github-docker-docker-dev,
+ golang-github-docker-go-connections-dev,
+ golang-github-enescakir-emoji-dev,
+ golang-github-facebook-ent-dev (>= 0.5.4),
+ golang-github-gin-gonic-gin-dev (>= 1.6.3),
+ golang-github-go-co-op-gocron-dev,
+ golang-github-go-openapi-errors-dev,
+ golang-github-go-openapi-strfmt-dev,
+ golang-github-go-openapi-swag-dev,
+ golang-github-go-openapi-validate-dev,
+ golang-github-go-sql-driver-mysql-dev,
+ golang-github-google-go-querystring-dev,
+ golang-github-goombaio-namegenerator-dev,
+ golang-github-hashicorp-go-version-dev,
+ golang-github-logrusorgru-grokky-dev,
+ golang-github-mattn-go-sqlite3-dev,
+ golang-github-mohae-deepcopy-dev,
+ golang-github-nxadm-tail-dev,
+ golang-github-olekukonko-tablewriter-dev,
+ golang-github-opencontainers-image-spec-dev,
+ golang-github-oschwald-geoip2-golang-dev (>= 1.2),
+ golang-github-oschwald-maxminddb-golang-dev (>= 1.4),
+ golang-github-pkg-errors-dev,
+ golang-github-prometheus-client-model-dev,
+ golang-github-prometheus-prom2json-dev,
+ golang-github-spf13-cobra-dev,
+ golang-github-stretchr-testify-dev,
+ golang-golang-x-crypto-dev,
+ golang-golang-x-mod-dev,
+ golang-golang-x-sys-dev,
+ golang-gopkg-natefinch-lumberjack.v2-dev,
+ golang-gopkg-tomb.v2-dev,
+ golang-logrus-dev,
+ golang-pq-dev,
+ golang-prometheus-client-dev,
+ golang-yaml.v2-dev,
+ systemd
+Standards-Version: 4.5.0
+Vcs-Browser: https://salsa.debian.org/go-team/packages/crowdsec
+Vcs-Git: https://salsa.debian.org/go-team/packages/crowdsec.git
+Homepage: https://github.com/crowdsecurity/crowdsec
+Rules-Requires-Root: no
+XS-Go-Import-Path: github.com/crowdsecurity/crowdsec
+
+Package: crowdsec
+Architecture: any
+Depends: ca-certificates,
+ ${misc:Depends},
+ ${shlibs:Depends}
+Built-Using: ${misc:Built-Using}
+Description: lightweight and collaborative security engine
+ CrowdSec is a lightweight security engine, able to detect and remedy
+ aggressive network behavior. It can leverage and also enrich a
+ global community-wide IP reputation database, to help fight online
+ cybersec aggressions in a collaborative manner.
+ .
+ CrowdSec can read many log sources, parse and also enrich them, in
+ order to detect specific scenarios that usually represent malevolent
+ behavior. Parsers, Enrichers, and Scenarios are YAML files that can
+ be shared and downloaded through a specific Hub, as well as be created
+ or adapted locally.
+ .
+ Detection results are available for CrowdSec, its CLI tools and
+ bouncers via an HTTP API. Triggered scenarios lead to an alert, which
+ often results in a decision (e.g. IP banned for 4 hours) that can be
+ consumed by bouncers (software components enforcing a decision, such
+ as an iptables ban, an nginx lua script, or any custom user script).
+ .
+ The CLI allows users to deploy a Metabase Docker image to provide
+ simple-to-deploy dashboards of ongoing activity. The CrowdSec daemon
+ is also instrumented with Prometheus to provide observability.
+ .
+ CrowdSec can be used against live logs (“à la fail2ban”), but can
+ also work on cold logs to help, in a forensic context, to build an
+ analysis for past events.
+ .
+ On top of that, CrowdSec aims at sharing detection signals amongst
+ all participants, to pre-emptively allow users to block likely
+ attackers. To achieve this, minimal meta-information about the attack
+ is shared with the CrowdSec organization for further retribution.
+ .
+ Users can also decide not to take part in the collective effort via
+ the central API, but to register on a local API instead.
--- /dev/null
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+Upstream-Name: crowdsec
+Upstream-Contact: contact@crowdsec.net
+Source: https://github.com/crowdsecurity/crowdsec
+
+Files: *
+Copyright: 2020-2021 crowdsecurity
+License: Expat
+
+Files: pkg/time
+Copyright: 2009-2015 The Go Authors
+ 2020 crowdsecurity
+License: BSD-3
+Comment: improved version of x/time/rate
+
+Files: data*/bad_user_agents.txt
+Copyright: 2017 Mitchell Krog <mitchellkrog@gmail.com>
+License: Expat
+
+Files: hub*/parsers/s01-parse/crowdsecurity/postfix-logs.yaml
+Copyright: 2014, 2015 Rudy Gevaert
+ 2020 Crowdsec
+License: Expat
+
+Files: debian/*
+Copyright: 2020-2021 Cyril Brulebois <cyril@debamax.com>
+License: Expat
+Comment: Debian packaging is licensed under the same terms as upstream
+
+License: Expat
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+ .
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+ .
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
+License: BSD-3
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+ .
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+ copyright notice, this list of conditions and the following disclaimer
+ in the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Google Inc. nor the names of its
+ contributors may be used to endorse or promote products derived from
+ this software without specific prior written permission.
+ .
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--- /dev/null
+/var/log/crowdsec.log
+/var/log/crowdsec_api.log
+{
+ rotate 4
+ weekly
+ compress
+ missingok
+ notifempty
+}
--- /dev/null
+/var/lib/crowdsec/data
+/var/lib/crowdsec/hub
--- /dev/null
+[DEFAULT]
+debian-branch = debian/sid
+dist = DEP14
--- /dev/null
+# auto-generated, DO NOT MODIFY.
+# The authoritative copy of this file lives at:
+# https://salsa.debian.org/go-team/infra/pkg-go-tools/blob/master/config/gitlabciyml.go
+---
+include:
+ - https://salsa.debian.org/go-team/infra/pkg-go-tools/-/raw/master/pipeline/test-archive.yml
--- /dev/null
+# Main config:
+config/config.yaml etc/crowdsec/
+# Referenced configs:
+config/acquis.yaml etc/crowdsec/
+config/profiles.yaml etc/crowdsec/
+config/simulation.yaml etc/crowdsec/
+
+config/patterns/* etc/crowdsec/patterns
+config/crowdsec.service lib/systemd/system
+hub*/blockers usr/share/crowdsec/hub
+hub*/collections usr/share/crowdsec/hub
+hub*/parsers usr/share/crowdsec/hub
+hub*/postoverflows usr/share/crowdsec/hub
+hub*/scenarios usr/share/crowdsec/hub
+hub*/.index.json usr/share/crowdsec/hub
+data*/* usr/share/crowdsec/data
--- /dev/null
+# PIE is not relevant for Go packages (for reference, lintian's
+# $built_with_golang variable is the one that's not set properly
+# for this package, meaning this tag is emitted regardless):
+crowdsec: hardening-no-pie usr/bin/crowdsec
+crowdsec: hardening-no-pie usr/bin/cscli
+
+# The postinst script implements custom actions, sharing code with the
+# "configure" one:
+crowdsec: uses-dpkg-database-directly usr/bin/cscli
--- /dev/null
+From: Cyril Brulebois <cyril@debamax.com>
+Date: Thu, 7 Jan 2021 17:07:12 +0000
+Subject: Use local machineid implementation
+
+Let's avoid a dependency on an extra package (denisbrodbeck/machineid),
+since its ID() function is mostly about trying to read from two files.
+
+Signed-off-by: Manuel Sabban <manuel@crowdsec.net>
+Signed-off-by: Cyril Brulebois <cyril@debamax.com>
+
+---
+ cmd/crowdsec-cli/machines.go | 2 +-
+ go.mod | 1 -
+ go.sum | 2 --
+ pkg/machineid/machineid.go | 29 +++++++++++++++++++++++++++++
+ 4 files changed, 30 insertions(+), 4 deletions(-)
+ create mode 100644 pkg/machineid/machineid.go
+
+--- a/cmd/crowdsec-cli/machines.go
++++ b/cmd/crowdsec-cli/machines.go
+@@ -13,7 +13,7 @@ import (
+ "github.com/AlecAivazis/survey/v2"
+ "github.com/crowdsecurity/crowdsec/pkg/csconfig"
+ "github.com/crowdsecurity/crowdsec/pkg/database"
+- "github.com/denisbrodbeck/machineid"
++ "github.com/crowdsecurity/crowdsec/pkg/machineid"
+ "github.com/enescakir/emoji"
+ "github.com/go-openapi/strfmt"
+ "github.com/olekukonko/tablewriter"
+--- a/go.mod
++++ b/go.mod
+@@ -11,7 +11,6 @@ require (
+ github.com/containerd/containerd v1.4.3 // indirect
+ github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf
+ github.com/davecgh/go-spew v1.1.1
+- github.com/denisbrodbeck/machineid v1.0.1
+ github.com/dghubble/sling v1.3.0
+ github.com/docker/distribution v2.7.1+incompatible // indirect
+ github.com/docker/docker v20.10.2+incompatible
+--- /dev/null
++++ b/pkg/machineid/machineid.go
+@@ -0,0 +1,29 @@
++package machineid
++
++import (
++ "io/ioutil"
++ "strings"
++)
++
++const (
++ // dbusPath is the default path for dbus machine id.
++ dbusPath = "/var/lib/dbus/machine-id"
++ // dbusPathEtc is the default path for dbus machine id located in /etc.
++ // Some systems (like Fedora 20) only know this path.
++ // Sometimes it's the other way round.
++ dbusPathEtc = "/etc/machine-id"
++)
++
++// idea of code is stolen from https://github.com/denisbrodbeck/machineid/
++// but here we are on Debian GNU/Linux
++func ID() (string, error) {
++ id, err := ioutil.ReadFile(dbusPath)
++ if err != nil {
++ // try fallback path
++ id, err = ioutil.ReadFile(dbusPathEtc)
++ }
++ if err != nil {
++ return "", err
++ }
++ return strings.TrimSpace(string(id)), nil
++}
+--- a/go.sum
++++ b/go.sum
+@@ -112,8 +112,6 @@ github.com/davecgh/go-spew v0.0.0-201610
+ github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
+ github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+-github.com/denisbrodbeck/machineid v1.0.1 h1:geKr9qtkB876mXguW2X6TU4ZynleN6ezuMSRhl4D7AQ=
+-github.com/denisbrodbeck/machineid v1.0.1/go.mod h1:dJUwb7PTidGDeYyUBmXZ2GphQBbjJCrnectwCyxcUSI=
+ github.com/dghubble/sling v1.3.0 h1:pZHjCJq4zJvc6qVQ5wN1jo5oNZlNE0+8T/h0XeXBUKU=
+ github.com/dghubble/sling v1.3.0/go.mod h1:XXShWaBWKzNLhu2OxikSNFrlsvowtz4kyRuXUG7oQKY=
+ github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM=
--- /dev/null
+From: Cyril Brulebois <cyril@debamax.com>
+Date: Fri, 8 Jan 2021 17:27:15 +0000
+Subject: Use _foreign_keys=1 instead of _fk=1
+
+The _foreign_keys=1 syntax is widely supported but the _fk=1 alias for
+it was only added in version 1.8.0 of the sqlite3 driver. Avoid using
+the alias for the time being (the freeze is near).
+
+---
+ pkg/database/database.go | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/pkg/database/database.go
++++ b/pkg/database/database.go
+@@ -46,7 +46,7 @@ func NewClient(config *csconfig.Database
+ return &Client{}, fmt.Errorf("unable to set perms on %s: %v", config.DbPath, err)
+ }
+ }
+- client, err = ent.Open("sqlite3", fmt.Sprintf("file:%s?_busy_timeout=100000&_fk=1", config.DbPath))
++ client, err = ent.Open("sqlite3", fmt.Sprintf("file:%s?_busy_timeout=100000&_foreign_keys=1", config.DbPath))
+ if err != nil {
+ return &Client{}, fmt.Errorf("failed opening connection to sqlite: %v", err)
+ }
--- /dev/null
+From: Cyril Brulebois <cyril@debamax.com>
+Date: Fri, 22 Jan 2021 13:25:54 +0000
+Subject: Adjust systemd unit
+
+ - Drop PIDFile (that uses an obsolete path, and doesn't seem to be
+ used at all).
+ - Adjust paths for the packaged crowdsec binary (/usr/bin).
+ - Drop commented out ExecStartPost entirely.
+ - Drop syslog.target dependency, it's socket-activated (thanks to the
+ systemd-service-file-refers-to-obsolete-target lintian tag).
+ - Ensure both local and online API credentials have been defined.
+
+--- a/config/crowdsec.service
++++ b/config/crowdsec.service
+@@ -1,14 +1,15 @@
+ [Unit]
+ Description=Crowdsec agent
+-After=syslog.target network.target remote-fs.target nss-lookup.target
++After=network.target remote-fs.target nss-lookup.target
++# Ensure configuration happened:
++ConditionPathExists=/etc/crowdsec/local_api_credentials.yaml
++ConditionPathExists=/etc/crowdsec/online_api_credentials.yaml
+
+ [Service]
+ Type=notify
+ Environment=LC_ALL=C LANG=C
+-PIDFile=/var/run/crowdsec.pid
+-ExecStartPre=/usr/local/bin/crowdsec -c /etc/crowdsec/config.yaml -t
+-ExecStart=/usr/local/bin/crowdsec -c /etc/crowdsec/config.yaml
+-#ExecStartPost=/bin/sleep 0.1
++ExecStartPre=/usr/bin/crowdsec -c /etc/crowdsec/config.yaml -t
++ExecStart=/usr/bin/crowdsec -c /etc/crowdsec/config.yaml
+ ExecReload=/bin/kill -HUP $MAINPID
+
+ [Install]
--- /dev/null
+From: Cyril Brulebois <cyril@debamax.com>
+Date: Fri, 22 Jan 2021 14:35:42 +0000
+Subject: Disable geoip-enrich in the hub files
+
+It would download GeoLite2*.mmdb files from the network. Let users
+enable the hub by themselves if they want to use it.
+
+--- a/hub1/.index.json
++++ b/hub1/.index.json
+@@ -115,12 +115,11 @@
+ },
+ "long_description": "Kipjb3JlIHBhY2thZ2UgZm9yIGxpbnV4KioKCmNvbnRhaW5zIHN1cHBvcnQgZm9yIHN5c2xvZywgZG8gbm90IHJlbW92ZS4K",
+ "content": "cGFyc2VyczoKICAtIGNyb3dkc2VjdXJpdHkvc3lzbG9nLWxvZ3MKICAtIGNyb3dkc2VjdXJpdHkvZ2VvaXAtZW5yaWNoCiAgLSBjcm93ZHNlY3VyaXR5L2RhdGVwYXJzZS1lbnJpY2gKY29sbGVjdGlvbnM6CiAgLSBjcm93ZHNlY3VyaXR5L3NzaGQKZGVzY3JpcHRpb246ICJjb3JlIGxpbnV4IHN1cHBvcnQgOiBzeXNsb2crZ2VvaXArc3NoIgphdXRob3I6IGNyb3dkc2VjdXJpdHkKdGFnczoKICAtIGxpbnV4Cgo=",
+- "description": "core linux support : syslog+geoip+ssh",
++ "description": "core linux support : syslog+ssh",
+ "author": "crowdsecurity",
+ "labels": null,
+ "parsers": [
+ "crowdsecurity/syslog-logs",
+- "crowdsecurity/geoip-enrich",
+ "crowdsecurity/dateparse-enrich"
+ ],
+ "collections": [
+@@ -393,26 +392,6 @@
+ "author": "crowdsecurity",
+ "labels": null
+ },
+- "crowdsecurity/geoip-enrich": {
+- "path": "parsers/s02-enrich/crowdsecurity/geoip-enrich.yaml",
+- "stage": "s02-enrich",
+- "version": "0.2",
+- "versions": {
+- "0.1": {
+- "digest": "c0718adfc71ad462ad90485ad5c490e5de0e54d8af425bff552994e114443ab6",
+- "deprecated": false
+- },
+- "0.2": {
+- "digest": "ab327e6044a32de7d2f3780cbc8e0c4af0c11716f353023d2dc7b986571bb765",
+- "deprecated": false
+- }
+- },
+- "long_description": "VGhlIEdlb0lQIG1vZHVsZSByZWxpZXMgb24gZ2VvbGl0ZSBkYXRhYmFzZSB0byBwcm92aWRlIGVucmljaG1lbnQgb24gc291cmNlIGlwLgoKVGhlIGZvbGxvd2luZyBpbmZvcm1hdGlvbnMgd2lsbCBiZSBhZGRlZCB0byB0aGUgZXZlbnQgOgogLSBgTWV0YS5Jc29Db2RlYCA6IHR3by1sZXR0ZXJzIGNvdW50cnkgY29kZQogLSBgTWV0YS5Jc0luRVVgIDogYSBib29sZWFuIGluZGljYXRpbmcgaWYgSVAgaXMgaW4gRVUKIC0gYE1ldGEuR2VvQ29vcmRzYCA6IGxhdGl0dWRlICYgbG9uZ2l0dWRlIG9mIElQCiAtIGBNZXRhLkFTTk51bWJlcmAgOiBBdXRvbm9tb3VzIFN5c3RlbSBOdW1iZXIKIC0gYE1ldGEuQVNOT3JnYCA6IEF1dG9ub21vdXMgU3lzdGVtIE5hbWUKIC0gYE1ldGEuU291cmNlUmFuZ2VgIDogVGhlIHB1YmxpYyByYW5nZSB0byB3aGljaCB0aGUgSVAgYmVsb25ncwoKClRoaXMgY29uZmlndXJhdGlvbiBpbmNsdWRlcyBHZW9MaXRlMiBkYXRhIGNyZWF0ZWQgYnkgTWF4TWluZCBhdmFpbGFibGUgZnJvbSBbaHR0cHM6Ly93d3cubWF4bWluZC5jb21dKGh0dHBzOi8vd3d3Lm1heG1pbmQuY29tKSwgaXQgaW5jbHVkZXMgdHdvIGRhdGEgZmlsZXM6IAoqIFtHZW9MaXRlMi1DaXR5Lm1tZGJdKGh0dHBzOi8vY3Jvd2RzZWMtc3RhdGljcy1hc3NldHMuczMtZXUtd2VzdC0xLmFtYXpvbmF3cy5jb20vR2VvTGl0ZTItQ2l0eS5tbWRiKQoqIFtHZW9MaXRlMi1BU04ubW1kYl0oaHR0cHM6Ly9jcm93ZHNlYy1zdGF0aWNzLWFzc2V0cy5zMy1ldS13ZXN0LTEuYW1hem9uYXdzLmNvbS9HZW9MaXRlMi1BU04ubW1kYikKCg==",
+- "content": "ZmlsdGVyOiAiJ3NvdXJjZV9pcCcgaW4gZXZ0Lk1ldGEiCm5hbWU6IGNyb3dkc2VjdXJpdHkvZ2VvaXAtZW5yaWNoCmRlc2NyaXB0aW9uOiAiUG9wdWxhdGUgZXZlbnQgd2l0aCBnZW9sb2MgaW5mbyA6IGFzLCBjb3VudHJ5LCBjb29yZHMsIHNvdXJjZSByYW5nZS4iCmRhdGE6CiAgLSBzb3VyY2VfdXJsOiBodHRwczovL2Nyb3dkc2VjLXN0YXRpY3MtYXNzZXRzLnMzLWV1LXdlc3QtMS5hbWF6b25hd3MuY29tL0dlb0xpdGUyLUNpdHkubW1kYgogICAgZGVzdF9maWxlOiBHZW9MaXRlMi1DaXR5Lm1tZGIKICAtIHNvdXJjZV91cmw6IGh0dHBzOi8vY3Jvd2RzZWMtc3RhdGljcy1hc3NldHMuczMtZXUtd2VzdC0xLmFtYXpvbmF3cy5jb20vR2VvTGl0ZTItQVNOLm1tZGIKICAgIGRlc3RfZmlsZTogR2VvTGl0ZTItQVNOLm1tZGIKc3RhdGljczoKICAtIG1ldGhvZDogR2VvSXBDaXR5CiAgICBleHByZXNzaW9uOiBldnQuTWV0YS5zb3VyY2VfaXAKICAtIG1ldGE6IElzb0NvZGUKICAgIGV4cHJlc3Npb246IGV2dC5FbnJpY2hlZC5Jc29Db2RlCiAgLSBtZXRhOiBJc0luRVUKICAgIGV4cHJlc3Npb246IGV2dC5FbnJpY2hlZC5Jc0luRVUKICAtIG1ldGE6IEdlb0Nvb3JkcwogICAgZXhwcmVzc2lvbjogZXZ0LkVucmljaGVkLkdlb0Nvb3JkcwogIC0gbWV0aG9kOiBHZW9JcEFTTgogICAgZXhwcmVzc2lvbjogZXZ0Lk1ldGEuc291cmNlX2lwCiAgLSBtZXRhOiBBU05OdW1iZXIKICAgIGV4cHJlc3Npb246IGV2dC5FbnJpY2hlZC5BU05OdW1iZXIKICAtIG1ldGE6IEFTTk9yZwogICAgZXhwcmVzc2lvbjogZXZ0LkVucmljaGVkLkFTTk9yZwogIC0gbWV0aG9kOiBJcFRvUmFuZ2UKICAgIGV4cHJlc3Npb246IGV2dC5NZXRhLnNvdXJjZV9pcAogIC0gbWV0YTogU291cmNlUmFuZ2UKICAgIGV4cHJlc3Npb246IGV2dC5FbnJpY2hlZC5Tb3VyY2VSYW5nZQo=",
+- "description": "Populate event with geoloc info : as, country, coords, source range.",
+- "author": "crowdsecurity",
+- "labels": null
+- },
+ "crowdsecurity/http-logs": {
+ "path": "parsers/s02-enrich/crowdsecurity/http-logs.yaml",
+ "stage": "s02-enrich",
--- /dev/null
+From: Cyril Brulebois <cyril@debamax.com>
+Date: Mon, 01 Mar 2021 14:11:36 +0000
+Subject: Adjust default config
+
+Let's have all hub-related data under /var/lib/crowdsec/hub instead of
+the default /etc/crowdsec/hub directory.
+
+Signed-off-by: Cyril Brulebois <cyril@debamax.com>
+--- a/config/config.yaml
++++ b/config/config.yaml
+@@ -9,8 +9,8 @@ config_paths:
+ config_dir: /etc/crowdsec/
+ data_dir: /var/lib/crowdsec/data/
+ simulation_path: /etc/crowdsec/simulation.yaml
+- hub_dir: /etc/crowdsec/hub/
+- index_path: /etc/crowdsec/hub/.index.json
++ hub_dir: /var/lib/crowdsec/hub/
++ index_path: /var/lib/crowdsec/hub/.index.json
+ crowdsec_service:
+ acquisition_path: /etc/crowdsec/acquis.yaml
+ parser_routines: 1
--- /dev/null
+From: Cyril Brulebois <cyril@debamax.com>
+Date: Mon, 01 Mar 2021 20:40:04 +0000
+Subject: Prefer `systemctl restart crowdsec` to `systemctl reload crowdsec`
+
+As of version 1.0.8, reloading doesn't work due to failures to reopen
+the database:
+ https://github.com/crowdsecurity/crowdsec/issues/656
+
+Until this is fixed, advertise `systemctl restart crowdsec` instead.
+
+Signed-off-by: Cyril Brulebois <cyril@debamax.com>
+--- a/cmd/crowdsec-cli/capi.go
++++ b/cmd/crowdsec-cli/capi.go
+@@ -96,7 +96,7 @@ func NewCapiCmd() *cobra.Command {
+ fmt.Printf("%s\n", string(apiConfigDump))
+ }
+
+- log.Warningf("Run 'sudo systemctl reload crowdsec' for the new configuration to be effective")
++ log.Warningf("Run 'sudo systemctl restart crowdsec' for the new configuration to be effective")
+ },
+ }
+ cmdCapiRegister.Flags().StringVarP(&outputFile, "file", "f", "", "output file destination")
+--- a/cmd/crowdsec-cli/collections.go
++++ b/cmd/crowdsec-cli/collections.go
+@@ -31,7 +31,7 @@ func NewCollectionsCmd() *cobra.Command
+ if cmd.Name() == "inspect" || cmd.Name() == "list" {
+ return
+ }
+- log.Infof("Run 'sudo systemctl reload crowdsec' for the new configuration to be effective.")
++ log.Infof("Run 'sudo systemctl restart crowdsec' for the new configuration to be effective.")
+ },
+ }
+
+--- a/cmd/crowdsec-cli/lapi.go
++++ b/cmd/crowdsec-cli/lapi.go
+@@ -112,7 +112,7 @@ Keep in mind the machine needs to be val
+ } else {
+ fmt.Printf("%s\n", string(apiConfigDump))
+ }
+- log.Warningf("Run 'sudo systemctl reload crowdsec' for the new configuration to be effective")
++ log.Warningf("Run 'sudo systemctl restart crowdsec' for the new configuration to be effective")
+ },
+ }
+ cmdLapiRegister.Flags().StringVarP(&apiURL, "url", "u", "", "URL of the API (ie. http://127.0.0.1)")
+--- a/cmd/crowdsec-cli/parsers.go
++++ b/cmd/crowdsec-cli/parsers.go
+@@ -35,7 +35,7 @@ cscli parsers remove crowdsecurity/sshd-
+ if cmd.Name() == "inspect" || cmd.Name() == "list" {
+ return
+ }
+- log.Infof("Run 'sudo systemctl reload crowdsec' for the new configuration to be effective.")
++ log.Infof("Run 'sudo systemctl restart crowdsec' for the new configuration to be effective.")
+ },
+ }
+
+--- a/cmd/crowdsec-cli/postoverflows.go
++++ b/cmd/crowdsec-cli/postoverflows.go
+@@ -34,7 +34,7 @@ func NewPostOverflowsCmd() *cobra.Comman
+ if cmd.Name() == "inspect" || cmd.Name() == "list" {
+ return
+ }
+- log.Infof("Run 'sudo systemctl reload crowdsec' for the new configuration to be effective.")
++ log.Infof("Run 'sudo systemctl restart crowdsec' for the new configuration to be effective.")
+ },
+ }
+
+--- a/cmd/crowdsec-cli/scenarios.go
++++ b/cmd/crowdsec-cli/scenarios.go
+@@ -35,7 +35,7 @@ cscli scenarios remove crowdsecurity/ssh
+ if cmd.Name() == "inspect" || cmd.Name() == "list" {
+ return
+ }
+- log.Infof("Run 'sudo systemctl reload crowdsec' for the new configuration to be effective.")
++ log.Infof("Run 'sudo systemctl restart crowdsec' for the new configuration to be effective.")
+ },
+ }
+
+--- a/cmd/crowdsec-cli/simulation.go
++++ b/cmd/crowdsec-cli/simulation.go
+@@ -112,7 +112,7 @@ cscli simulation disable crowdsecurity/s
+ },
+ PersistentPostRun: func(cmd *cobra.Command, args []string) {
+ if cmd.Name() != "status" {
+- log.Infof("Run 'sudo systemctl reload crowdsec' for the new configuration to be effective.")
++ log.Infof("Run 'sudo systemctl restart crowdsec' for the new configuration to be effective.")
+ }
+ },
+ }
--- /dev/null
+From: Cyril Brulebois <cyril@debamax.com>
+Date: Mon, 01 Mar 2021 20:40:04 +0000
+Subject: Automatically enable the online hub
+
+By default, crowdsec comes with an offline copy of the hub (see
+README.Debian). When running `cscli hub update`, make sure to switch
+from this offline copy to the online hub.
+
+To ensure cscli doesn't disable anything that was configured (due to
+symlinks from /etc/crowdsec suddenly becoming dangling), copy the
+offline hub into the live directory (/var/lib/crowdsec/hub), and let
+further operations (`cscli hub upgrade`, or `cscli <type> install`)
+update the live directory as required.
+
+Signed-off-by: Cyril Brulebois <cyril@debamax.com>
+--- a/cmd/crowdsec-cli/hub.go
++++ b/cmd/crowdsec-cli/hub.go
+@@ -2,6 +2,7 @@ package main
+
+ import (
+ "fmt"
++ "os/exec"
+
+ "github.com/crowdsecurity/crowdsec/pkg/cwhub"
+
+@@ -77,6 +78,12 @@ Fetches the [.index.json](https://github
+ return nil
+ },
+ Run: func(cmd *cobra.Command, args []string) {
++ /* Make sure to move away from the offline hub (see README.Debian) */
++ command := exec.Command("/var/lib/dpkg/info/crowdsec.postinst", "enable-online-hub")
++ if err := command.Run(); err != nil {
++ log.Printf("Enabling Online Hub failed with error: %v", err)
++ }
++
+ if err := cwhub.UpdateHubIdx(csConfig.Cscli); err != nil {
+ log.Fatalf("Failed to get Hub index : %v", err)
+ }
--- /dev/null
+From e601f44760ce6310ca4df3904c96883edf80d366 Mon Sep 17 00:00:00 2001
+From: "Thibault \"bui\" Koechlin" <thibault@crowdsec.net>
+Date: Fri, 12 Mar 2021 16:01:53 +0100
+Subject: [PATCH] remove broken scenario `ban-report-ssh_bf_report` (#181)
+
+* remove broken scenario
+
+* Update index
+
+Co-authored-by: GitHub Action <action@github.com>
+---
+ .index.json | 21 -------------------
+ .../crowdsecurity/ban-report-ssh_bf_report.md | 1 -
+ .../ban-report-ssh_bf_report.yaml | 10 ---------
+ 3 files changed, 32 deletions(-)
+ delete mode 100644 scenarios/crowdsecurity/ban-report-ssh_bf_report.md
+ delete mode 100644 scenarios/crowdsecurity/ban-report-ssh_bf_report.yaml
+
+--- a/hub1/.index.json
++++ b/hub1/.index.json
+@@ -732,27 +732,6 @@
+ "remediation": "true"
+ }
+ },
+- "crowdsecurity/ban-report-ssh_bf_report": {
+- "path": "scenarios/crowdsecurity/ban-report-ssh_bf_report.yaml",
+- "version": "0.2",
+- "versions": {
+- "0.1": {
+- "digest": "0a7bc501a12b4a8aff250d95d3a08dd0f53ad9eb874ac523ba9c628302749c4d",
+- "deprecated": false
+- },
+- "0.2": {
+- "digest": "34d80ea3e271c1c1735e55076610063b137a2311a11d51fecff93715b9a4ac39",
+- "deprecated": false
+- }
+- },
+- "long_description": "Q291bnQgdGhlIG51bWJlciBvZiB1bmlxdWUgaXBzIHRoYXQgcGVyZm9ybWVkIHNzaF9icnV0ZWZvcmNlcywgcmVwb3J0IGV2ZXJ5IDEwIG1pbnV0ZXMuCg==",
+- "content": "dHlwZTogY291bnRlcgpuYW1lOiBjcm93ZHNlY3VyaXR5L2Jhbi1yZXBvcnRzLXNzaF9iZl9yZXBvcnQKZGVzY3JpcHRpb246ICJDb3VudCB1bmlxdWUgaXBzIHBlcmZvcm1pbmcgc3NoIGJydXRlZm9yY2UiCiNkZWJ1ZzogdHJ1ZQpmaWx0ZXI6ICJldnQuT3ZlcmZsb3cuQWxlcnQuU2NlbmFyaW8gPT0gJ3NzaF9icnV0ZWZvcmNlJyIKZGlzdGluY3Q6ICJldnQuT3ZlcmZsb3cuQWxlcnQuU291cmNlLklQIgpjYXBhY2l0eTogLTEKZHVyYXRpb246IDEwbQpsYWJlbHM6CiAgc2VydmljZTogc3NoCg==",
+- "description": "Count unique ips performing ssh bruteforce",
+- "author": "crowdsecurity",
+- "labels": {
+- "service": "ssh"
+- }
+- },
+ "crowdsecurity/dovecot-spam": {
+ "path": "scenarios/crowdsecurity/dovecot-spam.yaml",
+ "version": "0.1",
+--- a/hub1/scenarios/crowdsecurity/ban-report-ssh_bf_report.md
++++ /dev/null
+@@ -1 +0,0 @@
+-Count the number of unique ips that performed ssh_bruteforces, report every 10 minutes.
+--- a/hub1/scenarios/crowdsecurity/ban-report-ssh_bf_report.yaml
++++ /dev/null
+@@ -1,10 +0,0 @@
+-type: counter
+-name: crowdsecurity/ban-reports-ssh_bf_report
+-description: "Count unique ips performing ssh bruteforce"
+-#debug: true
+-filter: "evt.Overflow.Alert.Scenario == 'ssh_bruteforce'"
+-distinct: "evt.Overflow.Alert.Source.IP"
+-capacity: -1
+-duration: 10m
+-labels:
+- service: ssh
--- /dev/null
+From 7a50abdef0e723508b3fbbc41430d80ae93625b1 Mon Sep 17 00:00:00 2001
+From: "Thibault \"bui\" Koechlin" <thibault@crowdsec.net>
+Date: Thu, 22 Apr 2021 11:08:16 +0200
+Subject: [PATCH] Improve http bad user agent : use regexp (#197)
+
+* switch to regexp with word boundaries to avoid false positives when a legit user agent contains a bad one
+
+Co-authored-by: GitHub Action <action@github.com>
+---
+ .index.json | 8 ++++++--
+ .../.tests/http-bad-user-agent/bucket_results.yaml | 2 +-
+ scenarios/crowdsecurity/http-bad-user-agent.yaml | 2 +-
+ 3 files changed, 8 insertions(+), 4 deletions(-)
+
+diff --git a/.index.json b/.index.json
+index da76124..4119b7b 100644
+--- a/hub1/.index.json
++++ b/hub1/.index.json
+@@ -895,7 +895,7 @@
+ },
+ "crowdsecurity/http-bad-user-agent": {
+ "path": "scenarios/crowdsecurity/http-bad-user-agent.yaml",
+- "version": "0.3",
++ "version": "0.4",
+ "versions": {
+ "0.1": {
+ "digest": "46e7058419bc3086f2919fb9afad6b2e85f0d4764f74153dd336ed491f99fa08",
+@@ -908,10 +908,14 @@
+ "0.3": {
+ "digest": "d3cae6c40fadd16693e449b4eb7a030586c8f1a9d9dd33c97001c9dc717c68f2",
+ "deprecated": false
++ },
++ "0.4": {
++ "digest": "8dd16e9de043f47f026d2e3c1b53ad4bbc6dd8f8aac3adaf26a7f4bd2bb6e6fd",
++ "deprecated": false
+ }
+ },
+ "long_description": "IyBLbm93biBiYWQgdXNlci1hZ2VudHMKCkRldGVjdCBrbm93biBiYWQgdXNlci1hZ2VudHMuCgpCYW5zIGFmdGVyIHR3byByZXF1ZXN0cy4KCgoKCgo=",
+- "content": "dHlwZTogbGVha3kKZm9ybWF0OiAyLjAKI2RlYnVnOiB0cnVlCm5hbWU6IGNyb3dkc2VjdXJpdHkvaHR0cC1iYWQtdXNlci1hZ2VudApkZXNjcmlwdGlvbjogIkRldGVjdCBiYWQgdXNlci1hZ2VudHMiCmZpbHRlcjogJ2V2dC5NZXRhLmxvZ190eXBlIGluIFsiaHR0cF9hY2Nlc3MtbG9nIiwgImh0dHBfZXJyb3ItbG9nIl0gJiYgYW55KEZpbGUoImJhZF91c2VyX2FnZW50cy50eHQiKSwge2V2dC5QYXJzZWQuaHR0cF91c2VyX2FnZW50IGNvbnRhaW5zICN9KScKZGF0YToKICAtIHNvdXJjZV91cmw6IGh0dHBzOi8vcmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbS9jcm93ZHNlY3VyaXR5L3NlYy1saXN0cy9tYXN0ZXIvd2ViL2JhZF91c2VyX2FnZW50cy50eHQKICAgIGRlc3RfZmlsZTogYmFkX3VzZXJfYWdlbnRzLnR4dAogICAgdHlwZTogc3RyaW5nCmNhcGFjaXR5OiAxCmxlYWtzcGVlZDogMW0KZ3JvdXBieTogImV2dC5NZXRhLnNvdXJjZV9pcCIKYmxhY2tob2xlOiAybQpsYWJlbHM6CiAgdHlwZTogc2NhbgogIHJlbWVkaWF0aW9uOiB0cnVlCg==",
++ "content": "dHlwZTogbGVha3kKZm9ybWF0OiAyLjAKI2RlYnVnOiB0cnVlCm5hbWU6IGNyb3dkc2VjdXJpdHkvaHR0cC1iYWQtdXNlci1hZ2VudApkZXNjcmlwdGlvbjogIkRldGVjdCBiYWQgdXNlci1hZ2VudHMiCmZpbHRlcjogJ2V2dC5NZXRhLmxvZ190eXBlIGluIFsiaHR0cF9hY2Nlc3MtbG9nIiwgImh0dHBfZXJyb3ItbG9nIl0gJiYgYW55KEZpbGUoImJhZF91c2VyX2FnZW50cy50eHQiKSwge2V2dC5QYXJzZWQuaHR0cF91c2VyX2FnZW50IG1hdGNoZXMgIlxcYiIrIysiXFxiIn0pJwpkYXRhOgogIC0gc291cmNlX3VybDogaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL2Nyb3dkc2VjdXJpdHkvc2VjLWxpc3RzL21hc3Rlci93ZWIvYmFkX3VzZXJfYWdlbnRzLnR4dAogICAgZGVzdF9maWxlOiBiYWRfdXNlcl9hZ2VudHMudHh0CiAgICB0eXBlOiBzdHJpbmcKY2FwYWNpdHk6IDEKbGVha3NwZWVkOiAxbQpncm91cGJ5OiAiZXZ0Lk1ldGEuc291cmNlX2lwIgpibGFja2hvbGU6IDJtCmxhYmVsczoKICB0eXBlOiBzY2FuCiAgcmVtZWRpYXRpb246IHRydWUK",
+ "description": "Detect bad user-agents",
+ "author": "crowdsecurity",
+ "labels": {
+diff --git a/scenarios/crowdsecurity/.tests/http-bad-user-agent/bucket_results.yaml b/scenarios/crowdsecurity/.tests/http-bad-user-agent/bucket_results.yaml
+index 709526b..578f91b 100644
+--- a/hub1/scenarios/crowdsecurity/.tests/http-bad-user-agent/bucket_results.yaml
++++ b/hub1/scenarios/crowdsecurity/.tests/http-bad-user-agent/bucket_results.yaml
+@@ -1,6 +1,6 @@
+ - Type: 1
+ Alert:
+- MapKey: 25fa9229bd06e973b3e656d1cc9b0a093cb779d1
++ MapKey: 726dc5f15649d6ffac5a8aff8d85f2427775c823
+ Sources:
+ 8.8.8.8:
+ asname: ""
+diff --git a/scenarios/crowdsecurity/http-bad-user-agent.yaml b/scenarios/crowdsecurity/http-bad-user-agent.yaml
+index 6c7baf3..0069956 100644
+--- a/hub1/scenarios/crowdsecurity/http-bad-user-agent.yaml
++++ b/hub1/scenarios/crowdsecurity/http-bad-user-agent.yaml
+@@ -3,7 +3,7 @@ format: 2.0
+ #debug: true
+ name: crowdsecurity/http-bad-user-agent
+ description: "Detect bad user-agents"
+-filter: 'evt.Meta.log_type in ["http_access-log", "http_error-log"] && any(File("bad_user_agents.txt"), {evt.Parsed.http_user_agent contains #})'
++filter: 'evt.Meta.log_type in ["http_access-log", "http_error-log"] && any(File("bad_user_agents.txt"), {evt.Parsed.http_user_agent matches "\\b"+#+"\\b"})'
+ data:
+ - source_url: https://raw.githubusercontent.com/crowdsecurity/sec-lists/master/web/bad_user_agents.txt
+ dest_file: bad_user_agents.txt
+--
+2.30.2
+
--- /dev/null
+From 6365cf98fb894a716685b761ed678d90232a987a Mon Sep 17 00:00:00 2001
+From: AlteredCoder <64792091+AlteredCoder@users.noreply.github.com>
+Date: Thu, 9 Sep 2021 16:27:30 +0200
+Subject: [PATCH] fix stacktrace when mmdb file are not present (#935)
+
+* fix stacktrace when mmdb file are not present
+---
+ pkg/exprhelpers/visitor.go | 2 +-
+ pkg/parser/enrich.go | 122 ++++++++++++++-----------------------
+ pkg/parser/enrich_date.go | 70 +++++++++++++++++++++
+ pkg/parser/enrich_dns.go | 4 ++
+ pkg/parser/enrich_geoip.go | 39 ++++++------
+ pkg/parser/node.go | 17 ++----
+ pkg/parser/node_test.go | 4 +-
+ pkg/parser/parsing_test.go | 8 +--
+ pkg/parser/runtime.go | 37 ++++++-----
+ pkg/parser/stage.go | 2 +-
+ pkg/parser/unix_parser.go | 2 +-
+ 11 files changed, 171 insertions(+), 136 deletions(-)
+ create mode 100644 pkg/parser/enrich_date.go
+
+diff --git a/pkg/exprhelpers/visitor.go b/pkg/exprhelpers/visitor.go
+index 86bea79..7a65c06 100644
+--- a/pkg/exprhelpers/visitor.go
++++ b/pkg/exprhelpers/visitor.go
+@@ -124,7 +124,7 @@ func (e *ExprDebugger) Run(logger *logrus.Entry, filterResult bool, exprEnv map[
+ if err != nil {
+ logger.Errorf("unable to print debug expression for '%s': %s", expression.Str, err)
+ }
+- logger.Debugf(" %s = '%s'", expression.Str, debug)
++ logger.Debugf(" %s = '%v'", expression.Str, debug)
+ }
+ }
+
+diff --git a/pkg/parser/enrich.go b/pkg/parser/enrich.go
+index 4aa8a34..43331c6 100644
+--- a/pkg/parser/enrich.go
++++ b/pkg/parser/enrich.go
+@@ -1,9 +1,6 @@
+ package parser
+
+ import (
+- "plugin"
+- "time"
+-
+ "github.com/crowdsecurity/crowdsec/pkg/types"
+ log "github.com/sirupsen/logrus"
+ )
+@@ -13,87 +10,62 @@ type EnrichFunc func(string, *types.Event, interface{}) (map[string]string, erro
+ type InitFunc func(map[string]string) (interface{}, error)
+
+ type EnricherCtx struct {
+- Funcs map[string]EnrichFunc
+- Init InitFunc
+- Plugin *plugin.Plugin //pointer to the actual plugin
++ Registered map[string]*Enricher
++}
++
++type Enricher struct {
+ Name string
+- Path string //path to .so ?
+- RuntimeCtx interface{} //the internal context of plugin, given back over every call
+- initiated bool
++ InitFunc InitFunc
++ EnrichFunc EnrichFunc
++ Ctx interface{}
+ }
+
+ /* mimic plugin loading */
+-// TODO fix this shit with real plugin loading
+-func Loadplugin(path string) ([]EnricherCtx, error) {
+- var err error
++func Loadplugin(path string) (EnricherCtx, error) {
++ enricherCtx := EnricherCtx{}
++ enricherCtx.Registered = make(map[string]*Enricher)
+
+- c := EnricherCtx{}
+- c.Name = path
+- c.Path = path
+- /* we don't want to deal with plugin loading for now :p */
+- c.Funcs = map[string]EnrichFunc{
+- "GeoIpASN": GeoIpASN,
+- "GeoIpCity": GeoIpCity,
+- "reverse_dns": reverse_dns,
+- "ParseDate": ParseDate,
+- "IpToRange": IpToRange,
+- }
+- c.Init = GeoIpInit
++ enricherConfig := map[string]string{"datadir": path}
+
+- c.RuntimeCtx, err = c.Init(map[string]string{"datadir": path})
+- if err != nil {
+- log.Warningf("load (fake) plugin load : %v", err)
+- c.initiated = false
++ EnrichersList := []*Enricher{
++ {
++ Name: "GeoIpCity",
++ InitFunc: GeoIPCityInit,
++ EnrichFunc: GeoIpCity,
++ },
++ {
++ Name: "GeoIpASN",
++ InitFunc: GeoIPASNInit,
++ EnrichFunc: GeoIpASN,
++ },
++ {
++ Name: "IpToRange",
++ InitFunc: IpToRangeInit,
++ EnrichFunc: IpToRange,
++ },
++ {
++ Name: "reverse_dns",
++ InitFunc: reverseDNSInit,
++ EnrichFunc: reverse_dns,
++ },
++ {
++ Name: "ParseDate",
++ InitFunc: parseDateInit,
++ EnrichFunc: ParseDate,
++ },
+ }
+- c.initiated = true
+- return []EnricherCtx{c}, nil
+-}
+
+-func GenDateParse(date string) (string, time.Time) {
+- var retstr string
+- var layouts = [...]string{
+- time.RFC3339,
+- "02/Jan/2006:15:04:05 -0700",
+- "Mon Jan 2 15:04:05 2006",
+- "02-Jan-2006 15:04:05 europe/paris",
+- "01/02/2006 15:04:05",
+- "2006-01-02 15:04:05.999999999 -0700 MST",
+- //Jan 5 06:25:11
+- "Jan 2 15:04:05",
+- "Mon Jan 02 15:04:05.000000 2006",
+- "2006-01-02T15:04:05Z07:00",
+- "2006/01/02",
+- "2006/01/02 15:04",
+- "2006-01-02",
+- "2006-01-02 15:04",
+- }
+-
+- for _, dateFormat := range layouts {
+- t, err := time.Parse(dateFormat, date)
+- if err == nil && !t.IsZero() {
+- //if the year isn't set, set it to current date :)
+- if t.Year() == 0 {
+- t = t.AddDate(time.Now().Year(), 0, 0)
+- }
+- retstr, err := t.MarshalText()
+- if err != nil {
+- log.Warningf("Failed marshaling '%v'", t)
+- continue
+- }
+- return string(retstr), t
++ for _, enricher := range EnrichersList {
++ log.Debugf("Initiating enricher '%s'", enricher.Name)
++ pluginCtx, err := enricher.InitFunc(enricherConfig)
++ if err != nil {
++ log.Errorf("unable to register plugin '%s': %v", enricher.Name, err)
++ continue
+ }
++ enricher.Ctx = pluginCtx
++ log.Infof("Successfully registered enricher '%s'", enricher.Name)
++ enricherCtx.Registered[enricher.Name] = enricher
+ }
+- return retstr, time.Time{}
+-}
+-
+-func ParseDate(in string, p *types.Event, x interface{}) (map[string]string, error) {
+
+- var ret map[string]string = make(map[string]string)
+-
+- tstr, tbin := GenDateParse(in)
+- if !tbin.IsZero() {
+- ret["MarshaledTime"] = string(tstr)
+- return ret, nil
+- }
+- return nil, nil
++ return enricherCtx, nil
+ }
+diff --git a/pkg/parser/enrich_date.go b/pkg/parser/enrich_date.go
+new file mode 100644
+index 0000000..bc3b946
+--- /dev/null
++++ b/pkg/parser/enrich_date.go
+@@ -0,0 +1,70 @@
++package parser
++
++import (
++ "time"
++
++ "github.com/crowdsecurity/crowdsec/pkg/types"
++ log "github.com/sirupsen/logrus"
++)
++
++func GenDateParse(date string) (string, time.Time) {
++ var (
++ layouts = [...]string{
++ time.RFC3339,
++ "02/Jan/2006:15:04:05 -0700",
++ "Mon Jan 2 15:04:05 2006",
++ "02-Jan-2006 15:04:05 europe/paris",
++ "01/02/2006 15:04:05",
++ "2006-01-02 15:04:05.999999999 -0700 MST",
++ "Jan 2 15:04:05",
++ "Mon Jan 02 15:04:05.000000 2006",
++ "2006-01-02T15:04:05Z07:00",
++ "2006/01/02",
++ "2006/01/02 15:04",
++ "2006-01-02",
++ "2006-01-02 15:04",
++ "2006/01/02 15:04:05",
++ "2006-01-02 15:04:05",
++ }
++ )
++
++ for _, dateFormat := range layouts {
++ t, err := time.Parse(dateFormat, date)
++ if err == nil && !t.IsZero() {
++ //if the year isn't set, set it to current date :)
++ if t.Year() == 0 {
++ t = t.AddDate(time.Now().Year(), 0, 0)
++ }
++ retstr, err := t.MarshalText()
++ if err != nil {
++ log.Warningf("Failed marshaling '%v'", t)
++ continue
++ }
++ return string(retstr), t
++ }
++ }
++
++ now := time.Now()
++ retstr, err := now.MarshalText()
++ if err != nil {
++ log.Warningf("Failed marshaling current time")
++ return "", time.Time{}
++ }
++ return string(retstr), now
++}
++
++func ParseDate(in string, p *types.Event, x interface{}) (map[string]string, error) {
++
++ var ret map[string]string = make(map[string]string)
++ tstr, tbin := GenDateParse(in)
++ if !tbin.IsZero() {
++ ret["MarshaledTime"] = string(tstr)
++ return ret, nil
++ }
++
++ return nil, nil
++}
++
++func parseDateInit(cfg map[string]string) (interface{}, error) {
++ return nil, nil
++}
+diff --git a/pkg/parser/enrich_dns.go b/pkg/parser/enrich_dns.go
+index 86944a7..d568a00 100644
+--- a/pkg/parser/enrich_dns.go
++++ b/pkg/parser/enrich_dns.go
+@@ -25,3 +25,7 @@ func reverse_dns(field string, p *types.Event, ctx interface{}) (map[string]stri
+ ret["reverse_dns"] = rets[0]
+ return ret, nil
+ }
++
++func reverseDNSInit(cfg map[string]string) (interface{}, error) {
++ return nil, nil
++}
+diff --git a/pkg/parser/enrich_geoip.go b/pkg/parser/enrich_geoip.go
+index c07fead..7a33e0b 100644
+--- a/pkg/parser/enrich_geoip.go
++++ b/pkg/parser/enrich_geoip.go
+@@ -13,15 +13,6 @@ import (
+ //"github.com/crowdsecurity/crowdsec/pkg/parser"
+ )
+
+-type GeoIpEnricherCtx struct {
+- dbc *geoip2.Reader
+- dba *geoip2.Reader
+- dbraw *maxminddb.Reader
+-}
+-
+-/* All plugins must export a list of function pointers for exported symbols */
+-var ExportedFuncs = []string{"GeoIpASN", "GeoIpCity"}
+-
+ func IpToRange(field string, p *types.Event, ctx interface{}) (map[string]string, error) {
+ var dummy interface{}
+ ret := make(map[string]string)
+@@ -34,7 +25,7 @@ func IpToRange(field string, p *types.Event, ctx interface{}) (map[string]string
+ log.Infof("Can't parse ip %s, no range enrich", field)
+ return nil, nil
+ }
+- net, ok, err := ctx.(GeoIpEnricherCtx).dbraw.LookupNetwork(ip, &dummy)
++ net, ok, err := ctx.(*maxminddb.Reader).LookupNetwork(ip, &dummy)
+ if err != nil {
+ log.Errorf("Failed to fetch network for %s : %v", ip.String(), err)
+ return nil, nil
+@@ -58,14 +49,16 @@ func GeoIpASN(field string, p *types.Event, ctx interface{}) (map[string]string,
+ log.Infof("Can't parse ip %s, no ASN enrich", ip)
+ return nil, nil
+ }
+- record, err := ctx.(GeoIpEnricherCtx).dba.ASN(ip)
++ record, err := ctx.(*geoip2.Reader).ASN(ip)
+ if err != nil {
+ log.Errorf("Unable to enrich ip '%s'", field)
+ return nil, nil
+ }
+ ret["ASNNumber"] = fmt.Sprintf("%d", record.AutonomousSystemNumber)
+ ret["ASNOrg"] = record.AutonomousSystemOrganization
++
+ log.Tracef("geoip ASN %s -> %s, %s", field, ret["ASNNumber"], ret["ASNOrg"])
++
+ return ret, nil
+ }
+
+@@ -79,7 +72,7 @@ func GeoIpCity(field string, p *types.Event, ctx interface{}) (map[string]string
+ log.Infof("Can't parse ip %s, no City enrich", ip)
+ return nil, nil
+ }
+- record, err := ctx.(GeoIpEnricherCtx).dbc.City(ip)
++ record, err := ctx.(*geoip2.Reader).City(ip)
+ if err != nil {
+ log.Debugf("Unable to enrich ip '%s'", ip)
+ return nil, nil
+@@ -94,26 +87,32 @@ func GeoIpCity(field string, p *types.Event, ctx interface{}) (map[string]string
+ return ret, nil
+ }
+
+-/* All plugins must export an Init function */
+-func GeoIpInit(cfg map[string]string) (interface{}, error) {
+- var ctx GeoIpEnricherCtx
+- var err error
+- ctx.dbc, err = geoip2.Open(cfg["datadir"] + "/GeoLite2-City.mmdb")
++func GeoIPCityInit(cfg map[string]string) (interface{}, error) {
++ dbCityReader, err := geoip2.Open(cfg["datadir"] + "/GeoLite2-City.mmdb")
+ if err != nil {
+ log.Debugf("couldn't open geoip : %v", err)
+ return nil, err
+ }
+- ctx.dba, err = geoip2.Open(cfg["datadir"] + "/GeoLite2-ASN.mmdb")
++
++ return dbCityReader, nil
++}
++
++func GeoIPASNInit(cfg map[string]string) (interface{}, error) {
++ dbASReader, err := geoip2.Open(cfg["datadir"] + "/GeoLite2-ASN.mmdb")
+ if err != nil {
+ log.Debugf("couldn't open geoip : %v", err)
+ return nil, err
+ }
+
+- ctx.dbraw, err = maxminddb.Open(cfg["datadir"] + "/GeoLite2-ASN.mmdb")
++ return dbASReader, nil
++}
++
++func IpToRangeInit(cfg map[string]string) (interface{}, error) {
++ ipToRangeReader, err := maxminddb.Open(cfg["datadir"] + "/GeoLite2-ASN.mmdb")
+ if err != nil {
+ log.Debugf("couldn't open geoip : %v", err)
+ return nil, err
+ }
+
+- return ctx, nil
++ return ipToRangeReader, nil
+ }
+diff --git a/pkg/parser/node.go b/pkg/parser/node.go
+index 0593907..5d3d345 100644
+--- a/pkg/parser/node.go
++++ b/pkg/parser/node.go
+@@ -44,7 +44,7 @@ type Node struct {
+ //If node has leafs, execute all of them until one asks for a 'break'
+ LeavesNodes []Node `yaml:"nodes,omitempty"`
+ //Flag used to describe when to 'break' or return an 'error'
+- EnrichFunctions []EnricherCtx
++ EnrichFunctions EnricherCtx
+
+ /* If the node is actually a leaf, it can have : grok, enrich, statics */
+ //pattern_syntax are named grok patterns that are re-utilised over several grok patterns
+@@ -58,7 +58,7 @@ type Node struct {
+ Data []*types.DataSource `yaml:"data,omitempty"`
+ }
+
+-func (n *Node) validate(pctx *UnixParserCtx, ectx []EnricherCtx) error {
++func (n *Node) validate(pctx *UnixParserCtx, ectx EnricherCtx) error {
+
+ //stage is being set automagically
+ if n.Stage == "" {
+@@ -87,15 +87,8 @@ func (n *Node) validate(pctx *UnixParserCtx, ectx []EnricherCtx) error {
+ if static.ExpValue == "" {
+ return fmt.Errorf("static %d : when method is set, expression must be present", idx)
+ }
+- method_found := false
+- for _, enricherCtx := range ectx {
+- if _, ok := enricherCtx.Funcs[static.Method]; ok && enricherCtx.initiated {
+- method_found = true
+- break
+- }
+- }
+- if !method_found {
+- return fmt.Errorf("the method '%s' doesn't exist or the plugin has not been initialized", static.Method)
++ if _, ok := ectx.Registered[static.Method]; !ok {
++ log.Warningf("the method '%s' doesn't exist or the plugin has not been initialized", static.Method)
+ }
+ } else {
+ if static.Meta == "" && static.Parsed == "" && static.TargetByName == "" {
+@@ -350,7 +343,7 @@ func (n *Node) process(p *types.Event, ctx UnixParserCtx) (bool, error) {
+ return NodeState, nil
+ }
+
+-func (n *Node) compile(pctx *UnixParserCtx, ectx []EnricherCtx) error {
++func (n *Node) compile(pctx *UnixParserCtx, ectx EnricherCtx) error {
+ var err error
+ var valid bool
+
+diff --git a/pkg/parser/node_test.go b/pkg/parser/node_test.go
+index 4724fc7..f8fdea1 100644
+--- a/pkg/parser/node_test.go
++++ b/pkg/parser/node_test.go
+@@ -41,7 +41,7 @@ func TestParserConfigs(t *testing.T) {
+ //{&Node{Debug: true, Grok: []GrokPattern{ GrokPattern{}, }}, false},
+ }
+ for idx := range CfgTests {
+- err := CfgTests[idx].NodeCfg.compile(pctx, []EnricherCtx{})
++ err := CfgTests[idx].NodeCfg.compile(pctx, EnricherCtx{})
+ if CfgTests[idx].Compiles == true && err != nil {
+ t.Fatalf("Compile: (%d/%d) expected valid, got : %s", idx+1, len(CfgTests), err)
+ }
+@@ -49,7 +49,7 @@ func TestParserConfigs(t *testing.T) {
+ t.Fatalf("Compile: (%d/%d) expected errror", idx+1, len(CfgTests))
+ }
+
+- err = CfgTests[idx].NodeCfg.validate(pctx, []EnricherCtx{})
++ err = CfgTests[idx].NodeCfg.validate(pctx, EnricherCtx{})
+ if CfgTests[idx].Valid == true && err != nil {
+ t.Fatalf("Valid: (%d/%d) expected valid, got : %s", idx+1, len(CfgTests), err)
+ }
+diff --git a/pkg/parser/parsing_test.go b/pkg/parser/parsing_test.go
+index 2a57b3a..bcf3919 100644
+--- a/pkg/parser/parsing_test.go
++++ b/pkg/parser/parsing_test.go
+@@ -89,7 +89,7 @@ func BenchmarkParser(t *testing.B) {
+ }
+ }
+
+-func testOneParser(pctx *UnixParserCtx, ectx []EnricherCtx, dir string, b *testing.B) error {
++func testOneParser(pctx *UnixParserCtx, ectx EnricherCtx, dir string, b *testing.B) error {
+
+ var (
+ err error
+@@ -139,11 +139,11 @@ func testOneParser(pctx *UnixParserCtx, ectx []EnricherCtx, dir string, b *testi
+ }
+
+ //prepTests is going to do the initialisation of parser : it's going to load enrichment plugins and load the patterns. This is done here so that we don't redo it for each test
+-func prepTests() (*UnixParserCtx, []EnricherCtx, error) {
++func prepTests() (*UnixParserCtx, EnricherCtx, error) {
+ var (
+ err error
+ pctx *UnixParserCtx
+- ectx []EnricherCtx
++ ectx EnricherCtx
+ )
+
+ err = exprhelpers.Init()
+@@ -166,7 +166,7 @@ func prepTests() (*UnixParserCtx, []EnricherCtx, error) {
+ // Init the parser
+ pctx, err = Init(map[string]interface{}{"patterns": cfgdir + string("/patterns/"), "data": "./tests/"})
+ if err != nil {
+- return nil, nil, fmt.Errorf("failed to initialize parser : %v", err)
++ return nil, ectx, fmt.Errorf("failed to initialize parser : %v", err)
+ }
+ return pctx, ectx, nil
+ }
+diff --git a/pkg/parser/runtime.go b/pkg/parser/runtime.go
+index a701ff2..2ce3059 100644
+--- a/pkg/parser/runtime.go
++++ b/pkg/parser/runtime.go
+@@ -140,29 +140,26 @@ func (n *Node) ProcessStatics(statics []types.ExtraField, event *types.Event) er
+ if static.Method != "" {
+ processed := false
+ /*still way too hackish, but : inject all the results in enriched, and */
+- for _, x := range n.EnrichFunctions {
+- if fptr, ok := x.Funcs[static.Method]; ok && x.initiated {
+- clog.Tracef("Found method '%s'", static.Method)
+- ret, err := fptr(value, event, x.RuntimeCtx)
+- if err != nil {
+- clog.Fatalf("plugin function error : %v", err)
+- }
+- processed = true
+- clog.Debugf("+ Method %s('%s') returned %d entries to merge in .Enriched\n", static.Method, value, len(ret))
+- if len(ret) == 0 {
+- clog.Debugf("+ Method '%s' empty response on '%s'", static.Method, value)
+- }
+- for k, v := range ret {
+- clog.Debugf("\t.Enriched[%s] = '%s'\n", k, v)
+- event.Enriched[k] = v
+- }
+- break
+- } else {
+- clog.Warningf("method '%s' doesn't exist or plugin not initialized", static.Method)
++ if enricherPlugin, ok := n.EnrichFunctions.Registered[static.Method]; ok {
++ clog.Tracef("Found method '%s'", static.Method)
++ ret, err := enricherPlugin.EnrichFunc(value, event, enricherPlugin.Ctx)
++ if err != nil {
++ clog.Errorf("method '%s' returned an error : %v", static.Method, err)
+ }
++ processed = true
++ clog.Debugf("+ Method %s('%s') returned %d entries to merge in .Enriched\n", static.Method, value, len(ret))
++ if len(ret) == 0 {
++ clog.Debugf("+ Method '%s' empty response on '%s'", static.Method, value)
++ }
++ for k, v := range ret {
++ clog.Debugf("\t.Enriched[%s] = '%s'\n", k, v)
++ event.Enriched[k] = v
++ }
++ } else {
++ clog.Debugf("method '%s' doesn't exist or plugin not initialized", static.Method)
+ }
+ if !processed {
+- clog.Warningf("method '%s' doesn't exist", static.Method)
++ clog.Debugf("method '%s' doesn't exist", static.Method)
+ }
+ } else if static.Parsed != "" {
+ clog.Debugf(".Parsed[%s] = '%s'", static.Parsed, value)
+diff --git a/pkg/parser/stage.go b/pkg/parser/stage.go
+index a5635b4..fe1e2d4 100644
+--- a/pkg/parser/stage.go
++++ b/pkg/parser/stage.go
+@@ -37,7 +37,7 @@ type Stagefile struct {
+ Stage string `yaml:"stage"`
+ }
+
+-func LoadStages(stageFiles []Stagefile, pctx *UnixParserCtx, ectx []EnricherCtx) ([]Node, error) {
++func LoadStages(stageFiles []Stagefile, pctx *UnixParserCtx, ectx EnricherCtx) ([]Node, error) {
+ var nodes []Node
+ tmpstages := make(map[string]bool)
+ pctx.Stages = []string{}
+diff --git a/pkg/parser/unix_parser.go b/pkg/parser/unix_parser.go
+index c21d4ed..892c2f3 100644
+--- a/pkg/parser/unix_parser.go
++++ b/pkg/parser/unix_parser.go
+@@ -24,7 +24,7 @@ type Parsers struct {
+ PovfwStageFiles []Stagefile
+ Nodes []Node
+ Povfwnodes []Node
+- EnricherCtx []EnricherCtx
++ EnricherCtx EnricherCtx
+ }
+
+ func Init(c map[string]interface{}) (*UnixParserCtx, error) {
+--
+2.30.2
+
--- /dev/null
+From 5fc744d27dbffc852eb4d2c5874a7b981aad6335 Mon Sep 17 00:00:00 2001
+From: Manuel Sabban <github@sabban.eu>
+Date: Thu, 19 Aug 2021 09:08:20 +0200
+Subject: [PATCH] Download datafile (#895)
+
+* add the ability to download datafile on cscli hub upgrade on files are missing
+* fix stuff + lint
+* fix error management
+
+Co-authored-by: sabban <15465465+sabban@users.noreply.github.com>
+---
+ cmd/crowdsec-cli/utils.go | 4 +++
+ pkg/cwhub/download.go | 54 +++++++++++++++++++++++++++++++++------
+ 2 files changed, 50 insertions(+), 8 deletions(-)
+
+diff --git a/cmd/crowdsec-cli/utils.go b/cmd/crowdsec-cli/utils.go
+index 003181b..925f779 100644
+--- a/cmd/crowdsec-cli/utils.go
++++ b/cmd/crowdsec-cli/utils.go
+@@ -216,7 +216,11 @@ func UpgradeConfig(itemType string, name string, force bool) {
+ found = true
+ if v.UpToDate {
+ log.Infof("%s : up-to-date", v.Name)
++
+ if !force {
++ if err = cwhub.DownloadDataIfNeeded(csConfig.Cscli.DataDir, csConfig.Cscli.HubDir, v, false); err != nil {
++ log.Fatalf("%s : download failed : %v", v.Name, err)
++ }
+ continue
+ }
+ }
+diff --git a/pkg/cwhub/download.go b/pkg/cwhub/download.go
+index 91fb8ec..64df7e8 100644
+--- a/pkg/cwhub/download.go
++++ b/pkg/cwhub/download.go
+@@ -3,6 +3,7 @@ package cwhub
+ import (
+ "bytes"
+ "crypto/sha256"
++ "path"
+ "path/filepath"
+
+ //"errors"
+@@ -134,7 +135,7 @@ func DownloadItem(cscli *csconfig.CscliCfg, target Item, overwrite bool) (Item,
+ }
+ if target.UpToDate {
+ log.Debugf("%s : up-to-date, not updated", target.Name)
+- return target, nil
++ // We still have to check if data files are present
+ }
+ }
+ req, err := http.NewRequest("GET", fmt.Sprintf(RawFileURLTemplate, HubBranch, target.RemotePath), nil)
+@@ -204,7 +205,34 @@ func DownloadItem(cscli *csconfig.CscliCfg, target Item, overwrite bool) (Item,
+ target.Tainted = false
+ target.UpToDate = true
+
+- dec := yaml.NewDecoder(bytes.NewReader(body))
++ if err = downloadData(dataFolder, overwrite, bytes.NewReader(body)); err != nil {
++ return target, errors.Wrapf(err, "while downloading data for %s", target.FileName)
++ }
++
++ hubIdx[target.Type][target.Name] = target
++ return target, nil
++}
++
++func DownloadDataIfNeeded(dataFolder string, hubdir string, target Item, force bool) error {
++ var (
++ itemFile *os.File
++ err error
++ )
++ itemFilePath := fmt.Sprintf("%s/%s", hubdir, target.RemotePath)
++
++ if itemFile, err = os.Open(itemFilePath); err != nil {
++ return errors.Wrapf(err, "while opening %s", itemFilePath)
++ }
++ if err = downloadData(dataFolder, force, itemFile); err != nil {
++ return errors.Wrapf(err, "while downloading data for %s", itemFilePath)
++ }
++ return nil
++}
++
++func downloadData(dataFolder string, force bool, reader io.Reader) error {
++ var err error
++ dec := yaml.NewDecoder(reader)
++
+ for {
+ data := &types.DataSet{}
+ err = dec.Decode(data)
+@@ -212,14 +240,24 @@ func DownloadItem(cscli *csconfig.CscliCfg, target Item, overwrite bool) (Item,
+ if err == io.EOF {
+ break
+ } else {
+- return target, errors.Wrap(err, "while reading file")
++ return errors.Wrap(err, "while reading file")
+ }
+ }
+- err = types.GetData(data.Data, dataFolder)
+- if err != nil {
+- return target, errors.Wrap(err, "while getting data")
++
++ download := false
++ if !force {
++ for _, dataS := range data.Data {
++ if _, err := os.Stat(path.Join(dataFolder, dataS.DestPath)); os.IsNotExist(err) {
++ download = true
++ }
++ }
++ }
++ if download || force {
++ err = types.GetData(data.Data, dataFolder)
++ if err != nil {
++ return errors.Wrap(err, "while getting data")
++ }
+ }
+ }
+- hubIdx[target.Type][target.Name] = target
+- return target, nil
++ return nil
+ }
+--
+2.30.2
+
--- /dev/null
+0001-use-a-local-machineid-implementation.patch
+0002-add-compatibility-for-older-sqlite-driver.patch
+0003-adjust-systemd-unit.patch
+0004-disable-geoip-enrich.patch
+0005-adjust-config.patch
+0006-prefer-systemctl-restart.patch
+0007-automatically-enable-online-hub.patch
+0008-hub-disable-broken-scenario.patch
+0009-Improve-http-bad-user-agent-use-regexp-197.patch
+0010-5ae69aa293-fix-stacktrace-when-mmdb-files-are-not-present.patch
+0011-4dbbd4b3c4-automatically-download-files-when-needed.patch
--- /dev/null
+#!/bin/sh
+set -e
+
+# See README.Debian for the distinction between online and offline
+# hubs:
+OFFLINE_HUB=/usr/share/crowdsec/hub
+LIVE_HUB=/var/lib/crowdsec/hub
+ITEMS="blockers collections parsers postoverflows scenarios .index.json"
+
+# Offline hub = symlinks are in place, so that an updated Debian
+# package ships updated items from the hub:
+disable_online_hub() {
+ rm -rf "$LIVE_HUB"
+ mkdir "$LIVE_HUB"
+ for item in $ITEMS; do
+ ln -s "$OFFLINE_HUB/$item" "$LIVE_HUB"
+ done
+}
+
+# Online hub = we replace symlinks with a copy of the items they point
+# to, so that enabled items (symlinks from /etc) aren't disabled
+# because of dangling symlinks. Let `cscli hub upgrade` replace the
+# original copy as required:
+enable_online_hub() {
+ # Idempotence: once this function has run, .index.json should no
+ # longer be a symlink, so it is safe to call it each time
+ # `cscli hub update` is called:
+ if [ -L "$LIVE_HUB/.index.json" ]; then
+ echo "I: Switching from offline hub to online hub (see README.Debian)"
+ for item in $ITEMS; do
+ if [ -L "$LIVE_HUB/$item" ]; then
+ rm -f "$LIVE_HUB/$item"
+ cp -r "$OFFLINE_HUB/$item" "$LIVE_HUB"
+ fi
+ done
+ fi
+}
+
+
+CAPI=/etc/crowdsec/online_api_credentials.yaml
+LAPI=/etc/crowdsec/local_api_credentials.yaml
+
+if [ "$1" = configure ]; then
+ if [ ! -f "$LAPI" ]; then
+ echo "I: Registering to LAPI ($LAPI)"
+ touch "$LAPI"
+ # This is required as of 1.0.8 at least:
+ touch "$CAPI"
+
+ # Minimal environments (e.g. piuparts):
+ if [ ! -f /etc/machine-id ]; then
+ echo "W: Missing /etc/machine-id, initializing"
+ sed 's/-//g' < /proc/sys/kernel/random/uuid > /etc/machine-id
+ fi
+
+ cscli machines add --force "$(cat /etc/machine-id)" --password "$(tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 32 | head -n 1)"
+ fi
+
+ # Heuristic: if the file is empty, it was probably just created by
+ # the touch call above, and we want to register. Otherwise, either
+ # the admin created the file in advance to disable CAPI
+ # registration, or we already registered to CAPI in a previous
+ # configure run (in both cases, do nothing):
+ if [ ! -s "$CAPI" ]; then
+ echo "I: Registering to CAPI ($CAPI)"
+ cscli capi register
+ fi
+
+ # Missing index means initial install, let's go for setting up
+ # offline hub + enabling everything per upstream recommendation:
+ if [ ! -e /var/lib/crowdsec/hub/.index.json ]; then
+ echo "I: Setting up offline hub (see README.Debian)"
+ disable_online_hub
+
+ # Symlinks:
+ echo "I: Enabling all items (via symlinks from /etc/crowdsec)"
+ find /var/lib/crowdsec/hub/*/ -name '*yaml' | \
+ while read -r target; do
+ source=${target##/var/lib/crowdsec/hub/}
+ # Code as of 1.0.8 is picky about the number of
+ # (sub)directories, so the vendor must be stripped:
+ source=$(echo "$source"|sed 's,crowdsecurity/\|ltsich/,,')
+ mkdir -p "/etc/crowdsec/$(dirname "$source")"
+ ln -s "$target" "/etc/crowdsec/$source"
+ done
+
+ # Initial copy of data files:
+ cp /usr/share/crowdsec/data/* /var/lib/crowdsec/data/
+ fi
+fi
+
+case "$1" in
+ disable-online-hub)
+ disable_online_hub
+ echo "I: Don't forget to inspect the config, and run 'systemctl restart crowdsec' afterward"
+ ;;
+ enable-online-hub)
+ enable_online_hub
+ ;;
+esac
+
+
+#DEBHELPER#
--- /dev/null
+#!/bin/sh
+set -e
+
+CAPI=/etc/crowdsec/online_api_credentials.yaml
+LAPI=/etc/crowdsec/local_api_credentials.yaml
+
+if [ "$1" = purge ]; then
+ # Might have been created by the postinst during CAPI registration,
+ # or created by the admin to prevent CAPI registration. Keep only
+ # this file if it doesn't seem to have been generated by the CAPI
+ # registration. The rest of /etc/crowdsec goes away in all cases:
+ if [ -f "$CAPI" ] && ! grep -qs '^url: https://api.crowdsec.net/$' "$CAPI"; then
+ mv "$CAPI" /var/lib/crowdsec/online_api_credentials.yaml
+ rm -rf /etc/crowdsec
+ mkdir -p /etc/crowdsec
+ mv /var/lib/crowdsec/online_api_credentials.yaml "$CAPI"
+ else
+ rm -rf /etc/crowdsec
+ fi
+
+ # Local config and hub:
+ rm -rf /var/lib/crowdsec/data
+ rm -rf /var/lib/crowdsec/hub
+
+ # Logs:
+ rm -f /var/log/crowdsec.log
+ rm -f /var/log/crowdsec_api.log
+fi
+
+#DEBHELPER#
--- /dev/null
+#!/usr/bin/make -f
+
+export DH_GOLANG_INSTALL_ALL := 1
+export DH_GOLANG_EXCLUDES := hub\d+ data\d+
+
+export BUILD_VERSION := $(shell dpkg-parsechangelog -SVersion)
+export BUILD_TAG := debian
+export BUILD_CODENAME := $(shell awk '/CodeName/ { gsub(/\"/, "", $$2); print $$2 }' RELEASE.json)
+export BUILD_GOVERSION := $(shell go version | awk '{ gsub(/^go/, "", $$3); print $$3 }')
+export BUILD_DATE := $(shell TZ=Etc/UTC date +'%F_%T' -d @$(SOURCE_DATE_EPOCH))
+export set_cwversion := -X github.com/crowdsecurity/crowdsec/pkg/cwversion
+export LD_FLAGS := -ldflags '-s -w \
+ $(set_cwversion).Version=$(BUILD_VERSION) \
+ $(set_cwversion).Tag=$(BUILD_TAG) \
+ $(set_cwversion).Codename=$(BUILD_CODENAME) \
+ $(set_cwversion).GoVersion=$(BUILD_GOVERSION) \
+ $(set_cwversion).BuildDate=$(BUILD_DATE) \
+'
+
+# Use 1 for a new upstream release, and bump it when an update of the
+# hub files is desired while the upstream version doesn't change. See
+# below for the generate_hub_tarball target:
+export DATA_ID := 1
+export HUB_ID := 1
+export HUB_BRANCH := master
+export HUB_DIR := ../hub
+export U_VERSION := $(shell dpkg-parsechangelog -SVersion|sed 's/-.*//')
+
+%:
+ dh $@ --builddirectory=_build --buildsystem=golang --with=golang
+
+override_dh_auto_build:
+ dh_auto_build -- $(LD_FLAGS)
+
+override_dh_auto_install:
+ dh_auto_install -- --no-source
+
+override_dh_install:
+ dh_install
+ # Switch from Golang naming to upstream-desired naming:
+ mv debian/crowdsec/usr/bin/crowdsec-cli \
+ debian/crowdsec/usr/bin/cscli
+ # Adjust the hub branch according to the upstream version:
+ sed "s/\(.*hub_branch:\) master/\1 v$(U_VERSION)/" -i debian/crowdsec/etc/crowdsec/config.yaml
+ # Drop unit tests from the hub:
+ find debian/crowdsec/usr/share/crowdsec/hub -depth -name '.tests' -exec rm -rf '{}' ';'
+
+
+### Maintainer targets:
+
+generate_hub_tarball:
+ cd $(HUB_DIR) && git archive --prefix hub$(HUB_ID)/ $(HUB_BRANCH) | gzip -9 > ../crowdsec_$(U_VERSION).orig-hub$(HUB_ID).tar.gz \
+ && echo "Generated hub tarball from branch $(HUB_BRANCH), at commit `git show $(HUB_BRANCH) | awk '/^commit / {print $$2; exit}' | cut -b -10`"
+
+extract_hub_tarball:
+ tar xf ../crowdsec_$(U_VERSION).orig-hub$(HUB_ID).tar.gz
+
+extract_data_tarball:
+ tar xf ../crowdsec_$(U_VERSION).orig-data$(DATA_ID).tar.gz
--- /dev/null
+3.0 (quilt)
--- /dev/null
+---
+Bug-Database: https://github.com/crowdsecurity/crowdsec/issues
+Bug-Submit: https://github.com/crowdsecurity/crowdsec/issues/new
+Repository: https://github.com/crowdsecurity/crowdsec.git
+Repository-Browse: https://github.com/crowdsecurity/crowdsec
--- /dev/null
+version=4
+opts="filenamemangle=s%(?:.*?)?v?(\d[\d.]*)\.tar\.gz%crowdsec-$1.tar.gz%,\
+ uversionmangle=s/(\d)[_\.\-\+]?(RC|rc|pre|dev|beta|alpha)[.]?(\d*)$/\$1~\$2\$3/" \
+ https://github.com/crowdsecurity/crowdsec/tags .*/v?(\d\S*)\.tar\.gz debian