Filter out IPs

We're seeing some errors come through on the tracker, and we've found that a subset of IPs:

     file_date                    ip_address_in_error
 1   2018-02-04 00:15:20 -0500    170.6.96.16
 2   2018-02-04 00:15:20 -0500    170.6.96.17
 3   2018-01-28 00:15:20 -0500    170.6.96.16
 4   2018-01-28 00:15:20 -0500    170.6.96.17
 5   2018-01-21 00:15:19 -0500    170.6.177.160
 6   2018-01-21 00:15:19 -0500    170.6.96.17
 7   2018-01-14 00:15:19 -0500    170.6.177.160
 8   2018-01-14 00:15:19 -0500    170.6.49.113
 9   2018-01-07 05:15:20 -0500    170.6.177.160
10   2018-01-07 05:15:20 -0500    170.6.96.16
11   2017-12-30 08:00:19 -0500    170.6.177.160
12   2017-12-30 08:00:19 -0500    170.6.49.113
13   2017-12-26 15:32:50 -0500    170.6.177.161
14   2017-12-26 15:32:50 -0500    170.6.177.160

are ending up in the bad folder and not being enriched. That's OK, but is there a way to filter this traffic out along the pipeline?

What errors are you seeing such that they aren’t being enriched? It’d be peculiar if an event was failing due to a specific IP address.

Here's an example:

{"line":"2018-02-03\t06:28:50\t-\t13\t170.6.96.16\tGET\t170.6.96.16\t/tiki5.2/\t404\t-\t-\t&cv=clj-1.1.0-tom-0.2.0&nuid=-\t-\t-\t-\t-\t-","errors":[{"level":"error","message":"Request path /tiki5.2/ does not match (/)vendor/version(/) pattern nor is a legacy /i(ce.png) request"}],"failure_tstamp":"2018-02-04T00:29:22.328Z"}
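The failure is easier to see once the bad row is parsed: the raw "line" field is a tab-separated collector log entry, and the request path /tiki5.2/ doesn't match the (/)vendor/version(/) path pattern the error message describes. A minimal sketch of pulling out the relevant fields (the field positions are assumptions read off the example above, not a documented schema):

```python
import json

# The example bad row from above (tabs escaped for JSON)
bad_row = (
    '{"line":"2018-02-03\\t06:28:50\\t-\\t13\\t170.6.96.16\\tGET'
    '\\t170.6.96.16\\t/tiki5.2/\\t404\\t-\\t-\\t&cv=clj-1.1.0-tom-0.2.0&nuid=-'
    '\\t-\\t-\\t-\\t-\\t-",'
    '"errors":[{"level":"error","message":"Request path /tiki5.2/ does not match '
    '(/)vendor/version(/) pattern nor is a legacy /i(ce.png) request"}],'
    '"failure_tstamp":"2018-02-04T00:29:22.328Z"}'
)

row = json.loads(bad_row)
fields = row["line"].split("\t")          # raw collector line is tab-separated
client_ip, path = fields[4], fields[7]    # positions assumed from the example
print(client_ip, path)                    # 170.6.96.16 /tiki5.2/
print(row["errors"][0]["message"])
```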

This traffic is typically caused by bots scanning the web for vulnerabilities in various web applications. It is possible to block this traffic at the firewall/load-balancer level, though it's likely more effort than it's worth, as these IP ranges won't remain the same over time.
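If you do want to drop these rows before reprocessing the bad folder, one option is a small filter over the bad-row JSON keyed on the client IP. This is a sketch, not a pipeline feature; the /24 ranges below are assumptions generalised from the IPs in the table above and will drift over time, as noted:

```python
import ipaddress
import json

# Ranges assumed from the offending IPs above; expect them to change
BLOCKED_NETWORKS = [
    ipaddress.ip_network("170.6.96.0/24"),
    ipaddress.ip_network("170.6.177.0/24"),
    ipaddress.ip_network("170.6.49.0/24"),
]

def is_blocked(ip: str) -> bool:
    """True if the IP falls in any of the blocked ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

def keep_row(bad_row_json: str) -> bool:
    """Keep a bad row unless its client IP (5th tab-separated
    field of the raw line, per the example above) is blocked."""
    fields = json.loads(bad_row_json)["line"].split("\t")
    return not is_blocked(fields[4])
```

Run over the bad-rows files, this keeps everything except rows from the scanner ranges, so genuine failures still surface for investigation.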