Tracking ad impressions and ad clicks - no data

Hi there!

We’re using Snowplow to track visitors on our pages and it works great. Now we want to extend it to track the progress of our marketing campaigns using the ad impression and ad click trackers. I’ve followed the tutorial on setting up the JS tracker and that one works, where I define “works” as seeing the event in Kibana. However, we can’t always use the JS tracker (for instance when using ad servers), so we wanted to implement a pixel tracker instead (following the tutorial How to track ad impressions and clicks? [tutorial]). With the pixel tracker, however, we do not see any data in Kibana.

At the moment my pixel looks as follows:

"://collector_name.com/com.snowplowanalytics.iglu/v1?schema=iglu%3Acom.snowplowanalytics.snowplow%2Fad_impression%2Fjsonschema%2F1-0-0&impressionId=&zoneId=&bannerId=&campaignId=321&advertiserId=123&targetUrl=example.com&costModel=cpm&cost=0.0015"

A pixel like this:

http://collector_name.com/i?e=se&p=web&tv=no-js-0.1.0&se_ca=ad&se_ac=impression&se_la=123&se_pr=

does not work either.

In addition, I’ve tried using the URL that the JS tracker itself requests as the pixel, also without any effect.

In all of the above cases I receive a 200 response from the collector. I’ve tested it in different browsers, with and without private mode. The click tracker has a similar problem - it only works with the JS tracker.

Any ideas on how I can debug the missing events?

@mkarpicz, the Iglu webhook URI you’ve composed does not look right to me, and your events are likely ending up in the bad bucket/index. You should be able to debug those events to know for sure what is wrong. You mentioned Kibana - have you checked your bad index?

The pixel URI seems OK, and you might just not be searching for it in Kibana correctly. The vendor for this event will be com.google.analytics - try searching for that.
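
For example, a query along these lines should surface those events. This is just a sketch: localhost:9200 and the index name snowplow-good are placeholders for your own deployment, while event_vendor and se_action are the enriched event fields that the se_ca/se_ac pixel parameters map to:

# Search the good index for structured ad impression events sent by the pixel.
# localhost:9200 and "snowplow-good" are placeholders; substitute your own host/index.
curl -XGET 'http://localhost:9200/snowplow-good/_search?pretty' \
  -H 'Content-Type: application/json' -d '{
  "query": {
    "bool": {
      "must": [
        { "match": { "event_vendor": "com.google.analytics" } },
        { "match": { "se_action": "impression" } }
      ]
    }
  }
}'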

The Iglu webhook URI contains &amp; between the parameters; I would expect just &. Also, you provide some empty values (null) which are not permitted by the schema: https://github.com/snowplow/iglu-central/blob/master/schemas/com.snowplowanalytics.snowplow/ad_impression/jsonschema/1-0-0. Finally, since your cost property arrives in a GET request, it could be interpreted as a string rather than a number; you might consider POST requests where applicable. Do check your bad index.
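
For comparison, a cleaned-up version of your pixel URI, with plain & separators and the empty parameters dropped, would look something like this (values borrowed from your example):

http://collector_name.com/com.snowplowanalytics.iglu/v1?schema=iglu%3Acom.snowplowanalytics.snowplow%2Fad_impression%2Fjsonschema%2F1-0-0&campaignId=321&advertiserId=123&targetUrl=example.com&costModel=cpm&cost=0.0015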

@ihor thanks for your hints. As to the pixel URI, I do not have any events with the vendor com.google.analytics. We investigated further and it seems to me that all of those requests end up in the loader’s “bad” queue instead of “good”. I’ve tried modifying my requests so that no fields are missing, but with no luck. Also, from the config:

# Where to write good and bad records
sink {
  # Sinks currently supported are:
  # "elasticsearch" for writing good records to Elasticsearch
  # "stdout" for writing good records to stdout
  good = "elasticsearch"

  # Sinks currently supported are:
  # "kinesis" for writing bad records to Kinesis
  # "stderr" for writing bad records to stderr
  # "nsq" for writing bad records to NSQ
  # "none" for ignoring bad records
  bad = "kinesis"
}

hence there is no “elasticsearch” sink for bad records. Is there any way to debug the loader or set logging options?

@mkarpicz, setting the sink.good property to “elasticsearch” means the loader writes to Elasticsearch, and that covers both the good and bad indexes, so you should be able to check your bad data in Elasticsearch/Kibana. You can use this tutorial to query bad data: Debugging bad rows in Elasticsearch and Kibana [tutorial].
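
For example (again a sketch: the host and the snowplow-bad index name are placeholders for whatever your Elasticsearch loader actually writes to), you can pull the validation errors attached to the bad rows directly:

# Search the bad index for rows whose validation errors mention ad_impression.
# localhost:9200 and "snowplow-bad" are placeholders; substitute your own host/index.
curl -XGET 'http://localhost:9200/snowplow-bad/_search?pretty' \
  -H 'Content-Type: application/json' -d '{
  "query": { "match": { "errors.message": "ad_impression" } },
  "_source": ["line", "errors.message"],
  "size": 5
}'

Each bad row should contain the original payload in its line field and the validation failure reason in errors.message.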

As an alternative, before sending your events to the production pipeline you can test them with Snowplow Mini.

You are sending structured events with the pixel (e=se). We gave those events the vendor name com.google.analytics because we borrowed the idea from Google Analytics.

It turns out that the problem was with one timestamp in our SE schema. Thx @ihor