Integrating Zendesk events with Snowplow

Intro

Zendesk Support is a simple yet powerful system (part of the Zendesk, Inc. family of products) for tracking, prioritizing, and solving customer support tickets.

Zendesk Support (referred to below as just Zendesk) provides clear visibility into customer interactions, which helps you better serve your customers' needs. It comes with built-in customer analytics and machine learning capabilities that allow you to better understand and predict customer satisfaction, measure performance, and uncover actionable insights across your data.

Yet you might want more granular insight into the ticket workflow, or the ability to enrich (dimension widen) the data stored by Zendesk by means of the Snowplow pipeline's enrichment capabilities.

Integrating Zendesk events with Snowplow is relatively simple. We recently published a guide on the Snowplow wiki describing our approach to sending Zendesk events into the Snowplow pipeline. In this post, we will walk you through this generalized approach, but also show you how to define your own set of Zendesk data to emit to Snowplow.

Overview of how the integration works

Our integration uses extensions, a Zendesk feature for notifying external targets when a ticket is created or updated. External targets are cloud-based applications and services (such as Twitter and Twilio), as well as HTTP and email endpoints.

On the Snowplow side, we will use the Iglu webhook adapter to ensure that the event is received and processed correctly. We will make use of an event schema that matches the structure of the data sent from Zendesk.
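
Concretely, the Zendesk target will point at the collector endpoint served by the Iglu webhook adapter. A minimal sketch, assuming a hypothetical collector host collector.acme.com:

https://collector.acme.com/com.snowplowanalytics.iglu/v1

Anything sent to this endpoint is treated as a self-describing event and validated against the schema referenced in the payload during enrichment.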

The guide is illustrative. One of the nice things about Zendesk is that it gives you a huge amount of flexibility over what data you send into Snowplow and how; one of the great things about Snowplow is that it is flexible enough to work with almost any structure of data, so long as it knows the schema the data adheres to.

So, there are lots of ways you can adapt the following setup. However, the basic integration set out below should be a good start for most users.

Integration guide

Snowplow approach

You can stick with our own Snowplow version of Zendesk events by simply following the instructions on our wiki. The schemas describing those events are located here.

Our approach is to capture ticket creation and the subsequent ticket update events. Each event is sent with contexts providing data for three types of users:

  • requester (either the user who created the ticket or the user on whose behalf the ticket was created)
  • submitter (the user who actually submitted the ticket, either an end user or an agent)
  • assignee (the agent the ticket is assigned to)

To simplify the data modeling task, we send all three contexts each time a ticket event takes place, even though the submitter stays the same throughout the ticket's life cycle, while the requester and the assignee can change. We also do not distinguish between events like “ticket pending” and “ticket closed”. Instead, we provide the ticket status with every update to the ticket as part of the generic ticket_updated event.

To join a ticket event with its associated users, you would apply a relation like the one shown below:

-- pick the fields your model needs; tu.* and u.* shown for brevity
SELECT tu.*, u.*
FROM atomic.com_zendesk_snowplow_ticket_updated_1 AS tu
INNER JOIN atomic.com_zendesk_snowplow_user_1 AS u
   ON tu.ticket_id = u.ticket_id AND tu.updated_at = u.updated_at;

Custom solution

Some of you may want to capture Zendesk data not included in the standard Snowplow integration - in which case, read on.

1. Set up the Zendesk webhook

You can refer to the “Setting up a collector as a Zendesk extension” section of the wiki, which explains how to create the Zendesk webhook as an HTTP target. In short, our recommendation is to send the data as JSON via a POST request; this way you have more control over the format of the values you send. The restriction, however, is that POST data cannot be sent to the CloudFront Collector.
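
As a sketch, an HTTP target configured for the recommended JSON-over-POST approach could look like this (the target title and collector hostname are hypothetical):

Title:        Snowplow collector
URL:          https://collector.acme.com/com.snowplowanalytics.iglu/v1
Method:       POST
Content type: JSON

The JSON body itself is composed later, when you attach this target to a trigger or automation (see step 4).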

If you are restricted to GET requests, you can pick that format instead. Do bear in mind that any value sent via GET will arrive as a string. It is also quite tedious to spell out all the key/value pairs when composing the data payload to be sent with a GET request.
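
If you do go the GET route, the Iglu webhook adapter expects the schema URI plus every data field as querystring parameters, along these lines (the schema and fields come from the hypothetical example used later in this guide):

https://collector.acme.com/com.snowplowanalytics.iglu/v1?schema=iglu:com.zendesk.acme/ticket_solved/jsonschema/1-0-0&ticketId={{ticket.id}}&updatedAt={{ticket.updated_at}}

Bear in mind that the schema URI and any placeholder values need to be URL-encoded in the actual target URL.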

2. Decide what data you want to send

Most of the data you want to send will be dynamically generated. Zendesk comes with an abundance of placeholders referencing data in many categories:

  • User data
  • Organization data
  • Agent data
  • Ticket data
  • Comment data
  • Satisfaction rating data
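
For example, a few of the standard placeholders from those categories (all documented in Zendesk's placeholders reference):

{{ticket.id}}
{{ticket.status}}
{{ticket.requester.name}}
{{ticket.assignee.email}}
{{satisfaction.current_rating}}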

You might want to capture more specific events such as “ticket_onhold” and have a dedicated schema for each, rather than a single generic schema for all event types as in Snowplow's own solution.

3. Create the schema

Having decided what data you want to see in your custom Zendesk event, you can start writing a schema for it. This is no different from creating schemas for any other custom self-describing events.
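
For instance, a schema backing the hypothetical “ticket_solved” event used later in this guide might look like the following (the com.zendesk.acme vendor and all fields are illustrative):

{
  "$schema": "http://iglucentral.com/schemas/com.snowplowanalytics.self-desc/schema/jsonschema/1-0-0#",
  "description": "Schema for a custom Zendesk ticket_solved event",
  "self": {
    "vendor": "com.zendesk.acme",
    "name": "ticket_solved",
    "format": "jsonschema",
    "version": "1-0-0"
  },
  "type": "object",
  "properties": {
    "ticketId": { "type": "integer" },
    "updatedAt": { "type": "string", "format": "date-time" },
    "score": { "type": "string" },
    "assignee": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "timeZone": { "type": "string" },
        "role": { "type": "string" },
        "organization": { "type": "string" },
        "email": { "type": "string" }
      },
      "additionalProperties": false
    }
  },
  "required": ["ticketId", "updatedAt"],
  "additionalProperties": false
}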

Once the JSON Schema has been defined, use our Igluctl tool to generate the table definition and the corresponding JSONPaths file if you plan to load the Zendesk events into Amazon Redshift.
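
Assuming your schema file sits under a local schemas/ directory (path illustrative), the generation is a single Igluctl command:

$ igluctl static generate --with-json-paths schemas/

This writes the Redshift CREATE TABLE DDL and the JSONPaths files into local output directories, ready for review before you deploy them.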

Now you are ready to upload the static files to your Iglu server and create the relevant tables in Redshift.
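
If your registry is an Iglu Server, the upload can be scripted with Igluctl as well (the registry host and API key below are placeholders):

$ igluctl static push schemas/ http://iglu.acme.com <your_api_key>

For a static S3-hosted registry, you would instead copy the schemas/ directory into the bucket backing the registry.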

4. Create triggers and build the event body

Any ticket-related data can be sent to the Snowplow pipeline with the help of either a trigger or an automation, each of which can be set up to use a notification target.

While triggers run immediately after a ticket is created or updated, automations execute when a time event occurs some interval after a ticket property was set or updated.

You can either create new dedicated triggers/automations or extend existing ones with a “Notify target” action whose value is the Zendesk webhook you set up in step 1 above. When adding the action, you need to provide the JSON body that will be sent when the trigger/automation fires.

A cool feature Zendesk supports is Liquid markup: the Zendesk placeholders are in fact Liquid variables of the form {{variable.name}}. The format of the values can also be controlled by means of filters. For example, the input {{ticket.due_date | date: "%H:%M"}} would generate the time as a string in HH:MM format. You can also implement control flow, as in the snippet below:

{
   . . .
   "tags": 
      {% if ticket.tags.size > 0 %}
         "{{ticket.tags}}"
      {% else %}
         null
      {% endif %},
   . . .
}

The above example ensures that you send either a string value (when the ticket has tags) or null otherwise.

The whole JSON body would follow the typical self-describing JSON format. Thus, if you, say, implemented a custom “ticket_solved” event, the data you send with it could look like this:

{
  "schema": "iglu:com.zendesk.acme/ticket_solved/jsonschema/1-0-0",
  "data": {
    "ticketId": {{ticket.id}},
    "updatedAt": "{{ticket.updated_at}}",
    "score": "{{ticket.score}}",
    "assignee": {
      "name": "{{ticket.assignee.name}}",
      "timeZone": "{{ticket.assignee.time_zone}}",
      "role": "{{ticket.assignee.role}}",
      "organization": "{{ticket.assignee.organization.name}}",
      "email": "{{ticket.assignee.email}}"
    }
  }
}

Conclusion

Hopefully this introductory overview gives you sufficient information, and the relevant references, to start exploring your own integration strategy for Zendesk events. It does require familiarity with self-describing events, as well as running your own Iglu schema registry.

If you are not comfortable with those concepts, we have created our own “out-of-the-box” solution so that you can start tracking Zendesk events easily. This simple solution should be sufficient for most Zendesk tracking needs.

As always with Snowplow, the choice is yours. Do share with us your experience and let us know if you have any questions or feedback in the thread below!
