Best analytics UI to use with Snowplow?


We are looking for something similar to the Mixpanel UI that can be used with Snowplow data: something our Product Managers can use to drag and drop and explore the data without having to write SQL queries. I’ve got Amplitude, Segment, Looker and Indicative on my list to look at. Does anyone have experience with any of these? Did they work with Snowplow data? Or have you got another solution working?

This has been asked before, though specifically about Segment and with no replies: Using Snowplow Analytics and

I’ve also seen that Snowplow endorse Indicative but am keen to hear from other users. :slightly_smiling_face:



We rely on Looker and Jupyter Notebooks here at my company, and use Snowplow data for both. Looker has a lot of great Snowplow blocks you can insert, plug and play. For example, we’ve also set up a raw explore in Looker that points at our custom event tables, which anyone with point-and-click knowledge can view and play with easily without knowing SQL.



There are quite a few good tools that will do drag-and-drop on arbitrary data; a lot of this comes down to what features you want within the visualisations.

Some that allow drag and drop include Tableau, Holistics, Periscope, Superset, Domo, Metabase and Looker, to name a few.

Most tools, including paid products like Amplitude and Mixpanel, rely on having structured data models underneath. Snowplow isn’t much different in this sense: the success of having end users like PMs work with the data depends on clear, accessible data models rather than on access to raw data, which is more appropriate for analysts and data scientists.
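To make the “data model, not raw data” point concrete, here’s a minimal sketch of the difference: a raw event stream versus a derived sessions table that a PM could browse in any drag-and-drop tool. The table and column names are illustrative (loosely echoing Snowplow’s atomic event fields), not a real Snowplow schema, and SQLite stands in for the warehouse.

```python
import sqlite3

# Raw events: one row per event, hypothetical columns loosely based on
# Snowplow-style fields. Not a real Snowplow schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (
    event_name       TEXT,
    domain_userid    TEXT,
    domain_sessionid TEXT,
    derived_tstamp   TEXT
);
INSERT INTO events VALUES
    ('page_view',   'u1', 's1', '2020-01-01T10:00:00'),
    ('add_to_cart', 'u1', 's1', '2020-01-01T10:05:00'),
    ('page_view',   'u2', 's2', '2020-01-01T11:00:00');

-- Derived "sessions" model: one tidy row per session, the kind of
-- table an end user can safely drag and drop over.
CREATE TABLE sessions AS
SELECT
    domain_sessionid    AS session_id,
    domain_userid       AS user_id,
    COUNT(*)            AS event_count,
    MIN(derived_tstamp) AS session_start,
    MAX(derived_tstamp) AS session_end
FROM events
GROUP BY domain_sessionid, domain_userid;
""")

for row in conn.execute(
    "SELECT session_id, event_count FROM sessions ORDER BY session_id"
):
    print(row)
# ('s1', 2) then ('s2', 1)
```

The modeling step (raw events in, one-row-per-entity tables out) is what an analyst or engineer owns; the drag-and-drop tool then only ever sees the clean derived table.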



Hi, Indicative employee here! I can elaborate on how Indicative and Snowplow work together and how we can help your product managers analyze your data without SQL. Indicative is an integration partner with Snowplow, so you can connect your data using the Indicative Relay (instructions here) and analyze it via our drag-and-drop interface to understand the customer journey.

Indicative is free for up to 1 billion user events/actions per month (Amplitude offers 10 million events). Regarding the other platforms you mentioned, Segment is not an analytics platform. Typically Segment is used to track your user data, which can then be sent to other platforms for analysis (e.g. Indicative, Amplitude, etc.). Looker is great, but still requires data analysts/SQL to model the data and build out the views, while all analysis in Indicative is drag and drop (no SQL) and designed to answer questions around product usage.

If you have any questions, you can set up a call here.



At Mint Metrics we expose Snowplow data through Superset (for dashboards and collaborative data exploration) and R markdown/knitR templates (for more static reports and bespoke analysis).

Superset has an approachable UI and is likely about to ship a new stable release with a whole host of features and refinements. There are also no fees to set it up, other than a cheap VM for hosting it.

It’s much easier to start with something free and upgrade to a paid solution later, IMO, because you won’t need to ask anyone for budget up front.



I totally agree with @robkingston about starting to gain traction and thorough understanding of your data with a lower-cost solution.

At O’Reilly, we used Pentaho sitting on top of a number of cubes in Postgres/Redshift. It was relatively expensive and inflexible, and well… cubes were valuable in Kimball land before columnar DBs came into their own and storage became cheap. Wagon was heavily used at O’Reilly/Safari for collaborative SQL, and worked incredibly well. PopSQL has become the closest post-Wagon-acquisition analog.

At Wanderu we used Metabase sitting on top of Redshift with good success for a couple of years (at an extremely low operational cost). Eventually the company outgrew the system and moved on to Looker (where the Snowplow block is easily implemented, as @mjensen mentioned above). During that time, the knowledge of internal data assets and the shared understanding of how data was generated/loaded/represented was enormously valuable to the organization. Metabase just doesn’t have some of the functionality that Looker does, and Looker is definitely a common solution as BI/analytics teams mature. But then again, Looker has a very hefty price tag.

Currently the team I work on at CarGurus uses Looker very heavily, sitting on top of a pretty large Snowflake database. A small team of application/BI engineers builds out the Looker blocks/models, and a large number of analysts are free to explore the data in depth with no prior SQL experience.

I’ve also helped companies like Friday Feedback and GetAbstract with analytics, and Mode and Periscope have worked well.

For pricing context, Metabase cost < $50/month to run and reliability was very good. It doesn’t have a lot of the functionality that Looker does, but until you’ve reached a point of in-depth knowledge of your data it would be hard to justify >$100k annually for a UI. Periscope and Mode are awesome lower-priced options for small to medium-sized teams (especially if you don’t want to maintain the system yourself).

With all this being said, what I’ve found is the following:

  1. There is no perfect solution, and the gripes about various systems are almost always the same: “inflexible for pure exploration”, “doesn’t let me do what shows on their site”, “too expensive”, “slow” (b/c a lot of data is still a lot of data, and a columnar db is still a columnar db), etc.
  2. Having a solid awareness of the tradeoffs, and answering those before picking one solution over another is probably the most important part. Want to optimize for speed of implementation and a low cost but know that you’ll have to migrate in a couple years? Metabase is a very solid option (and getting much better, quickly). Want to avoid potential migrations down the road, don’t care about spend, and want to have/pay for support straight from the company? Looker is great.
  3. Vendor lock-in is a very real (and very scary!) thing. In my opinion, the most freeing thing a company can do is learn and love SQL and/or model data in the database, and let a UI just be a UI. If you’re unsatisfied with your experience using database-vendor-1 or chart-vendor-1, migrating your dashboards to another ANSI-compliant database is trivially easy. When you need to learn how vendor #1 implemented a “select a, b, count(*) from table where x > y group by 1, 2 order by 3 desc” query, you might as well learn SQL once and get the initial pain over with. There is definitely a learning curve, but once you’re over that initial curve you’ll find your knowledge is directly transferable to numerous other systems. Another side benefit of SQL is that you become deeply knowledgeable about your own data instead of deeply knowledgeable about how company X’s SQL abstraction layer works. With this being said, Looker’s explore functionality and Metabase’s question exploration are both incredibly useful.
  4. There (still) isn’t really a good one-size-fits-all solution. Jupyter is enormously valuable, but it serves a completely different purpose than a dashboarding tool an executive stares at all day. And neither of those is a good SQL editor.
  5. I’ve become rather skeptical of third-party tools that attempt to do it all (i.e. “we can chart things, govern your data, do predictive analytics, and track things in one single tool!”).
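To illustrate the “learn SQL once” argument from point 3: the query quoted there is plain, portable SQL, so the same text runs unchanged against SQLite here and would run against Postgres, Redshift, Snowflake, etc. The table name and sample rows below are made up purely for illustration.

```python
import sqlite3

# In-memory SQLite stands in for whatever warehouse you happen to use.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (a TEXT, b TEXT, x INTEGER, y INTEGER);
INSERT INTO t VALUES
    ('page_view', 'mobile',  5, 1),
    ('page_view', 'mobile',  7, 1),
    ('click',     'desktop', 3, 1),
    ('click',     'desktop', 0, 1);  -- filtered out below: x <= y
""")

# The exact query shape from point 3, verbatim ANSI-style SQL.
query = """
    SELECT a, b, COUNT(*)
    FROM t
    WHERE x > y
    GROUP BY 1, 2
    ORDER BY 3 DESC
"""
for row in conn.execute(query):
    print(row)
# ('page_view', 'mobile', 2) then ('click', 'desktop', 1)
```

Because nothing here is vendor-specific, swapping the connection for a Postgres or Snowflake driver leaves the query itself untouched, which is exactly the portability point 3 is making.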

Hope this helps, apologies for the rant :slight_smile:.