I’m totally new to Snowplow and just digging through the extensive documentation. I managed to set up a proof of concept using Apache Kafka instead of Kinesis for the collector and enrichment phases, but then I hit a wall: I have no idea how to move my data into something suitable for further analysis, both batch and real-time — PostgreSQL, for example. I grepped the 4-storage folder and there is not a single mention of Kafka; after some research I found out that Kafka support as a whole is quite new. But if that support exists, there must be some way to use the data collected via Kafka?
My question is: once I’m past the enrichment phase and the data is back in a Kafka topic (the post_enrichment one), how do I store it in PostgreSQL or Elasticsearch? Or should I not pick Kafka as the final post-enrichment store at all, and instead move the data to something for which an existing loader/sink is available?
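To make the question concrete, this is the kind of custom consumer I imagine I would otherwise have to write myself. It is only a minimal sketch: the field names/positions in the enriched-event TSV, the post_enrichment topic name, and the target table are assumptions from my own setup, not something I found in the Snowplow docs. Only the parsing helper actually runs here; the Kafka/Postgres wiring is left as comments because it needs a live broker and database (and the kafka-python and psycopg2 libraries).

```python
import csv
import io

# Assumption: a subset of the leading columns of a Snowplow enriched event
# (tab-separated). Check your Snowplow version's canonical event model --
# the real record has many more fields.
FIELDS = ["app_id", "platform", "etl_tstamp", "collector_tstamp",
          "dvce_tstamp", "event", "event_id"]

def parse_enriched_tsv(line):
    """Split one tab-separated enriched event into a dict of named fields."""
    values = next(csv.reader(io.StringIO(line), delimiter="\t"))
    return dict(zip(FIELDS, values))

# A consumer loop wiring this into PostgreSQL might then look like
# (hypothetical; requires kafka-python, psycopg2, a broker and a database):
#
# from kafka import KafkaConsumer
# import psycopg2
#
# consumer = KafkaConsumer("post_enrichment",
#                          bootstrap_servers="localhost:9092")
# conn = psycopg2.connect("dbname=snowplow")
# for msg in consumer:
#     ev = parse_enriched_tsv(msg.value.decode("utf-8"))
#     with conn.cursor() as cur:
#         cur.execute(
#             "INSERT INTO events (event_id, app_id, event) VALUES (%s, %s, %s)",
#             (ev["event_id"], ev["app_id"], ev["event"]))
#     conn.commit()

# Quick check on a made-up line:
sample = ("my-app\tweb\t2017-01-01 00:00:00\t2017-01-01 00:00:01"
          "\t2017-01-01 00:00:00\tpage_view\tabc-123")
parsed = parse_enriched_tsv(sample)
print(parsed["event"])  # page_view
```

If something like this is the intended approach, fine — I just want to know whether a supported loader already does it before I reinvent it.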
Sorry for such a basic question, but I couldn’t find any remark on this, and I found out the hard way — after trying first storage-loader and then kinesis-elasticsearch-sink — that neither even mentions Kafka support.