I’m looking to set up an event-driven architecture with the Snowplow real-time stream at the heart of it, having read this thread: How to use Snowplow as a Publish/Subscriber event bus?
The stream lives in a sub-account, and events are made available to the production account via Lambda and SQS. We chose this approach to keep our solution serverless and scalable, but it does introduce the limitation of a single consumer.
That one consumer will be responsible for fanning events out to the applications that care about them, so my question is: what would be the best way to maintain stability within this architecture? Our main concern is what happens to the events if one of those applications goes down.
My current thinking is to have the single consumer publish to SNS, with SQS subscriptions attached using filter policies - but I’m very keen to hear if others have overcome this and/or have any recommendations.
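To make the idea concrete, here is a minimal sketch of that fan-out. All names (`event_name`, the example event types, `topic_arn`, `queue_arn`) are illustrative assumptions, not from my actual setup. The boto3 calls are shown as comments since they need real AWS resources; the `matches` function is a simplified local model of how SNS string filter-policy matching behaves.

```python
import json

# Hypothetical filter policy for one subscriber's queue: it only wants
# these two event types. (Names are illustrative assumptions.)
FILTER_POLICY = {"event_name": ["add_to_cart", "checkout"]}

def matches(filter_policy, message_attributes):
    """Simplified local model of SNS filter-policy matching for string
    values: every policy key must appear in the message attributes, and
    its value must be in the allowed list. (Real SNS also supports
    prefix, anything-but, and numeric matching.)"""
    for key, allowed in filter_policy.items():
        attr = message_attributes.get(key)
        if attr is None or attr.get("Value") not in allowed:
            return False
    return True

# The single consumer would publish each Snowplow event to SNS with a
# message attribute carrying the event type (boto3 sketch, not executed):
#
#   sns.publish(
#       TopicArn=topic_arn,
#       Message=json.dumps(event),
#       MessageAttributes={
#           "event_name": {"DataType": "String",
#                          "StringValue": event["event_name"]},
#       },
#   )
#
# and each application's SQS subscription would carry its filter policy:
#
#   sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn,
#                 Attributes={"FilterPolicy": json.dumps(FILTER_POLICY)})

attrs = {"event_name": {"DataType": "String", "Value": "add_to_cart"}}
print(matches(FILTER_POLICY, attrs))   # delivered to this queue
other = {"event_name": {"DataType": "String", "Value": "page_view"}}
print(matches(FILTER_POLICY, other))   # filtered out
```

On the stability concern: because each application reads from its own SQS queue, a downed application doesn't lose events - they simply accumulate in its queue (up to the retention period), and a redrive policy can shunt repeatedly failing messages to a dead-letter queue for inspection.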