Hi @vytenisj - you are right, Snowplow is a very strongly typed pipeline - we validate types in Hadoop Enrich and in Hadoop Shred to make sure that all events can safely load into Redshift.
If your organization doesn't have a mature process for testing new event types before putting them live (e.g. Snowplow Mini, Selenium integration tests, etc.), then as you say this can lead to events being rejected. And yes, Hadoop Event Recovery is a relatively involved process, not least because there are many different ways in which events can fail to process.
We have a ticket to explore replacing some validation failures with warnings:
However, this is not yet roadmapped, and it would introduce complexities of its own: it would mean that some events are only partially processed (e.g. loaded into Redshift with a couple of contexts missing), which would make recovery even harder to reason about and carry out.
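To make that trade-off concrete, here is a rough Python sketch of what warning-based partial processing could look like. This is emphatically not how the pipeline behaves today, and the schema registry and context payloads are hypothetical stand-ins: each context is validated independently, invalid ones are dropped with a warning, and the event itself survives.

```python
# Hypothetical sketch only - Snowplow does NOT do this today. It illustrates
# what "partial processing" would mean if some validation failures became
# warnings: invalid contexts get dropped, the rest of the event still loads.
import logging

import jsonschema  # pip install jsonschema

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("partial-processing-sketch")

# Stand-in schema registry; a real pipeline resolves schemas from Iglu.
SCHEMAS = {
    "iglu:com.acme/page_meta/jsonschema/1-0-0": {
        "type": "object",
        "properties": {"section": {"type": "string"}},
        "required": ["section"],
        "additionalProperties": False,
    },
}

def process_contexts(contexts):
    """Keep contexts that validate; warn about and drop the ones that don't."""
    kept = []
    for ctx in contexts:
        try:
            schema = SCHEMAS.get(ctx["schema"])
            if schema is None:
                raise KeyError("unknown schema %s" % ctx["schema"])
            jsonschema.validate(instance=ctx["data"], schema=schema)
            kept.append(ctx)
        except (KeyError, jsonschema.ValidationError) as err:
            # Today the whole event would be rejected; here we only warn.
            log.warning("dropping context %s: %s", ctx.get("schema"), err)
    return kept

event_contexts = [
    {"schema": "iglu:com.acme/page_meta/jsonschema/1-0-0",
     "data": {"section": "pricing"}},
    {"schema": "iglu:com.acme/page_meta/jsonschema/1-0-0",
     "data": {"section": 42}},  # wrong type - would trigger a warning
]

print(process_contexts(event_contexts))  # only the valid context survives
```

As the sketch shows, the event that reaches Redshift no longer matches what the tracker sent, which is exactly why recovery becomes harder to reason about.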
In any case - in the absence of support for partial event processing, it is well worth investing in a thorough internal testing process for your events and contexts. This will pay off in various ways (e.g. your analysts will have much greater confidence in the quality of the event data they are being sent).
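As a starting point, even a lightweight check in CI can catch most of these rejections before they happen. Below is a minimal sketch, assuming a Python test suite with pytest and the jsonschema library; the schema and example payloads are hypothetical stand-ins for your own Iglu schemas and the events your trackers actually send.

```python
# Minimal pre-deployment check - pytest + jsonschema assumed available.
# The schema and payloads below are hypothetical stand-ins for your own
# Iglu schemas and the events/contexts your tracking code emits.
import jsonschema
import pytest

# Snowplow JSON Schemas are draft-04, so declare that explicitly.
CHECKOUT_SCHEMA = {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "type": "object",
    "properties": {
        "orderId": {"type": "string"},
        "total": {"type": "number", "minimum": 0},
    },
    "required": ["orderId", "total"],
    "additionalProperties": False,
}

# Example payloads your tracking code is expected to emit.
GOOD_PAYLOADS = [
    {"orderId": "ord-123", "total": 19.99},
    {"orderId": "ord-124", "total": 0},
]

BAD_PAYLOADS = [
    {"orderId": "ord-125"},            # missing required field
    {"orderId": 125, "total": 19.99},  # wrong type for orderId
]

@pytest.mark.parametrize("payload", GOOD_PAYLOADS)
def test_valid_payloads_pass(payload):
    # Raises ValidationError (failing the test) if the payload drifts
    # from the schema - i.e. if the event would be rejected downstream.
    jsonschema.validate(instance=payload, schema=CHECKOUT_SCHEMA)

@pytest.mark.parametrize("payload", BAD_PAYLOADS)
def test_invalid_payloads_fail(payload):
    with pytest.raises(jsonschema.ValidationError):
        jsonschema.validate(instance=payload, schema=CHECKOUT_SCHEMA)
```

Running checks like this on every schema change (ideally against real payloads captured from Snowplow Mini) tends to surface rejections long before the events hit your production pipeline.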