The honest answer to that question is that Snowplow Mini grew organically out of an idea, rather than being something we sat down, designed, and developed as a product.
We had a Hackathon at Snowplow a few years ago, and one of the ideas was a small-scale demo version of Snowplow - something we could use to show prospective customers a material example of what Snowplow is. It worked really well, and we realised that an out-of-the-box, small, cheap version of Snowplow that runs in a sandbox is a really useful testing tool.
So we ran with it, and it's proven incredibly useful to that end. The latest version of Mini lets you test everything about your tracking setup, including enrichments.
The idea of loading from Mini to a storage target has come up a few times and been knocked around. I don't have a role in setting the roadmap, so I can't speak to why we haven't given it a proper go, but I'd guess there's a cost-benefit judgment involved. There's a LOT of development happening on the core Snowplow product - we've recently released the GCP pipeline, we've been improving loading to different storage targets, and we've got a big initiative to refactor bad rows. So in short, I think it's probably just that we build and maintain a lot of software, and unfortunately not every good idea is something we can focus on.
Another personal opinion of mine: once you get to the point where you're loading to a storage target, you're essentially just building an unscalable Snowplow pipeline (i.e. one that will break at volume). So if that's where you're at, why not build the full pipeline? In other words, I think the use case for having SP Mini load to a storage target is pretty close to the use case for the full pipeline - I'm not sure it's common to need the former without being likely to need the latter too.
Good question, a bit of Snowplow folklore there for you!