I’m wondering about the setup of Snowplow on GCP and hoping you could help me. For example, what are the recommended machine sizes for the Compute Engine instances?
- Beam Enrich?
- BigQuery Loader? Mutator?
We have around 100 million hits per month.
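For context, that volume works out to a fairly modest average event rate (peaks will of course be higher). A quick back-of-the-envelope calculation:

```python
# Rough average event rate for ~100 million hits per month
hits_per_month = 100_000_000
seconds_per_month = 30 * 24 * 3600  # ~2.59 million seconds
avg_events_per_sec = hits_per_month / seconds_per_month
print(round(avg_events_per_sec, 1))  # ~38.6 events/sec on average
```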
Is it possible to run Enrich and the BQ Loader on one Compute Engine instance, or do I need a separate instance for each job?
What is the job of the BQ forwarder?
Is it possible to use Cloud Storage and BigQuery in parallel, or does the BigQuery Loader replace loading into Cloud Storage?
Also, does the web model exist for BigQuery somewhere?
Thanks for your help!