I have a couple of questions regarding the BigQuery loader and mutator.
I have a production pipeline that I provision entirely with Terraform. It has one VM group for the collector and another VM group that runs Beam Enrich together with the BigQuery Loader & Mutator.
Whenever I update my Snowplow BigQuery Loader, that VM group restarts and re-runs the Mutator's create step, which creates the events table in BigQuery.
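In case it helps frame the question, this is roughly how I guard the create step in my startup script today. It is only a sketch under my own assumptions: the `bq` CLI from the Google Cloud SDK is on the path, and the second argument is whatever command your deployment uses to invoke the Mutator's create step.

```shell
# Sketch: run the table-creation step only when the table does not exist yet.
# Assumptions: `bq` (Google Cloud SDK) is installed and authenticated;
# $2 is the deployment-specific command that runs the Mutator's create step.
create_table_if_missing() {
  # $1: dataset.table to check, $2: create command to run if it is absent
  if bq show "$1" >/dev/null 2>&1; then
    echo "table $1 already exists; skipping mutator create"
  else
    eval "$2"
  fi
}
```

Called from the instance startup script as e.g. `create_table_if_missing "atomic.events" "./run-mutator-create.sh"` (both names are placeholders for my setup).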
First question: if the table already exists, will the Mutator overwrite it?
Second, I would like to use the same Snowplow deployment to collect, enrich, and store data from multiple websites. Is it possible to split the data into separate tables while still using a single pipeline?
Thanks in advance,