Snowplow BigQuery stream loader throwing an error

Hi All,
Recently I set up Snowplow services for event collection, enrichment, and loading events into BigQuery. The collector and enricher are working fine, but the BigQuery stream loader, which I'm using to load events from the enriched Pub/Sub topic into BigQuery, sometimes throws the error below, which I don't understand. When it occurs, the stream loader stops. Could you please help me understand what this error means?

com.google.cloud.bigquery.BigQueryException: A load-shedding retryable throttled error could not be retried due to Extensible Stubs retrying limits (see go/stubs-retries). (old status: RPC::STREAM_BROKEN: Connection to server broken (OnChannelError))
	at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.translate(HttpBigQueryRpc.java:106)
	at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.insertAll(HttpBigQueryRpc.java:460)
	at com.google.cloud.bigquery.BigQueryImpl.insertAll(BigQueryImpl.java:978)
	at com.snowplowanalytics.snowplow.storage.bigquery.streamloader.Bigquery$.$anonfun$mkInsert$2(Bigquery.scala:65)
	at delay$extension @ com.snowplowanalytics.snowplow.storage.bigquery.streamloader.Bigquery$.$anonfun$mkInsert$1(Bigquery.scala:65)
	at delay$extension @ com.permutive.pubsub.consumer.grpc.internal.PubsubSubscriber$.$anonfun$subscribe$1(PubsubSubscriber.scala:72)
	at flatMap @ com.snowplowanalytics.snowplow.storage.bigquery.streamloader.Bigquery$.insert(Bigquery.scala:45)
	at apply @ fs2.Stream$InvariantOps$.observeAsync$extension(Stream.scala:3667)
	at apply @ fs2.Stream$InvariantOps$.observeAsync$extension(Stream.scala:3667)
	at main$ @ com.snowplowanalytics.snowplow.storage.bigquery.streamloader.Main$.main(Main.scala:17)
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 500 Internal Server Error
POST https://www.googleapis.com/bigquery/v2/projects/newscorp-newsid-dev/datasets/inca/tables/good_events/insertAll
{
  "code" : 500,
  "errors" : [ {
    "domain" : "global",
    "message" : "A load-shedding retryable throttled error could not be retried due to Extensible Stubs retrying limits (see go/stubs-retries). (old status: RPC::STREAM_BROKEN: Connection to server broken (OnChannelError))",
    "reason" : "backendError"
  } ],
  "message" : "A load-shedding retryable throttled error could not be retried due to Extensible Stubs retrying limits (see go/stubs-retries). (old status: RPC::STREAM_BROKEN: Connection to server broken (OnChannelError))",
  "status" : "INTERNAL"
}
	at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:150)
	at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
	at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:443)
	at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1108)
	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:541)
	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:474)
	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:591)
	at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.insertAll(HttpBigQueryRpc.java:458)
	at com.google.cloud.bigquery.BigQueryImpl.insertAll(BigQueryImpl.java:978)
	at com.snowplowanalytics.snowplow.storage.bigquery.streamloader.Bigquery$.$anonfun$mkInsert$2(Bigquery.scala:65)
	at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:108)
	at cats.effect.internals.IORunLoop$.restartCancelable(IORunLoop.scala:51)
	at cats.effect.internals.IOBracket$BracketStart.run(IOBracket.scala:100)
	at cats.effect.internals.Trampoline.cats$effect$internals$Trampoline$$immediateLoop(Trampoline.scala:67)
	at cats.effect.internals.Trampoline.startLoop(Trampoline.scala:35)
	at cats.effect.internals.TrampolineEC$JVMTrampoline.super$startLoop(TrampolineEC.scala:90)
	at cats.effect.internals.TrampolineEC$JVMTrampoline.$anonfun$startLoop$1(TrampolineEC.scala:90)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
	at cats.effect.internals.TrampolineEC$JVMTrampoline.startLoop(TrampolineEC.scala:90)
	at cats.effect.internals.Trampoline.execute(Trampoline.scala:43)
	at cats.effect.internals.TrampolineEC.execute(TrampolineEC.scala:42)
	at cats.effect.internals.IOBracket$BracketStart.apply(IOBracket.scala:80)
	at cats.effect.internals.IOBracket$BracketStart.apply(IOBracket.scala:58)
	at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:192)
	at cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:480)
	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:501)
	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:439)
	at cats.effect.internals.IOShift$Tick.run(IOShift.scala:36)
	at cats.effect.internals.PoolUtils$$anon$2$$anon$3.run(PoolUtils.scala:52)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

What sort of volume (bytes / events) are you trying to sink into BigQuery?

Given that this is an internal GCP issue, I suspect the best support you'll get is raising it directly with GCP support.
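In the meantime, since the response is a 500 with reason "backendError" (which Google documents as a transient server-side failure), one workaround is to retry the insert with exponential backoff instead of letting a single failure stop the loader. A minimal sketch of that shape — note the `isTransient` classifier and the action passed in are hypothetical stand-ins, not the loader's actual API (the real loader wraps inserts in cats-effect `IO`, where a library like cats-retry would be the idiomatic route):

```scala
// Sketch: retry a transient BigQuery failure with exponential backoff.
// The error classifier below is an assumption based on the "backendError"
// reason in the stack trace above, not an official list of retryable codes.
object RetryInsert {
  // Treat 500 "backendError" responses (like the one in this thread) as transient.
  def isTransient(e: Throwable): Boolean = {
    val msg = Option(e.getMessage).getOrElse("")
    msg.contains("backendError") || msg.contains("500")
  }

  // Run `action`, retrying up to `maxAttempts` times total,
  // doubling the sleep between attempts.
  def withRetries[A](maxAttempts: Int, delayMs: Long)(action: () => A): A =
    try action()
    catch {
      case e: Throwable if isTransient(e) && maxAttempts > 1 =>
        Thread.sleep(delayMs)
        withRetries(maxAttempts - 1, delayMs * 2)(action)
    }
}
```

For example, an action that fails twice with a transient error and then succeeds would complete on the third attempt rather than crashing the stream. A non-transient error still propagates immediately, so genuine bad data isn't silently swallowed.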

Thanks @mike for your valuable suggestion. Our current traffic is approximately 2-3 million events/day, and it may increase further in the coming days.

That is low enough that you should be well clear of any BigQuery quotas — unfortunately I think GCP support might be your best bet for this one. I'd be interested to know the outcome.