Bad Event Recovery Failing!

Hi Guys,
I am trying to recover bad events from the bad events bucket. The schema of the bad events is “Schema violation”, and I’m using the Spark version of the recovery job:
*Release label: emr-5.17.0
Hadoop distribution: Amazon 2.8.4
Applications: Spark 2.3.1, Zeppelin 0.7.3
I am getting this error while running the EMR step. Can you guys help me with this?

> java.lang.NoClassDefFoundError: org/apache/spark/metrics/source/Source
> 	at com.snowplowanalytics.snowplow.event.recovery.Main$.$anonfun$main$9(Main.scala:96)
> 	at com.snowplowanalytics.snowplow.event.recovery.Main$.$anonfun$main$9$adapted(Main.scala:96)
> 	at com.snowplowanalytics.snowplow.event.recovery.Main$.$anonfun$main$8(Main.scala:96)
> 	at cats.SemigroupalArityFunctions.$anonfun$map7$1(SemigroupalArityFunctions.scala:105)
> 	at cats.ComposedApply.$anonfun$ap$2(Composed.scala:33)
> 	at cats.Monad.$anonfun$map$1(Monad.scala:16)
> 	at cats.instances.Function0Instances$$anon$4.$anonfun$flatMap$1(function.scala:75)
> 	at cats.instances.Function0Instances$$anon$4.$anonfun$flatMap$1(function.scala:75)
> 	at com.monovore.decline.Parser.evalResult(Parser.scala:30)
> 	at com.monovore.decline.Parser.consumeAll(Parser.scala:107)
> 	at com.monovore.decline.Parser.apply(Parser.scala:19)
> 	at com.monovore.decline.Command.parse(opts.scala:18)
> 	at com.monovore.decline.effect.CommandIOApp.$anonfun$run$2(CommandIOApp.scala:42)
> 	at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:87)
> 	at cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:355)
> 	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:376)
> 	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:316)
> 	at cats.effect.internals.IOShift$
> 	at cats.effect.internals.PoolUtils$$anon$2$$anon$
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(
> 	at java.util.concurrent.ThreadPoolExecutor$
> Caused by: java.lang.ClassNotFoundException: org.apache.spark.metrics.source.Source
> 	at java.lang.ClassLoader.loadClass(
> 	at java.lang.ClassLoader.loadClass(
> 	... 26 more

This is my job config:

    {
      "schema": "iglu:com.snowplowanalytics.snowplow/recoveries/jsonschema/3-0-0",
      "data": {
        "iglu:com.snowplowanalytics.snowplow.badrows/schema_violations/jsonschema/2-0-0": [
          {
            "name": "mainFlow",
            "conditions": [],
            "steps": [
              {
                "op": "Cast",
                "path": "$",
                "from": "Numeric",
                "to": "String"
              }
            ]
          }
        ]
      }
    }
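For what it's worth, the `--config` flag expects this recovery configuration base64-encoded. A minimal sketch of producing that string (the file name is an assumption; `-w0` is GNU coreutils and should be omitted on macOS):

```shell
# Save the recovery config (same content as above) and base64-encode it
# for the job's --config flag. recovery_config.json is an assumed name.
cat > recovery_config.json <<'EOF'
{
  "schema": "iglu:com.snowplowanalytics.snowplow/recoveries/jsonschema/3-0-0",
  "data": {
    "iglu:com.snowplowanalytics.snowplow.badrows/schema_violations/jsonschema/2-0-0": [
      {
        "name": "mainFlow",
        "conditions": [],
        "steps": [
          { "op": "Cast", "path": "$", "from": "Numeric", "to": "String" }
        ]
      }
    ]
  }
}
EOF

# -w0 disables line wrapping (GNU coreutils; plain `base64` on macOS does not wrap)
CONFIG_B64=$(base64 -w0 recovery_config.json)
echo "$CONFIG_B64"
```

Decoding the string back should round-trip to the same JSON, which is an easy sanity check before pasting it into the step definition.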

Thanks in advance!

Hey Guys,
My command is

aws emr add-steps --cluster-id j-MMMMNNNN --steps "Name=snowplow-event-recovery,Type=CUSTOM_JAR,Jar=s3://snowplow-hosted-assets/3-enrich/snowplow-event-recovery/snowplow-event-recovery-spark-0.2.0.jar,MainClass=com.snowplowanalytics.snowplow.event.recovery.Main,Args=[--failedOutput,s3://ijkl,--unrecoverableOutput,s3://mnop,--config,base64-string,--resolver,base64-string],ActionOnFailure=CONTINUE"

When I run this command I’m getting this error

Unexpected argument: com.snowplowanalytics.snowplow.event.recovery.Main

Usage: snowplow-event-recovery-job --input <string> --output <string> [--failedOutput <string>] [--unrecoverableOutput <string>] --region <string> [--batchSize <integer>] --resolver <string> --config <string>

Snowplow event recovery job

Options and flags:
    --help
        Display this help text.
    --input <string>
        Input S3 path
    --output <string>
        Output Kinesis topic
    --failedOutput <string>
        Unrecovered (bad row) output S3 path. Defaults to `input`
    --unrecoverableOutput <string>
        Unrecoverable (bad row) output S3 path. Defaults to `failedOutput/unrecoverable` or `input/unrecoverable`
    --region <string>
        Kinesis region
    --batchSize <integer>
        Kinesis batch size
    --resolver <string>
        Iglu resolver configuration
    --config <string>
        Base64 config with schema com.snowplowanalytics.snowplow/recovery_config/jsonschema/1-0-0

But if I remove the main class, I get the ClassNotFoundException which I mentioned above.
Can you share some thoughts on how to resolve this error?
Release label: emr-5.17.0
Hadoop distribution: Amazon 2.8.4
Applications: Spark 2.3.1, Zeppelin 0.7.3
Jar=s3://snowplow-hosted-assets/3-enrich/snowplow-event-recovery/snowplow-event-recovery-spark-0.3.1.jar
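Not a definitive answer, but a `NoClassDefFoundError` for `org/apache/spark/metrics/source/Source` typically means the jar was launched as a plain JVM step, so the Spark runtime is not on the classpath. A sketch of submitting it through `command-runner.jar` with `spark-submit` instead (the cluster id, buckets, stream name, region, and base64 strings are all placeholders):

```shell
# Build the step definition. Going through spark-submit puts the Spark
# runtime classes (org.apache.spark.metrics.source.Source etc.) on the
# classpath. All s3:// paths, the stream name, the region, and the
# BASE64_* strings are placeholders to replace with real values.
STEP='Name=snowplow-event-recovery,Type=CUSTOM_JAR,Jar=command-runner.jar,ActionOnFailure=CONTINUE,Args=[spark-submit,--class,com.snowplowanalytics.snowplow.event.recovery.Main,--master,yarn,--deploy-mode,cluster,s3://snowplow-hosted-assets/3-enrich/snowplow-event-recovery/snowplow-event-recovery-spark-0.3.1.jar,--input,s3://my-bad-rows,--output,my-recovered-stream,--region,us-east-1,--config,BASE64_CONFIG,--resolver,BASE64_RESOLVER]'

echo "$STEP"
# aws emr add-steps --cluster-id j-MMMMNNNN --steps "$STEP"
```

With this shape there is no separate `MainClass` field at all; the class goes to `spark-submit --class` inside `Args`, which also sidesteps the "Unexpected argument" parse error.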
@ihor can you help me with this?

@colm can you help me with this issue?

@Prasanth_Rosario 2 questions:

1. What version of enrich are you using?
2. Which event recovery job are you using? (A link to the repo or the docs will do - whatever you are working from.)

@colm Thanks for replying!
Here are the jars and versions which I use.
The enrich version is the latest one - 2.0.2

The recovery job is using this jar

The type of recovery needed is schema violation!

The recovery schema is

I can’t see what’s going wrong here, but this isn’t my area of expertise. I’ll see if anyone who knows a bit more about it is free to take a look.

Sure Thanks @Colm !

I am using EMR steps to run the event recovery jobs. Even though I specify the main class in a separate MainClass field (not inside Args), it is being passed along with the arguments, and that is when the error is thrown!

Hi Prasanth,

Looks like the schema should be 4-0-0 rather than 3-0-0. Also, we use AMI version 6.0.0, but I’m not sure how much of an impact that has.

Hi Colm,

I have made those changes as well, but I’m still getting the same errors!