Cluster: Snowplow ETL - Terminated with errors - Shut down as step failed

Hi All,

I’m trying to set up a Snowplow pipeline:

  JavaScript Tracker -> Scala collector -> Kinesis -> S3 (shredding) -> Redshift

with EmrEtlRunner version r92 and RDB Loader 0.13.0.

Everything up to the EmrEtlRunner (shredding) stage completes successfully.

I am using the following command to run the storage load:

    ./snowplow-emr-etl-runner run --config snowplow/4-storage/config/emretlrunner.yml --resolver snowplow/4-storage/config/resolver.json --targets snowplow/4-storage/config/targets/ --skip analyze
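For reference, the `--targets` directory passed above holds the storage target JSONs. My Redshift target follows the shape sketched below (all values here are placeholders, not my real settings, and the exact schema version should be double-checked against the Snowplow docs for RDB Loader 0.13.0):

```json
{
  "schema": "iglu:com.snowplowanalytics.snowplow.storage/redshift_config/jsonschema/2-1-0",
  "data": {
    "name": "AWS Redshift enriched events storage",
    "host": "example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    "database": "snowplow",
    "port": 5439,
    "sslMode": "DISABLE",
    "username": "storageloader",
    "password": "PLACEHOLDER",
    "schema": "atomic",
    "maxError": 1,
    "compRows": 20000,
    "purpose": "ENRICHED_EVENTS"
  }
}
```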

This is the error message in the EMR console:

Cluster: Snowplow ETL - Terminated with errors - Shut down as step failed

And I am getting the following error in the CLI:

	D, [2017-10-10T07:10:47.748000 #7534] DEBUG -- : Initializing EMR jobflow
	D, [2017-10-10T07:11:03.050000 #7534] DEBUG -- : EMR jobflow j-2SWKVVR0IB4TM started, waiting for jobflow to complete...
	I, [2017-10-10T07:23:06.527000 #7534]  INFO -- : No RDB Loader logs
	F, [2017-10-10T07:23:06.970000 #7534] FATAL -- :

	Snowplow::EmrEtlRunner::EmrExecutionError (EMR jobflow j-2SWKVVR0IB4TM failed, check Amazon EMR console and Hadoop logs for details (help: https://github.com/snowplow/snowplow/wiki/Troubleshooting-jobs-on-Elastic-MapReduce). Data files not archived.
	Snowplow ETL: TERMINATING [STEP_FAILURE] ~ elapsed time n/a [2017-10-10 07:17:05 +0000 - ]
	 - 1. Elasticity S3DistCp Step: Raw s3://snowplowdataevents2/ -> Raw Staging S3: FAILED ~ 00:04:42 [2017-10-10 07:17:07 +0000 - 2017-10-10 07:21:49 +0000]
	 - 2. Elasticity S3DistCp Step: Shredded S3 -> Shredded Archive S3: CANCELLED ~ elapsed time n/a [ - ]
	 - 3. Elasticity S3DistCp Step: Enriched S3 -> Enriched Archive S3: CANCELLED ~ elapsed time n/a [ - ]
	 - 4. Elasticity Custom Jar Step: Load AWS Redshift enriched events storage Storage Target: CANCELLED ~ elapsed time n/a [ - ]
	 - 5. Elasticity S3DistCp Step: Raw Staging S3 -> Raw Archive S3: CANCELLED ~ elapsed time n/a [ - ]
	 - 6. Elasticity S3DistCp Step: Shredded HDFS _SUCCESS -> S3: CANCELLED ~ elapsed time n/a [ - ]
	 - 7. Elasticity S3DistCp Step: Shredded HDFS -> S3: CANCELLED ~ elapsed time n/a [ - ]
	 - 8. Elasticity Spark Step: Shred Enriched Events: CANCELLED ~ elapsed time n/a [ - ]
	 - 9. Elasticity Custom Jar Step: Empty Raw HDFS: CANCELLED ~ elapsed time n/a [ - ]
	 - 10. Elasticity S3DistCp Step: Enriched HDFS _SUCCESS -> S3: CANCELLED ~ elapsed time n/a [ - ]
	 - 11. Elasticity S3DistCp Step: Enriched HDFS -> S3: CANCELLED ~ elapsed time n/a [ - ]
	 - 12. Elasticity Spark Step: Enrich Raw Events: CANCELLED ~ elapsed time n/a [ - ]
	 - 13. Elasticity S3DistCp Step: Raw S3 -> Raw HDFS: CANCELLED ~ elapsed time n/a [ - ]):
		uri:classloader:/emr-etl-runner/lib/snowplow-emr-etl-runner/emr_job.rb:586:in `run'
		uri:classloader:/gems/contracts-0.11.0/lib/contracts/method_reference.rb:43:in `send_to'
		uri:classloader:/gems/contracts-0.11.0/lib/contracts/call_with.rb:76:in `call_with'
		uri:classloader:/gems/contracts-0.11.0/lib/contracts/method_handler.rb:138:in `block in redefine_method'
		uri:classloader:/emr-etl-runner/lib/snowplow-emr-etl-runner/runner.rb:103:in `run'
		uri:classloader:/gems/contracts-0.11.0/lib/contracts/method_reference.rb:43:in `send_to'
		uri:classloader:/gems/contracts-0.11.0/lib/contracts/call_with.rb:76:in `call_with'
		uri:classloader:/gems/contracts-0.11.0/lib/contracts/method_handler.rb:138:in `block in redefine_method'
		uri:classloader:/emr-etl-runner/bin/snowplow-emr-etl-runner:41:in `<main>'
		org/jruby/RubyKernel.java:979:in `load'
		uri:classloader:/META-INF/main.rb:1:in `<main>'
		org/jruby/RubyKernel.java:961:in `require'
		uri:classloader:/META-INF/main.rb:1:in `(root)'
		uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/rubygems/core_ext/kernel_require.rb:1:in `<main>'

Please help me fix this issue.

Hi @shashi - you seem to be encountering the exact same issues as @sandesh at the exact same time.

Can you please stop cross-posting? It creates noise and wastes our community’s time and attention.

If you do it again, both of your accounts will be banned.

Locking this topic; you can discuss the problem further on the other topic, which is already getting help from one of our data engineers.