Both R89 and R88 are taking forever to enrich


#1

Hi all,

Enrich is taking longer than 2 hours each to process 11 files totaling approximately 77 MB.

R88 Hadoop, us-east-1, c4.large master + 1 c4.xlarge core: 1 minute (last good run)
R88 Hadoop, us-east-1, c4.large master + 2 c4.xlarge core: 2 hours, 4 minutes
R88 Hadoop, us-east-1, c4.large master + 4 c4.xlarge core: 8 hours, 3 minutes
R89 Scala, us-east-1, c4.large master + 2 c4.xlarge core: 2 hours, 13 minutes

The instances are running cold: load averages, memory usage, and logs all look as if nothing is happening. Any ideas on profiling the code to get to the bottom of this?
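One low-tech way to profile a JVM that looks idle (a generic sketch, not Snowplow-specific): take a few thread dumps of the enrich process with `jstack` on a core node and tally the thread states. If most threads sit in WAITING or BLOCKED on the same stacks across dumps, the job is stuck on I/O or a lock rather than doing CPU work. The `summarize_states` helper below is illustrative:

```python
import re
from collections import Counter

def summarize_states(dump_text):
    """Count JVM thread states in a jstack thread dump."""
    # jstack emits lines like: java.lang.Thread.State: WAITING (parking)
    states = re.findall(r"java\.lang\.Thread\.State: ([A-Z_]+)", dump_text)
    return Counter(states)

# Tiny fabricated dump for illustration; real dumps would come from
# something like `sudo -u hadoop jstack <pid>` against the enrich JVM:
sample = """
"main" #1 prio=5
   java.lang.Thread.State: WAITING (parking)
"worker-1" #12 prio=5
   java.lang.Thread.State: BLOCKED (on object monitor)
"worker-2" #13 prio=5
   java.lang.Thread.State: BLOCKED (on object monitor)
"""
print(summarize_states(sample))  # e.g. Counter({'BLOCKED': 2, 'WAITING': 1})
```

Repeating the dump two or three times ten seconds apart and comparing the counts (and the stacks themselves) usually shows whether the process is genuinely stalled.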


#2

Hi @dashirov - can you share the step breakdown? Where is the job getting stuck, and what enrichments/deduplication do you have configured?


#3


INFO redirectOutput to /mnt/var/log/hadoop/steps/s-1CA2HOTB5WU0H/stdout
INFO redirectError to /mnt/var/log/hadoop/steps/s-1CA2HOTB5WU0H/stderr
INFO Working dir /mnt/var/lib/hadoop/steps/s-1CA2HOTB5WU0H
INFO ProcessRunner started child process 8306 :
hadoop    8306  3994  0 04:25 ?        00:00:00 bash /usr/lib/hadoop/bin/hadoop jar /usr/share/aws/emr/s3-dist-cp/lib/s3-dist-cp.jar --src s3://ga-snowplow-production/snowplow-raw/processing/ --dest hdfs:///local/snowplow/raw-events/ --s3Endpoint s3.amazonaws.com
2017-06-20T04:25:05.385Z INFO HadoopJarStepRunner.Runner: startRun() called for s-1CA2HOTB5WU0H Child Pid: 8306
INFO Synchronously wait child process to complete : hadoop jar /usr/share/aws/emr/s3-dist-cp/lib/s3...
INFO Process still running
INFO Process still running
INFO Process still running
[... "INFO Process still running" repeated many more times ...]

#4

Other runs:

Elasticity S3DistCp Step: Raw S3 Staging -> S3 Archive		1 minute	
Elasticity S3DistCp Step: Shredded HDFS -> S3		1 minute	
Elasticity Spark Step: Shred Enriched Events		7 minutes	
Elasticity S3DistCp Step: Enriched HDFS _SUCCESS -> S3		54 seconds	
Elasticity S3DistCp Step: Enriched HDFS -> S3		1 minute	
Elasticity Spark Step: Enrich Raw Events		2 hours, 13 minutes	
Elasticity S3DistCp Step: Raw S3 -> HDFS		1 minute

#5

Enrichments (the IP lookups enrichment is currently disabled, but it was enabled when the stats above were collected):
./enrichments/cookie_extractor_config.json
./enrichments/ua_parser_config.json
./enrichments/ip_lookups.json
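For reference, each of those files is a self-describing JSON of roughly the shape below (a sketch of the standard enrichment-config format; check your release's documentation for the exact schema version and parameters). Setting `enabled` to `false` is how an enrichment such as IP lookups is turned off:

```json
{
  "schema": "iglu:com.snowplowanalytics.snowplow/ua_parser_config/jsonschema/1-0-0",
  "data": {
    "name": "ua_parser_config",
    "vendor": "com.snowplowanalytics.snowplow",
    "enabled": true,
    "parameters": {}
  }
}
```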


#6

@alex I work for dashirov. We are now stuck at this stage:

2017-06-20 14:53:14,838 INFO org.apache.hadoop.mapreduce.Job (main): Job job_1497970223541_0001 running in uber mode : false
2017-06-20 14:53:14,839 INFO org.apache.hadoop.mapreduce.Job (main): map 0% reduce 0%

Normally it takes at most 30 seconds to get past this stage. Previous (good) log:

2017-06-17 19:13:43,822 INFO org.apache.hadoop.mapreduce.Job (main): map 0% reduce 0%
2017-06-17 19:13:48,865 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2017-06-17 19:13:57,910 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 29%
2017-06-17 19:13:58,916 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 43%
2017-06-17 19:13:59,922 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 100%
2017-06-17 19:14:00,934 INFO org.apache.hadoop.mapreduce.Job (main): Job job_1497726710304_0001 completed successfully

We terminated the job this morning and tried running it again. Last night it was stuck at this stage for over 8 hours without moving. Again, these are really small files in processing: maybe 80-100 MB across only about 88 files. We also followed the troubleshooting docs and cleaned out the enriched-good and shredded-good folders, but the result is the same.


#7

Further, the following container log on the EMR core instance repeats its last line perpetually:
2017-06-20 16:06:34,895 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1497974621025_0001_000001
2017-06-20 16:06:35,464 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2017-06-20 16:06:35,464 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1497974621025 } attemptId: 1 } keyId: -135228129)
2017-06-20 16:06:35,698 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
2017-06-20 16:06:35,699 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config org.apache.hadoop.mapred.DirectFileOutputCommitter
2017-06-20 16:06:35,740 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1
2017-06-20 16:06:36,252 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.DirectFileOutputCommitter
2017-06-20 16:06:36,382 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2017-06-20 16:06:36,383 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2017-06-20 16:06:36,384 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2017-06-20 16:06:36,385 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2017-06-20 16:06:36,385 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2017-06-20 16:06:36,390 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2017-06-20 16:06:36,391 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2017-06-20 16:06:36,392 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2017-06-20 16:06:36,428 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://ip-172-30-1-230.ec2.internal:8020]
2017-06-20 16:06:36,452 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://ip-172-30-1-230.ec2.internal:8020]
2017-06-20 16:06:36,482 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://ip-172-30-1-230.ec2.internal:8020]
2017-06-20 16:06:36,503 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms after creating 488, Expected: 504
2017-06-20 16:06:36,503 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Explicitly setting permissions to : 504, rwxrwx---
2017-06-20 16:06:36,508 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Emitting job history data to the timeline server is not enabled
2017-06-20 16:06:36,552 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2017-06-20 16:06:36,635 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-06-20 16:06:36,641 INFO [main] com.amazon.ws.emr.hadoop.metrics2.sink.cloudwatch.CloudWatchSink: Initializing the CloudWatchSink for metrics.
2017-06-20 16:06:36,699 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink cloudwatch started
2017-06-20 16:06:36,772 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 300 second(s).
2017-06-20 16:06:36,772 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system started
2017-06-20 16:06:36,780 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1497974621025_0001 to jobTokenSecretManager
2017-06-20 16:06:36,876 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1497974621025_0001 because: not enabled; too many reduces;
2017-06-20 16:06:36,887 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1497974621025_0001 = 36350. Number of splits = 1
2017-06-20 16:06:36,889 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1497974621025_0001 = 10
2017-06-20 16:06:36,889 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1497974621025_0001Job Transitioned from NEW to INITED
2017-06-20 16:06:36,890 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1497974621025_0001.
2017-06-20 16:06:36,911 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2017-06-20 16:06:36,916 INFO [Socket Reader #1 for port 35132] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 35132
2017-06-20 16:06:36,932 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2017-06-20 16:06:36,937 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2017-06-20 16:06:36,938 INFO [IPC Server listener on 35132] org.apache.hadoop.ipc.Server: IPC Server listener on 35132: starting
2017-06-20 16:06:36,938 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at ip-172-30-4-165.ec2.internal/172.30.4.165:35132
2017-06-20 16:06:37,009 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-06-20 16:06:37,016 INFO [main] org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-06-20 16:06:37,020 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.mapreduce is not defined
2017-06-20 16:06:37,025 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-06-20 16:06:37,028 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce
2017-06-20 16:06:37,029 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static
2017-06-20 16:06:37,031 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/*
2017-06-20 16:06:37,031 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2017-06-20 16:06:37,387 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2017-06-20 16:06:37,388 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 39405
2017-06-20 16:06:37,389 INFO [main] org.mortbay.log: jetty-6.1.26-emr
2017-06-20 16:06:37,416 INFO [main] org.mortbay.log: Extract jar:file:/usr/lib/hadoop-yarn/hadoop-yarn-common-2.7.3-amzn-2.jar!/webapps/mapreduce to /mnt/yarn/usercache/hadoop/appcache/application_1497974621025_0001/container_1497974621025_0001_01_000001/tmp/Jetty_0_0_0_0_39405_mapreduce____.namnp6/webapp
2017-06-20 16:06:38,560 INFO [main] org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:39405
2017-06-20 16:06:38,562 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app mapreduce started at 39405
2017-06-20 16:06:38,565 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator: JOB_CREATE job_1497974621025_0001
2017-06-20 16:06:38,568 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2017-06-20 16:06:38,570 INFO [Socket Reader #1 for port 34775] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 34775
2017-06-20 16:06:38,574 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2017-06-20 16:06:38,574 INFO [IPC Server listener on 34775] org.apache.hadoop.ipc.Server: IPC Server listener on 34775: starting
2017-06-20 16:06:38,607 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2017-06-20 16:06:38,607 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2017-06-20 16:06:38,607 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2017-06-20 16:06:38,664 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at ip-172-30-1-230.ec2.internal/172.30.1.230:8030
2017-06-20 16:06:38,741 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: maxContainerCapability: <memory:1792, vCores:128>
2017-06-20 16:06:38,741 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: queue: default
2017-06-20 16:06:38,745 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500
2017-06-20 16:06:38,745 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: The thread pool initial size is 10
2017-06-20 16:06:38,747 INFO [main] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
2017-06-20 16:06:38,753 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1497974621025_0001Job Transitioned from INITED to SETUP
2017-06-20 16:06:38,757 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP
2017-06-20 16:06:38,764 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1497974621025_0001Job Transitioned from SETUP to RUNNING
2017-06-20 16:06:38,780 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.util.RackResolver: Resolved ip-172-30-4-165.ec2.internal to /default-rack
2017-06-20 16:06:38,787 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1497974621025_0001_m_000000 Task Transitioned from NEW to SCHEDULED
2017-06-20 16:06:38,787 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1497974621025_0001_r_000000 Task Transitioned from NEW to SCHEDULED
2017-06-20 16:06:38,787 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1497974621025_0001_r_000001 Task Transitioned from NEW to SCHEDULED
2017-06-20 16:06:38,788 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1497974621025_0001_r_000002 Task Transitioned from NEW to SCHEDULED
2017-06-20 16:06:38,788 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1497974621025_0001_r_000003 Task Transitioned from NEW to SCHEDULED
2017-06-20 16:06:38,788 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1497974621025_0001_r_000004 Task Transitioned from NEW to SCHEDULED
2017-06-20 16:06:38,788 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1497974621025_0001_r_000005 Task Transitioned from NEW to SCHEDULED
2017-06-20 16:06:38,789 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1497974621025_0001_r_000006 Task Transitioned from NEW to SCHEDULED
2017-06-20 16:06:38,789 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1497974621025_0001_r_000007 Task Transitioned from NEW to SCHEDULED
2017-06-20 16:06:38,789 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1497974621025_0001_r_000008 Task Transitioned from NEW to SCHEDULED
2017-06-20 16:06:38,789 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1497974621025_0001_r_000009 Task Transitioned from NEW to SCHEDULED
2017-06-20 16:06:38,790 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1497974621025_0001_m_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2017-06-20 16:06:38,791 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1497974621025_0001_r_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2017-06-20 16:06:38,791 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1497974621025_0001_r_000001_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2017-06-20 16:06:38,791 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1497974621025_0001_r_000002_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2017-06-20 16:06:38,791 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1497974621025_0001_r_000003_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2017-06-20 16:06:38,791 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1497974621025_0001_r_000004_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2017-06-20 16:06:38,791 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1497974621025_0001_r_000005_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2017-06-20 16:06:38,791 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1497974621025_0001_r_000006_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2017-06-20 16:06:38,791 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1497974621025_0001_r_000007_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2017-06-20 16:06:38,792 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1497974621025_0001_r_000008_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2017-06-20 16:06:38,792 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1497974621025_0001_r_000009_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2017-06-20 16:06:38,794 INFO [Thread-83] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceRequest:<memory:896, vCores:1>
2017-06-20 16:06:38,805 INFO [Thread-83] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: reduceResourceRequest:<memory:1792, vCores:1>
2017-06-20 16:06:38,838 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1497974621025_0001, File: hdfs://ip-172-30-1-230.ec2.internal:8020/tmp/hadoop-yarn/staging/hadoop/.staging/job_1497974621025_0001/job_1497974621025_0001_1.jhist
2017-06-20 16:06:39,748 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:10 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
2017-06-20 16:06:39,784 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1497974621025_0001: ask=3 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:0, vCores:0> knownNMs=1
2017-06-20 16:06:39,785 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Going to preempt 1 due to lack of space for maps
2017-06-20 16:06:40,789 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Going to preempt 1 due to lack of space for maps


#8

Quick update: dashirov changed the core nodes to m4.xlarge, and that got us past the S3 -> HDFS stage, but we are still seeing very slow times in the enrichment process. A sample from a YARN container on one of the core machines is below. It shouldn't take 5 minutes just to process a 3 MB file, should it? Confirmed that IP lookups is disabled, but we are still doing UA parsing and cookie extraction.

CPU and I/O are also really quiet on one of the core machines:

05:40:01 PM CPU %user %nice %system %iowait %steal %idle
05:50:01 PM all 9.20 0.00 0.29 0.08 0.01 90.42
06:00:01 PM all 0.93 0.00 0.07 0.00 0.01 98.99
06:10:01 PM all 1.61 0.23 0.83 0.33 0.02 96.98
06:20:01 PM all 0.60 0.00 0.07 0.00 0.01 99.33
06:30:01 PM all 1.17 0.00 0.10 0.00 0.01 98.72
06:40:01 PM all 0.96 0.00 0.09 0.00 0.01 98.94
06:50:01 PM all 1.80 0.00 0.09 0.00 0.01 98.10
07:00:01 PM all 0.56 0.00 0.08 0.00 0.01 99.35
07:10:01 PM all 0.49 0.00 0.07 0.00 0.01 99.43
07:20:01 PM all 0.45 0.00 0.06 0.00 0.01 99.49
07:30:01 PM all 0.39 0.00 0.08 0.00 0.01 99.52
07:40:01 PM all 0.54 0.00 0.08 0.00 0.01 99.37
07:50:01 PM all 0.34 0.00 0.06 0.00 0.01 99.59
Average: all 1.46 0.02 0.15 0.03 0.01 98.33

17/06/20 19:03:12 INFO TaskSetManager: Starting task 37.0 in stage 0.0 (TID 37, ip-172-30-12-212.ec2.internal, executor 3, partition 37, NODE_LOCAL, 6226 bytes)
17/06/20 19:03:12 INFO TaskSetManager: Finished task 26.0 in stage 0.0 (TID 26) in 515382 ms on ip-172-30-12-212.ec2.internal (executor 3) (18/44)
17/06/20 19:03:12 INFO BlockManagerInfo: Added rdd_3_33 in memory on ip-172-30-27-152.ec2.internal:44499 (size: 23.7 MB, free: 2.7 GB)
17/06/20 19:03:12 INFO TaskSetManager: Finished task 33.0 in stage 0.0 (TID 35) in 5282 ms on ip-172-30-27-152.ec2.internal (executor 5) (19/44)
17/06/20 19:03:14 INFO BlockManagerInfo: Added rdd_3_27 in memory on ip-172-30-12-212.ec2.internal:35958 (size: 34.5 MB, free: 2.6 GB)
17/06/20 19:03:15 INFO TaskSetManager: Starting task 35.0 in stage 0.0 (TID 38, ip-172-30-27-152.ec2.internal, executor 5, partition 35, RACK_LOCAL, 6226 bytes)
17/06/20 19:03:16 INFO TaskSetManager: Starting task 40.0 in stage 0.0 (TID 39, ip-172-30-12-212.ec2.internal, executor 3, partition 40, NODE_LOCAL, 6226 bytes)
17/06/20 19:03:16 INFO TaskSetManager: Finished task 27.0 in stage 0.0 (TID 29) in 262365 ms on ip-172-30-12-212.ec2.internal (executor 3) (20/44)
17/06/20 19:03:16 INFO YarnAllocator: Driver requested a total number of 6 executor(s).
17/06/20 19:03:16 INFO YarnAllocator: Canceling requests for 1 executor container(s) to have a new desired total 6 executors.
17/06/20 19:03:16 INFO BlockManagerInfo: Added rdd_3_37 in memory on ip-172-30-12-212.ec2.internal:35958 (size: 25.4 MB, free: 2.6 GB)
17/06/20 19:03:17 INFO TaskSetManager: Starting task 43.0 in stage 0.0 (TID 40, ip-172-30-12-212.ec2.internal, executor 3, partition 43, NODE_LOCAL, 6226 bytes)
17/06/20 19:03:17 INFO TaskSetManager: Finished task 37.0 in stage 0.0 (TID 37) in 4780 ms on ip-172-30-12-212.ec2.internal (executor 3) (21/44)
17/06/20 19:03:18 INFO BlockManagerInfo: Added rdd_3_22 in memory on ip-172-30-27-152.ec2.internal:34460 (size: 38.9 MB, free: 2.6 GB)
17/06/20 19:03:18 INFO TaskSetManager: Finished task 22.0 in stage 0.0 (TID 30) in 260966 ms on ip-172-30-27-152.ec2.internal (executor 4) (22/44)
17/06/20 19:03:20 INFO TaskSetManager: Starting task 36.0 in stage 0.0 (TID 41, ip-172-30-27-152.ec2.internal, executor 4, partition 36, RACK_LOCAL, 6226 bytes)
17/06/20 19:03:21 INFO BlockManagerInfo: Added rdd_3_43 in memory on ip-172-30-12-212.ec2.internal:35958 (size: 22.9 MB, free: 2.5 GB)
17/06/20 19:03:22 INFO TaskSetManager: Starting task 39.0 in stage 0.0 (TID 42, ip-172-30-12-212.ec2.internal, executor 3, partition 39, RACK_LOCAL, 6226 bytes)
17/06/20 19:03:22 INFO TaskSetManager: Finished task 43.0 in stage 0.0 (TID 40) in 4934 ms on ip-172-30-12-212.ec2.internal (executor 3) (23/44)
17/06/20 19:03:24 INFO BlockManagerInfo: Added rdd_3_25 in memory on ip-172-30-27-152.ec2.internal:34460 (size: 38.2 MB, free: 2.6 GB)
17/06/20 19:03:24 INFO TaskSetManager: Starting task 42.0 in stage 0.0 (TID 43, ip-172-30-27-152.ec2.internal, executor 4, partition 42, RACK_LOCAL, 6226 bytes)
17/06/20 19:03:24 INFO TaskSetManager: Finished task 25.0 in stage 0.0 (TID 33) in 260559 ms on ip-172-30-27-152.ec2.internal (executor 4) (24/44)
17/06/20 19:03:24 INFO YarnAllocator: Driver requested a total number of 5 executor(s).
17/06/20 19:03:24 INFO YarnAllocator: Canceling requests for 1 executor container(s) to have a new desired total 5 executors.
17/06/20 19:03:25 INFO BlockManagerInfo: Added rdd_3_39 in memory on ip-172-30-12-212.ec2.internal:35958 (size: 23.7 MB, free: 2.5 GB)
17/06/20 19:03:25 INFO TaskSetManager: Finished task 39.0 in stage 0.0 (TID 42) in 3941 ms on ip-172-30-12-212.ec2.internal (executor 3) (25/44)
17/06/20 19:03:28 INFO BlockManagerInfo: Added rdd_3_42 in memory on ip-172-30-27-152.ec2.internal:34460 (size: 26.3 MB, free: 2.5 GB)
17/06/20 19:03:28 INFO TaskSetManager: Finished task 42.0 in stage 0.0 (TID 43) in 4462 ms on ip-172-30-27-152.ec2.internal (executor 4) (26/44)
17/06/20 19:07:31 INFO BlockManagerInfo: Added rdd_3_20 in memory on ip-172-30-12-212.ec2.internal:35958 (size: 38.4 MB, free: 2.5 GB)
17/06/20 19:07:31 INFO TaskSetManager: Finished task 20.0 in stage 0.0 (TID 24) in 1278502 ms on ip-172-30-12-212.ec2.internal (executor 3) (27/44)
17/06/20 19:07:33 INFO BlockManagerInfo: Added rdd_3_18 in memory on ip-172-30-12-212.ec2.internal:37372 (size: 39.7 MB, free: 2.7 GB)
17/06/20 19:07:34 INFO TaskSetManager: Finished task 18.0 in stage 0.0 (TID 21) in 1533331 ms on ip-172-30-12-212.ec2.internal (executor 2) (28/44)
17/06/20 19:07:34 INFO BlockManagerInfo: Added rdd_3_35 in memory on ip-172-30-27-152.ec2.internal:44499 (size: 23.4 MB, free: 2.7 GB)
17/06/20 19:07:34 INFO YarnAllocator: Driver requested a total number of 4 executor(s).
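The skew in the log above is striking: most tasks finish in a few seconds, while a handful take 4-25 minutes (515382 ms, 260966 ms, 1278502 ms, 1533331 ms). A quick way to quantify that from a Spark driver log is to parse the `Finished task ... in N ms` lines and flag outliers; this is a generic log-parsing sketch, not part of any Snowplow tooling:

```python
import re

# Matches Spark TaskSetManager completion lines.
FINISHED = re.compile(r"Finished task (\S+) in stage \S+ \(TID \d+\) in (\d+) ms")

def task_durations(log_text):
    """Extract (task_id, duration_ms) pairs from TaskSetManager lines."""
    return [(task, int(ms)) for task, ms in FINISHED.findall(log_text)]

def outliers(durations, factor=10):
    """Tasks slower than `factor` times the median duration."""
    times = sorted(ms for _, ms in durations)
    median = times[len(times) // 2]
    return [(task, ms) for task, ms in durations if ms > factor * median]

log = """
17/06/20 19:03:12 INFO TaskSetManager: Finished task 26.0 in stage 0.0 (TID 26) in 515382 ms on host
17/06/20 19:03:12 INFO TaskSetManager: Finished task 33.0 in stage 0.0 (TID 35) in 5282 ms on host
17/06/20 19:03:17 INFO TaskSetManager: Finished task 37.0 in stage 0.0 (TID 37) in 4780 ms on host
"""
print(outliers(task_durations(log)))  # the 515382 ms task stands out
```

Seeing which partitions (and hence which input files) produce the outlier tasks can show whether the slowness is data-dependent or node-dependent.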


#9

S3DistCp consistently hanging when processing only 80-100 MB across 88 files is not normal. Given that this component is maintained by AWS themselves, you should be able to ask your Support contact at AWS to investigate what is going on. I suspect that if you solve that, it should also solve the downstream issues with the Snowplow components.

Interested to hear what the problem was…


#10

Will contact our Amazon rep now and keep you posted. Thanks,


#11

By the way, we solved the S3DistCp step by moving to a bigger machine. Now the problem is enrichment taking a long time. Is that still Amazon, or is there any advice you can give us on what to look for?


#12

Yes, but why did you need to move to a bigger machine for S3DistCp, and why did you have runs where the Shred took 7 minutes but Enrich took 2 hours 13 minutes? Those aren't normal Snowplow behaviors - let us know what your AWS rep says.


#13

didn’t think so. thanks,


#14

We’ve resolved this. Ultimately the root cause was tracked down to an accidental reconfiguration of our resolvers. As a result, our internal Iglu servers could not be found, and the enricher spent a tremendous amount of time seeking, and failing to find, the JSON Schemas for the custom unstructured events we use.
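For anyone hitting the same thing: a quick sanity check after any resolver change is to confirm that each configured registry can actually serve a schema you depend on. For a static registry, an Iglu URI maps onto a path under the registry root; the helper below sketches that mapping (the URL layout shown is the static-registry one, as used by Iglu Central; an Iglu Server exposes the equivalent behind its API):

```python
def iglu_uri_to_url(registry_root, iglu_uri):
    """Map iglu:<vendor>/<name>/<format>/<version> to a static-registry URL."""
    assert iglu_uri.startswith("iglu:"), "not an Iglu URI"
    vendor, name, fmt, version = iglu_uri[len("iglu:"):].split("/")
    return f"{registry_root.rstrip('/')}/schemas/{vendor}/{name}/{fmt}/{version}"

url = iglu_uri_to_url(
    "http://iglucentral.com",
    "iglu:com.snowplowanalytics.snowplow/ua_parser_config/jsonschema/1-0-0",
)
print(url)
```

Fetching each such URL (e.g. with `urllib.request`) from inside the VPC before kicking off a run would catch an unreachable internal registry early, instead of letting the enricher time out on every event.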


#15

@alex now that things are back to normal, we need to reprocess logs from 6/14 through today without creating duplicates. Some structured events got into Redshift fine, of course; it's the unstructured events, like dashirov mentioned, that didn't make it and got moved into the bad folder. Any advice?

So the ETL not being able to contact the Iglu server is what caused the very slow performance.


#16

Thanks for the update @mjensen. Please create a new thread for your new questions about handling bad rows!


#17

@alex thanks, posted a new thread. we appreciate all the help.


#18

I think this will forever remain a mystery:

  1. This step is not related to the resolver misconfiguration.
  2. Dependency-wise, it is about as dumbed down as one could get.
  3. It relies solely on the AWS-managed S3 service and the Hadoop/HDFS of a recently provisioned, transient cluster that had performed no other tasks or steps that could have tarnished it.

The only thing I can think of is insufficient resources with the m4.xlarge configuration, but the experience of other community members suggests the opposite.


#19

I agree @dashirov. My only guess is some kind of “noisy neighbour” problem with the VMs…