

Required executor memory (1024), overhead (384 MB), and PySpark memory (0 MB) is above the max threshold (1024 MB) of this cluster!

Getting this?

java.lang.IllegalArgumentException: Required executor memory (1024), overhead (384 MB), and PySpark memory (0 MB) is above the max threshold (1024 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
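The arithmetic behind the error is simple: YARN must hand Spark a container big enough for the executor heap plus the memory overhead (plus any PySpark memory), and that total exceeds the largest container this cluster will allocate. A quick sketch of the check, using the numbers from the error above (the overhead default assumed here is max(384 MB, 10% of executor memory), which matches common Spark defaults):

```python
# Sketch of the check behind the error above (all values in MB).
executor_mb = 1024                            # spark.executor.memory
overhead_mb = max(384, int(executor_mb * 0.10))  # assumed overhead default
pyspark_mb = 0                                # PySpark memory (unset)

required_mb = executor_mb + overhead_mb + pyspark_mb
max_allocation_mb = 1024                      # yarn.scheduler.maximum-allocation-mb

print(required_mb)                     # 1408
print(required_mb > max_allocation_mb) # True -> the IllegalArgumentException
```

So a 1024 MB executor actually needs a 1408 MB container, which a 1024 MB ceiling can never satisfy.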

Either increase the container memory:

yarn.nodemanager.resource.memory-mb from 1GB to 2GB

Or reduce the maximum container memory:

yarn.scheduler.maximum-allocation-mb from 1GB to 512MB
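Both alternatives are edits to yarn-site.xml. A sketch of what they look like there, using the values from this post (2048 MB = 2 GB):

```xml
<!-- yarn-site.xml: the two alternatives described above -->

<!-- Option 1: raise the NodeManager's total container memory to 2 GB -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>

<!-- Option 2: lower the per-container maximum to 512 MB -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>512</value>
</property>
```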

in the YARN configuration settings.  However, you may get this error:

Service ResourceManager failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Invalid resource scheduler memory allocation configuration: yarn.scheduler.minimum-allocation-mb=1024, yarn.scheduler.maximum-allocation-mb=512.  Both values must be greater than or equal to 0 and the maximum allocation value must be greater than or equal to the minimum allocation value.
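The ResourceManager refuses to start because the minimum allocation was left at its 1024 MB default while the maximum was just lowered to 512 MB. A sketch of the sanity check implied by that error:

```python
# Sketch of the ResourceManager startup check implied by the error above (MB).
min_alloc_mb = 1024  # yarn.scheduler.minimum-allocation-mb (still the default)
max_alloc_mb = 512   # yarn.scheduler.maximum-allocation-mb (just lowered)

valid = min_alloc_mb >= 0 and max_alloc_mb >= 0 and max_alloc_mb >= min_alloc_mb
print(valid)  # False -> YarnRuntimeException on startup
```

In other words, lowering the maximum below the minimum is never accepted; the minimum has to come down with it.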

So instead we can lower the minimum container memory to half the new maximum:

yarn.scheduler.minimum-allocation-mb from 1GB to 256MB

But that didn't work either.  Ultimately, setting this to 2GB worked:

yarn.scheduler.maximum-allocation-mb
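For reference, the setting that finally cleared both errors, as a yarn-site.xml fragment (2048 MB = 2 GB, comfortably above the 1408 MB the executor actually needs; this assumes the minimum allocation stays at or below 2048 MB):

```xml
<!-- yarn-site.xml: the value that ultimately worked -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>
```

Scheduler allocation settings typically only take effect after restarting the ResourceManager.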

Cheers,
TK

  Copyright © 2003 - 2013 Tom Kacperski (microdevsys.com). All rights reserved.

This work is licensed under a Creative Commons Attribution 3.0 Unported License