update proportion of memory
The default value of `spark.storage.memoryFraction` has been changed from 0.66 to 0.6, so 60% of the memory is now used to cache RDDs while 40% is available for task execution.

Author: Chen Chao <[email protected]>

Closes #66 from CrazyJvm/master and squashes the following commits:

0f84d86 [Chen Chao] update proportion of memory
CrazyJvm authored and rxin committed Mar 3, 2014
1 parent 369aad6 commit 9d225a9
Showing 1 changed file with 2 additions and 2 deletions.
docs/tuning.md: 2 additions & 2 deletions
@@ -163,8 +163,8 @@ their work directories), *not* on your driver program.
 **Cache Size Tuning**
 
 One important configuration parameter for GC is the amount of memory that should be used for caching RDDs.
-By default, Spark uses 66% of the configured executor memory (`spark.executor.memory` or `SPARK_MEM`) to
-cache RDDs. This means that 33% of memory is available for any objects created during task execution.
+By default, Spark uses 60% of the configured executor memory (`spark.executor.memory` or `SPARK_MEM`) to
+cache RDDs. This means that 40% of memory is available for any objects created during task execution.
 
 In case your tasks slow down and you find that your JVM is garbage-collecting frequently or running out of
 memory, lowering this value will help reduce the memory consumption. To change this to say 50%, you can call
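For reference, a minimal Scala sketch of how a user might lower this fraction to 50% via the `SparkConf` API of that era. The app name and the `0.5` value are illustrative, and the exact call the truncated doc sentence goes on to recommend is not reproduced here:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Illustrative: drop the RDD cache share from the 0.6 default to 0.5,
// leaving the other half of executor memory for objects created during
// task execution.
val conf = new SparkConf()
  .setAppName("cache-size-tuning") // hypothetical app name
  .set("spark.storage.memoryFraction", "0.5")
val sc = new SparkContext(conf)
```

Lowering the fraction trades cache capacity for task working memory, which matches the GC guidance in the paragraph above.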
