com.here.platform.data.processing.driver.Executor

sealed trait Executor extends InputLayers with InputPartitioner with OutputLayers with OutputPartitioner

Base trait for all executors.

Executors work at the RDD level: RDDs are passed to, and returned from, the functions that each executor implements. It is therefore important to define a common policy for the persistence of these RDDs. Not respecting this policy risks Spark throwing an exception because some RDDs may end up persisted twice with different storage levels. The policy also applies to classes other than executors, such as com.here.platform.data.processing.compiler.NonIncrementalCompiler, com.here.platform.data.processing.compiler.DepCompilerBase and derived classes.

Note

The established policy is as follows:

  1. RDDs passed to every execute function are guaranteed to be efficiently reusable multiple times, without the implementations having to persist them.
  2. Implementations shall not persist passed RDDs: these are either already persisted by the processing library or otherwise guaranteed to be efficiently reusable multiple times.
  3. Implementations shall not require(), assert() or otherwise check that passed RDDs are persisted, even though it is guaranteed that they will be.
  4. RDDs returned by every execute function do not have to be persisted; implementations may persist them if it is useful to do so.
  5. The processing library may persist returned RDDs if they are not already persisted.
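
For illustration, here is a minimal sketch of an execute-style function that follows this policy. It uses plain Spark types only; the (String, Array[Byte]) element type, the function name and the decision to persist the result are assumptions made for this example and are not part of the library's API.

    import org.apache.spark.rdd.RDD
    import org.apache.spark.storage.StorageLevel

    // Hypothetical execute-style function; the element type is a placeholder,
    // not a type defined by the processing library.
    def execute(input: RDD[(String, Array[Byte])]): RDD[(String, Array[Byte])] = {
      // Reuse the passed RDD as often as needed without persisting it here,
      // and without asserting that it is already persisted.
      val total    = input.count()                                      // first use
      val nonEmpty = input.filter { case (_, bytes) => bytes.nonEmpty } // second use, evaluated lazily

      // Returned RDDs do not have to be persisted. Persisting here is optional;
      // if the result is returned unpersisted, the processing library may
      // persist it once it is returned.
      if (total > 0L) nonEmpty.persist(StorageLevel.MEMORY_AND_DISK)
      nonEmpty
    }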

Linear Supertypes
OutputPartitioner, OutputLayers, InputPartitioner, InputLayers, AnyRef, Any

Abstract Value Members

  1. abstract def id: Id

    Unique identifier of the com.here.platform.data.processing.driver.Executor.

  2. abstract def inLayers: Map[Id, Set[Id]]

    Represents the layers of the input catalogs that you should query and provide to the compiler. These layers are grouped by input catalog and identified by catalog ID and layer ID. A sketch showing how the abstract members fit together follows this member list.

    Definition Classes
    InputLayers
  3. abstract def inPartitioner(parallelism: Int): Partitioner[InKey]

    Specifies the partitioner to use when querying the input catalogs.

    parallelism

    The number of partitions the partitioner should partition the catalog into. This should match the parallelism of the Spark RDD containing the input partitions.

    returns

    The input partitioner with the parallelism specified.

    Definition Classes
    InputPartitioner
  4. abstract def outLayers: Set[Id]

    Layers to be produced by the compiler.

    Definition Classes
    OutputLayers
  5. abstract def outPartitioner(parallelism: Int): Partitioner[OutKey]

    Specifies the partitioner to use when querying the output catalog and producing output data.

    parallelism

    The number of partitions the partitioner should partition the catalog into. This should match the parallelism of the Spark RDD containing the output partitions.

    returns

    The output partitioner with the parallelism specified.

    Definition Classes
    OutputPartitioner
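
Taken together, the abstract members above define what an executor reads and writes. The sketch below, referenced from inLayers above, shows only the shape of these overrides. The catalog and layer identifiers are hypothetical, Id construction is shown schematically, and hashPartitioner is a placeholder for whichever Partitioner implementation is actually used; imports for Id, Partitioner, InKey and OutKey from the data processing library are omitted because their package paths are not shown on this page. Note that Executor itself is sealed, so in practice these members are implemented through the classes the library derives from it.

    // Sketch only: all identifiers below are hypothetical.
    trait ExampleExecutorMembers {

      // Unique identifier of this executor (hypothetical value; use the Id
      // factory the library provides).
      def id: Id = Id("example-executor")

      // Query layer "topology" of input catalog "rib" (hypothetical IDs),
      // grouped by catalog ID.
      def inLayers: Map[Id, Set[Id]] =
        Map(Id("rib") -> Set(Id("topology")))

      // Produce layer "compiled-topology" in the output catalog (hypothetical ID).
      def outLayers: Set[Id] = Set(Id("compiled-topology"))

      // Both partitioners must honor the parallelism they are given, so that
      // they match the parallelism of the Spark RDDs holding the input and
      // output partitions.
      def inPartitioner(parallelism: Int): Partitioner[InKey] =
        hashPartitioner[InKey](parallelism)

      def outPartitioner(parallelism: Int): Partitioner[OutKey] =
        hashPartitioner[OutKey](parallelism)

      // Placeholder for a real Partitioner implementation from the library,
      // declared here only to keep the sketch self-contained.
      private def hashPartitioner[K](parallelism: Int): Partitioner[K] = ???
    }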

Concrete Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  6. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  7. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  8. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  9. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  10. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  11. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  12. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  13. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  14. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  15. final val outCatalogId: Id

    Identifier for the output catalog.

    Definition Classes
    OutputLayers
  16. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  17. def toString(): String
    Definition Classes
    AnyRef → Any
  18. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  19. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  20. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
