com.here.platform.data.processing.java.impl.compiler
NonIncrementalCompilerWrapper
Companion object NonIncrementalCompilerWrapper
trait NonIncrementalCompilerWrapper[W <: NonIncrementalCompiler] extends NonIncrementalCompiler with WrapperInputLayers[W] with InputPartitionerWrapper[W] with WrapperOutputLayers[W] with OutputPartitionerWrapper[W]
Inheritance
- NonIncrementalCompilerWrapper
- OutputPartitionerWrapper
- WrapperOutputLayers
- InputPartitionerWrapper
- WrapperInputLayers
- Wrapper
- NonIncrementalCompiler
- OutputPartitioner
- OutputLayers
- InputPartitioner
- InputLayers
- AnyRef
- Any
Abstract Value Members
- abstract def compile(toCompile: InData, parallelism: Int): ToPublish
Compiles partitions and returns actual compiled data, if any.
- toCompile
the complete input metadata for the layers requested. Keys are partitioned as specified in com.here.platform.data.processing.compiler.InputPartitioner.
- parallelism
the parallelism of both the input and the output RDDs. This parameter is normally needed to obtain partitioners from the com.here.platform.data.processing.compiler.InputOptPartitioner and/or com.here.platform.data.processing.compiler.OutputOptPartitioner traits.
- returns
com.here.platform.data.processing.compiler.OutKey is the key of the compiled partition; com.here.platform.data.processing.blobstore.Payload is the output data, if any. The returned keys shall be partitioned as specified in com.here.platform.data.processing.compiler.OutputPartitioner.
- Definition Classes
- NonIncrementalCompiler
- Note
Note and follow the RDD persistence policy described in com.here.platform.data.processing.driver.Executor.
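For illustration, the sketch below shows how a concrete wrapper might implement compile by delegating to the wrapped Java compiler returned by impl. The conversion helpers toJavaInData and fromJavaToPublish are hypothetical names used only to show the shape of the delegation; they are not part of this API, and imports are omitted.

    // Sketch only: delegate compilation to the wrapped Java compiler.
    // toJavaInData and fromJavaToPublish are hypothetical conversion helpers
    // between the Scala and Java representations of the metadata and results.
    override def compile(toCompile: InData, parallelism: Int): ToPublish =
      fromJavaToPublish(impl.compile(toJavaInData(toCompile), parallelism))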
- abstract def impl: W
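The wrapped Java compiler instance that this wrapper delegates to.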
- Definition Classes
- Wrapper
Concrete Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(o: Any): Boolean
- Definition Classes
- Wrapper → AnyRef → Any
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- def hashCode(): Int
- Definition Classes
- Wrapper → AnyRef → Any
- def inLayers: Map[Id, Set[Id]]
Represents layers of the input catalogs that you should query and provide to the compiler. These layers are grouped by input catalog and identified by catalog ID and layer ID.
- Definition Classes
- WrapperInputLayers → InputLayers
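As an illustration, an input-layer declaration might look like the sketch below. The catalog and layer identifiers are placeholders, and the construction Id("...") from a string is an assumption; adjust both to your project.

    // Sketch: input layers grouped by input catalog ID.
    // The identifiers are placeholders; Id("...") construction is assumed here.
    override def inLayers: Map[Id, Set[Id]] =
      Map(
        Id("road-topology") -> Set(Id("topology-geometry"), Id("adas-attributes")),
        Id("places")        -> Set(Id("place-index"))
      )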
- def inPartitioner(parallelism: Int): Partitioner[InKey]
Specifies the partitioner to use when querying the input catalogs.
- parallelism
The number of partitions the partitioner should split the catalog into; this should match the parallelism of the Spark RDD containing the input partitions.
- returns
The input partitioner with the parallelism specified.
- Definition Classes
- InputPartitionerWrapper → InputPartitioner
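A minimal sketch of an input partitioner definition is shown below. HashPartitioner stands in for whichever Partitioner[InKey] implementation your project provides; the name and its constructor are assumptions, not part of the API documented here.

    // Sketch: return a partitioner with the requested number of partitions.
    // HashPartitioner is an assumed stand-in for a concrete Partitioner[InKey].
    override def inPartitioner(parallelism: Int): Partitioner[InKey] =
      HashPartitioner[InKey](parallelism)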
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- final val outCatalogId: Id
Identifier for the output catalog.
- Definition Classes
- OutputLayers
- def outLayers: Set[Id]
Layers to be produced by the compiler.
- Definition Classes
- WrapperOutputLayers → OutputLayers
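For example, a compiler publishing two layers to the output catalog might declare them as in the sketch below; the layer names are placeholders and the Id("...") construction is assumed, as above.

    // Sketch: the set of output layer IDs this compiler publishes to.
    // The layer names are placeholders for illustration only.
    override def outLayers: Set[Id] = Set(Id("compiled-tiles"), Id("statistics"))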
- def outPartitioner(parallelism: Int): Partitioner[OutKey]
Specifies the partitioner to use when querying the output catalog and producing output data.
- parallelism
The number of partitions the partitioner should split the catalog into; this should match the parallelism of the Spark RDD containing the output partitions.
- returns
The output partitioner with the parallelism specified.
- Definition Classes
- OutputPartitionerWrapper → OutputPartitioner
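The output partitioner is often defined with the same scheme as the input partitioner; the sketch below reuses the same assumed HashPartitioner stand-in as above.

    // Sketch: same partitioning scheme as the input side, applied to OutKey.
    // HashPartitioner is an assumed stand-in, as above.
    override def outPartitioner(parallelism: Int): Partitioner[OutKey] =
      HashPartitioner[OutKey](parallelism)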
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- Wrapper → AnyRef → Any
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
Deprecated Value Members
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated
(Since version 9)