final class ConcreteNonIncrementalCompiler[W <: NonIncrementalCompiler] extends NonIncrementalCompilerWrapper[W]
Type Hierarchy
- ConcreteNonIncrementalCompiler
- NonIncrementalCompilerWrapper
- OutputPartitionerWrapper
- WrapperOutputLayers
- InputPartitionerWrapper
- WrapperInputLayers
- Wrapper
- NonIncrementalCompiler
- OutputPartitioner
- OutputLayers
- InputPartitioner
- InputLayers
- AnyRef
- Any
Instance Constructors
- new ConcreteNonIncrementalCompiler(impl: W)
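A minimal usage sketch of this constructor: wrapping an existing compiler implementation so the rest of the pipeline can work with the concrete wrapper type. `MyCompiler` is a hypothetical placeholder for a user-defined `NonIncrementalCompiler`; only the constructor itself is taken from this page.

```scala
// Hypothetical user-defined implementation of NonIncrementalCompiler.
val myCompiler: NonIncrementalCompiler = new MyCompiler()

// Wrap it in the concrete wrapper; `impl` keeps a reference to the wrapped compiler.
val wrapper = new ConcreteNonIncrementalCompiler(myCompiler)
```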
Value Members
- final def !=(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def ##(): Int
  - Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def asInstanceOf[T0]: T0
  - Definition Classes: Any
- def clone(): AnyRef
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()
- def compile(toCompileScala: InData, parallelism: Int): ToPublish
  Compiles partitions and returns the actual compiled data, if any.
  - parallelism
    The parallelism of both the input and the output RDDs. This parameter is normally needed to obtain partitioners from the com.here.platform.data.processing.compiler.InputOptPartitioner and/or com.here.platform.data.processing.compiler.OutputOptPartitioner traits.
  - returns
    Pairs where com.here.platform.data.processing.compiler.OutKey is the key of a compiled partition and com.here.platform.data.processing.blobstore.Payload is the output data, if any. The returned keys shall be partitioned as specified in com.here.platform.data.processing.compiler.OutputPartitioner.
  - Definition Classes: ConcreteNonIncrementalCompiler → NonIncrementalCompiler
  - Note
    Please note and follow the RDD persistence policy described in com.here.platform.data.processing.driver.Executor.
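An illustrative call sketch (not library code) based only on the signature above; `wrapper`, `inputData: InData`, and the parallelism value are assumptions standing in for whatever the surrounding driver setup provides.

```scala
// Compile the queried input partitions with a given RDD parallelism.
// The result pairs each OutKey with its output Payload, if any, and is
// partitioned as prescribed by outPartitioner.
val toPublish: ToPublish = wrapper.compile(inputData, parallelism = 200)
```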
- final def eq(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- def equals(o: Any): Boolean
  - Definition Classes: Wrapper → AnyRef → Any
- def finalize(): Unit
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( classOf[java.lang.Throwable] )
- final def getClass(): Class[_]
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- def hashCode(): Int
  - Definition Classes: Wrapper → AnyRef → Any
- val impl: W
  - Definition Classes: ConcreteNonIncrementalCompiler → Wrapper
- def inLayers: Map[Id, Set[Id]]
  Represents the layers of the input catalogs that you should query and provide to the compiler. These layers are grouped by input catalog and identified by catalog ID and layer ID.
  - Definition Classes: WrapperInputLayers → InputLayers
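An illustrative sketch of reading this property; `wrapper` is assumed to be a ConcreteNonIncrementalCompiler instance as in the constructor example above.

```scala
// Input layers the wrapped compiler needs, grouped by input catalog ID.
val layersByCatalog: Map[Id, Set[Id]] = wrapper.inLayers

layersByCatalog.foreach { case (catalogId, layerIds) =>
  println(s"catalog $catalogId needs layers: ${layerIds.mkString(", ")}")
}
```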
- def inPartitioner(parallelism: Int): Partitioner[InKey]
  Specifies the partitioner to use when querying the input catalogs.
  - parallelism
    The number of partitions the partitioner should partition the catalog into. This should match the parallelism of the Spark RDD containing the input partitions.
  - returns
    The input partitioner with the specified parallelism.
  - Definition Classes: InputPartitionerWrapper → InputPartitioner
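An illustrative call sketch based on the signature above; the parallelism value is only an assumed example and should match the number of partitions of the input RDD.

```scala
// Partitioner used when querying the input catalogs, sized to the input RDD.
val inputPartitioner: Partitioner[InKey] = wrapper.inPartitioner(parallelism = 512)
```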
- final def isInstanceOf[T0]: Boolean
  - Definition Classes: Any
- final def ne(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- final def notify(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- final def notifyAll(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- final val outCatalogId: Id
  Identifier for the output catalog.
  - Definition Classes: OutputLayers
- def outLayers: Set[Id]
  Layers to be produced by the compiler.
  - Definition Classes: WrapperOutputLayers → OutputLayers
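An illustrative sketch combining the two output-side properties above; `wrapper` is assumed as before.

```scala
// Output catalog ID and the set of layers the compiler will produce in it.
val catalogId: Id = wrapper.outCatalogId
val producedLayers: Set[Id] = wrapper.outLayers

println(s"publishing layers ${producedLayers.mkString(", ")} to catalog $catalogId")
```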
- def outPartitioner(parallelism: Int): Partitioner[OutKey]
  Specifies the partitioner to use when querying the output catalog and producing output data.
  - parallelism
    The number of partitions the partitioner should partition the catalog into. This should match the parallelism of the Spark RDD containing the output partitions.
  - returns
    The output partitioner with the specified parallelism.
  - Definition Classes: OutputPartitionerWrapper → OutputPartitioner
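An illustrative call sketch; as with inPartitioner, the parallelism value here is an assumed example and should match the number of partitions of the output RDD.

```scala
// Partitioner that governs how the (OutKey, Payload) pairs returned by compile
// must be partitioned before publishing.
val outputPartitioner: Partitioner[OutKey] = wrapper.outPartitioner(parallelism = 512)
```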
- final def synchronized[T0](arg0: ⇒ T0): T0
  - Definition Classes: AnyRef
- def toString(): String
  - Definition Classes: Wrapper → AnyRef → Any
- final def wait(): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()