DriverConfig

case class DriverConfig(appName: String, parallelUploads: Int, parallelRetrievers: Int, numCommitPartitions: Int, sparkStorageLevels: SparkStorageLevelsConfig, state: StateConfig, disableIncremental: Boolean, uniquePartitionLimitInBytes: Int, disableCommitIntegrityCheck: Boolean, allowEmptyPayloads: Boolean, catalogs: CatalogsConfig) extends Product with Serializable

The configuration necessary to instantiate and configure a com.here.platform.data.processing.driver.Driver.

appName

The name of the application to be set in the Spark context.

parallelUploads

The number of parallel uploads the library should perform inside a Spark task when data is published to the Blob API.

parallelRetrievers

The number of parallel retrievals the library should perform inside a Spark task when data is retrieved from the Blob API.

numCommitPartitions

The maximum number of parts to commit within a multipart commit to the Data API.

sparkStorageLevels

The configuration of the Spark storage levels for each RDD category in the library.

state

The configuration for the state layer that specifies how the layer is stored.

disableIncremental

If true, incremental compilation is disabled.

uniquePartitionLimitInBytes

The size limit, in bytes, beyond which partitions are considered unique. For partitions within this limit, the data handle of partitions with identical content is reused, so the same payload is not uploaded multiple times.

disableCommitIntegrityCheck

If true, the final integrity check on the committed partitions is disabled.

allowEmptyPayloads

Whether to allow publishing of empty (0-byte) payloads.

catalogs

Catalog-specific driver configurations.
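
For illustration, a minimal construction sketch follows. The literal values and the helper makeDriverConfig are hypothetical, not library defaults; the SparkStorageLevelsConfig, StateConfig, and CatalogsConfig instances are taken as parameters because their construction is not covered on this page, and the import assumes DriverConfig resides in the same package as Driver.

  import com.here.platform.data.processing.driver.DriverConfig

  // Hypothetical helper: the three sub-configurations are passed in,
  // since building them is outside the scope of this page.
  def makeDriverConfig(storageLevels: SparkStorageLevelsConfig,
                       stateConfig: StateConfig,
                       catalogsConfig: CatalogsConfig): DriverConfig =
    DriverConfig(
      appName = "my-processing-app",       // name shown in the Spark context
      parallelUploads = 8,                 // parallel Blob API uploads per Spark task
      parallelRetrievers = 8,              // parallel Blob API retrievals per Spark task
      numCommitPartitions = 100,           // max parts in one multipart commit
      sparkStorageLevels = storageLevels,
      state = stateConfig,
      disableIncremental = false,          // keep incremental compilation enabled
      uniquePartitionLimitInBytes = 4096,  // identical payloads up to 4 KiB share a data handle
      disableCommitIntegrityCheck = false, // keep the final integrity check
      allowEmptyPayloads = false,          // reject 0-byte payloads
      catalogs = catalogsConfig
    )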

Linear Supertypes
Serializable, Serializable, Product, Equals, AnyRef, Any

Instance Constructors

  1. new DriverConfig(appName: String, parallelUploads: Int, parallelRetrievers: Int, numCommitPartitions: Int, sparkStorageLevels: SparkStorageLevelsConfig, state: StateConfig, disableIncremental: Boolean, uniquePartitionLimitInBytes: Int, disableCommitIntegrityCheck: Boolean, allowEmptyPayloads: Boolean, catalogs: CatalogsConfig)

Value Members

  1. val allowEmptyPayloads: Boolean
  2. val appName: String
  3. val catalogs: CatalogsConfig
  4. val disableCommitIntegrityCheck: Boolean
  5. val disableIncremental: Boolean
  6. val numCommitPartitions: Int
  7. val parallelRetrievers: Int
  8. val parallelUploads: Int
  9. val sparkStorageLevels: SparkStorageLevelsConfig
  10. val state: StateConfig
  11. val uniquePartitionLimitInBytes: Int
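
Because DriverConfig is a case class, all constructor parameters are exposed as the vals above, and a modified instance can be derived immutably with the compiler-generated copy method. A small sketch, where baseConfig stands for an existing DriverConfig:

  // Derive a variant for a full (non-incremental) run; copy replaces
  // only the named fields and leaves the rest of baseConfig intact.
  val fullRunConfig: DriverConfig =
    baseConfig.copy(
      disableIncremental = true, // force a full, non-incremental compilation
      parallelUploads = 16       // raise upload parallelism for the full run
    )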
