Packages

package spark

Type Members

  1. trait ClientsFactory extends AnyRef
  2. class DataClientSparkContext extends DataClientContext

    Context holder with shared resources used by DataClient.

    The context is not serializable (contains threads and sockets) and should never be shared between master and workers.

  3. class DefaultClientsFactory extends ClientsFactory
  4. class DefaultLayerDataFrameReaderFactory extends LayerDataFrameReaderFactory with Logging
  5. class DefaultLayerUpdaterFactory extends LayerUpdaterFactory
  6. class DefaultWritersFactory extends WritersFactory with Logging
  7. trait IndexDataFrameReader extends LayerDataFrameReader

    LayerDataFrameReader to query data from an index layer.

  8. final class InteractiveMapDataFrameConstants extends AnyRef
  9. trait InteractiveMapDataFrameReader extends LayerDataFrameReader

    LayerDataFrameReader to query data from an interactive map layer.

    Currently, the trait has private access because it does not implement any methods beyond those of LayerDataFrameReader.

  10. trait LayerDataFrameReader extends AnyRef

    Custom Spark DataFrameReader for querying data from a given layer.

    The layer type is inferred from the layer configuration, so the API for reading from an index layer or a versioned layer is the same.

    Below is an example written in Scala that demonstrates how to query data from an index layer:

    import com.here.platform.data.client.spark.LayerDataFrameReader.SparkSessionExt
    import org.apache.spark.sql.{DataFrame, SparkSession}
    
    val spark =
      SparkSession
        .builder()
        .appName(getClass.getSimpleName)
        .master("local[*]")
        .getOrCreate()
    
    val dataFrame: DataFrame = spark
        .readLayer(catalogHrn, indexLayer)
        .query(
            "tileId=INBOUNDINGBOX=(23.648524, 22.689013, 62.284241, 60.218811) and eventType==SignRecognition")
        .load()

    Java developers should use com.here.platform.data.client.spark.javadsl.JavaLayerDataFrameReader#readLayer instead of spark.readLayer:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import com.here.platform.data.client.spark.javadsl.JavaLayerDataFrameReader;
    
    Dataset<Row> df =
          JavaLayerDataFrameReader.create(spark)
              .readLayer(catalogHrn, layerId)
              .query(
                  "tileId=INBOUNDINGBOX=(23.648524, 22.689013, 62.284241, 60.218811) and eventType==SignRecognition")
              .load();
    Note

    If the load method cannot correctly infer the data format from the layer content type, the application can enforce the data format by calling the format method before load.
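
    For example, a minimal sketch of enforcing the format explicitly ("parquet" is an assumed format name used only for illustration; use the format that matches your layer's content):

    // Hedged sketch: enforce the data format when it cannot be inferred.
    // "parquet" is an assumption, not a value taken from this page.
    val parquetDf: DataFrame = spark
        .readLayer(catalogHrn, indexLayer)
        .format("parquet")
        .query(
            "tileId=INBOUNDINGBOX=(23.648524, 22.689013, 62.284241, 60.218811) and eventType==SignRecognition")
        .load()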

  11. trait LayerDataFrameReaderFactory extends AnyRef
  12. trait LayerDataFrameWriter extends AnyRef

    Custom Spark DataFrameWriter for writing data to a given layer.

    The layer type is inferred from the layer configuration, so the API for writing to an index layer or a versioned layer is the same.

    Below is an example written in Scala that demonstrates how to write data to an index layer:

    import com.here.platform.data.client.spark.LayerDataFrameWriter.DataFrameExt
    import org.apache.spark.sql.{DataFrame, SparkSession}
    
    val spark =
      SparkSession
        .builder()
        .appName(getClass.getSimpleName)
        .master("local[*]")
        .getOrCreate()
    
    val dataFrame: DataFrame = ???
    dataFrame
        .writeLayer(catalogHrn, indexLayer)
        .save()

    Java developers should use com.here.platform.data.client.spark.javadsl.JavaLayerDataFrameWriter#writeLayer instead of dataFrame.writeLayer:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import com.here.platform.data.client.spark.javadsl.JavaLayerDataFrameWriter;
    
    Dataset<Row> df = ...; // your input Dataset
    JavaLayerDataFrameWriter.create(df)
        .writeLayer(catalogHrn, layerId)
        .save();

    The batch size (number of Rows per group) can be restricted by setting the option olp.groupedBatchSize (for example, 2 to put two Rows in each group):

    val dataFrame: DataFrame = ???
    dataFrame
        .writeLayer(catalogHrn, indexLayer)
        .option("olp.groupedBatchSize", 2)
        .save()
    Note

    If the save method cannot correctly infer the DataConverter from the layer content type, the application must provide one using the withDataConverter method.
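
    For example, a minimal sketch of providing a converter explicitly (myDataConverter is a hypothetical placeholder; its type and construction are not covered on this page):

    // Hedged sketch: supply a DataConverter when it cannot be inferred.
    // `myDataConverter` is a hypothetical converter defined by your application.
    dataFrame
        .writeLayer(catalogHrn, indexLayer)
        .withDataConverter(myDataConverter)
        .save()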

  13. trait LayerUpdater extends AnyRef

    Trait representing a layer updater that mutates a layer by deleting partitions.

  14. trait LayerUpdaterFactory extends AnyRef
  15. trait WritersFactory extends AnyRef

Value Members

  1. object DataClientSparkContextUtils

    Utility object to initialize and hold shared resources required by DataClient.

  2. object InteractiveMapDataFrame
  3. object LayerDataFrameReader

    Provides the implicit class LayerDataFrameReader.SparkSessionExt that simplifies the creation of LayerDataFrameReaders.

  4. object LayerDataFrameWriter
  5. object SparkSupport

    Implicit helpers to easily access the API using the synchronous calls required by Spark.
