Integrate with Spark

ModelOp Center seamlessly integrates with existing Spark environments, such as Databricks or Cloudera, allowing enterprises to leverage existing IT investments in their data platforms.


Overview

To seamlessly integrate with existing Spark environments, ModelOp Center offers a Spark runtime service. This service is responsible for submitting Spark jobs to a pre-defined Spark cluster, monitoring their statuses, and updating them in model-manage. Additionally, it supports auto-enrollment with Eureka and model-manage, along with secure interactions via OAuth2.

The Spark runtime typically runs outside the Kubernetes (K8s) fleet, most commonly (but not exclusively) on an edge node.

See the following page for additional information on configuring the Spark runtime service via Helm: Configuring the Spark Runtime via Helm


Prerequisites:

The node hosting the spark-runtime-service must meet the following criteria:

  • Ensure Apache Spark is installed on the host machine.

    • Currently validated Spark and Hadoop versions:

      • Spark 2.4

      • Hadoop 2.6

  • Ensure the following environment variables are set (see the verification sketch after this list):

    • SPARK_HOME

    • HADOOP_HOME

    • JAVA_HOME

    • HADOOP_CONF_DIR

  • Ensure the Hadoop cluster configuration files are available, e.g.:

    • hdfs-site.xml

    • core-site.xml

    • mapred-site.xml

    • yarn-site.xml

  • Ensure the host machine can communicate with the Spark cluster, e.g.:

    • master-node

    • yarn

      • nodeManager:

        • remote-app-log-dir

        • remote-app-log-dir-suffix

      • resourceManager:

        • hostname

        • address

    • hdfs

      • host

  • Ensure the host machine can communicate with ModelOp Center and the ModelOp Center Eureka registry

  • Security

    • Kerberos:

      • krb5.conf

      • Principal

      • keytab

      • jaas.conf (optional)

      • jaas-conf-key (optional)
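
Before starting the service, it can help to confirm the items above are in place. The following is a minimal, illustrative sketch (the class name and output format are not part of ModelOp Center) that checks the listed environment variables and Hadoop configuration files:

```java
import java.nio.file.Files;
import java.nio.file.Paths;

// Illustrative prerequisite check: verifies the env vars and Hadoop config
// files listed above are present on the host running the spark-runtime-service.
public class PrereqCheck {
    public static void main(String[] args) {
        for (String v : new String[]{"SPARK_HOME", "HADOOP_HOME", "JAVA_HOME", "HADOOP_CONF_DIR"}) {
            String value = System.getenv(v);
            System.out.printf("%-17s %s%n", v, value != null ? value : "NOT SET");
        }
        String confDir = System.getenv("HADOOP_CONF_DIR");
        if (confDir != null) {
            for (String f : new String[]{"hdfs-site.xml", "core-site.xml", "mapred-site.xml", "yarn-site.xml"}) {
                System.out.printf("%-17s %s%n", f, Files.exists(Paths.get(confDir, f)) ? "found" : "MISSING");
            }
        }
    }
}
```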

 

Service          Port(s)
Spark            7077, 18088
Yarn             8032
HDFS             8020
ModelOp Center   8090, 8761 (Eureka)
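
To verify network reachability, a simple socket check against the ports above can be run from the host machine. This is an illustrative sketch only; the hostnames are placeholders for your own cluster and ModelOp Center endpoints:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

// Illustrative connectivity check for the ports listed in the table above.
public class PortCheck {
    static String reachable(String host, int port) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 3000); // 3-second timeout
            return "reachable";
        } catch (Exception e) {
            return "UNREACHABLE (" + e.getMessage() + ")";
        }
    }

    public static void main(String[] args) {
        // Replace the hostnames with your own Spark/YARN/HDFS/ModelOp Center hosts.
        System.out.println("Spark master 7077:   " + reachable("spark-master.example.com", 7077));
        System.out.println("Spark history 18088: " + reachable("spark-master.example.com", 18088));
        System.out.println("YARN RM 8032:        " + reachable("yarn-rm.example.com", 8032));
        System.out.println("HDFS NameNode 8020:  " + reachable("hdfs-nn.example.com", 8020));
        System.out.println("ModelOp Center 8090: " + reachable("modelop-center.example.com", 8090));
        System.out.println("Eureka 8761:         " + reachable("modelop-center.example.com", 8761));
    }
}
```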

 

Kerberos glossary:

krb5.conf - Tells the host machine how to talk to Kerberos: where to find the Kerberos server (KDC) and what rules to follow.

keytab - The host machine's secret key file, used to prove it is allowed to execute specific actions in the Kerberos environment.

jaas.conf - File that tells the host machine's applications how to interact with Kerberos, including where to find the keytab.
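
As an illustration of how the principal and keytab are used (not the service's internal code), the Hadoop UserGroupInformation API can log the host in against the cluster's Kerberos realm; the paths and principal below are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

// Illustrative Kerberos login using the principal/keytab from the prerequisites.
// Assumes krb5.conf is already available on the host.
public class KerberosLoginCheck {
    public static void main(String[] args) throws Exception {
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf"); // placeholder path
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        // Placeholder principal and keytab path
        UserGroupInformation.loginUserFromKeytab(
                "spark-runtime@EXAMPLE.COM", "/etc/security/keytabs/spark-runtime.keytab");
        System.out.println("Logged in as: " + UserGroupInformation.getLoginUser());
    }
}
```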

 


Core components' roles and responsibilities:

ModelOpJobMonitor

  • Monitors jobs of type MODEL_BATCH_JOB, MODEL_BATCH_TEST_JOB, and MODEL_BATCH_TRAINING_JOB in the CREATED state with SPARK_RUNTIME as the runtime type

    • Updates job status from CREATED to WAITING

    • Submits job for execution

    • Updates job with Spark application id

  • Monitors jobs of type MODEL_BATCH_JOB, MODEL_BATCH_TEST_JOB, and MODEL_BATCH_TRAINING_JOB in the WAITING or RUNNING state with SPARK_RUNTIME as the runtime type

    • Uses the Spark application id to monitor the job status on the Spark cluster (see the sketch after this list)

    • Updates job status based on the latest Spark application status

    • Updates job with the logs generated by the Spark cluster

    • Updates job with output data (if the output data contains embedded asset(s))

    • Cleans job’s temporary working directory
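
As a rough illustration of the monitoring step (not ModelOp Center's internal implementation), the YARN client API can look up an application's state by the Spark application id recorded on the job; the application id below is a placeholder:

```java
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Illustrative status lookup: find a submitted Spark application on YARN by its id.
// Assumes the Hadoop/YARN configuration from HADOOP_CONF_DIR is on the classpath.
public class YarnStatusCheck {
    public static void main(String[] args) throws Exception {
        String appId = "application_1700000000000_0001"; // placeholder Spark application id
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new YarnConfiguration());
        yarnClient.start();
        for (ApplicationReport app : yarnClient.getApplications()) {
            if (app.getApplicationId().toString().equals(appId)) {
                System.out.println("State:        " + app.getYarnApplicationState());
                System.out.println("Final status: " + app.getFinalApplicationStatus());
            }
        }
        yarnClient.stop();
    }
}
```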

PySparkPreprocessorService

  • Translates the jobs of type MODEL_BATCH_JOB/MODEL_BATCH_TEST_JOB/MODEL_BATCH_TRAINING_JOB into a PySparkJobManifest

    • Creates temporary files used during execution, such as the ModelOpPySparkDriver, primary source code, metadata, other assets, and non-primary source code

    • Creates temporary HDFS file(s) if the input data contains embedded asset(s)
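
For context, writing an embedded input asset out to a temporary HDFS file can be done with the Hadoop FileSystem API; the sketch below is illustrative only (the path and payload are placeholders) and assumes the cluster's core-site.xml/hdfs-site.xml are on the classpath:

```java
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch: write an embedded input asset to a temporary HDFS file.
public class HdfsTempAsset {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up core-site.xml/hdfs-site.xml from the classpath
        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path("/tmp/modelop/input-asset.json"))) {
            out.write("{\"records\": []}".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```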

SparkLauncherService

  • Builds the SparkLauncher from the content of the PySparkJobManifest for execution
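
SparkLauncher is Apache Spark's programmatic equivalent of spark-submit. The sketch below shows the general mechanism of launching a PySpark driver on YARN this way; it is not the service's own code, and the paths, app name, and settings are placeholders:

```java
import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

// Illustrative submission of a PySpark driver to YARN via SparkLauncher.
public class SubmitSketch {
    public static void main(String[] args) throws Exception {
        SparkAppHandle handle = new SparkLauncher()
                .setSparkHome(System.getenv("SPARK_HOME"))
                .setAppResource("/tmp/modelop/driver.py")      // placeholder PySpark driver
                .setMaster("yarn")
                .setDeployMode("cluster")
                .setAppName("modelop-batch-job")               // placeholder app name
                .setConf(SparkLauncher.DRIVER_MEMORY, "2g")
                .addAppArgs("/tmp/modelop/job-manifest.json")  // placeholder job manifest
                .startApplication();

        // The Spark/YARN application id becomes available once the submission is accepted.
        while (handle.getAppId() == null && !handle.getState().isFinal()) {
            Thread.sleep(1000);
        }
        System.out.println("Submitted as: " + handle.getAppId());
    }
}
```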

ModelManageRegistrationService

  • Auto-enrolls the Spark runtime service with model-manage as a SPARK_RUNTIME

LoadRuntimesListener

  • Sends a heartbeat to Eureka to keep the service status alive

KerberizedYarnMonitorService

  • Authenticates the principal with Kerberos before attempting to use the YarnClient