
ModelOp Center seamlessly integrates with existing Spark environments, such as Databricks or Cloudera, allowing enterprises to leverage existing IT investments in their data platforms.


Overview

To enable integration with existing Spark environments, ModelOp Center provides a Spark runtime microservice. This component submits Spark jobs to a pre-defined Spark cluster, monitors their statuses, and updates the corresponding job records in model-manage. It also supports auto-enrollment with Eureka and model-manage, as well as OAuth2-secured interactions.

The Spark runtime can run outside the Kubernetes fleet, typically (but not exclusively) on an edge node.

Consult the following page for information on configuring the Spark runtime service via Helm: Configuring the Spark Runtime via Helm


Pre-requisites:

The node hosting the spark-runtime-service must meet the following criteria:

  • Ensure Apache Spark is installed on the host machine.

    • Currently validated Spark and Hadoop versions:

      • Spark 2.4

      • Hadoop 2.6

  • Ensure the following environment variables are set:

    • SPARK_HOME

    • HADOOP_HOME

    • JAVA_HOME

    • HADOOP_CONF_DIR

  • Ensure the Hadoop cluster configuration files are available under HADOOP_CONF_DIR, e.g.:

    • hdfs-site.xml

    • core-site.xml

    • mapred-site.xml

    • yarn-site.xml

  • Ensure the host machine can communicate with the Spark cluster, e.g.:

    • master-node

    • yarn

      • nodeManager:

        • remote-app-log-dir

        • remote-app-log-dir-suffix

      • resourceManager:

        • hostname

        • address

    • hdfs

      • host

  • Ensure the host machine can communicate with ModelOp Center and the ModelOp Center Eureka registry

  • Security

    • Kerberos:

      • krb5.conf

      • Principal

      • keytab

      • jaas.conf (optional)

      • jaas-conf-key (optional)
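As a quick sanity check, the environment-variable and configuration-file prerequisites above can be verified with a short script. This is an illustrative sketch (not part of the service); the variable and file names mirror the list above, and nothing else is assumed:

```python
import os

# Prerequisites listed above: required environment variables and the Hadoop
# cluster configuration files expected under HADOOP_CONF_DIR.
REQUIRED_ENV_VARS = ["SPARK_HOME", "HADOOP_HOME", "JAVA_HOME", "HADOOP_CONF_DIR"]
HADOOP_CONF_FILES = ["hdfs-site.xml", "core-site.xml", "mapred-site.xml", "yarn-site.xml"]

def missing_prerequisites(env):
    """Return a list of unmet prerequisites for the given environment mapping."""
    missing = [var for var in REQUIRED_ENV_VARS if not env.get(var)]
    conf_dir = env.get("HADOOP_CONF_DIR")
    if conf_dir:
        # Check that each expected Hadoop config file exists in HADOOP_CONF_DIR.
        missing += [
            f for f in HADOOP_CONF_FILES
            if not os.path.isfile(os.path.join(conf_dir, f))
        ]
    return missing

if __name__ == "__main__":
    print(missing_prerequisites(dict(os.environ)))
```

An empty list means the environment-variable and config-file prerequisites are satisfied on this host.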

Service          Port(s)

Spark            7077, 18088
Yarn             8032
HDFS             8020
ModelOp Center   8090, 8761 (Eureka)
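Reachability of these ports can be spot-checked from the host machine. In this sketch, only the port numbers come from the table above; the hostnames are placeholders to be replaced with your cluster's actual addresses:

```python
import socket

# Placeholder hostnames; substitute your own cluster and ModelOp Center addresses.
ENDPOINTS = {
    "Spark master":         ("spark-master.example.com", 7077),
    "Spark history":        ("spark-master.example.com", 18088),
    "YARN ResourceManager": ("yarn-rm.example.com", 8032),
    "HDFS NameNode":        ("hdfs-nn.example.com", 8020),
    "ModelOp Center":       ("modelop.example.com", 8090),
    "Eureka":               ("modelop.example.com", 8761),
}

def is_reachable(host, port, timeout=3):
    """True if a TCP connection to (host, port) succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in ENDPOINTS.items():
        status = "OK" if is_reachable(host, port) else "UNREACHABLE"
        print(f"{name:22s} {host}:{port} -> {status}")
```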

Kerberos glossary:

krb5.conf - Tells the host machine how to talk to Kerberos: where to find the Kerberos server (KDC) and what rules to follow.

keytab - The host machine's secret key, used to prove it is allowed to execute specific actions in the Kerberos environment.

jaas.conf - Tells the host machine how to authenticate to other applications and services (such as Kerberos), including where to find the keytab.


Core components of the spark-runtime-service:

ModelOpJobMonitor

  • Monitor the job repository for MODEL_BATCH_JOB, MODEL_BATCH_TEST_JOB, and MODEL_BATCH_TRAINING_JOB jobs in the CREATED state with SPARK_RUNTIME as the runtime type

    • Update the job status from CREATED to WAITING

    • Submit the job for execution to the Spark cluster

    • Update the job by appending the Spark application id generated by the Spark cluster

  • Monitor the job repository for MODEL_BATCH_JOB, MODEL_BATCH_TEST_JOB, and MODEL_BATCH_TRAINING_JOB jobs in the WAITING or RUNNING state with SPARK_RUNTIME as the runtime type

    • Use the Spark application id to query the status of the job while still running on the Spark cluster

    • Update the job status based on the final Spark application status

    • Update the job with the logs generated by the Spark cluster

    • Update the job by storing the output data, if the output data contains embedded asset(s)

    • Clean the job’s temporary working directory
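The polling behavior described above can be sketched as a small decision function. This is an illustrative stand-in, not the actual service code; the job fields `jobType`, `runtimeType`, and `state` are assumed names:

```python
# Job types the monitor watches, per the description above.
MONITORED_JOB_TYPES = {"MODEL_BATCH_JOB", "MODEL_BATCH_TEST_JOB", "MODEL_BATCH_TRAINING_JOB"}

def next_action(job):
    """Decide what the monitor should do with a job record on a polling pass."""
    if job.get("jobType") not in MONITORED_JOB_TYPES:
        return "IGNORE"
    if job.get("runtimeType") != "SPARK_RUNTIME":
        return "IGNORE"
    state = job.get("state")
    if state == "CREATED":
        # Move to WAITING, submit to the Spark cluster, record the application id.
        return "SUBMIT"
    if state in ("WAITING", "RUNNING"):
        # Query the Spark cluster by application id; sync status, logs, and output.
        return "POLL"
    return "IGNORE"
```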

PySparkPreprocessorService

  • Translate the MODEL_BATCH_JOB/MODEL_BATCH_TEST_JOB/MODEL_BATCH_TRAINING_JOB into a PySparkJobManifest

    • Create temporary files for the ModelOpPySparkDriver, primary source code, metadata, attachments, and non-primary source code

      • The ModelOpPySparkDriver file is the main driver of the Spark job

      • The primary source code file is what will be executed in the Spark cluster

    • Create temporary HDFS file(s), if input data contains embedded asset(s)
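The translation step can be sketched roughly as follows. The `build_manifest` function and the job fields it reads (`primarySource`, `metadata`) are hypothetical names used only for illustration of the staging described above:

```python
import json
import os
import tempfile

def build_manifest(job):
    """Hypothetical sketch: stage a batch job's assets into a temporary
    working directory and describe them in a manifest dict."""
    workdir = tempfile.mkdtemp(prefix="pyspark-job-")

    # The primary source code file is what will be executed on the Spark cluster.
    primary = os.path.join(workdir, "primary_source.py")
    with open(primary, "w") as f:
        f.write(job["primarySource"])

    # Model metadata staged alongside the source code.
    metadata = os.path.join(workdir, "metadata.json")
    with open(metadata, "w") as f:
        json.dump(job.get("metadata", {}), f)

    return {
        "workdir": workdir,
        "driver": "ModelOpPySparkDriver.py",  # main driver of the Spark job
        "primarySource": primary,
        "metadata": metadata,
    }
```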

SparkLauncherService

  • Build a SparkLauncher from the PySparkJobManifest for execution
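SparkLauncher is part of Spark's Java launcher API. As a rough illustration only, the spark-submit invocation that such a launcher assembles might look like the following; the manifest keys `driver` and `primarySource` are hypothetical, and the master/deploy-mode settings are assumptions, not the service's actual configuration:

```python
def spark_submit_args(manifest):
    """Hypothetical sketch: the spark-submit command line that a launcher
    built from a PySparkJobManifest would roughly correspond to."""
    return [
        "spark-submit",
        "--master", "yarn",                        # submit to the pre-defined cluster
        "--deploy-mode", "cluster",
        "--py-files", manifest["primarySource"],   # code executed on the cluster
        manifest["driver"],                        # ModelOpPySparkDriver: main driver file
    ]
```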

ModelManageRegistrationService

  • Auto enroll the Spark runtime as a SPARK_RUNTIME with model-manage

LoadRuntimesListener

  • Maintain an alive/healthy status with Eureka

  • Re-register the spark-runtime-service if its registration is lost for any reason

KerberizedYarnMonitorService

  • Authenticate the principal with Kerberos before attempting to use the YarnClient
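A rough command-line equivalent of that authentication step is shown below, assuming the `kinit` tool from a Kerberos client installation is on the PATH. The principal, keytab, and krb5.conf locations correspond to the security prerequisites above; this is a sketch, not how the service itself authenticates:

```python
import os
import subprocess

def kinit(principal, keytab, krb5_conf="/etc/krb5.conf"):
    """Hypothetical sketch: obtain a Kerberos ticket for the configured
    principal before the YarnClient is used. Returns True on success."""
    env = {**os.environ, "KRB5_CONFIG": krb5_conf}
    try:
        result = subprocess.run(
            ["kinit", "-kt", keytab, principal],
            env=env, capture_output=True, text=True,
        )
    except OSError:  # kinit binary not available on this host
        return False
    return result.returncode == 0
```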
