CRD reference
The CRD fields that can be defined by the user are listed below; a sketch of a complete manifest follows the table.
| CRD field | Remarks | 
|---|---|
| `apiVersion` | `spark.stackable.tech/v1alpha1` |
| `kind` | `SparkApplication` |
| `metadata.name` | Application name |
| `spec.version` | Application version |
| `spec.mode` | `cluster` or `client`. Currently only `cluster` mode is supported |
| `spec.image` | User-supplied image containing spark-job dependencies that will be copied to the specified volume mount |
| `spec.sparkImage` | Spark image which will be deployed to driver and executor pods, and which must contain the Spark environment needed by the job |
| `spec.sparkImagePullPolicy` | Optional Enum (one of `Always`, `IfNotPresent` or `Never`) that determines the pull policy for the Spark image |
| `spec.sparkImagePullSecrets` | An optional list of references to secrets in the same namespace to use for pulling any of the images used by a `SparkApplication` resource |
| `spec.mainApplicationFile` | The actual application file that will be called by `spark-submit` |
| `spec.mainClass` | The main class, i.e. the entry point, for JVM artifacts |
| `spec.args` | Arguments passed directly to the job artifact |
| `spec.s3connection` | S3 connection specification. See the S3 resources for more details |
| `spec.sparkConf` | A map of key/value strings that will be passed directly to `spark-submit` |
| `spec.deps.requirements` | A list of Python packages that will be installed via `pip` |
| `spec.deps.packages` | A list of packages that is passed directly to `spark-submit` |
| `spec.deps.excludePackages` | A list of excluded packages that is passed directly to `spark-submit` |
| `spec.deps.repositories` | A list of repositories that is passed directly to `spark-submit` |
| `spec.volumes` | A list of volumes |
| `spec.volumes.name` | The volume name |
| `spec.volumes.persistentVolumeClaim.claimName` | The persistent volume claim backing the volume |
| `spec.job.resources` | Resources specification for the initiating Job |
| `spec.driver.resources` | Resources specification for the driver Pod |
| `spec.driver.volumeMounts` | A list of mounted volumes for the driver |
| `spec.driver.volumeMounts.name` | Name of mount |
| `spec.driver.volumeMounts.mountPath` | Volume mount path |
| `spec.driver.affinity` | Driver Pod placement affinity. See Pod Placement for details |
| `spec.driver.logging` | Logging aggregation for the driver Pod. See Logging for details |
| `spec.executor.resources` | Resources specification for the executor Pods |
| `spec.executor.instances` | Number of executor instances launched for this job |
| `spec.executor.volumeMounts` | A list of mounted volumes for each executor |
| `spec.executor.volumeMounts.name` | Name of mount |
| `spec.executor.volumeMounts.mountPath` | Volume mount path |
| `spec.executor.affinity` | Executor Pod placement affinity. See Pod Placement for details |
| `spec.executor.logging` | Logging aggregation for the executor Pods. See Logging for details |
| `spec.logFileDirectory.bucket` | S3 bucket definition where applications should publish events for the Spark History server |
| `spec.logFileDirectory.prefix` | Prefix to use when storing events for the Spark History server |
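
To show how these fields fit together, here is a minimal `SparkApplication` manifest sketch based on the table above. The application name, image tag, application file path, arguments and package pin are illustrative placeholders rather than values shipped with the operator, and optional blocks such as `volumes`, `driver`/`executor` resources and `logFileDirectory` are omitted for brevity.

```yaml
---
apiVersion: spark.stackable.tech/v1alpha1
kind: SparkApplication
metadata:
  name: pyspark-pi                       # application name (placeholder)
spec:
  version: "1.0"                         # application version
  mode: cluster                          # job execution mode
  sparkImage: docker.stackable.tech/stackable/pyspark-k8s:3.3.0-stackable0.3.0  # placeholder image tag
  mainApplicationFile: local:///stackable/spark/examples/src/main/python/pi.py  # placeholder path, called by spark-submit
  args:
    - "100"                              # passed directly to the job artifact
  sparkConf:                             # key/value pairs handed straight to spark-submit
    spark.kubernetes.allocation.batch.size: "1"
  deps:
    requirements:
      - tabulate==0.8.9                  # installed via pip (placeholder package)
  executor:
    instances: 3                         # number of executor instances launched for the job
```

The manifest is applied like any other Kubernetes resource (e.g. `kubectl apply -f pyspark-pi.yaml`); the operator then creates the initiating Job that runs `spark-submit` with these settings.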