Exit code 1 in Spark: in the driver log there were many messages like "ExecutorLostFailure (executor 45 exited caused by one of the running tasks) Reason: Container ...". What happens when I submit the job is that Spark continuously tries to create new executors, as if it were retrying, but they all exit with code 1, and I have to kill the application to stop it. The job runs fine for around 5-6 hours, but after that it fails with the following exception. I submit it with "spark2-submit --queue abc --master yarn --deploy-mode ...". I managed to replicate the issue just by using the pyspark shell as well. I am using HDP 2.

The usual suspects to rule out:
1. Driver memory issues
2. ...

You also need to check on the Spark UI whether the settings you set are actually taking effect.

On the exit codes themselves, Spark's source notes that the exit code constants are "chosen to be unlikely to conflict with 'natural' exit statuses that may be caused by the JVM or user code."

To shut the application down cleanly instead of killing it, the PySpark docs describe stop(): stop the underlying SparkContext.
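Executors that run fine for hours and then start exiting with code 1 often point at memory pressure in the YARN containers. Below is a minimal resubmission sketch with explicit memory settings. The deploy mode, memory sizes, and script name are illustrative assumptions, not values from the original job; only the queue and master come from the post.

```shell
# Illustrative resubmission with explicit memory settings.
# The sizes, deploy mode, and job script are assumptions for this sketch.
spark2-submit \
  --queue abc \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 4g \
  --executor-memory 4g \
  --conf spark.executor.memoryOverhead=1024 \
  my_job.py
```

Note that `spark.executor.memoryOverhead` (value in MiB when no unit is given) exists from Spark 2.3 onward; on older Spark 2.x builds the key is `spark.yarn.executor.memoryOverhead`. Whether these settings actually took effect can be confirmed on the Environment tab of the Spark UI, and if YARN is killing containers for exceeding memory limits, the NodeManager logs will say so explicitly.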