Instructions
Short Term Resolution
Script Download Location
Download all files from the GitHub repo here - https://github.com/cloudera/cloudera-scripts-for-log4j
You must run the following script on all affected cluster nodes.
NOTE: After applying the Short Term Resolution, if you add a node to the cluster, you will need to re-apply the Short Term Resolution on the new nodes.
Script: run_log4j_patcher.sh [cdp|cdh|hdp|hdf]
Function: The run_log4j_patcher.sh script scans a directory for jar files and removes JndiLookup.class from the ones it finds. Do not run any other script in the downloaded directory; they are called by run_log4j_patcher.sh automatically.
- Stop all running jobs in the production cluster before executing the script
- Navigate to Cloudera Manager > YARN > Configuration and ensure that yarn.nodemanager.delete.debug-delay-sec is set to 0. If the value is not zero, set it to 0 and restart the YARN service.
- Navigate to Cloudera Manager > YARN > Configuration and search for yarn.nodemanager.local-dirs to get the configured Node Manager Local Directory path
- Remove the filecache and usercache folders located inside each of the directories specified in yarn.nodemanager.local-dirs (see the example command after this list)
- Download all files from the GitHub repo and copy them to all nodes of your cluster
- Run the script as root on ALL nodes of your cluster
- The script takes 1 mandatory argument: the distribution to scan (cdh|cdp|hdp|hdf)
- The script takes 2 optional arguments: a base directory to scan and a backup directory. The defaults are /opt/cloudera and /opt/cloudera/log4shell-backup, respectively; these work for CM/CDH 6 and CDP 7. For HDP, a different distro-specific directory is used.
- Ensure that the last line of the script output indicates ‘Finished’ to verify that the job has completed successfully. The script will fail if a command exits unsuccessfully.
- Restart Cloudera Manager Server, all clusters, and all running jobs and queries.
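For example, assuming yarn.nodemanager.local-dirs is set to /data/yarn/nm (a hypothetical value; substitute the paths actually configured in your cluster), the cache folders can be removed on each NodeManager host as root:
rm -rf /data/yarn/nm/filecache /data/yarn/nm/usercache
If multiple local directories are configured, repeat the command for each of them.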
Usage: $PROG (subcommand) [options]
Subcommands:
- help Prints this message
- cdh Scan a CDH cluster node
- cdp Scan a CDP cluster node
- hdp Scan a HDP cluster node
- hdf Scan a HDF cluster node
Options:
-t <targetdir> Override target directory (default: distro-specific)
-b <backupdir> Override backup directory (default: /opt/cloudera/log4shell-backup)
Environment Variables:
The SKIP_* environment variables should only be used if you are running the script again and want to skip phases that have already completed.
SKIP_JAR If non-empty, skips scanning and patching .jar files
SKIP_TGZ If non-empty, skips scanning and patching .tar.gz files
SKIP_HDFS If non-empty, skips scanning and patching .tar.gz files in HDFS
RUN_SCAN If non-empty, runs a final scan for missed vulnerable files.
This can take several hours.
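For example, on a CDH or CDP node the script might be invoked as root as follows (the -t and -b values shown are simply the documented defaults, and SKIP_JAR=1 only illustrates skipping the jar phase on a re-run):
./run_log4j_patcher.sh cdh
./run_log4j_patcher.sh cdp -t /opt/cloudera -b /opt/cloudera/log4shell-backup
SKIP_JAR=1 ./run_log4j_patcher.sh cdp
These are sketches based on the usage above; check the script's own help output (./run_log4j_patcher.sh help) for the exact syntax in the version you downloaded.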
NOTE: CDH/CDP Parcels: The script removes the affected class from all CDH/CDP parcels already installed under /opt/cloudera. This script needs to be re-run after new parcels are installed or after upgrading to versions of CDH/CDP that do not include the long-term fix.
Removing affected classes from Oozie Shared Libraries (CDH & CDP):
The vulnerability affects client libraries uploaded to HDFS by Cloudera Manager. The script takes care of the Tez and MapReduce libraries; however, the Oozie libraries need to be updated manually. The following section only applies to CDH and CDP releases.
Follow the instructions below to secure the Oozie shared libraries:
- Execute the run_log4j_patcher.sh script on the affected cluster.
- Navigate to Cloudera Manager > Oozie > Actions > “Install Oozie ShareLib” to re-upload the Oozie libraries to HDFS from Cloudera Manager.
IMPORTANT: Ensure that the Oozie service is running prior to executing the command.

Removing affected classes from Oozie Shared Libraries (HDP):
Run these commands to update Oozie share lib:
su oozie
kinit oozie
/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs hdfs://ns1
oozie admin -oozie http(s)://<oozie-host/loadbalancer>:11(000|443)/oozie -sharelibupdate
Rollback Procedure
Vulnerable files are fixed in place as part of the script execution. A backup of the original files is kept in a backup directory (default: /opt/cloudera/log4shell-backup, or the directory specified with the -b option). To roll back, remove the .backup extension from these files and copy them back to their original locations.
Similarly, the changed files on HDFS are backed up to /tmp/hdfs_tar_files.<date>, and the same procedure can be applied. Although the .backup extension should prevent the backed-up files from being loaded by Java, they should still be considered vulnerable.
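As an illustrative sketch (the exact layout under the backup directory may differ, so verify the paths before copying), a single backed-up jar could be restored like this:
cp /opt/cloudera/log4shell-backup/<path-to-jar>.backup /opt/cloudera/<path-to-jar>
Here <path-to-jar> is a placeholder for the file's original relative path; restore only the files you need.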
Known Limitations
CDH clusters using packages rather than parcels are not yet supported with this short-term fix.
List of CDH, HDP, HDF, and CDP Private Cloud Products and the Applicable Resolution