With AI, the rate of change has accelerated tremendously. And with increased productivity for good actors comes increased productivity for bad actors.
This article is written for security leaders responsible for web applications, but enthusiasts seeking to learn about web security will find it useful too.
TLDR:
- AI use in social engineering will grow substantially.
- Client-side script attacks will grow through the use of socially engineered credentials.
- AI agents will change the risk model for eCommerce.
- AI-agentic friendly fraud will take center stage: "I didn't do this, my AI agent might have."
- The npm supply chain will continue to threaten the free world, and the root cause will not be addressed.
Here are my predictions for 2026 in the context of web security.
What we’ll be watching for in 2026
- The first major AI-agent refund abuse story
- The first large Google Tag Manager breach story driven by social engineering
- Regulatory responses to the question of agent-use responsibility
- More Node Package Manager (npm) incidents
Why 2026 will look different from past years
AI adoption has grown substantially, and AI itself has gotten a lot better. The pressure on businesses to adopt AI is high, but not every business has the competencies to use AI securely in its applications. That pressure means half-baked AI integrations keep shipping anyway. Vibe-coded tools are everywhere, and basic security concepts aren't being followed. This is particularly problematic for SaaS-style vibe-coded tools that serve user information.
AI abuse is ramping up at an unprecedented rate. This will cause security incidents across the board.
AI-fueled social engineering
The weakest link in any system is often its easiest-to-fool human beings. Social engineering has long been one of the most effective attack methods bad actors use. And today, it is getting easier.
Back in May, when we published the news about attempted North Korean infiltration of our business, the world got an aggressive reminder of the lengths bad actors will go to in order to infiltrate businesses. With AI, the ease of sending humanlike custom responses makes massive spray-and-pray attacks easier than ever before. You name it, they will do it.
Deepfake-powered phishing
- Deepfake videos of managers will come into play.
- AI-generated audio messages of coworkers will be more convincing than an email.
Through deepfake audio generation, a voice message may sound like your boss asking you to reset his password right now, when it's actually an adversary. Bad actors will try, and your teams will have a harder time telling real from fake than ever before.
What you can and should do is create a culture where emotional pressure isn't normalized and instead sounds out of character. Bad actors will use emotional pressure to bend targets into taking actions they otherwise wouldn't. Give your staff a safeword: a word known only to the two of you, used to verify legitimacy. Change it from time to time. And treat it as only one layer in the defence system.
Major client-side injection attacks through maliciously obtained access
Bad actors profile specific individuals to gain access to specific systems. Marketing teams are known to be particularly easy targets, as they often have less security-conscious reflexes against social engineering. Through targeted attempts against marketing teams and consulting agencies, Google Tag Manager access can be gained, which allows code to be snuck onto web applications.
When a bad actor gains access, their goal is to make small changes with big impact. Flying below the radar is in their best interest. So by approaching non-engineering teams and going through Google Tag Manager, instead of making major server-side changes where a hundred tools keep a close eye and Git code reviews may get in the way, they increase the odds of a successful attack.
Every organization already has a range of technical controls in place to protect the server side and its open source dependencies, but almost none have any grip on what executes client-side.
I expect a handful of large incidents where adversaries, through insider threats or social engineering, target marketing staff with access to tag managers.
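Blunting this doesn't require exotic tooling. A strict, nonce-based Content-Security-Policy limits what an injected tag can do even when tag manager access is compromised. Here is a minimal sketch in TypeScript with Express; the allowed hosts are illustrative, and a real GTM setup will need tuning (for example with 'strict-dynamic'):

```typescript
// Minimal Express middleware applying a nonce-based Content-Security-Policy.
// Even if an attacker gains tag manager access, a strict CSP constrains which
// hosts injected scripts can load from or send data to. Hosts are examples.
import crypto from "node:crypto";
import express from "express";

const app = express();

app.use((_req, res, next) => {
  // Fresh nonce per response; only scripts carrying it may execute.
  const nonce = crypto.randomBytes(16).toString("base64");
  res.locals.cspNonce = nonce;
  res.setHeader(
    "Content-Security-Policy",
    [
      `default-src 'self'`,
      `script-src 'nonce-${nonce}' https://www.googletagmanager.com`,
      // connect-src is the exfiltration chokepoint for injected code.
      `connect-src 'self' https://www.google-analytics.com`,
    ].join("; ")
  );
  next();
});

app.get("/", (_req, res) => {
  // Templates must stamp the nonce onto every legitimate <script> tag.
  res.send(`<script nonce="${res.locals.cspNonce}">/* bootstrap */</script>`);
});

app.listen(3000);
```

CSP won't block a malicious tag outright, since the tag manager itself stays allowed, but connect-src sharply limits where injected code can exfiltrate data to, which is usually the attacker's real goal.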
Agents will change the commerce experience we know today
E-commerce already sees AI agents as the next opportunity to make more money out of transactions. AI agents just want to click next and finish the task, so when brands add pre-selected cross-sell services or goods, the agent will buy them automatically. Merchants will appreciate that extra revenue.
The problem is that the urge to sell more will also unlock a new wave of refunds and chargebacks. On top of that, the agentic infrastructure your legitimate shoppers use is also going to be used by bad actors.
There is uncertainty about who is liable when an agent makes a wrong purchase or accepts an add-on service during checkout. The customer may file a dispute, but if it was their agent acting on their behalf, is that a legitimate claim?
So detecting bot infrastructure by itself is no longer the point. The actions you allow are, and much more advanced AI detection tooling will be needed to counter abuse.
The legacy separation of automated from human behaviour simply doesn't make sense anymore. The old concept of 'all bot traffic is bad except Google for SEO' is now officially dead. You want automated traffic on your site, because your competitors will allow it, and otherwise the business will go to them.
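What action-level control could look like in practice: rather than a binary bot-or-human gate, checkout endpoints apply policy per action. A minimal TypeScript sketch, assuming the agent declares itself; the action names and the declared-agent flag are illustrative, and real deployments would verify agent identity cryptographically rather than trust a self-declaration:

```typescript
// Per-action policy for automated agents instead of a binary bot/human gate.
// Action names and the isDeclaredAgent flag are illustrative assumptions.
type CheckoutAction = "add_to_cart" | "purchase" | "accept_addon" | "request_refund";

interface PolicyDecision {
  allowed: boolean;
  requireExplicitConfirmation: boolean;
}

function decide(action: CheckoutAction, isDeclaredAgent: boolean): PolicyDecision {
  if (!isDeclaredAgent) {
    return { allowed: true, requireExplicitConfirmation: false };
  }
  switch (action) {
    case "add_to_cart":
    case "purchase":
      // Welcome agentic buyers: blocking them sends the business to competitors.
      return { allowed: true, requireExplicitConfirmation: false };
    case "accept_addon":
      // Pre-selected cross-sells are exactly where disputes originate,
      // so force the agent to surface the add-on to its principal.
      return { allowed: true, requireExplicitConfirmation: true };
    case "request_refund":
      // Refund abuse is the predicted hot spot; route to human review.
      return { allowed: false, requireExplicitConfirmation: true };
  }
}

// Example: an agent accepting a pre-selected warranty add-on.
const decision = decide("accept_addon", /* isDeclaredAgent */ true);
console.log(decision); // { allowed: true, requireExplicitConfirmation: true }
```

The point of the sketch is the shape of the policy, not the specific rules: automated traffic is welcome, but the riskiest actions get extra friction instead of a blanket block.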
NPM supply chain will continue to cause severe damage
The backstory here is rather simple but not widely known enough.
Microsoft acquired GitHub. GitHub had previously acquired Node Package Manager (npm). Microsoft has a reputation for underfunding security organizations. In fact, a large chunk of the cyber security industry exists because Microsoft didn't tighten security on its own products.
Imagine a grocery store chain distributes a few chickens carrying some nasty bacteria, and a few shoppers become sick. The stores will be shut, there will be nationwide outcry, the chain will suffer long-lasting reputational damage, and the financial impact will be huge.
But when Microsoft underfunds the security of the biggest open source registry, to the point that over 100 startups have built security layers on top in an attempt to counter the risk (some, like Snyk, have existed in this space for over a decade), and that underfunding leads to websites with hundreds of millions of users getting hacked and hundreds of millions of users' data getting leaked, nothing really happens. The company that got attacked gets all the bad press. Microsoft somehow never gets in trouble for it.
npm is by far the most insecure registry, riddled with new malware every day. Some of it is more aggressive than the rest, and some of it lands inside major open source dependencies with hundreds of millions of downloads each week. The open source community used to run on trust, but with npm operating at global scale it has been weaponized by bad actors, and the party most able to do something about it manages to avoid accountability.
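There is at least one mitigation fully under your control: most npm registry malware executes through install-time lifecycle scripts, so installing with `npm ci --ignore-scripts` and auditing which dependencies declare such scripts removes a large chunk of the exposure. A minimal Node/TypeScript sketch of that audit, a heuristic rather than a guarantee:

```typescript
// Heuristic audit: list installed dependencies that declare install-time
// lifecycle scripts -- the most common execution vector for npm malware.
// Pair this with `npm ci --ignore-scripts` so nothing runs by default.
import { readdirSync, readFileSync, existsSync } from "node:fs";
import { join } from "node:path";

const LIFECYCLE = ["preinstall", "install", "postinstall", "prepare"];

function scan(nodeModules: string): void {
  for (const name of readdirSync(nodeModules)) {
    // Scoped packages (@scope/pkg) nest one directory deeper.
    if (name.startsWith("@")) {
      scan(join(nodeModules, name));
      continue;
    }
    const manifest = join(nodeModules, name, "package.json");
    if (!existsSync(manifest)) continue;
    const pkg = JSON.parse(readFileSync(manifest, "utf8"));
    const hooks = LIFECYCLE.filter((s) => pkg.scripts?.[s]);
    if (hooks.length > 0) {
      console.log(`${pkg.name}@${pkg.version}: ${hooks.join(", ")}`);
    }
  }
}

scan("node_modules");
```

`--ignore-scripts` can break packages that genuinely need a build step, so the audit output doubles as the allow-list of scripts you re-enable deliberately.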
Rethinking consolidation: half-baked security solutions will leave companies vulnerable
A lot of legacy vendors will ship half-baked AI solutions, or slap an MCP on top of something legacy, and inadvertently create either a security incident or a really subpar product experience. This has already been happening, but the rate of incidents will only increase as the top-down pressure to add AI functionality to existing products grows.
Businesses have incentivized executive buyers to consolidate vendors. Over the past year or so that has been changing, as businesses start to recognize the differences in quality, especially with the integration of AI into products.
LLM-driven misinformation driving businesses to unsafe practices
We've all seen the old posts where AI would suggest eating rocks is healthy. As humans we know this is absurd, but when an LLM says it in an objective-sounding tone, you may think about it longer than you really should.

Commercial pressure drives marketing fluff, and unfortunately LLMs repeat it, often without confirming its legitimacy. That creates risk, because the noisiest claims get the most attention, and the noisiest are not always the ones looking to really help.
We are already seeing signals of this. Recently, for example, a vendor in the web security space claimed on their own site that they are 20-50% more likely to detect dynamic client-side attacks. The catch: they are a periodic static scanner. They simply cannot catch dynamic attacks, because they only scan every couple of hours.
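To make the distinction concrete: a periodic static scanner fetches and inspects pages on a schedule, while detecting dynamic attacks requires observing the live DOM in visitors' browsers. A minimal browser-side sketch of the latter using MutationObserver; the /csp-report endpoint is a hypothetical reporting target:

```typescript
// Browser-side sketch: watch the live DOM for script elements injected at
// runtime -- the class of dynamic attack a periodic static scan cannot see,
// because it happens between scans and often only for targeted visitors.
const observer = new MutationObserver((mutations) => {
  for (const mutation of mutations) {
    for (const node of mutation.addedNodes) {
      if (node instanceof HTMLScriptElement) {
        const src = node.src || "inline";
        // Report to your own endpoint for review; never just trust the page.
        navigator.sendBeacon("/csp-report", `script injected at runtime: ${src}`);
      }
    }
  }
});
observer.observe(document.documentElement, { childList: true, subtree: true });
```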
The result is that LLMs simply index the misinformation as truth, even when it's absurd and basic human reasoning would tell you it isn't.
There isn't really a solution available for this today, beyond hoping humans double-check claims and think critically about what LLMs amplify their way.
Treat LLM-generated security advice as marketing, not as documentation.