It’s been over a year since I started experimenting with AI to create tooling. The first tool I made was fairly simple: it took a Kubernetes YAML manifest and offered simple security guidance. In February of 2025, I sent a video to a friend with the message:

“So uh. I’ve been an AI skeptic for a while but claude [sonnet] 3.7 is literally insane”.

The video in question:

Baby’s first vibecoded app. How cute.

Fast forward to nearly April 2026, and the majority of conversations I’ve had recently have centered around AI. Many are demonstrations of techniques or concepts for better puppeteering these models to get the desired output. Many showcase skills, hooks, MCP servers, context-window management, and persistent memory. It’s all very exciting, but take a day off and you risk missing the newest model release or 72 Claude Code updates.

Taking a day off in 2026 feels akin to taking a full year off in 2016.

For every conversation I have about unique projects people are creating, there are a dozen other conversations I’m having with people who feel extremely uneasy about all of this.

In the beginning, the conversations were about AI being no more than fancy autocomplete (a phrase I’ve long disliked) and LLMs being useless for any serious security work because of hallucinations. In retrospect, those criticisms were not totally warranted, but I understand how someone who isn’t spending a significant amount of time using these tools could come to that conclusion, and early models were plagued with problems.

Today, almost everyone (in this wonderful tech bubble I exist in) has moved to a different level of anxiety about AI. The anxiety now is that it’s too much. Too much to keep up with, too much to compete against, too much to learn, too much to bid against, too much to ask of one person to drop everything they’re doing to “learn AI”.

Cloud computing is the closest comparison. It was a colossal industry shift that changed how businesses do business. It dramatically shifted which skills are relevant and added a new domain that security professionals had to spend their already limited bandwidth on. The ramp-up time companies and individuals had to acquire these new skills was measured in decades. Amazon S3 launched 20 years ago last month. There was a decade of gradual build-up that allowed people to learn about this new technology at their own pace. If you didn’t pay attention for a day… nothing happened. You didn’t miss anything. You weren’t afraid you’d miss 22 releases from AWS tweaking how S3 worked. Today we are not so fortunate to have a two-decade ramp-up time for AI tooling. We have what feels like two weeks, and that’s being generous.

Cloud computing also did not affect every industry instantly. In 2006, a lawyer did not know what S3 was. An accountant was not worried about S3 having serious consequences for their job role. A security engineer was probably hired precisely because their company was using S3 buckets and no one understood them. AI is different. Every day the big three AI firms push out tools that compete directly with law firms, cybersecurity companies, and web designers. All at once.

I don’t have a solution here. Cybersecurity has always been a winner-take-most field, where the best practitioners have far more job opportunities than those just working their 9-to-5. AI seems to be supercharging that dynamic. Those who can spend the time, money, and mental effort to invest in deeply learning these new tools seem to be pulling away from their peers.

No one seems to know the implications of this, but it’s one of the reasons I’ve been spending an excessive amount of time for the past year deeply understanding these tools.


At least according to my git commit history…