If AI Can Break Chips, Are We Still Safe?


This is getting uncomfortable in a way that’s hard to ignore.

If AI can genuinely find vulnerabilities down at the chip or kernel level, then a lot of our security assumptions start to feel fragile. Not broken overnight, but quietly less trustworthy than we like to admit.

There are claims that researchers used Anthropic's Mythos model to uncover a macOS kernel memory corruption exploit on Apple's M5 chip. If true, it's not just "a vulnerability." It's a demonstration that even heavily engineered hardware assumptions might not hold up against AI-assisted exploitation.

First public macOS kernel memory corruption exploit on Apple M5

Video of exploit

We usually think of “secure servers” as meaning our data is safe. Encrypted, isolated, protected by layers of hardware and software. But that trust chain only holds if the underlying system behaves as expected. If those lower layers can be probed, understood, and broken faster than we can fix them, then the whole idea of “safe” starts to feel conditional.

And that’s the part that’s alarming.

Not that everything is compromised today, but that the boundary between a safe system and an exploited one might be shrinking to almost nothing.

If machines can find the cracks faster than we can patch them, then the question quietly changes from “are we secure?” to “how long have we already not been?”