I remember clearly how it went when we started deploying machine learning for detecting new malware. The toughest questions came from the strongest virus analysts. And it made sense: the technology did make mistakes, looked unconvincing in places, and clearly fell short of manual analysis.
But over time it became clear that at the early stage, two things matter more than the current shortcomings: the technology's core strength and the pace of its improvement.
With ML in detection, that's exactly how it played out. Eventually it took over processing the vast majority of threats, which didn't diminish the role of manual expertise; if anything, it did the opposite. Freed from the routine of handling mass threats, analysts focused on the hardest problems: new attack vectors, countermeasures, and training the system itself. And ML's weaknesses were offset by the ecosystem built around the technology: expanding datasets, false-positive management, and other mechanisms.
There were also cases where it went the other way. I remember a period when people tried to push neural networks into security too early. Back then it wasn’t clear where exactly they would deliver practical results. To be honest, none of us really saw it. So the skepticism was justified.
Since then, I've been evaluating new technologies by different criteria. Do they make mistakes early on? Almost always, yes. But what matters more is whether they already have a clear core strength, and whether their pace of improvement is fast enough that today's weaknesses will eventually stop being decisive.
With GenAI, a very similar story is unfolding right now. Its core strength is becoming increasingly clear: it keeps getting better at turning human intent into working results, autonomously using data, code, and digital services. And the pace of progress is such that the range of practical applications continues to expand rapidly. I'll share my observations on this dynamic in an upcoming post.