ShipTalk - SRE, DevOps, Platform Engineering, Software Delivery

CTO Predictions for 2026: How AI Will Change Software Development (with Harness Field CTO Nick Durkin)

By Harness · Season 4, Episode 7

In this special predictions episode of ShipTalk, host Dewan Ahmed (Principal Developer Advocate, Harness) sits down with Nick Durkin, Field CTO at Harness, to unpack what’s actually coming in 2026—beyond the hype.

They explore whether we’re heading toward the first AI-caused production meltdown, how much trust we should place in AI "confidence," and why many teams may face a wave of AI-driven tech debt before they find balance. Nick shares why the future isn’t about more tools or more gates, but about policy in the pipeline, guardrails instead of roadblocks, and teams finally operating with a shared rulebook.

The conversation also dives into:

  • Why DevSecOps may finally go mainstream in 2026
  • How MLOps, agents, and prompts become first-class delivery artifacts
  • The real risks behind prompt injection, model tampering, and shadow AI
  • Whether AI agents become coworkers—or chaos
  • What developer experience looks like when IDEs start to feel like chat windows
  • How engineers and teams can stay employable and relevant in an AI-native world

This episode is pragmatic, optimistic, and grounded in real-world delivery experience—perfect for engineers, tech leads, architects, and executives navigating the next phase of AI-driven software delivery.

Listen in for a CTO’s clear-eyed predictions on AI, DevSecOps, and what it really takes to ship safely at speed in 2026.

00:00 Nick Durkin: Most of the people in the world play football with their feet. To be fair, that’s the football I at least believe should be called football; it’s at least the one that most resembles it. Right? In America, we play with our hands—God knows why we call it football. Zero clue.

00:14 Nick: And then in Australia, they bump it, and they kick it, and they jump all over each other, and it's an amazing spectacle to watch, but also, you know, partially football, but a lot of other things too. So, I think what’s interesting is that’s our software teams, by the way. They all think they’re playing the same sport.

00:29 Nick: They all think that they’re trying to do the same thing, but then, you know, ask yourself: why can’t we switch teams so quickly? Why can’t we go between projects? Why is it a huge learning curve? It's because we’re not playing the same game. And it’s not until we actually have that same rulebook, the same policies, the same things that actually allow us to know, like, you know, what’s a score worth? How big’s the pitch? Like, where’s offsides? Until we have that, it’s hard to actually go fast; it’s definitely hard to switch games.

00:54 Nick: And so I think that’s actually what’s going to allow us to achieve. So, I know that was a long way of answering a very short question, but I think the idea is that if we can bring that together—make sure we’re all playing the same game together with security, not against or for, but with—I think that’s where we actually achieve the velocity that we need, and we’ve seen it time and time again with our customers.

01:14 Dewan Ahmed: Hello everyone! Welcome. Good morning, good afternoon, good evening—time-appropriate greetings. This is your host, Dewan Ahmed, of the ShipTalk podcast, where we talk about the ins and outs, ups and downs of software delivery. This is a very special episode. We’re going to talk about 2026 predictions, and I couldn’t have asked for a better guest. This person has been ahead of the curve more times than I can count. It’s my privilege to welcome Harness Field CTO, Nick Durkin. How’s it going, Nick?

01:45 Nick: Oh my god, thank you for having me on. I appreciate it. I know you’ve had an amazing set of people on this podcast, so I feel honored to be part of it. So, thank you for the phenomenal intro.

01:54 Dewan: We’re really looking forward to your spicy takes. And, who doesn’t love to talk about the future, right? Predictions. So let’s start with the first spicy one, Nick: Are we going to see our first AI-caused meltdown in 2026?

02:08 Nick: It’s funny when we think about an AI-caused meltdown. What I see probably happening is, you know, whether it be generative code or whether it be AI-assisted development, something that actually causes people to skip steps and to miss some things—and honestly, now we’re sitting here pointing a finger, and who are we pointing the finger at?

02:24 Nick: And I see it almost like when you see the Waymos that get stuck in the middle of the street and they don’t know what to do. And unfortunately, you know, whether it's human interaction or what have you to fix this problem—like, this causes larger issues. Most of the time it’s absolutely fine, but I think it’s preparedness and actually working towards: how do we handle those situations?

02:42 Nick: And I think if we do it the same way we have for everything—you know, come prepared for the success and the failure—I don’t see it being a huge issue. But I’m sure there’ll be someone who has put a little too much weight on AI and seen this kind of take it down the wrong path. Does that make sense? Is that fair?

02:59 Dewan: It totally makes sense. And we always hear, like, the confidence score of AI. So Nick, how confident should we be in AI confidence?

03:07 Nick: That’s funny. I think I actually heard it best—and I can’t... I want to attribute it accordingly, and I want to say it was the CEO of NVIDIA, but don’t quote me on this one. But I think, you know, what I find is that we look at this idea and this concept of like a singular AI often.

03:21 Nick: And the reality is, just like how we operate our day-to-day lives, you know, it’s not just me doing everything, right? We work together, right? This is why we have different professions, different pieces. And so what I think we’re going to see is like a multitude of different AIs, and one’s designed to help actually define confidence and help, like, guarantee it.

03:38 Nick: So you’re going to have people almost like, "Who’s watching the watcher?" And the idea is that people are going to create tools to handle those next tasks. So, if we simply pushed down the task of now making humans just check everything that AI does—well, that’s not very fun. That doesn’t, you know, cause excitement for the job.

03:52 Nick: So I think what we’ll see is a multitude of different AIs leveraged for different tasks to kind of gain that confidence. So, triangulated, if you will. That way it doesn’t necessarily have to come from a human every time, but if we can triangulate that amongst multiple outputs—just like anything else that we’ve done in history—that’s really what gives us confidence, is when you can triangulate.

04:10 Dewan: I totally agree. I just talked to my previous guest on this podcast—he’s a RAG consultant—and we were discussing how retrieval is one of the bottlenecks, not just the model itself. And on top of all the layers, there can be an audit layer: now that we’re getting a response, that layer will ask how confident you are in the response.

04:31 Dewan: So let’s switch gears. Let’s say AI is doing wonderfully; it's creating a ton of code. Managers are happy, CTOs are happy. But then the tech debt: Will 2026 be a year of rescue engineering or recovery engineering from this massive pile of code—some of it works, and some of it we don’t know?

04:52 Nick: It’s interesting. We’re actually seeing teams being hired as, like, AI cleanup crews. And I know that’s an interesting concept, and I think it’s... I don’t think that’s going to last, if I’m honest. I think there’s a subset—we went in, we dove deep, we went quick, we innovated fast, and now we might have, you know, gotten a little out over our skis.

05:07 Nick: And so I think the idea is that, sure, there are going to be some cleanup crews. But I really think what we’re going to see is finding that harmony. It’s like when any new team member joins a team: they’re exceptional in certain areas, and you’re trying to find out how they fit. And I think that’s really what it’s about this year.

05:21 Nick: So in order to gain the velocity, though, I think one of the things that we’re seeing people do like crazy is no longer, you know, trusting in everything. And I think, you know, for the longest time we’ve talked about "automate everything." Like, this has been since what, the last 12, 15 years—I mean, god, as long as I’ve been dealing with this.

05:37 Nick: But the reality is we never did. We still had manual processes in place. We had certain reviews, we had certain things that we felt were too complicated. We had change advisory boards that, honestly, could have just been policy. And so I think what we’re finding people doing now is saying, "Okay, in order to gain the benefit, we actually have to put policy in place."

05:52 Nick: We have to make sure that all of the checks, the measures, the controls are in the pipeline, so that whatever goes through it meets it. So whether it’s written by a human, whether it’s written by AI, whether it’s a combination—it doesn’t really matter. We’re going to make sure that all the standards, all the quality, all the security standards are met.

06:06 Nick: And do that in a way that’s not mean, that’s not pointing fingers—it's genuinely like playing a video game. Like, it should be—and I know it’s a silly one—but like, it should be like playing Mario. Right? The first time you run into the turtle shell you die—I think that’s like a critical vulnerability. "Okay, go fix the critical vulnerability, jump over that." Next one’s the flower—"Ah, we’re not resilient." Okay.

06:24 Nick: So just keep learning through that level. And really, that’s what it’s about. It's about informing people what’s wrong, what’s going on, what’s needed by the business—whether it be for regulatory reasons, for security reasons, just for the business’s external view to its customers—whatever it is.

06:38 Nick: And now, once you get done and you’ve finished all those pieces, you’ve documented everything because it’s all done through the pipeline—not you, the pipeline’s documented it—now, now you know it’s ready for production. Now you know it’s met all of the things that we’ve tested for. And then it doesn’t matter whether it’s written by AI or whether it’s written by a human—it’s genuinely following that same practice. It kind of gives us that ability to fail fast. And I think that’s where people are headed: we’ve got to change the way that we’re doing it, because if we don’t, I think it’s going to cause a larger problem.

07:06 Dewan: I can already picture a talk or a joint blog we’re writing: "AI-Native Software Delivery: Like Playing a Game of Mario." I think that's a talk I'd like to go to!

07:16 Nick: I mean, you know, it... I’ve actually taken it like down to each one of the levels and like finishing it and like getting through to finishing Bowser—now your artifact’s ready for production. I’ve taken it a little farther than I probably should sometimes, but I think that should be our goal.

07:30 Nick: Like, nobody wants to just be told, "No, I can’t." They want to be able to be given the information. I think even look at the way that Google does it. So, if you ever use Google Maps, it doesn’t say, "Hey, you went the wrong way, you idiot." Right? It says, "Ah, okay. Well, maybe we can get there this way."

07:44 Nick: Right? "Let’s readjust. Let’s pivot. Yeah, we can still reach the goal, but let’s go left instead of right, and we’ll take a different freeway." Like, I think the idea is: let’s not beat up our engineers over anything. Let’s not cause rifts between them. In fact, let’s cause harmony amongst them. Let’s bring them all together in a unified place where they can all achieve together. I think that’s really the goal.

08:03 Dewan: Mario needs Luigi! We win as a team. We win as a team. All right, next, switch gears: Is 2026 the year when DevSecOps finally goes mainstream? Nick, does security become one of those things that we add at the end, versus something that we bake into the platform?

08:21 Nick: I think for too long we’ve tried to, you know, shift it left, we’ve tried to shield it right, we’ve tried to have it a bolt-on in too many different areas. And, you know, we even came up from DevOps to DevSecOps to bring it as part of the team, but it’s always been still separate and segmented.

08:36 Nick: And the teams that I see doing it the best are truly teams that operate in harmony. And I know that’s like a very hippie way to put it, but when we can operate as a team doing the tasks and the roles—again, if I’ve got different AI or if I’ve got different teammates, they’re phenomenal at certain things, let’s empower them to do what they’re phenomenal at.

08:52 Nick: But let’s do it in a way that isn’t, you know, buying a tool to go beat people. Right? You know, for too long engineers have been beaten by security tools or by finance tools. Infrastructure teams have been just, you know, beat by finance tools. And so instead, I think what we’re going to see—and what we are seeing, specifically in all of our customers that are flying and delivering software at rapid and breakneck pace—is that it’s not an afterthought.

09:14 Nick: It’s not just shifted left to the engineers—it’s not like, "Ah, here, you do all the work." What it really is, is about sharing that information when it’s the right time. It’s about making sure that I know how to fix a vulnerability now, not two weeks later. I know the vulnerability even exists now, not two weeks later. I understand what the blast radius is to it, how much it could affect the business—that’s going to help me even determine what needs to happen now.

09:36 Nick: I think it's about bringing the information left—so that’s the most key, important piece—and then it’s not about security being the roadblock where everything is "No," where a server is most secure if it’s off. I lived in a world, you know, operating critical infrastructure for the United States where that was the starting point of every conversation. It’s like, "Okay, it’s off. Now, why does it need to be on? What’s the risk profile?" And it started there.

09:56 Nick: And I think that’s a little extreme. I think now, it’s actually about operating together. So like, let’s help each other write rules, let’s help each other write those policies. You know, let’s make sure that people know why we’re doing things. I think the other part, for engineers: complex things are okay to be complex as long as people know why.

10:11 Nick: The issue is oftentimes that we don’t know why we’re doing this thing. And so the first thing I do is skip it, because it’s just taking a ton of time. And so I think having that intercommunication, being together, being part of one unified team—to your point, is this truly the year? It has to be.

10:24 Nick: If we want to get to true "automate everything," if we want to get to a point where we can actually get AI-generated code out to production in a rapid timeframe, it cannot be two separate groups. It can’t. Right? Because then there’s bottlenecks, there’s waiting—all these things. There needs to be a way, again, security can write the policies, they know nothing goes out that isn’t secure, and then engineers know the rulebook.

10:44 Nick: You know, too often at software companies, I think we all think we’re playing the same game. I’m going to use another horrible analogy. I apologize. I’m going to use a few of them today. But I think, you know, everybody thinks they’re playing football. And I know everybody hates football analogies, so just stick with me. Right?

10:58 Nick: Most of the people in the world play football with their feet. To be fair, that’s the football I at least believe should be called football; it’s at least the one that most resembles it. Right? In America, we play with our hands—God knows why we call it football. Zero clue. And then in Australia, they bump it, and they kick it, and they jump all over each other, and it's an amazing spectacle to watch, but also, you know, partially football, but a lot of other things too.

11:21 Nick: So I think what’s interesting is that’s our software teams, by the way. They all think they’re playing the same sport. They all think that they’re trying to do the same thing, but then, you know, ask yourself: why can’t we switch teams so quickly? Why can’t we go between projects? Why is it a huge learning curve? It's because we’re not playing the same game.

11:37 Nick: And it’s not until we actually have that same rulebook, the same policies, the same things that actually allow us to know, like, you know, what’s a score worth? How big’s the pitch? Like, where’s offsides? Until we have that, it’s hard to actually go fast; it’s definitely hard to switch games. And so I think that’s actually what’s going to allow us to achieve. So, I know that was a long way of answering a very short question, but I think the idea is that if we can bring that together—make sure we’re all playing the same game together with security, not against or for, but with—I think that’s where we actually achieve the velocity that we need, and we’ve seen it time and time again with our customers.

12:12 Dewan: Beautifully said. Next segment is: Does MLOps finally break out of the basement? Does MLOps finally get added to the main software delivery pipeline in 2026?

12:22 Nick: I think when we look at MLOps, we look at agents, we look at even prompts—all of this is really just artifacts being delivered to production. And they need to follow not the same delivery methodologies, but similar ones. Like, you know, we’re going to test an application—a container—differently than we’re going to test a model.

12:42 Nick: We’re going to test a lambda very differently than we’re going to test an agent. But they’re still going to go through a deployment, a testing phase, right, a validation. We’re going to want to make sure they’re performant. You’re also going to want, you know, security scanning. We want to make sure they’re not creating vulnerabilities.

12:56 Nick: So the patterns that we see in any software delivery—whether it’s Tomcat, whether it’s WebSphere, whether it’s Pivotal, containers, you know, serverless, you name it—the similar things follow true for deploying ML models, for deploying agents, for deploying prompts.

13:11 Nick: And I think the reality is we don’t want different systems and different tools to do it. We want different tests to happen. We want different, you know, results to be analyzed. We want different, you know, set of variables. But the reality is the way it gets to production, the way it gets tested is extremely similar.

13:26 Nick: And so, you know, the request from a lot of our customers has specifically been, "Why would I want another tool to do that? Why couldn’t I do that with, you know, a phenomenal execution engine that can execute the tests, that can guarantee that the security scans happened, that can guarantee the performance or the validity—making sure that we’re getting the appropriate results when we do change models, right?"

13:46 Nick: And so I think what’s actually more impactful is having that—again, all still playing the same sport. Not a different pitch, not a different area. And I think, you know, really the benefit is that it’s not 17 different tools in 17 different places. This is how we deploy software. And that software can be models. That software can be databases. That software can be infrastructure. It can be agents. And I think that’s the benefit: it doesn’t have to be different places for different things. We can do it all inside of one area.

14:14 Dewan: I’ll be playing devil's advocate here—well, I am a developer advocate. So now teams are actually facing a different threat landscape this time. Of course they have their usual integration challenges—as you’re saying, with different people, if you tie them together they might not be able to function.

14:31 Dewan: But while they were historically familiar with SQL injection over the last two or three decades, now they have prompt injection. They have model tampering. So the threat landscape has also shifted. Doesn’t this add another layer of complexity on top of all the challenges they already have?

14:48 Nick: So I think this is where I’m even going to lean back on your previous question, which is like, "Is this the time for DevSecOps?" Throwing that over the fence and saying, "You, you secure my prompt"—I don’t think that’s a viable option. I think, again, this is where we have to work together.

15:03 Nick: And look, to me—and also, you know, if you look at Traceable and what we offer—like, whether it’s an API call, right, or a call to an MCP server, or someone trying to mess with your prompt, it’s the same thing. And it’s really being able to expose: is that trying to negatively impact your business?

15:21 Nick: And to do that in line, to protect you. And so for us, again, I’m following this idea that it’s the same patterns, just a modern version. Sure, instead of software maybe it’s models or agents or what have you, but it’s the same concept.

15:35 Nick: Instead of APIs, now it’s requests. And so I think looking at that data and understanding what it should be doing, right, understanding where those requests should be coming from—it's the same traditional type and sense of API security.

15:52 Nick: And that’s the way we’ve leveraged it. We want to make sure that whether it’s an API call or whether it’s, you know, a prompt where someone’s trying to maliciously gain access to your data or trying to have your models, you know, step outside the bounds of their RBAC—those are the types of things that we just stop cold, instead of allowing them to continue, because we can sit there in the middle.

16:13 Nick: It’s really, to me, not a larger attack vector; it’s just making sure that you’re prepared to protect it. Right? It’d be like—this is a new one, so I’m coming up with it on the fly—but it’s like, you know, if you built a guest house outside of your fence, right? It’s kind of the same concept.

16:30 Nick: You’d probably build that guest house inside your fence so that it has the same level of protection. And really, I think it’s fine as long as we think about it in that capacity—we don’t think, "Oh, it's just some new magical thing that can’t be hacked and people aren’t going to go after it." In fact, that’s the first place they’re going to go, because they assume people haven’t spent the time.

16:45 Nick: So, to me it’s just another attack vector, but it’s the same problems, the same challenges we had previously, and the technologies are there to help fix it, or at least to mitigate it and get early warnings about it, and so forth. So, I don’t see it as a huge burden, but I do see it as something we have to make sure that we’re looking at. Like, you can’t overlook it. Right? We have to be working hand-in-hand with security, not as an afterthought.

17:07 Dewan: Beautifully said. Let’s switch gears to the next topic, everyone’s favorite: AI agents—coworkers or chaos goblins? What responsibility are you going to give them in 2026? And a follow-up question: What breaks first—trust or the pipeline itself?

17:23 Nick: Ooh. Well, you know, we’ve seen these kinds of ideas—these agents going off the rails and doing certain things. But I am going to go back to kind of where I was a few questions ago, and I think when we start designing agents, my assumption—and just what I’ve seen here—is that, you know, originally I thought that agents would start mirroring and mimicking human behavior.

17:42 Nick: And I actually have an interesting take, right? Maybe totally different than the industry. I think humans are actually going to start behaving like agents. So we’re going to mirror that. What do I mean by that? When we build an agent, I don’t build a generalist that knows everything about my business, that can do everything.

17:58 Nick: Right? I build agents with specific tasks to do specific things—honestly, things that we can measurably know we hate doing, or that cost us trouble and time. And so when I go build that agent, I do it specifically for a specific task or a specific job. Why I bring that up is that I genuinely believe we should go build agents to handle all the things that we hate, right?

18:18 Nick: All the things we hate doing, all the things we want to put off till tomorrow—then that gives us time to do the things that we love. And so, to be fair, we ourselves might have that opportunity to go and only focus on the things that we’re great at. You know, when you look at statistics right now, I think what, 20-30% of the time a developer’s actually spending time on the things they love—solving really challenging problems, writing actual code, you know, really diving into it—versus, you know, all the minutiae of getting it delivered and security tested and fixing vulnerabilities and taking care of backlog.

18:50 Nick: So if we could wipe out that level of minutiae, of toil—if we could remove that part and allow people to focus on the things they love—well, now it gets exciting. Right? Because now, you know, when someone’s passionate about doing something, time is not a thing, right? Like, we’ll work until it’s done because we’re so excited about it.

19:07 Nick: The same way when we build an agent—like, yes, they’re computers and they’ll just operate until they’re finished—but so will a human when they’re doing the thing they’re excited about. And so I think it’s actually a huge opportunity, if we do it right, to not have this massive chaos of, like, agents fighting agents, but actually to build, you know, a team that you can operate the same way that you’d build a team today.

19:26 Nick: Because you have specialists for security, you have specialists for databases, you have specialists for infrastructure on purpose. Sure, we’ve got general knowledge because we have to stitch it together, and higher-level architects and so forth, but I really think if we do it right, now we get to do what we’re passionate about.

19:40 Nick: Right? We don’t have to spend the time doing the things we don’t, or if we find something that we don’t like doing, we can go build an agent for it. The second part of it, "What breaks first, the agents or the pipelines?"—was that the question?

19:51 Dewan: Yeah.

19:52 Nick: I think, you know, interestingly enough, we’re always breaking pipelines. And for good reason—we’re pushing the boundaries, right? Like, we’ll go break a pipeline and find that we needed, you know, new security, new features, new functions. And I don’t think they’re ever finished.

20:04 Nick: Just like an application—rarely does one just sit there unless somebody dies or it’s, you know, quit being maintained. There’s constant addition to it, which means there’s constant change. And so I think with both of them, that’s what we should expect: just a constant balance.

20:17 Nick: And I think you’re going to find one’s going to overtake and start doing too much or too little, and then again, you’ll go back and forth and back and forth, and you kind of A/B test everything. I don’t know if it’s going to be one or the other—I probably should have a hot take on which one’s first—but I think it’s just going to be a balance, like it is with everything else that we do. And we’ll adjust accordingly.

20:34 Dewan: The next segment is my favorite segment: developer experience in 2026—and again, our AI agents. So how much time will developers actually spend coding versus wrestling with their AI agents? And then a follow-up would be: Does our IDE turn into a chat window?

20:51 Nick: Interesting questions. I think—I'm going to answer them out of order. But I think, will our IDE become a chat window? I think the IDE should be a chat window for junior engineers. And why do I say that?

21:03 Nick: When you first come to a company—and again, I’m going to talk from my own experience—and there’s a huge codebase that exists, and reasons that things have been done, you know, oftentimes learning from the individuals that wrote it is either hard or dang near impossible—they might not even be there anymore.

21:19 Nick: But getting access to those senior-level folks to sit down and teach and operate and show you, you know, again, why things existed in the first place—why it was built that way, why that class was used—like, to have access to that level of information through an AI, to be able to ask those questions now and not have to wait, not have to get my 30 minutes once a month and all those different things—that’s valuable.

21:41 Nick: So I think for that, like, if we start using it appropriately, we can use it as a huge learning opportunity there. You know, I hope and pray as it goes on that, you know, human creativity doesn’t get dulled down to just a chat window, but I think it could be massively useful.

21:54 Nick: And I have seen it be massively useful for folks that want to come in and learn. That truly want to learn. Not think they know it all and go show why this person did it wrong 20 years ago, right? Have some empathy; understand it was built with the technology that was there when it was written. Sure, new things could exist, but that wasn’t the problem they were solving.

22:11 Nick: So I think if we go and look at it with empathy, we go understand the whys, that’ll help us create better. So, that’s with the chat piece. I think for the senior engineers, for folks who have been doing this a long time, who have been at a company for a long period of time—I think having an assistant remind us of even some of the things that we forgot early in school, like, you know, just the syntax of different pieces—that alone is helpful.

22:31 Nick: I think at the end of the day, we hire really smart engineers to solve really challenging problems. We don’t hire really smart engineers just because they can write code, right? Because they know Java, or Go, or, you know, whatever. I think the idea is that we know that they can sit there and extract out the true problems, you know, bring it into something that’s meaningful, and then get it out to customers. And I think that’s what we really want to empower people to do regardless. I forgot the first question—I went to the end one and forgot the first part of the question. So apologies.

23:01 Dewan: No no, all good. You almost answered it: Do developers spend more time coding or wrestling with their AI agent?

23:08Nick: Yeah. I think, look, it’s going to follow—and I know this is like the most basic answer—it’s going to follow the same pattern you have today. If you’ve been an engineering manager, right, and you have to go and review a whole bunch of people’s code, right? If you let them off and just write into the world to oblivion and like use things that aren’t standard and like send them at libraries that we don’t use, guess what? That’s a lot of back and forth you’re going to have to do.

23:31Nick: And it’s the same with AI. However, if we’ve gone through and we’ve taught and we’ve understood like what are the standard best practices and how they operate and how they’re enforced, now that problem gets less. And so I think the same way that I see engineers actually becoming engineering managers. They’re not managing maybe people, but they’re managing, you know, a slew of AI agents doing those tasks.

23:51Nick: So, the time that you put in to make sure that you get an output that’s right is going to be the same as when you work with an engineer. Right? Even to say that their way is wrong or right isn’t necessarily true; it’s just that there is reason, there’s history, there’s context that you have that has to be bestowed.

24:08Nick: The more time you spend giving that context to a junior engineer, to a new team member, to someone who’s been part of the company for a long time but is coming over to help with a project—the more you can give them, the easier it is. And so I think it’s the same, right? That same level holds true whether it’s an AI or a human.

24:25Nick: The more context we give it, the more we can train it, the more we can trust it, the less babysitting we have to do. But to be fair, if you just say "Go off and do"—I know what happens when I let that happen, even here at Harness in the early days: go off and do, no language barriers, no infrastructure barriers, go do anything.

24:41Nick: Well, now it’s built on entirely different infrastructure, it’s built on a different cloud than we use. Without parameters, it becomes hard to backtrack and bring it back in. So, you know, I know it was a long answer to a short question, but hopefully that makes sense.

24:52Dewan: Yeah. I think one thing we keep saying within Harness is: Guardrails, not gates. And that keeps coming up again and again; whether we talk about enterprise architecture or AI-native delivery, it’s guardrails instead of gates.

25:06Dewan: By the way, Nick, I know some of our listeners will hear you say that "Everyone’s now an engineering manager," and they’re going to update their LinkedIn: "I’ve been an engineering manager." Well, how many people are you managing? "Well, I’m managing a team of 12 AI agents." So folks, if you’re updating your LinkedIn profile, disclaimer: Nick was talking about AI agents, not an actual—

25:27Nick: That’s fair! That’s fair. No, look, I think it’s a new challenge. It’s a new skill set. You can still do it all yourself today—I’m not saying you can’t, so please, by all means. But I’m just saying that, in the same capacity where we can deliver a lot more when we get a team that operates well together, that focuses on the things they’re phenomenal at and uses each other’s skill sets—I think it’s the same thing.

25:47Nick: Yeah, not necessarily a LinkedIn change of title, but the role, at least in a lot of areas we’re seeing, is a real opportunity coming in 2026. I think you’re going to see people doing that. And "Guardrails, not gates" is one of my favorite things. Like I said, even with security, it’s not about saying "No."

26:01Nick: It’s like with Google Maps: "No, it’s fine. Okay, turn left. That’s okay. We can still get there if you go down that route. Let’s just take a different way." And I think that’s really what it’s about: giving people options, showing them why security believes that shouldn’t happen. And sure, there are a few things in the world of software delivery, especially for a regulated industry, that are "must haves." But for the most part, to your point, let’s put guardrails in place. Let’s make sure you can’t go off the rails, that you can’t go too far down the wrong direction, and keep you headed down the right path. That’s the goal.

26:30Dewan: I love that GPS example because it matches on so many levels, Nick. Because for a lot of people, it’s not about getting there fastest. It’s about avoiding some specific road—maybe some folks won’t drive on a toll road, or they avoid highways. Similar to that, our customers are not all the same.

26:46Dewan: Some have enterprise or compliance requirements; some just want to move fast. Some have specific cloud requirements; some have specific VMware infra requirements. So if you have just one path for them, it doesn’t work.

27:00Nick: I’ll be honest—I won’t say we made a mistake—but even in our iterations of how we came into the world with Harness, we came to that same conclusion ourselves. We started life saying, "Okay, let’s make it easy for people to do the right thing. Let’s give them standardized templates so they can do everything they want." Well, the problem, to your point, was I had to build a template for every possible path. That got really difficult really quick.

27:21Nick: Especially because you couldn’t handle the permutations; you couldn’t handle small deviations. And so we said, "Okay, well, how do we rethink that? Instead of just making it easy to do the right thing, what if we made it hard to do the wrong thing? What if we put a policy engine in place that said, 'Hey, you can’t do these things.' So, you know what, if you don’t want to follow the template—fine—but here are all the rules."

27:38Nick: Right? You have to have a security scan, you can’t have any net-new criticals, it has to be tested in a staging environment before it’s in production—whatever those rules are. And what we found is that people loved it, but they got back to the same problem they had previously: for every single application, they had a new pipeline.
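To make the idea concrete: the policy engine Nick describes evaluates every pipeline against a shared rulebook rather than hard-coding gates into each pipeline. (Harness’s actual policy-as-code feature is built on OPA/Rego; the plain-Python sketch below is purely illustrative, and the rule names and the shape of the pipeline metadata dict are hypothetical.)

```python
# Illustrative policy engine: each rule inspects a pipeline run's metadata
# and returns a violation message, or None if the rule passes.
# All names and the dict shape here are hypothetical, not Harness's schema.

def require_security_scan(pipeline):
    if not pipeline.get("security_scan_ran"):
        return "pipeline must include a security scan"

def no_net_new_criticals(pipeline):
    if pipeline.get("new_critical_vulns", 0) > 0:
        return "no net-new critical vulnerabilities allowed"

def staging_before_prod(pipeline):
    envs = pipeline.get("environments", [])
    if "production" in envs and "staging" not in envs:
        return "must be tested in staging before production"

POLICIES = [require_security_scan, no_net_new_criticals, staging_before_prod]

def evaluate(pipeline):
    """Return the list of policy violations for a pipeline run."""
    return [msg for rule in POLICIES if (msg := rule(pipeline))]

violations = evaluate({
    "security_scan_ran": True,
    "new_critical_vulns": 2,
    "environments": ["staging", "production"],
})
print(violations)  # one violation: the net-new criticals rule
```

The point of the pattern is that teams keep full freedom over *how* they build their pipelines; the policy layer only rejects runs that break the shared rules.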

27:54Nick: Sure, it was bound by the same policy, but that wasn’t going to work either, because it was too much maintenance to maintain all those pipelines; people weren’t using the templates. And so—this is over the last nine years—we came up with flexible templates.

28:06Nick: So here’s a template that has 80-90% of the things that you have to have—the required stuff, the security scans, the chaos engineering requirements for resiliency, whatever those are. Those are all built into the template, but you’ve given people flexible areas where they can add their own parts and pieces.

28:22Nick: And so now I can create a template that’s not rigid, but I can govern it to make sure people don’t step outside the bounds—they’re not going to export all of our secrets into a doc and ship it to themselves, because I can put that in the policy. So between the two, I’ve got flexible templates where people can inject their own stages, and I’ve got a policy that protects us.

28:38Nick: Now you can truly have the freedom, you don’t have the maintenance, and you have the guardrails, to your point. Sorry, I didn’t mean to go on a diatribe about this. It’s just one of the things where we’ve gone through an evolution, because we’ve taken those hard requirements and seen them at scale.

28:50Nick: And so to be able to deliver them, that’s what we’ve come to: give people flexible templates, have 80-90% of it baked in so they don’t have to think about it or put cognitive load into it, give them the freedom to do what they want, but put the guardrails in place to say, "Hey, don’t accidentally expose a secret. Don’t accidentally put this in that place. Don’t accidentally end up with a vulnerability that we shouldn’t have."

29:08Nick: And I think that’s what really helps people scale: when you can give them flexibility, but with guardrails.
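The "flexible template" Nick lands on can be sketched as a base pipeline with the governed stages baked in and named extension slots where teams inject their own stages. (This is a minimal illustration of the concept; the stage names and slot mechanism below are invented for the example, not Harness’s actual template schema.)

```python
# Illustrative flexible template: required stages are fixed, and teams can
# inject stages only at designated extension slots. Stage names and the
# slot mechanism are hypothetical.

REQUIRED_TEMPLATE = [
    "build",
    "security_scan",         # governed stage: teams can't remove it
    {"slot": "pre_deploy"},  # extension point for team-specific stages
    "deploy_staging",
    "chaos_tests",           # resiliency requirement baked in
    {"slot": "post_staging"},
    "deploy_production",
]

def render_pipeline(custom_stages):
    """Expand the template, filling each slot with the team's own stages."""
    pipeline = []
    for stage in REQUIRED_TEMPLATE:
        if isinstance(stage, dict):
            pipeline.extend(custom_stages.get(stage["slot"], []))
        else:
            pipeline.append(stage)
    return pipeline

# A team adds its own stages without touching the governed ones.
pipeline = render_pipeline({"pre_deploy": ["integration_tests"],
                            "post_staging": ["canary_analysis"]})
print(pipeline)
```

The design choice mirrors the conversation: the 80-90% that must always happen lives in the template, while the slots give each team its freedom, and a policy engine can still reject anything injected into a slot that breaks the rules.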

29:14Dewan: Totally. And on the topic of giving people what they want, I think one thing people want to hear—and this is the final topic of this special episode—is about staying employable in 2026. Teams, budget, and org chart. Nick, this is a very sensitive topic for a lot of people.

29:33Dewan: They’re seeing basically their entire dreams shattered—they’ve had these skills for the last few decades, and now suddenly they feel unskilled in this new era. What would be your suggestion to the engineering teams hiring for talent—how will AI reshape or shrink their teams? And also for engineers who are trying to stay employable in this market?

29:56Nick: It’s interesting—a lot of people say, "Well, we’re not going to need junior engineers anymore because the AI is going to be the junior engineer." So now you’ve got a whole set of kids about to come out of school thinking, "What’s going to happen?"

30:10Nick: And the reality is, regardless of whether AI is there or not, we need a constant influx of talent. Right? We need a constant ability to bring people into the company to grow, to do those things. And let’s play that theory out for a moment. Let’s say we don’t hire any more junior engineers. What happens in 10 years? Now there are no engineers, because the senior engineers are starting to retire and there’s no pool of new people doing the work.

30:33Nick: I think that becomes a huge challenge. And so to me, it’s a different way of operating. And I’m going to take this back—this is going to date me quite heavily, so I apologize—to when VMware first started becoming a thing.

30:48Nick: People used to rack and stack physical servers, and you’d have backup dial-in access so you could reach machines if you lost the network—all these things existed, and then VMware came and changed the game. And if we’d said, "Well, now you’re not going to need me because you can have 30 servers on one server—what are we going to do?"

31:07Nick: It was quite the opposite. Right? Demand exploded because of the new capability. And so sure, more code is going to be generated, more apps are going to exist, more companies are going to become software companies that weren’t before. That doesn’t mean we’re going to need fewer engineers.

31:21Nick: In fact, here’s one thing you should look at: if this were true, there’d be no hiring, no postings for a single engineer, because we’d be full up and wouldn’t need them. The reality is that’s not the case. There are a few examples at customers—and they’re harsh—like, "Hey, if anybody quits, we’re not going to replace them; AI can do their job."

31:38Nick: But the reality of what we’re seeing from our customers is how much more they can get done now that it’s there. Now that they’ve automated everything, now that they’ve got AI helping, now that they’re generating code faster, they’re solving more business problems. We’re not seeing them remove employees; we’re seeing them get more done.

31:54Nick: And so I think you can look at history to understand where this happens. Every time there’s a large change—you can look at it in farming, you can look at it in manufacturing. "Oh no, we’ve automated building cars with machines." If that fear had played out, there’d be no humans involved anymore, and there are quite a few. They’re just doing higher-level tasks.

32:12Nick: And I think that’s where we need to go: just understand that we’ve had these changes all throughout history. To me, this one is a rising tide. And again, just go back to history—every single time. Sure, there’ll be people who don’t want to follow through, who don’t want to continue with the modern technology—okay, I get it.

32:27Nick: But if you do, and you want to continue forward, the job’s not going to be the same, but there’ll still be opportunity. We’ve never seen it just collapse, where nobody’s working anymore. That has not happened in the past, and I don’t see it happening now.

32:40Dewan: Totally. If someone tells you, "We’ll just make a button for that"—someone still needs to make that button. So with that, Nick, we’ll end with optimism. But we’re not going to let you go yet! From that heavy mode, we’re going to switch to light mode. We’re going to go to rapid fire! These will be five questions, and we want your spiciest take on each. Don’t think too much—30 seconds, and you answer. Okay?

33:04Dewan: So the first one is, Nick: Is 2026 the year when AI is coming for Field CTO jobs?

33:11Nick: Ooh. Ooh, that’s a good one. I wish it would be—I need more Field CTOs. So that would be a great thing to have happen. I think the executive-level conversations that are happening right now are not happening with an AI bot yet, but I’ll be honest, if I could replicate a lot of my job to an AI, I wouldn’t mind doing it.

33:28Dewan: I can’t imagine multiple Nick Durkins! But there can’t be any hallucination, right Nick?

33:35Nick: That’s fair! But you know what? My dad told me things when I was a kid that I believed, and I look at that as the same as hallucination. In fact, you’ll hear me say now, whenever I give a statement I’m not 100% sure of: "Hey, this could be a hallucinating AI—just take that for what it is."

33:51Dewan: All right, next one: Most overhyped AI trend of 2026?

33:57Nick: Most overhyped trend of 2026: I think people will finally start realizing that they’re investing all of their time, energy, and effort in the inner loop of software, when most of the time spent is in the outer loop. And so there’ll be a wild correction, I believe, in where the actual money is being spent. Because if you can’t get anything out the door because you haven’t fixed your outer loop, it doesn’t matter how much you create in the inner loop; code is just raw materials if you can’t get it out the door. So I think that’s one we’re starting to see and will see come to fruition in '26.

34:25Dewan: Beautifully said. The next one: Most underhyped risk in 2026?

34:31Nick: The underhyped risk—and I know people talk about it—is where shadow IT comes into play, and it hurts you more than it did last time, when it was just spinning up servers. The amount of data that you can load into a model, into an agent, into different things without express written consent, without making sure that your data is protected—I think that’s going to be your biggest risk.

34:57Nick: Allowing truly unfettered access to any AI without commercial contracts could hurt your company very badly, and people aren’t doing it maliciously. It’s like how people would accidentally share credentials in GitHub and so forth—it was never done maliciously, but it can be used wildly to your disadvantage. And so I think that’s some of the fear we really have to focus on.

35:19Dewan: Totally. One prediction you’re confident in in 2026?

35:25Nick: I am confident that people are going to start leveraging agents appropriately, and that security will truly become part of DevOps, so it will actually be DevSecOps. Or more importantly, we’ll bring the teams together closer than they’ve ever been. And I see that happening because the only way to success is for that to happen, so I’m confident it will.

35:44Dewan: The final rapid-fire question, Nick: One prediction you hope doesn’t come true?

35:50Nick: One prediction I hope doesn’t come true... Thinking that AI becomes smarter than humans was a fear for a while, but now that I’m seeing Waymos get stuck in the street, I wouldn’t mind if they became a little bit smarter. I think that would be okay with me!

36:06Dewan: We’ll all be happy if we have fewer stuck Waymos! With that, Nick Durkin, our special guest for the 2026 predictions episode of the ShipTalk podcast, where we talk about the ins and outs, ups and downs of software delivery. If you want to connect with Nick Durkin, I’ll link his LinkedIn in the podcast description—and follow our podcast. Thank you so much, Nick, for joining.

36:31Nick: Thank you so much for having me. Can’t wait to be on again!