ShipTalk - SRE, DevOps, Platform Engineering, Software Delivery

ShipTalk Season 4 Finale: Engineering Excellence at AWS re:Invent

By Harness | Season 4, Episode 11



Welcome to the Season 4 finale of the Ship Talk podcast! Join special host Thomas Dockstader and several industry leaders at AWS re:Invent to discuss the intersection of AI and software delivery. The following is a series of interviews with partners, customers, and engineering leaders on the front lines of AI transformation.

Don't miss the "Ship It or Skip It" segment, where our guests give their rapid-fire takes on everything from AI code reviews to the four-day work week.

Connect with our featured host/guests:

Thomas Dockstader | Ron Miller | Hasith Kalpage | Eric Baran | Tim Knapp | Piyush Diwan

Watch the full video on YouTube.

Subscribe to the @Harnessio YouTube channel for more insights as we prepare for Season 5!


Intro

Speaker 4

Good morning, good afternoon, good evening, time-appropriate greetings. My name is Dewan Ahmed, host of the Ship Talk podcast. And this is not just any episode; this is the finale of season four. With me I have Thomas Dockstader, the man in the middle of all the madness of re:Invent. We can't wait to hear from Thomas what the discussion was around AI meeting software delivery, which is the theme of season four. But this time we have a spin with engineering excellence. Welcome, Thomas. Thanks for having me. Thank you. Tell me, what was the vibe like? I heard people were saying they had 20,000, 30,000 steps during re:Invent, with all the energy and the innovations. What was it like around the booth?

Speaker 7

Yeah, I've been going to trade shows for years and years, and re:Invent has just been one of the ones that has astonished me the last couple of years. The number of people that are there, and the lengths AWS goes to in order to put the show on, are incredible. It spans many hotels, and then there's the trade show floor in and of itself. I've just never been at a show with so many people. And when you throw in the buzz of AI right now in the tech industry, it's a formula for a lot of excitement, confusion, and questions.

Speaker 4

Yeah, and one of the key questions, I guess, is engineering excellence. Is this a goal or a struggle? How do you see customers treating engineering excellence as their goal or their struggle?

Speaker 7

Yeah, I would say it really comes up in two different areas, and I feel like it's becoming somewhat of a pressure point. The pulse at re:Invent was not so much in the realm of "is AI coming, or when is it going to come?" It's here. And I think one of the things that's really being magnified is code assist, which in my opinion is the first real big insertion of AI into engineering. The amount of work coming out of code assist, the amount of code being pushed, has accelerated by a massive amount. And what we're seeing is that people weren't describing engineering excellence as a nice-to-have aspiration. It's really exposing areas in the SDLC that are not prepared for the kind of influx of code that's happening. AI can absolutely help teams move faster, no doubt. However, if the system underneath is weak, if the governance is weak, and if the standards are inconsistent, AI doesn't solve for that. It's really not that simple, and we don't have AI inserted into all of the SDLC. So I would say engineering excellence was showing up as both a goal and a struggle. It's really becoming the difference between teams that can actually turn AI into a reliable outcome versus the ones that are creating more risk and more challenges.

Speaker 4

Couldn't agree more. And you interviewed five partners, customers, and engineering leaders in the space. Was there a common thread, Thomas, around how teams are using AI to transform their software delivery?

Speaker 7

Yeah, I would say the common thread was honestly a little more grounded than a lot of the market hype. The shared theme was that AI is not, at least at this point, replacing engineers; it's more that it's removing friction around engineering. It is helping teams move faster, especially with repetitive work, test creation, and pipeline configurations that can drag. When you talk about toil in engineering, AI is really great at reducing some of those monotonous tasks. However, one of the more important parts of the common thread was that the output is increasing, and it is creating tremendous pressure downstream in the SDLC. So I think the real transformation is not just AI making teams faster; it is also forcing organizations to mature and create an operating model around delivery. That was abundantly clear across all of my conversations, and it was probably the biggest thing that came out of them: AI is useful, AI is real, but it's already creating visibility into areas where our SDLC might need additional work to really leverage it. I could see some organizations dealing with a lot of tech debt in the near future.

Speaker 4

AI is real. AI is helping us create test cases for our code; yeah, and it's deleting our entire mailbox. AI is very real. Now, was there a difference in the discussion on how vendors are providing AI technology versus customers actually using it to ship real products?

Speaker 7

Yeah, I would say partners and customers are looking at the same shift, but from different altitudes. Partners were generally very focused on what AI can unlock. They talked a lot about acceleration, automation, reducing cognitive load, and creating a more streamlined developer experience. We see a lot of vendors where everybody has AI in their name; all the tools have AI. But where is it actually making marked movement in solving real problems? For the customers, it was much more grounded in operational reality: hey, that's great, these features you're talking about, and everybody's talking about them, but how do we validate this? Where does the value come from? How do we validate the value? I think that's a really important piece, and we're seeing it in SaaS in general: the idea of moving from a seat-based cost model to an outcome-based model. Customers are going to be more and more demanding: hey, you've got all these features, you've got this amazing AI tool, show me the outcome. That seemed to be pretty consistent from the customer side. So the strongest signal for me was that the gap between experimentation and production is still very real. I don't think we've crossed that chasm. A lot of organizations know they need to move to AI, but I think they're still figuring out how to do it in a way that is governed, repeatable, and sustainable.

Speaker 4

Yeah, totally. And these organizations are not at the same maturity level in terms of their delivery, right? Some are still on legacy technology, some are still trying to figure out how to modernize, and while they're trying to do that, now there's this AI transformation they have to do. So did you talk to some customers trying to battle on two fronts: modernizing their legacy tech, and now having to do AI modernization as well?

Speaker 7

Yeah, it's interesting. In addition to re:Invent, I do SDLC assessments for organizations, and it's funny because, as fast as AI is moving, I don't know that there's a solve for how we move a legacy system. There's just so much data, there are so many learned and baked-in processes within an organization, and there's a lot of individual knowledge held by the people who do those jobs that isn't necessarily documented in a system AI could go in and read. So someone's going to have to come up with a pretty smart idea on how to do this, but it's going to be challenging for those legacy systems to move. Some of them could get disrupted, quite honestly. That's a big possibility when you're on some of those older systems and just don't have the ability to move at that velocity.

Ron Miller, Editor at FastForward

Speaker 4

Yeah, and I'm pretty sure we'll hear some more insights from these five interviews. So without further ado, let's listen to the interviews.

Speaker 7

Thank you for coming out. Happy to have you in the booth to record an episode with you. This show's crazy. It is. And I see one word, or sorry, two letters, everywhere: AI. Sure is. It's a pretty popular subject; it's what I write about all the time. So I'm really interested to hear your opinion on things. I want to understand: what are you hearing that AI is actually helping with, versus hype that's not actually doing much? Right. Well, it's not an easy question to answer, right? Let's go simple, yeah.

Speaker 6

Some simple ideas. Everybody I talk to, and I talk to a lot of CIOs and CTOs, has a sense of AI; obviously, people know they have to get their companies moving in this direction. But you can't go from zero to a hundred. There are things you have to do first. One of the big things is data: you have to get your data in order, and then you can begin to take advantage of this stuff as you tune the models to your data. People know they have to go there, but not everybody is there yet, and there's a lot of experimentation going on. What I'm hearing is that there's less in production, and more "we're still doing proofs of concept, we're still doing experimentation." We're trying to figure out how to use AI to really improve efficiency and get that return on investment. But being a company that deals with developers, one of the areas that seems to be the most mature is the code area. That's an area where you see a lot of companies diving in a little deeper than maybe some of the other areas.

Speaker 7

Yeah, it's funny you mention that, because I feel like that portion of the SDLC is accelerating quite a bit. Yeah. My assumption is it's actually happening because we're seeing that deploy times are higher: there's so much code coming in, the PRs are accelerated, everything's going faster. I really wonder what's going to happen, call it a year from now, if we're going to have this tidal wave of bugs, and whether we'll have the ability to fix them, because maybe developers aren't as familiar with the code as they were 20 years ago, when John, who has been with the company for 20 years, knew the code backwards and forwards. If we're throwing a bunch of code in there, it could potentially cause additional problems. I'm curious: the leaders that you're talking to, are they thinking about this?

Speaker 6

I mean, I think people have to be thinking about that, right? Because one thing that AI allows you to do, as you said, is go faster. Yes. But going faster has its own danger. So there are two things here, I think. One is that your experienced development team is going to use these tools to do things that were routine and not very fun, to get those things done faster without manually going in and doing them. But you also have this idea that maybe the citizen developer can start to develop. And I think that's where a lot of problems could start to crop up, because people who don't know the kinds of things an experienced engineer knows can get into trouble very quickly. They don't know security, they don't know governance, and they're just like, oh look, I can create this program. Part of the problem is that the engineering team is going to have to reel that in. That's one side of it. The other side is: how good is this code? You're putting this code into your pipelines, and like you said, you don't necessarily know it. I'll tell you a quick story. I was in Miami a few weeks ago, getting coffee before my morning meeting, and I overheard these two guys talking. One guy's sitting there bleary-eyed, and his buddy's like, wow, you look tired, what's going on? And he's like, well, I had to read 10,000 lines of code last night. The guy's like, can't you get AI to do that? And he's like, no, I have to know my code. So, like you said, even though you're creating it faster, maybe that even makes it harder. Because if you're somebody who still wants that sense of "I need to understand what's going into my pipeline," then sure, you're efficient on one hand, but on the other hand, now you've got all this code you have to be supervising.

Speaker 7

How do you think companies are redefining engineering excellence because of AI? What are you seeing there from a trends perspective? A couple of my guests have talked about center of excellence groups and how needed they are in the org, or, to your point, creating compliance and making sure we have checks and balances and all those things. Are the higher-ups thinking about this? I think they have to be, right?

Speaker 6

You know, I spoke to the guy who runs the center of innovation here at AWS yesterday. They're working directly with customers to help them understand; everybody's kind of learning together, right? The vendors are learning, the developers are learning, the companies are learning, and it's this kind of chaotic mix, because everything's happening so quickly. Think back a year ago: there was no management layer, and there was very little security. Yeah, you're starting to see things like that be announced by AWS and others. And when you think about something like an agent, it needs basic stuff; it needs all the stuff you've always needed, but as you say, you're making everything faster and faster. So you have to have those checks and balances in the pipeline, or things are going to blow up pretty fast, I think.

Speaker 7

Yeah, that actually leads me to my next question: what do you think are the skills that leaders are looking to develop? With my last guest, we were talking about agents, and I asked whether we're now looking for talent that is learning specifically how to manage agents alone. What is the trend in terms of the type of hire now? You also mentioned experienced developers. I've heard that trend where some organizations are leaning more towards heavily experienced developers versus hiring new out-of-college developers.

Speaker 6

So I think there are two schools of thought on that. Some people are saying, we're hiring a lot of young people because they get this stuff, apparently. But I think ultimately you need both. You still need mentors; even if you're an old-school developer working with a young kid out of college, you still need that mix of skills and an understanding of how things get built. In terms of skills, I wrote a commentary earlier this year that was kind of interesting to me, because when you think back, engineers become engineers because they're good at math, good at logic, good at understanding how code fits together. But when you become AI-driven, prompting becomes a lot more important. And what is that skill? That's writing, right? It's writing and describing what you want. It's one thing to prompt a ChatGPT-type model and say, I need this; I'm a writer, and I find you have to kind of play with them until you get what you want. But when you're an engineer creating an agent, maybe something you use internally or something you're selling to customers, that prompt has to be really rich. You have to have a lot of expertise, and you have to be able to articulate that expertise. So suddenly it's no longer just math and logic, although those still apply; suddenly you need those English skills. People say that with these large language models, writing becomes less important because of the models. I don't agree with that as a writer, but I also think in some ways it becomes more important, because you have to be able to communicate what you want to these models. And if you're building an automation agent that's going to do a lot of stuff, you have to first understand that process inside and out, and then you have to be able to articulate it in a way that the models can carry out these agentic tasks, right?

Speaker 7

That is a very important point, and I think it's kind of scary, because my son got in trouble for using AI for his paper, right? So maybe the coming generation isn't developing their writing skills because of AI, when in reality we need them, right?

Speaker 6

We absolutely need them, and I think your son and all the people in our younger generation have to learn both, right? Yes. Because I don't use AI to write, but I do use AI to edit. I run that newsletter and blog by myself, so I have a human editor who checks the pieces, but as I'm writing, I'm checking it: did I make any mistakes? Did I overly repeat words? That kind of stuff is really nice. Does it thematically hold together? Is it logical? You can ask the models these things, and it's like having an editor. But even with that, when I give it to a human editor, she always finds stuff, and she'll say, I don't know what you were trying to say here. Why does this have em dashes? Those cursed em dashes. I mean, I've used em dashes before, but those models, they just love them.

Speaker 8

Oh, I don't know why.

Speaker 7

And now it's an instant indicator: oh, it's a text written with AI. So how do you think enterprises are measuring this? How are they measuring AI, whether it be code or otherwise? Have you heard any trends around how they're actually going to measure this?

Speaker 6

I mean, even in the past, sometimes they would say you're a productive engineer because you've produced X lines of code per day, per week, per month. I've heard people say that even that is a pretty flawed way to look at engineering excellence. But if you think about that as a metric, it becomes less important, right? Because if you can tell the model what you want, and the model produces a bunch of code for you, and then you massage that code and find ways to mix it with other things you've created, then where's the productivity measure? So I think companies have to start thinking about how they measure the production and efficiency of engineering moving forward, because you're going to be working with AI as kind of a worker, and I don't really like that metaphor, but you are going to be working alongside this entity. Companies have to find ways to say, well, this person knows how to use AI really well, and that has made them more productive; this person maybe is doing more manual coding, but it's better code. You have to start to balance all that stuff, and I don't think it's one or the other. Whether it's writing or engineering or whatever it is, we're going to learn how to use this stuff as a tool. And it is a tool, very much so, in your quiver that helps you do your job, but it doesn't do your job. Yeah, that's a great point. As a reporter and a writer with all of this, I mean, look, every company's trying to get attention, right? Because they're implementing AI, they're adding something new. How do you specifically determine which stories you chase after? I mean, the fundamentals of what's news don't change, you know. Okay,
there's always a technological trend that comes down the pike: mobile, cloud, the internet going way back. I've been around that long, but what was important in the early 2000s when you went to a conference like this was very different. Over the years everything shifts: if you came here in 2018, probably a lot of people were talking Kubernetes and cloud native; if you came here in '21 or '22, people were talking RPA and that kind of automation; and now here we are in '25, and it's just all AI, all the time. I would be lying if I said that didn't have my attention; I write about it a lot. But everything that has AI attached to it is not interesting, just like every other wave that came before it. As a journalist, I have to use my judgment, which I think AI can't do. My special sauce as a human is that I can look at something and, based on my experience and what I find interesting, say, I think this is newsworthy, or this isn't, or I've seen this ten times and I don't want to write about it again.

Speaker 7

You know, you're actually in a unique position, because you do get to see early announcements, people reaching out to you: hey, we're doing this, we're doing that. You probably get to point out, well, actually, 12 people are doing that; it's not that unique, right? They may not know that. That's very interesting.

Speaker 6

Yeah, whether it's from a startup perspective or an established company, I'm always looking for: have I seen this before? If I haven't seen it before, and it's solving a problem that I hear about, then that's something that's going to pique my interest, and I'm going to want to dig into it, write about it, and see if it's actually a trend that's developing. What is news is always going to be "okay, what do I find interesting"; that's my own unique filter, but that doesn't change because of AI.

Speaker 7

Sure. Yeah, that makes sense. That's a great point.

Speaker 6

Yeah, you're right, the internet was a pretty big deal when it came out, the web. I remember in the late '90s and early 2000s, when companies were trying to decide whether they should have a web presence or not; they didn't know what it was, they didn't know how to deal with it. And obviously that changed in a hurry. Now everything operates on it. I think we see all these companies struggling with AI, but eventually it's going to be like SaaS: you don't look at a software company and say, oh, they're SaaS, right? Every company is software; it's what it is. So I think AI is going to be baked into everything, and it may not be something we talk about as much as we do now. Right.

Speaker 7

So with your insights into the business all over the place, what is something that is genuinely exciting to you, that's coming, that you see? Well, I think that as agents develop, there's this whole infrastructure around them that has to develop, that we're only beginning to see: around security, around governance, around all the fundamentals of IT. Whatever type of software it is, call it whatever you want, today it's agents, the fundamentals still apply; I wrote a piece about this at one point. You have to be fundamentally sound, in a large organization especially, or it's just not going to work. So I think we're starting to see some of that infrastructure develop. I was at RSA in April, the big security conference in San Francisco, and people were talking about how these things are going to have to go out and communicate with other agents, maybe across systems. We don't know how closely the vision will match the reality, but as those things happen, you're going to need a unique kind of security, because these things are going to go do something, they're going to become something else, and then maybe their identity and authentication are suddenly different. It used to be that software was pretty linear, ones and zeros, so as you went through that process it was fairly easy to secure; once you secured it, it stayed secured. Now it becomes a little more loosey-goosey, because things are changing so much and these things can move. Their merit is that they're highly flexible, but that flexibility is also their danger: how do you secure something that can change? I find that pretty fascinating.

Hasith Kalpage, Director of Platform Engineering at Cisco

Speaker 3

It's exciting, I'm telling you what. We're going to do a little segment: Ship It or Skip It. Okay. Office pets? Ship it. Ship it, all right, I love that. Four-day work weeks? Ship it, yes. Thank you. Once we get to four, then I'll push for three. Yes, with AI, right? With agents, we could get there. I mean, if you're going to have AI and it truly makes us more efficient, why not? It's supposed to give you time. Yes, and you hear some people measure it that way: it saved eight hours of development, it saved eight hours of research, whatever it is. So why not give that to us, right? Yeah, absolutely, I couldn't agree with you more. Well hey, I really appreciate you coming on. Thank you for having me. Love your opinions, and enjoy the show. Thank you, thanks.

Hey, thanks for coming out to re:Invent. My gosh, this place is crazy. My pleasure. I mean, my feet are killing me. Yeah, no kidding, right? I'm doing more than 30,000 steps. That's what I love about Vegas: it's like, oh, it's just next door, and that's five miles away. Yeah, it's incredible; these convention centers are absolutely massive, they just keep going forever. I'm happy to have you on today. I'd love to get your perspective on a couple of questions. The first one: I'm just interested to understand, from your perspective, what do you think right now is hype and what's real with AI in technology? Where do you see some good and some maybe-so? I think existing AI can be applied to technology in so many ways to reduce toil and accelerate certain things. Where it's hype is really anything around AI replacing humans completely, and AGI. That's hype, because fundamentally these systems are still next-token prediction, that type of thing. Yeah, very infant level. But what they are really good at is, with the data they have, processing a lot of things very quickly, appearing intelligent, and delivering outcomes that way. And there's a lot of transformation you can do. Even with the existing technology, if we can productize things, I think we easily have more than 15 years of transformation that can happen and deliver significant value for society. Oh, absolutely, that's great. So when we talk about platform speed and security, where do you draw the line, especially with AI? Because I feel like code assist is really speeding up velocity but maybe causing security issues. Yeah, I think the important thing is doing platform and coding right, which is important to get speed long term, and that's where the security angle also factors in. I use this term "frictionless security," and it often boils down to making it easy to do the right thing. That way you go a lot faster, versus blanket security policies that slow things down, frustrate everybody, and leave people looking for a workaround because security is in the way. That in fact leads to poor security and slows things down, versus good security at speed.
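The "frictionless security" idea described above can be sketched in code: automate the routine checks so the easy path is also the safe path, and reserve human review for the small set of genuinely risky changes instead of gating everything behind blanket approvals. The sketch below is purely illustrative, not anything the guest described implementing; all names (the check names, the risky-path list, the change dictionary shape) are hypothetical.

```python
# Hypothetical "frictionless security" gate: every change gets cheap
# automated checks for free; only risky areas escalate to a human.
RISKY_PATHS = ("auth/", "payments/", "infra/iam/")

def automated_checks(change):
    """Fast checks every change runs automatically (names are invented)."""
    return [
        ("secrets-scan", not change.get("contains_secrets", False)),
        ("deps-pinned", change.get("deps_pinned", True)),
        ("tests-pass", change.get("tests_pass", False)),
    ]

def review_decision(change):
    """Block on failed checks, escalate risky paths, auto-approve the rest."""
    failed = [name for name, ok in automated_checks(change) if not ok]
    if failed:
        return {"decision": "block", "failed": failed}
    touches_risky = any(
        path.startswith(RISKY_PATHS) for path in change.get("paths", [])
    )
    if touches_risky:
        # Human judgment is spent only where it actually adds value.
        return {"decision": "human-review", "failed": []}
    return {"decision": "auto-approve", "failed": []}
```

Under this model, a docs-only change that passes its tests is auto-approved, a change touching `auth/` goes to a reviewer, and a change containing secrets is blocked outright, which is the "make the right thing easy" trade-off the guest contrasts with blanket approval chains.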

Speaker 7

Are you seeing AI improve that process of getting security approvals and moving quicker?

Speaker 3

Definitely, AI can help from many angles: things like compliance checks. Security has a ton of toil work, continuously testing and validating things, and those are the kinds of tasks AI could really help with.

Speaker 7

I feel like security always takes so long, especially if we're trying to sell some software. Where do you see the future of it, say in the next 18 months to two years?

Speaker 3

It takes so long partly because of certain regulations you need to comply with, but more often it's because of the approach people have taken. You usually put a human in the process, and most of the time you do it because these approval chains make you feel good, but not necessarily because they're actually doing the right security work. Even in compliance, you bring in different personas, but the exercise itself — it's better than nothing, but it's also fundamentally broken in certain ways. There's real security versus a tick-box exercise and the perception of security.

Speaker 7

That's a great point, and it leads into my next question: what do you think is the most overrated security control?

Speaker 3

A human in the loop and approvals. No, I'm not saying it's a bad thing — we need it — but let me give you two angles. One: you put a human in the loop where you think the human will make better judgment calls and do the right thing, but then you overwhelm that situation. A lot of things escape, and it becomes a clicking exercise where the human doesn't quite understand the implications of everything that's going on. That's a fundamental failure in the process itself. The other case: when you have many stakeholders and approvals, you often delay things significantly without exact clarity on what is being applied, what the real implications are, and how to proceed.

Speaker 7

So where do you think security tooling should live — the platform team, or the product team?

Speaker 3

I would say primarily in the platform team. I have an interesting role: I'm a platform engineering director as well as CISO for my group. In one way you can look at the empowerment and the stake of the two roles, and the nice thing is that you accomplish frictionless security by making it easy, through platform engineering, to do the right thing. The more you do that, the better security and the better developer experience you get, compared to security for the sake of it, or security theater.

Speaker 7

Do we feel like the platform team could potentially not understand the context of the product teams, so there's a little bit of friction there?

Speaker 3

There is friction, because whether it's a security organization or a platform organization, these things are fundamentally bottlenecks that are put into an organization — good bottlenecks, not bad ones — to streamline the process and make sure you do things sensibly. With any bottleneck, you'll always have friction if things are not running smoothly and competitively. This is where it's more about people, relationships, understanding, and empathy for what people really want, along with making sure the bottleneck works efficiently and offers a competitive service to all the stakeholders. That's the key to an effective bottleneck. More often, what happens in organizations is that security functions and platform engineering functions are fundamentally broken because the bottleneck is not working efficiently, and over time it's not competitive enough.

Speaker 7

Isn't it true that some long-term security employees just like to say no?

Speaker 3

Well, I'm a CISO, and there's a funny saying that the "C" stands for chief scapegoat, right? I think that's a very interesting angle, because your job is on the line when you say yes to things. If something really bad happens, you're putting your job on the line, so there's an inclination to say no, or to start with no, without necessarily appreciating what the business is trying to accomplish or looking at the more pragmatic options. Personally, if you ask me, that's not exactly a career aspiration for me. I'd rather think "yes, and" — let's do it in a frictionless way, do it better, and here are the implications. Not even "yes, but" — I would say, "yes, and you need to do X, Y, Z."

Speaker 7

What is one AI security risk that is real right now, versus one that's not really much of a worry?

Speaker 3

I think the one that is not real is the hype — it's almost not a problem with AI itself; it's people misusing AI and creating chaos and trouble. There's a lot of drama going on around that.

As for real examples: data security is very real. Whenever we look at an AI tool, it's very important to think about where your data flows are and how that data gets exposed, because with modern systems you don't always have control of your data. You sometimes have to trust third parties, and data leaves certain boundaries, so it's really important to understand those flows and how you secure them, because that's an area where you can get into trouble.

Another one is privilege escalation. We've been doing a lot of agentic work, and one pattern that keeps happening is that agents and sub-agents get service-account access to tools, whereas humans have particular RBAC scopes around what they can do. Mapping those roles onto what an agent can do is a big, big challenge, because not getting it right leads to accidental or intentional privilege escalation. That's a valid challenge we're seeing right now.

Speaker 7

If you had to improve developer experience and raise your security bar in the same quarter, what single move gets you both?

Speaker 3

Doing platform engineering right, together with security — almost partnering with the security teams. It's nice that I own both roles right now, but in cases where I haven't, I would seek out the security organization, partner with them, understand their requirements, and show them how a fundamental platform engineering move improves security as well as accelerating developers, by making it easy to do the right thing instead of the wrong thing. That's what I call frictionless security.

Speaker 7

Yeah, that's kind of your thing. Is there anything you've seen at the show today that made you go "wow"?

Speaker 3

There's been lots of stuff. Certainly agentic AI — it's interesting to see how much is possible, and also, in some ways, how much is not yet realized. That's what's most interesting. When I think about next year, I hope all of these things that are possible actually go into production use cases and get realized. That's the exciting part.

Speaker 7

So do you think agentic is the next big leap — where, to your point, it becomes more trustable, more usable?

Speaker 3

There are a couple of things around agentic AI. One is getting around LLM limitations: even if the foundational models don't advance, agents can be leveraged to solve much more complex problems. The other is the data problems we've had for decades. People still don't have their data platform problems sorted out, and more often than not the biggest concerns are around data. Agents can really help there, because you can have separation of concerns — agents tied to specific data, collaborating with each other. That's not something we're seeing much right now, but next year perhaps we'll see a lot of it. I'll give an example: if I have a personal health agent, I will trust it with my personal health data, but if that agent is interacting with other agentic systems, it can be empowered to decide what is appropriate to share and what's not. So you can safeguard certain information that way. And if you look at the enterprise world, with data silos and all the concerns around data, I think that's a really significant piece that will work in favor of agents.
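The privilege-escalation pattern raised above — agents and sub-agents running on broad service accounts while the humans who invoke them carry narrow RBAC scopes — can be sketched in a few lines. This is a hypothetical illustration, not any particular framework's API; the scope names and function names are invented. The idea is that a delegated agent's effective permissions should be the intersection of the service account's scopes and the invoking user's scopes, so an agent can never do more than the human who triggered it.

```python
# Hypothetical sketch: derive an agent's effective permissions by
# intersecting the service account's scopes with the invoking user's
# RBAC scopes, so delegation can never escalate privileges.

SERVICE_ACCOUNT_SCOPES = {"repo:read", "repo:write", "deploy:prod", "secrets:read"}

def delegated_scopes(user_scopes: set[str]) -> set[str]:
    """Scopes an agent acting on behalf of this user may use."""
    return SERVICE_ACCOUNT_SCOPES & user_scopes

def agent_can(user_scopes: set[str], action: str) -> bool:
    """True only if BOTH the service account and the user hold the scope."""
    return action in delegated_scopes(user_scopes)

# A developer without production-deploy rights:
dev = {"repo:read", "repo:write", "deploy:staging"}
assert agent_can(dev, "repo:write")        # allowed: both sides grant it
assert not agent_can(dev, "deploy:prod")   # blocked: the user lacks the scope
```

The same intersection rule composes down a chain of sub-agents: each delegation can only shrink the scope set, never grow it, which is what prevents the accidental escalation described in the conversation.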

Speaker 7

So do you think we need to move to a talent base that understands how to run agents? I guess it's still engineering, but it's almost like their job is to run agents.

Eric Baran, Principal Sales Leader at Amazon Web Services (AWS)

Speaker 3

And also, it's a transformation. When I think about AI agents and all of these technical capabilities, yes, there has to be a transformation, but the ones-and-zeros-and-binaries part is in some ways the easier part. The social and human side — the people and the process — is the more significant thing that needs to be transformed. Processes need to evolve and be challenged in order to improve things, and people really need to have a growth mindset about how AI can help, because if you start with "AI is going to take my job," that's a non-starter. Personally, I can't see AI taking jobs outright — in my lifetime, I don't see that — but it's really important to leverage AI to do things better. We all have a lot we can gain.

Speaker 7

Absolutely. Okay, we're going to do a segment: Ship It or Skip It. AI code reviews.

Speaker 3

AI code reviews? I'd say skip it, for now. As you said, we just don't quite trust it yet; it needs to get a little bit further. Sorry — AI code reviews? My bad, let me redo that. Using AI for code reviews: yes, ship it, very worthwhile. AI can really summarize very complex code reviews — it still needs to be reviewed by a human, though. Take GitOps in platform engineering: GitOps PRs are a joke in most teams, because they're so big that people don't understand them, and most of the time people just click the approve button. Using AI to help people understand them is significantly better. And the same thing applies to code in general. In some teams, the golden rule a few years back was "keep it small so people can understand it." Now, with code being generated, there's so much that gets produced, and when you're doing reviews without AI it's very difficult to understand exactly what's being changed and what the implications are. That's a bottleneck, and I think it's important to leverage AI there.

Speaker 7

All right, ship it or skip it: four-day work weeks.

Speaker 3

Careful — I'd say skip it. It's an interesting one. I've led a lot of different cultures and a lot of different teams, and depending on where you are, you have a very different mindset and attitude. To be honest, I'd say the time you work is irrelevant as long as the outcomes are there. I don't really care whether people work nine to five, or five days or four; what I care about is the outcome. I don't want to be a bean counter — did you clock in, clock out, are you in the office? It's about what you have accomplished.

Speaker 7

Makes sense. All right: no-deploy Fridays.

Speaker 3

Ship it. That one's excellent, because I've been there. It's just unnecessary stress: you're winding down, you do something bad, and you ruin the weekend for everybody. The cost of that, and the toil it creates for the next week, is not worth it. It's better to ship Monday to Thursday, when you can actually deal with things, and then relax a little bit more. It actually helps you go faster in the long run.

Speaker 7

All right, one more: LeBron James.

Speaker 3

I don't know.

Speaker 7

Thank you, sir. I appreciate it.

Eric, man, thank you for coming out to reInvent. It's absolutely wild here — the craziest show I've ever been to. This place is just insane, bananas. It gets bigger every year, and it's always fun. Anyway, look, I wanted to have you on because I'm interested in your perspective on some of the trends happening in the industry right now. AI is obviously one of the biggest hot-button topics we're hearing about, and I'm curious to understand, from your perspective: what do you feel is hype, versus what's actually going to help productivity and efficiency?

Speaker 1

100%. You know, it's interesting — I've been in the software development space for the last 20 years, which is scary; I've got gray in my hair, it's just what it is. But the coolest part, when a lot of these model providers really started to come out, was all the different line-of-business use cases people wanted to go build — wealth management advisors, trading platforms — and all of that is happening. I think there's real value being derived in the creation of those different types of agents and use cases with the models. But the one that really emerged fastest — that went from prompt engineering all the way to "I'm going to build a fleet of AI software engineers" — is the development space. I manage the global financial services business for our developer platform and gen AI segment, and what has been amazing is seeing how customers across that segment are taking advantage of those services to accelerate what their developers are actually able to achieve. How fast adoption has happened is unlike anything I've seen in my 20 years in this space. Where they're seeing the most value is in taking an idea for a feature and using AI to offload all the remedial work around it — test creation, pipeline configuration, building the right infrastructure templating patterns and deployment approach. It gives developers the chance to use it for feature development, which is critical. But the area where we're seeing the most impact is offloading all the things that platform engineering, operations, and security were putting back on development. When you look at a developer's typical day, they might have only been writing lines of code on a feature for an hour or two, and now they're able to get significantly more of their time freed up.

Speaker 7

So it is accelerating significantly. Okay, that's a great point. Everywhere I look around here, everybody is all about AI. From your perspective, what is something that demos really well but actually doesn't really help?

Speaker 1

Yeah. I've had the good fortune of working with lots of different technology providers in this space, including our own, and that's something we at Amazon are really keen on: we want to make sure customers have choice among those different options. But the demo tends to be kind of the same: we're going to prompt, we're going to try something — ask a developer, "what do you want to go build?" — we prompt it, we try it, we see what happens. One of the things that's been most impactful for our customer segment is going back to the question: as a developer or an ops team, what am I overburdened by on a day-to-day basis? What can I offload to AI so I can get back to building the features I actually want to build? Where we've seen significant success is in the classic platform and DevOps domain functions that require developers to build infrastructure, code, tooling configurations, or APIs connecting to various things. It's about being specific in helping you move away from all of these consolidated developer platforms that are sometimes used in different ways — even though it's the same branded logo — by a 15-person team inside what might be a 50,000-developer organization. Really focusing on the areas where we're not just helping you write logic faster, but building all of the scaffolding that actually allows that logic to get into production — being able to show how AI helps with the process of getting it out — is the most critical area where we're seeing teams want to latch on.

Speaker 7

You're saying it's speeding up the SDLC, basically, but do you think there are specific areas that are being hurt by AI development? Like, if we're going faster with code, are there other areas in the SDLC that suffer?

Speaker 1

So that's actually right at the heart of what I'm saying. What's been amazing is the explosion of different engines developers are using — whether on their personal time, or because they're bringing one of these agentic IDEs into the enterprise, or because they're building their own developer agent. We've seen an explosion of code coming off of those engines: I can go faster, I can build more logic against those features. But in the business I support — global financial services — there's still a lot of regulatory risk and governance required to ensure that what's being created marries up to the compliance and governance regime I need for that feature set. We go into a lot of discussions with customers who say, "I had a hard time keeping up with all this before these AI tools were even in use — my release cadence was six months, twelve months in certain capacities." So beyond just using AI to lessen the ops burden on developers, we're seeing a really large set of interest — and it's why the partnership with Harness has been so strong — in helping clients get better at the next domain: okay, I've used AI; how do I ensure that what the AI and the human built can actually make its way into production? When we look at how we instrument all the ways code comes out of the developer's IDE into a CI process and moves through — ensuring we're enforcing policy, managing the supply chain of what is in that release — it's been an absolutely critical aspect for customers, and it's happening really fast. They're using all these technologies, but they're coming back to the table saying: I need to get better at actually getting this out, and I need to make sure my audit team and my governance team know that, hey, we built this quickly, but it actually does marry up to the spec. That's where Harness has been so important for our clients.
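Mechanically, the gate described here — making sure an AI-assisted change can actually reach production — is a policy check over the evidence attached to a candidate release. Below is a minimal hypothetical sketch: the evidence names are invented for illustration, and a real pipeline would verify signed attestations (e.g. in-toto or SLSA provenance) rather than a plain dictionary of booleans.

```python
# Hypothetical sketch of a CI policy gate: a release may only proceed
# if every piece of required evidence is present and marked as passed.

REQUIRED_EVIDENCE = ["code_review", "sast_scan", "sbom", "test_results"]

def policy_gate(release_evidence: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (allowed, missing_or_failed) for a candidate release."""
    problems = [name for name in REQUIRED_EVIDENCE
                if not release_evidence.get(name, False)]
    return (not problems, problems)

# A fully evidenced release passes the gate:
ok, problems = policy_gate({"code_review": True, "sast_scan": True,
                            "sbom": True, "test_results": True})
assert ok

# A release with a failed scan and missing evidence is blocked,
# and the gate reports exactly what is wrong:
ok, problems = policy_gate({"code_review": True, "sast_scan": False})
assert not ok and "sbom" in problems
```

The useful property for the audit conversation above is the second return value: the gate doesn't just block a release, it produces the list of missing or failed controls, which is the beginning of the "supply chain of evidence" an auditor can consume.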

Speaker 7

So it's really about adding AI in areas beyond just coding. But I have a fear: I feel like we're creating this big wave — a potential tsunami — of code that developers are not familiar with, and it's going to cause a bunch of problems on the back end once it hits production, right?

Speaker 1

Yeah. Well, it's a classic wave we're seeing: everyone's super excited about the hype of what it can do. But we are starting to really hear from clients: "I love what this can do, but I need to make sure we can protect the actual logic."

Speaker 7

You're seeing AI in financial institutions, which are heavily regulated and heavily scrutinized. Are you seeing other solutions that help with that piece of it — the actual regulations around enforcing governance, legal, and all of those things? Because, to your point, we build all this code and then we have to regulate it. Is there any wave toward streamlining that process?

Speaker 1

Yeah, we've seen that. At a high level, we've been working with a lot of different governing bodies in the banks to help them understand what secure coding actually means — what it means to build a supply chain of evidence for an auditor that says this workload you're deploying on top of the AWS hypervisor can be blessed, because it was built to spec. The interesting thing about the question is that we're absolutely seeing the hype wave, but it has actually driven some significant value, so I don't want to call it a hype wave — it's a value wave. What you're describing is what I was describing previously: there's so much new logic being created that I need a way to make sure we can actually keep up. It's a paradigm we saw before these AI capabilities were introduced, when we had a lot of best-of-breed tooling environments built for developers: when that happens, building the body of evidence for the supply chain off how those developers operate day to day is a very heavy lift. Being able to extract that information and then apply AI to either enforce policy or help developers follow standard practices, so the governance regime holds, is absolutely where we're seeing not only partners like yourselves, but also areas where you're taking your platform forward with AI to build those capabilities natively and help streamline the process.

Speaker 7

Yeah, absolutely. Okay, get real with me here: what is something in the tech world or the business that's just a pure pet peeve of yours — something you just can't stand?

Speaker 1

Well, to be honest with you, if I go back to the hype wave: there's a lot of uncertainty about where this all goes, and I'm watching clients change their perspective overnight because a new tool came out. Developers love doing that — we've been doing it forever — but this one is happening faster than anything I've seen in a while. It drives me nuts when, the next day, some new company comes out and all of a sudden everybody only wants to focus on that. What's cool, though, is that it really is important for customers to be able to test, try, and experiment with all those things, so it's a bit of a double-edged sword to say I'm frustrated by it. But I will tell you: the capabilities we're seeing, and how all of these different engines will be used to make developers more productive — and when I say more productive, I mean focused on building the best features they can, not infrastructure logic, not APIs for tooling, not configuration, but really asking how we're going to delight customers — that's where this whole thing is going to go, and it does require a lot of experimentation.

Speaker 7

Yeah. Who do you think should own AI in general — the platform team, the product team, or a combination?

Because, to your point, governance and compliance and all those things — is that a platform team thing that they then push out to the product teams, or what?

Speaker 1

I love these different team labels, because we went from the development tooling team, to the DevOps team, to the cloud center of excellence, to platform engineering — and now we're trying to drive an AI strategy across all of that. I think what you're going to start to see is that the capabilities AI brings to the table really democratize the barrier to entry: people who don't know how to code or don't have a computer science background can actually move into that space. What will start to happen is that there will be a governing body for AI across all of the different use cases, because it's not just development. Even if you just look at basic financial disciplines — banking, capital markets, insurance — across the different roles that exist, there's a lot of automation that can be created, and you're going to see a lot more line-of-business people — people who are not developers, and they're already doing this — starting to build those apps. What's starting to happen now is we're hearing, "I need to build more of a factory approach" — just as we did with development teams, we need to start thinking about how to extend this so that everybody is treated as a software developer, and to govern how that operates. That's where the critical aspects come in: how are we building the pipeline that pushes those things out into the production environments that delight customers, and how are we ensuring we have governance in the fold across that pipeline? So I'm seeing platform engineering, DevOps, and the cloud center of excellence starting to coalesce, with a governing body from an AI perspective becoming central to how organizations think about their options.

Speaker 7

That's 100% true. We're going to do a segment called Ship It or Skip It. Non-compliant AI tools — I've used GPT for work, and I bet you have. What is it: ship or skip?

Speaker 1

You know what, I think all of them have their merits and value. I'm saying ship.

Speaker 7

I'm saying ship — I like it. All right, ship or skip: LeBron James?

Speaker 1

Skip. I'm a Michael Jordan guy.

Speaker 7

All right, one more: full-day Zoom meetings in the office.

Speaker 1

I'm on Zoom meetings all day — full days of Zoom meetings in the office make me want to lose my mind. Ship or skip? Skip, 100%.

Speaker 7

All right, buddy, thank you, man.

Tim Knapp, Senior Director of Engineering at Slalom

Speaker 7

Thank you for coming out to reInvent, man. What a crazy show — absolutely wild. It's so busy here.

Speaker 7

Oh, it's amazing — the energy is always crazy. I appreciate you coming out. Look, I wanted to have you on to get your perspective. The tech industry is kind of a crazy place right now: AI is really taking it by storm, along with the rest of the world. And I'd like your honest opinion — not marketing speak — hey, nobody's listening, right? So first: what are you seeing, trend-wise, that's hype versus actually helpful when it comes to AI in tech?

Speaker 5

Okay, hype versus helpful. I think a lot of people right now are trying to claim that you can adopt some of this new AI tooling, especially in the delivery life cycle, and 10x your throughput. I do think there's a lot of potential to hit real velocity gains in teams by adopting new tools and methods. At the same time, I think there's been some overpromising, and what feels like some underdelivering on that account. What we need is a healthy dose of clear-eyed optimism about what it really takes for teams to fully adopt the tooling, adapt their processes, and actually make the shift from a talent standpoint to get the most out of it — that's the trajectory we all need to be on before we hit 10x. So I think 10x is a bit of hype, but there's real value and real progress there.

Speaker 7

Yeah, I wonder about that, because there's AI in every booth I see — this is basically an AI show. And that's the thing: what's real and what's not? So, what do you think is one of the most expensive habits the enterprise has that does not ship product?

Speaker 5

Oh yeah. Particularly because we're talking about enterprises — it seems maybe crazy to say this in 2025, but I think a lot of the practices I see stem from a failure to really move from a traditional waterfall mindset into an agile, DevOps, MVP mindset. I have a lot of appreciation for where that comes from, historically, and in many cases from a budgeting and financial-planning standpoint — you need to understand what an entire solution is going to be. But what it often translates into is the same problems we saw with waterfall: what you start out with doesn't turn out to be what you need, projects miss timelines, scope increases, development cycles run longer, and at the end of the day you don't actually get product out to your end users to test hypotheses, unlock value, and keep teams moving.

Speaker 7

And we're seeing that in the bigger organizations that are slow to move, right — maybe healthcare and pharmaceutical, companies that have been around a long time. I wonder: how is AI really going to impact that?

Speaker 5

Yeah, so that's the really interesting thing. One thing I find myself walking around saying is that everybody is feeling the imperative around AI right now, but you can't put AI on top of bad — bad processes, and in some cases bad technology. The paradigm shift with AI in software delivery makes it possible to change your processes very dramatically, but that takes a real evolution, which in many cases means breaking down the silos that exist in the enterprise — the ones that lead to that waterfall process of design, develop, test. An awareness of that, and a willingness to break it all down, is what it takes to actually unlock the potential of the new technology paradigm.

Speaker 7

Yeah, I agree with you. Do you know of an AI initiative that actually returned cash, and one that was just nonsense?

Speaker 5

Oh yeah. I'll start with the nonsense. I think there are really two buckets of nonsense we've seen as people have felt the AI imperative around use cases. The first is ideas that are frankly just not AI ideas. I remember having a conversation with one person about how they could improve usability in their application using AI, and when we really talked through what was at the heart of those challenges, we realized what they actually needed was primarily some CSS changes to make the system work a little better. That comes from trying to hit whatever nail you have with an AI hammer, instead of asking, "what am I really trying to do here, and what's the problem?" The next class is areas where people just aren't thinking through full adoption — what it's really going to take to build a reliable AI solution that people will actually trust. Bad data is often one of those. One of the first use cases people go to is: everybody's looking up information across a lot of documents, so how do I get those into a chat interface where people can quickly find answers? I worked with somebody doing exactly that at one point. We put the documents in, tried it out, and realized the documents themselves were not consistent — and it turns out you're not going to get consistent answers out of inconsistent documents. So I've seen those things fall down a bit.

Speaker 7

You know, that leads into something I think about when we talk about shelfware. We take on these projects, buy a solution, and then for whatever reason it doesn't work. I wonder if that's a lack of understanding of how the solution works and clinging to the old ways, rather than actually driving change in the organization, right?

Speaker 5

Yeah, exactly. I really think it's both. You need a solution that you've actually thought out beyond the happy path. Where a lot of people get stuck is they try something, say "hey, it worked," but haven't thought through all the edge cases. Once they get into what it will actually take to make a solution fully work, in a rounded-out way that will actually be deployable, they get stuck. And people don't see the pull-through in adoption unless the trust is there that the solution is really going to work. Then you also have to have a real, meaningful change management campaign, right?

Speaker 7

Yeah, absolutely. What do you think execs overbuy when they say "scale engineering," and what do they underinvest in?

Speaker 5

Well, classically, I would just say engineering. When you say "scale engineering," I've had plenty of conversations along the lines of "hey, I need 20 engineers." And a lot of the time, what we see is that specialization across different disciplines, and cross-functional teams, really lead to the best outcomes. The disciplines and specialties that support the software engineers, and I say this as the leader of a software engineering team, are often overlooked: quality engineering, the platform engineering and DevOps that's needed to help teams operate effectively, and all those surrounding disciplines. Those are things that often aren't initially asked for, and we're always thinking about how to find the right balance to fine-tune the overall team.

Speaker 7

Do you think you're seeing a trend right now where the execs are saying, "let's just throw AI at it"?

Speaker 5

There's definitely that too. That's the top-down push: "let's use AI, and I'm expecting to see X come out of it," probably because they came to the show and saw the 10x claims. And everybody on a team right now has to figure out how to consume that, how to metabolize the request of "I need to be using AI," or "I'm suddenly going to have this tool available," and figure out how to make that possible using some new techniques. It all comes back to the people-process-technology triangle, right? If those are the three sides of the triangle, the technology side just changed drastically. It's the people and the process that are still trying to figure out how to adapt.

Speaker 7

Yeah, that's very true. So what's a metric we should remove from QBRs, and maybe one that actually tells the truth?

Speaker 5

Gosh, well, talking about QBRs specifically, maybe not even one specific metric, but as a hot take I would remove agile project metrics from QBRs. (You're the second person who's said there's something there.) Whether it's defect counts or story points, those are all helpful, important project metrics that teams should use internally, with steering committees, to really understand what's happening and what the health of a delivery is. But they need to be contextualized, and they're different for every project team. Story points are relative; they're not even meaningful real measurements. Defect counts vary with different testing strategies and approaches. There's so much context. So in a QBR setting, you're looking a little too close at that point. What you should be looking at is how we did compared to our initial timeline estimates, how we managed scope changes along the way, and how that led to actual outcomes being there.

Speaker 7

Yeah, okay, great. What's one org change you can buy that pays you back in 90 days?

Speaker 5

First of all, siloed organizations still exist, and that's one thing: if you could org-change away some of those separate teams in favor of cross-functional capabilities, and really get everybody working together in one team, I think that solves what I was talking about earlier. But the biggest thing I would hit on right now is the AI engineering COE: creating a team that also includes representation from your information security team, to inform and adapt your policies, to establish responsible usage guidelines, and then to actually do enablement for teams broadly across the organization.

Speaker 7

Yeah, I couldn't agree with that more. How many times has somebody bought a piece of software and it just never gets used? Not because it doesn't work, but because they actually don't know how.

Speaker 5

We've seen a ton of that right now. A lot of people will pick a tool up, try it, and when it doesn't immediately work, they discard it. They say, "my job's safe, I don't need to use this," and they move on. Whereas we've been seeing a lot of success with immersion workshops: having people spend several days just learning the techniques to actually get the most from those tools, and then walking away feeling completely changed. "Wow, this is amazing, it's going to help me do my work so much better," and really being excited about it. So there's some fear associated with this, and I think those COEs can really help get people past it and realize this is helpful, amazing technology that should help us all do more.

Speaker 7

So if you capped headcount for a year, what practice would still raise velocity?

Speaker 5

The biggest thing, and it kind of falls to that COE, and we've been talking about it here at reInvent a bunch, is context engineering. (Oh, interesting.) The way I've been talking about it is that every time you make a call to an LLM completion API, or to an agent in your IDE, you might as well be pulling somebody brand new off the street who doesn't know anything about what you're trying to do. When we actually do pull people in off the street, we onboard them to our organizations, let them know what we're trying to do, and give them, effectively, context they can use to do their job and make decisions along the way. When we ask an AI agent to go build a feature that does something in an application, we need to do the same things, right? That builds on the concepts of prompt engineering by establishing layers of context that need to be maintained along with the code base as the product evolves. It's a new discipline we're almost seeing emerge, a hat that needs to be worn by one person and/or multiple people on a team taking collective ownership.
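
[Editor's note] The layered-context idea described here can be made concrete with a minimal sketch. The layer names, their contents, and the helper functions below are hypothetical, and the message shape assumes a chat-style completion API; none of this is from the episode:

```python
# Sketch of "context engineering": assembling named context layers into the
# system prompt of a chat-completion request, so the model is "onboarded"
# before it sees the task. Layer names and contents are invented examples.

def build_context(layers: dict[str, str]) -> str:
    """Concatenate named context layers into one system prompt."""
    return "\n\n".join(f"## {name}\n{text}" for name, text in layers.items())

def build_messages(layers: dict[str, str], task: str) -> list[dict]:
    """Produce a chat-completion message list: context first, then the task."""
    return [
        {"role": "system", "content": build_context(layers)},
        {"role": "user", "content": task},
    ]

# These layers would live alongside the code base and be maintained as it evolves.
layers = {
    "Architecture": "Monorepo; services communicate over gRPC.",
    "Conventions": "Python 3.12, type hints required, pytest for tests.",
    "Product": "Invoicing app; money is stored as integer cents.",
}
messages = build_messages(layers, "Add a late-fee calculation to invoices.")
```

The point of the sketch is only the shape: versioned context layers, owned by the team, prepended to every call rather than retyped ad hoc in each prompt.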

Speaker 7

That's interesting, because you're right: prompt engineering is really adding context to the specific code that you're writing. That's actually great, I love that. Okay: non-compliant AI tools. I don't know about you, but I have used ChatGPT at work, and it may not be compliant. Do we ship it or skip it? (Oh yeah, 100%, with an enterprise license.) Yeah, that's the right answer; it's still getting used. Four-day work weeks? (Skip it.) Skip it!

Speaker 5

Sorry, everybody, but we can do more with all these tools. Let's do it. Let's do three-day work weeks. AI code reviews? You mean using AI to do them? Yeah, let's ship it.

Piyush Diwan, Director of Software Engineering, BeOne Medicines

Speaker 7

Ship it. For sure, 100%. All right, man. I love it. I appreciate you coming in. Great to see you at the show. (Yeah, thanks for having me.) All right, buddy. Thank you for coming to reInvent, man. It's absolutely crazy here. I've really never seen a show this crowded before.

Speaker 2

Absolutely.

Speaker 7

It's like a sporting event. Yeah. I wanted to have you on because you're an engineering leader in the midst of all this hectic time with AI, and the acceleration is really quite crazy. I wanted to understand from you: if we talk about AI in general, where are you as an engineering leader seeing hype, versus actual productivity and efficiency gains?

Speaker 2

Yeah, that's a great question. I think AI is here to help us in every step of the whole software development lifecycle. For me, the biggest areas where AI has been able to help run from the requirements, translating them into actual actionable tasks, to writing code, to testing, and to deploying. So I personally think AI's role is equal in every step of the software development lifecycle.

Speaker 7

Are there any areas in that where you feel it's not helping so much, or we haven't quite figured it out yet?

Speaker 2

I think testing is one area.

Speaker 7

Testing?

Speaker 2

Yeah. And that's purely because of gaps I've found in its ability to understand the requirements properly. If the requirement understanding is wrong in the first place, it may come up with a wrong or incomplete set of tests. That's probably one of the gaps I've seen.

Speaker 7

Gotcha. So what about a modernization that you've gone through that worked, even though maybe you would or wouldn't repeat it?

Speaker 2

Yeah. In my previous role at a crypto startup, we went ahead with modernizing, or decentralizing, as they say, the entire infrastructure. We spun up different AWS accounts and different EKS clusters that were all self-managed, and we deployed all of our applications and microservices on those clusters, which turned out to work really well. But I wouldn't repeat it, because some of our applications were super lightweight, and we could have gone with serverless technologies with less operational overhead, for example AWS Lambda or Fargate, even EKS on Fargate. So for some of those things we could have gone a much simpler route.

Speaker 7

So if you had to cut your platform spend by 30%, what goes first and what stays?

Speaker 2

Yeah, so again, in every platform team there are nice-to-have things and must-have things. The nice-to-haves: duplicate vendors that offer pretty much redundant or overlapping capabilities, low-impact environments, and redundant or complex tools that are no longer relevant or no longer used. But my favorite is the unused and underused resources. I've been able to reduce cost by more than 40 to 50% in previous roles just by focusing on shutting down all the unused and underused resources, and also right-sizing some of them. So for me that would be the first area. The must-haves are CI/CD and observability. With observability, personally speaking, there's no limit to it. You can always have more dashboards, more insights, more information about your platform, some of which you wouldn't even imagine or know you'd need. There's always some scope there, and for CI/CD equally.
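
[Editor's note] The "unused and underused resources" tactic can be sketched as a simple classification pass over utilization data. The thresholds, resource names, and utilization figures below are invented for illustration; in practice the numbers would come from an observability or cloud-billing API:

```python
# Sketch: flag candidates for shutdown or right-sizing from average CPU
# utilization. Thresholds and the sample fleet are illustrative only.

def classify(avg_cpu_pct: float,
             unused_below: float = 2.0,
             underused_below: float = 20.0) -> str:
    """Map average utilization to a cost action."""
    if avg_cpu_pct < unused_below:
        return "shut down"      # effectively idle
    if avg_cpu_pct < underused_below:
        return "right-size"     # running, but oversized for its load
    return "keep"

# Hypothetical 30-day average CPU utilization per instance.
fleet = {"api-1": 45.0, "batch-2": 1.2, "cache-3": 12.5}
actions = {name: classify(cpu) for name, cpu in fleet.items()}
```

A real pass would weigh more signals (memory, network, request counts, time-of-day patterns) before acting, but the keep / right-size / shut-down triage is the core of the idea.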

Speaker 7

That's a great point. Why do you think dashboards go unused?

Speaker 2

Because they're statically created. I have use cases one, two, and three, and I create widgets or dashboards for those. But there's always a 1.5 or a 2.5, some corner case we miss. That's why, with AI capabilities, what we're trying to do right now is build agentic solutions that can surface all of that information. It becomes a conversational way of getting to the information you're looking for. Instead of statically defining dashboards, you simply ask questions of your platform and your observability tools, and they answer by going behind the scenes and analyzing all of the data sets, metrics, logs, events, and so on. I think that's the direction we need to go.
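
[Editor's note] The conversational-observability idea can be illustrated with a toy sketch. The metric store, the keyword routing, and the summary format below are all stand-ins invented for this example; an actual agentic tool would use an LLM to interpret the question and a real metrics backend to answer it:

```python
# Sketch: answering an ad-hoc question over metrics instead of relying on a
# pre-built dashboard. Data model and matching logic are toy stand-ins.

# Hypothetical time-series store: (service, metric) -> recent samples.
metrics = {
    ("checkout", "error_rate"): [0.01, 0.02, 0.09],
    ("checkout", "latency_p99_ms"): [240, 260, 255],
}

def answer(question: str) -> str:
    """Very rough keyword routing: find the metric a question refers to and
    summarize its latest value. An agent would do this step with an LLM."""
    q = question.lower()
    for (service, metric), values in metrics.items():
        if service in q and metric.split("_")[0] in q:
            return f"{service} {metric}: latest={values[-1]}, max={max(values)}"
    return "No matching metric found."
```

The payoff is that the "1.5 and 2.5" corner-case questions no longer require a widget to exist in advance; any question the data can support gets an answer.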

Speaker 7

This is something I wonder about quite a bit. If you look at an organization, the finance, marketing, and sales departments run heavily on metrics; they live and die by them. Why do you think the engineering group hasn't been able to adopt that? I work with a lot of engineering organizations, and adopting even DORA metrics in general seems to be very difficult. And when they do, oftentimes they don't want to be measured. I'm just curious: why do you think that is?

Speaker 2

I think it comes from the fact that a lot of engineering organizations are very top-down focused. They're given the requirements, the use cases, the business problems, and they're asked to build the solutions, and they get narrow-sighted on achieving those results and delivering those outcomes. I think that's one mindset where we miss out on a metrics-driven approach. I've also seen some organizations work really bottom-up, where we define the ground-level metrics and go by them, trying to achieve them right from the get-go, even when the requirements aren't very clear. For example, in one of my previous organizations, when I was trying to build a platform as a product, I defined some core metrics: reduce cost by 30%, increase utilization by 50%, achieve a 90% availability SLA. Once we defined those metrics, we started building the product around them, and that's how we were always able to backtrack and check how well we were progressing and what gaps we needed to fill to achieve those results.

Speaker 7

So do you get a bit of tunnel vision with metrics, depending on the initiative? The metric becomes the tunnel vision: okay, we have to hit this one thing, and maybe other stuff goes by the wayside, right?

Speaker 2

Yeah.

Speaker 7

Yeah. So what's something in engineering that you just absolutely think is a waste of time, and we should stop doing it?

Speaker 2

It might be controversial. (You can be honest.) Like I said, I might be saying this controversially, but I think process-driven software development, quote-unquote "agile" methodologies, are oftentimes over-engineered. There are practices like predicting the capacity and bandwidth of the team, but my view is that engineering is unpredictable. That's the fun part of it, right? If it's that predictable, it probably shouldn't be called engineering; it should be called something else.

Speaker 7

So if you're not running into problems and having to iterate, then you're probably not doing the right thing.

Speaker 2

Yeah. For example, back when I was building a platform team at T-Mobile, I achieved 90 story points in one sprint and 10 story points in another. And I loved it, because the unpredictable part was the fun part for me. Oftentimes I've seen that in the pursuit of following strict guidelines and processes, we lose sight of what we really need to deliver and how we should innovate. I think that's the part we should keep.

Speaker 7

So how would you quantify work delivered, then? How would we say what good is, if you can do 90 story points one sprint and 10 the next? What is the end result we could measure to say that, regardless of the number of story points, X was accomplished?

Speaker 2

Yeah, I think it should be measured in terms of end-user impact.

Speaker 7

Okay.

Speaker 2

Even if the end user is your own team, say I'm building an internal tool for my own team: what was the impact? How many users actually tried it, how many times was it used? Those are the kinds of things you should always measure.

Speaker 7

If AI disappeared tomorrow, what's the boring platform investment that sticks around and is still always going to work?

Speaker 2

Yeah, I'll go back to my previous answer: CI/CD and observability. Those are the two things. (Gotcha.) A very important part, right? (Absolutely.) You just can't get away without them.

Speaker 7

What's the most unpopular policy change you made that actually sped up delivery?

Speaker 2

That actually sped up delivery. An unpopular change?

Speaker 7

Yeah, an unpopular change. Okay.

Speaker 2

We were using a lot of open source tools, to the extent that it was almost breaking our internal monitoring stack by giving us a lot of false positives. Something changes upstream, the downstream isn't prepared, and we were in a continuous loop of fixing those problems by adding one-off, glue-code-type solutions. It was getting out of hand. So I proposed a policy to stop using open source, at least for those specific use cases. It was very unpopular, as you can imagine. But we ended up building our own in-house monitoring stack, with things like HealthWatch and a smoke test suite, and that gave us much deeper and broader visibility into the health of our systems. That's not true for every use case; the world lives on open source, so not using open source is not a great idea across the board. But for certain things, where the open source wasn't really maintained or well supported by the community, using those tools wasn't a good idea. That was the change we made, and we really achieved great results with it.

Speaker 7

What's the single AI tool you're using right now that you feel you're getting the most value out of?

Speaker 2

I think it's GitHub Copilot, and we're leveraging it to the full extent. We use it right from requirements translation, which in most cases is very ambiguous, turning those requirements into very specific, user-actionable tasks, and from there through software development, testing, and deployment.

Speaker 7

Are you using AI for documentation? I feel like that's a really easy one, right?

Speaker 2

It really helps. We use AI for writing our runbooks. For any troubleshooting, any SRE or DevOps-type work, we create extensive runbooks for different kinds of scenarios, and we use AI to do that.

Speaker 7

What's one vendor narrative that you think misleads engineering leaders today?

Speaker 2

Wow, that's a great question. I've seen that build versus buy is still a grey area for a lot of engineering leadership. There's a misconception that vendors can abstract everything. A lot of it is true, but there are areas where you have to customize the solutions, where you have to build in-house tools to augment what the vendors can offer and support. And again, there's no right or wrong answer on when to buy versus build; it really depends on the use case, your skill sets, your bandwidth, your priorities. I use those parameters as the variables in the equation to decide whether we should build, buy, or do both: buy first, and then build on top of what we bought.

Speaker 7

So how much do you think the build option is driven by job security?

Speaker 2

Yeah. The whole narrative about job security has two or three different threads behind it. Number one is tribal knowledge: people tend to keep information to themselves, because that gives them job security. Sometimes people build unnecessarily complex systems so that they'll remain in the job to maintain them. I think all of those things are very much present even in today's world. But I personally think that AI, automation, and the whole "buy" narrative isn't there to reduce job security, but to actually improve it. By that I mean: when you buy something and demonstrate that it's useful, that it solves a problem that would have taken you weeks or months to build, and you're now able to do it much faster, you're also improving your own credentials, right? (Right.) So I think that's a very overlooked point, and something people should remember when they think, "if I buy something, I might reduce my job security." That's not the case.

Speaker 7

I agree. Okay, ship it or skip it: AI code review. (Skip it.) Skip it. Why, you don't trust it yet?

Speaker 2

Because I want to first trust the code written by AI. Only then will I trust the code review by AI.

Speaker 7

Okay, ship it or skip it: four-day work week. (Ship it.) Ship it. I love this guy. Okay, ship it or skip it: no-deploy Fridays.

Speaker 2

Oh, hell yeah. Ship it.

Speaker 7

Ship it.

Speaker 2

Yeah.

Speaker 7

I love it.

Speaker 2

Yeah.

Speaker 7

Awesome, brother. Thank you for coming.

Speaker 2

Absolutely, appreciate you. Great meeting you, and a pleasure being here.

Outro

Speaker 7

Loved having you.

Speaker 4

Those were really insightful discussions, Thomas. So, for our listeners, what would be one takeaway for 2026, either for their engineering excellence goals or their modernization? You've assessed the SDLC for a lot of companies. What would be one golden rule they can follow?

Speaker 7

Yeah, if I had to boil it down, I would say: do not mistake AI acceleration for engineering excellence. They are two very different things. I think engineering excellence in 2026 is really going to come down to whether your system can absorb speed without losing trust. And the word trust there is absolutely important. If we think about implementing AI agents, the number one thing that will prevent us from doing it is trusting that the agent will take the actions we want it to take and not go do something destructive. That means strong platforms, embedded governance, and clear ownership are of the utmost importance. I've talked a lot about using an internal developer portal as a context layer for your agents, and things of that nature. It's really going to be a matter of trust, and of building in a process to make sure your agents and other AI pieces can check governance and safety guardrails before taking action.

Speaker 4

Yeah, I love that touch on guardrails: guardrails, not gates. I think the Harness field CTO office also repeats that frame, guardrails, not gates. So, you touched on this before: what would be one make-or-break factor for platform engineering teams in the next 12 months, based on your re:Invent chats?

Speaker 7

Yeah. The one that comes to mind for me, make or break, in my opinion, is context and operating-model maturity. A lot of teams are still thinking about AI mostly in terms of models and tools, but one of the most important themes I heard was that the future is going to be much more context driven. We know that when anybody writes a message to GPT, the less context you give it, the worse the response, and the more context, the better. When we translate that into engineering, I think the teams that win over the next 12 months are the ones that do two things really well. First, they create a strong golden path through platform engineering, where the secure and governed path is also the easiest path. I think that's really important. And second, they invest in context: the meaningful information, standards, workflows, and clarity that help both the human and the AI ultimately operate more effectively.

Speaker 4

Thomas, thank you so much for sharing these insights. If our listeners want to connect with you, learn more about your work, where can they find you online?

Speaker 7

Yeah, absolutely. I'm on LinkedIn as Thomas Dockstader. As I mentioned before, for Harness I do a lot of consulting work for our clients, doing SDLC assessments that utilize DORA and Accelerate. We look at it not from a "hey, I'm a vendor, how can I get you to buy my product" perspective, but from the perspective of asking what people, processes, and tools you're using to operate your SDLC, helping organizations zoom out and look at their whole SDLC, and then, if there's a Harness product that could help them, awesome. That's the direction I've had a lot of great success with, with our current customers.

Speaker 4

We'll add the links in the chat. That was Thomas Dockstader, engineering excellence at Harness. Thanks so much for an amazing season four. I'll see you all in season five.