Welcome to the Software Security Gurus webcast with Matias Madou. In episode 5, Matias interviews Fredrick "Flee" Lee, a long-time cybersecurity expert and Chief Security Officer at Gusto.
They discuss common pitfalls on the road to a thriving AppSec environment, including effective training, best practices, and the lack of focus on custom rule-writing. We also hear his advice on building a super team, as well as a robust security program within an organization.
Introduction: 00:00-02:40
Who should be on an AppSec team? (and other security pitfalls): 02:40-08:48
The right approach to software security training: 08:48-17:00
Customized rule-writing and tools: 17:00-21:05
Improving your software security program: 21:05-30:45
Matias Madou:
Welcome to the Software Security Gurus Webcast. I'm your host, Matias Madou, CTO and co-founder of Secure Code Warrior. This webcast is co-sponsored by Secure Code Warrior. For more information, see www.softwaresecuritygurus.com. This is the fifth in a series of interviews with security gurus, and I'm super pleased to have with me today Fredrick Lee, or Flee for short. Hey Flee, how's it going?
Fredrick Lee:
It is going well. I always laugh when people actually use my full, actual name. I just don't hear it that often. I'm doing good, how are you doing?
Matias Madou:
It was actually difficult for me to read, because I'm not used to saying Fredrick Lee.
Fredrick Lee:
Yeah I know.
Matias Madou:
So Flee, do you mind saying a couple of words about yourself?
Fredrick Lee:
Yeah. So, my name is Fredrick Lee, but most people actually call me Flee. I am the Chief Security Officer of a company called Gusto. We focus on building a people platform, to essentially make it easier for small and medium sized businesses to deliver benefits, payroll, et cetera, all the things it actually takes to have a really, really good and engaged employee base. Prior to Gusto, I ran security at a Fintech company called Square, and prior to that, I was also in security at another financial company called NetSuite. So, I've been around the block several times. I've done some small startups. I started the security team at Twilio. I've done really, really large companies. I was a founding member of application security at Bank of America. So, I've actually been around the block. There's a reason why I have all this gray hair. So.
Matias Madou:
Yeah, that's very different than even two years ago.
Fredrick Lee:
Yeah.
Matias Madou:
So fantastic to have you Flee, for this webcast. If you don't mind, I actually have two topics in mind, which I hope are near and dear to your heart. The first one is around application security, AppSec pitfalls, and the second one is more around, as you just mentioned, you've been at a lot of companies, kickstarting security programs or improving security programs. So let's start with the first one, application security pitfalls. I went through a video of yours, from AppSec Cali, I think, where you briefly touched on a couple of pitfalls. And I saw you mention that a couple of times before in other videos too. So it looks to me like it's near and dear to your heart, trying to avoid problems.
Matias Madou:
So maybe the first one that I have here that you called out is, "Hey, not having developers on your security team is not good." You have to have developers on your security team. But at the same time, I was wondering, shouldn't we all have application security people that are reformed developers? How do you see that? What type of people do we need on our application security teams?
Fredrick Lee:
Oh, yeah, that's a great question. And particularly the latter suggestion you have, with regards to essentially creating application security people from developers. Part of the reason why I'm a huge advocate of having developers on the security team is, one, purely the power of code: having your security team able to actually change what engineering is doing by introducing new components into the engineering ecosystem, to help solve some of those security problems by building or improving frameworks, et cetera. With regards to actually bringing some of those developers over into application security, one, you already get a headstart by having people that [inaudible 00:03:40] good software fundamentals to actually help make some security improvements. Probably even more important is that key aspect of having developer empathy.
Fredrick Lee:
When you have developers on the security team, they can help the security team understand how the things it's going to introduce, whether that's a process, a tool, a new framework, or training, will land with developers. Engineers that were previously on product teams and are now application security engineers can essentially advocate for the developers.
Fredrick Lee:
They can say, "Hey, Flee this thing you're suggesting actually isn't going to work well within the CICD pipeline we've already built, or our developers don't have that kind of time to take four weeks of training. Can we actually make things a little bit more bite size?"
Fredrick Lee:
Some other good examples: occasionally we as security practitioners come into organizations with a one-size-fits-all approach toward security. A good example, I'll use training again, is you'll often find people will come into a company and say, "Oh, well, hey, let's train all the developers." And they go to the OWASP Top Ten, which is a great resource, but they often forget that OWASP is Open Web, so a lot of the stuff in OWASP is targeted towards web applications.
Fredrick Lee:
So you lose credibility as a security team if you're trying to give firmware engineers, who spend the majority of their time writing C code for embedded systems, training around SQL injection. But having a developer on your team can help with that. They can help call out some of those pitfalls, keep the security team empathetic towards the rest of the developers, and sometimes also call out the security team on our own BS. They can say, "Hey, this tool actually isn't all that great. This doesn't work inside of the ecosystem."
Fredrick Lee:
I'll use maybe some other examples that are near and dear to you and me and our paths: when I think about things like static analysis tools, I am definitely guilty, because I am one of the people that helped write and work on Fortify and other static analysis tools.
Fredrick Lee:
And we did a lot of stuff from the perspective of a security practitioner, but those tools may not have been as consumable by a developer as we would have liked, and ultimately the developers are the solution for software security problems. So the more that we can do to enable them, the better. And by having developers on your team that can spot those things, they can say, "Hey, you know what, this static analysis tool that you've suggested, Flee, really doesn't work in our full... I would rather have something that's maybe embedded in the IDE, or something that's actually really, really useful and looks right to me, as a developer." That's some of the other great stuff you get out of having developers on your security team. It's just always useful to have people that can write code. I've never found an issue that having some good software engineers is not useful for. So.
Matias Madou:
So it sounds like the role of an application security person that is not able to code is disappearing? Is that on the way out? Or do we still need AppSec people that are not able to code?
Fredrick Lee:
Oh yeah. Let's get controversial here.
Matias Madou:
It's a question.
Fredrick Lee:
Yeah. Yeah. I would love to see it go away. I personally believe that it's weird to call somebody an application security engineer if they actually can't engineer.
Matias Madou:
Yeah.
Fredrick Lee:
And maybe bear with this stretched analogy here. Envision a scenario where you run a newspaper or a magazine, and you're trying to ship out a secure coding magazine monthly, and the people in charge of being editors, to help spot grammar issues and things like that, can't read or write. They can't be as helpful as somebody who's been there and done that, who can actually write code, understands what the code is doing, and can help spot things.
Fredrick Lee:
So, for example, if you don't really understand code, you can't help identify actual [inaudible 00:07:38] inside of a code base, which you could then leverage to figure out some systemic solutions. You can say, "Hey, I have this problem inside of my code base. Maybe my developers aren't really good at input sanitization. I understand the code well enough that I can figure out where I can contribute and inject better components." So.
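A minimal sketch of the kind of "better component" being described, not taken from the interview and assuming a Python web codebase; the function names, the allow-list pattern, and the %s placeholder style of DB-API drivers such as psycopg2 are illustrative assumptions:

```python
import re

# Hypothetical shared component a security engineer could contribute to the
# application framework, so every product team gets the same input handling.
_USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{1,64}$")

def validate_username(value: str) -> str:
    """Allow-list validation: reject anything outside the expected shape."""
    if not _USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

def find_user(cursor, username: str):
    """Look up a user with a parameterized query instead of string concatenation."""
    validate_username(username)
    # The driver handles escaping; the query text never contains user input.
    cursor.execute("SELECT id, email FROM users WHERE username = %s", (username,))
    return cursor.fetchone()
```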
Matias Madou:
Yes.
Fredrick Lee:
But to make it crisp and tight: yes, I want AppSec people that cannot code to go away. I want that discipline to go away. I don't think it's useful. So, if you're watching this and you're in AppSec and you can't code, please learn to code. Yeah.
Matias Madou:
Or go to network security.
Fredrick Lee:
I would love to see a future where all security practitioners know how to code.
Matias Madou:
Yeah. It's the basis of everything. Everything is code; it starts with all the bits and the bytes that we are writing, so.
Fredrick Lee:
Oh yeah.
Matias Madou:
So a second pitfall that you commonly talk about is that you've seen a lot of bad software security training. And first of all, Flee, I have to say, you will not get any additional free Secure Code Warrior licenses if you say something nice about our product. But all kidding aside, if you think about software security training, you quite often say, "Hey, this is a pitfall for a lot of organizations." So what should it be? And what should they avoid?
Fredrick Lee:
Yep. Probably the... Actually, I'll make it really, really tight. What should it be? It should be lightweight, ideally just-in-time, delivered to the developer when they need it, and extremely targeted to the organization. And if you can do that, you actually get a lot more engagement. And I'll repeat that again: really light. You can't have training that takes the developer four weeks to go through. I'm not saying that training can't be useful, but the goal is to get them to write better software and identify issues from the start, as opposed to trying to teach them to become security experts. That's actually our job: being the security experts. So keeping it tight. The other is keeping it light and somewhat just-in-time, I really like that approach. And I'll go back to the previous analogy about your firmware engineer: you want it to be really, really targeted to what they're working on.
Fredrick Lee:
You want that training to be in C or some other embedded language. You want that training to be extremely relevant, around things like memory issues, et cetera, maybe [inaudible 00:10:10] time-of-check to time-of-use issues, those kinds of things. And you want it to be right there with them. Obviously I want these embedded engineers to know all of the things around security, but I really need them to nail the things that are relevant to their day-to-day job. Because when training is relevant to a developer, they embrace it. They love it. When it's non-intrusive, maybe small hints, et cetera, it's even better for them, because it looks like other things that they're utilizing. Yeah. And then the whole idea is really right-sizing it for the organization. Recognizing that if your organization is primarily focused on building static websites, then that training is going to look different than a company that's doing a lot of high-volume financial transactions.
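To make the time-of-check to time-of-use point concrete, here is a minimal sketch, not from the interview and written in Python rather than C for brevity; the file path is a made-up example, and O_NOFOLLOW is POSIX-only:

```python
import os

REPORT_PATH = "/tmp/report.txt"  # hypothetical path, for illustration only

def read_report_racy() -> str:
    """Time-of-check to time-of-use bug: the file can be swapped (e.g. for a
    symlink to a sensitive file) between the exists() check and the open()."""
    if os.path.exists(REPORT_PATH):
        with open(REPORT_PATH) as f:
            return f.read()
    return ""

def read_report_safer() -> str:
    """Skip the separate check; open once and handle failure at the point of use.
    O_NOFOLLOW also refuses to follow a symlink in the final path component."""
    try:
        fd = os.open(REPORT_PATH, os.O_RDONLY | os.O_NOFOLLOW)
    except OSError:
        return ""
    with os.fdopen(fd) as f:
        return f.read()
```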
Fredrick Lee:
So that's some of what I think training should look like: really, really approachable, really, really developer-centric, the things that actually appeal to developers. What should the training not look like? There are so many ways to get this wrong. One way I generally see it fail is when people don't customize it for the organization. Developers will immediately, and probably rightfully so, turn their nose up at something that looks like it's just canned material. And you know it's going to be the same weird corporate PowerPoint slides from the nineties with some weird charts, a couple of examples from OWASP that are irrelevant to the company, probably even in a language that's not their language. I see that one so often, where you'll buy training, and really you're leasing time from maybe a third party, and you look at the training and it's like, "Oh, well, hey, I want this training."
Fredrick Lee:
And the examples are in Java. Well, my shop is maybe a Python shop or a Go shop or a Ruby on Rails shop. Java training doesn't do my developers any good. So making sure you're customizing is one of the pitfalls to watch. The other is making it too heavy. A developer's time is expensive, and rightfully so; product teams should be stingy with that time. So that's one of the other things: really making it approachable and tight. The other big thing is that often I see it go wrong because the training is just boring. It literally is. It's boring.
Fredrick Lee:
Oftentimes the training is just you as a trainer sitting up there in front of some slides, and the participants, the students slash developers, are literally just doing passive listening. So have training that is engaging and active, training that maybe has some labs. If you're doing secure coding training, something like, "Hey, go out and actually fix this vulnerability. Let's write some code to fix this vulnerability. Test it, see how it works. Let's go and find defects in software together." Those things are much more interactive, and that interaction is what really resonates with developers and allows it to stick, so.
Matias Madou:
Yeah. And so I think budget also comes into play here, because I hear you say customization. I remember back in the day, before we started Secure Code Warrior, I remember us working together, and it can quite quickly become super expensive if you want to tailor everything towards your technology and your stack. So it's a fine balance, I would say: hey, you want to have something that is relevant, but not super detailed, because at that point you're spending a lot of money on your training.
Fredrick Lee:
Completely agree, because it has to be scalable. When you look at some of these larger organizations... obviously I've been at really, really small companies. When I joined Twilio, we were less than a hundred people, so I could literally just deliver the training myself, and that's actually really scalable at that point. But as companies grow, they get bigger. You look at the size of a company like Square, several thousand people and growing, et cetera; you can't have one person delivering all the training, that doesn't scale, but you do still need to actually [inaudible 00:14:10]. If you look at a company like Bank of America or HP, now you're literally talking about tens to hundreds of thousands of employees. How do we make something that's actually scalable? And you're right, there is that trade-off. You want some customization, but you don't want to spend all your time trying to customize every single aspect, because that just doesn't scale.
Fredrick Lee:
And, in an ideal world, you want your security engineers' time more invested in building things that are actually specific to the company from an engineering standpoint, as opposed to trying to solve these somewhat generic problems.
Matias Madou:
Yeah.
Fredrick Lee:
So I'll give you an example on the training side: buffer overflows are buffer overflows. At a high level, that is almost a commodity problem, something anybody can explain to somebody. And there are other people that have spent a lot more time than I have just focusing and thinking about how training works well, and I would actually prefer for them to create that content.
Fredrick Lee:
So maybe really what I'm arguing for is that training should at least be curated.
Matias Madou:
Yep.
Fredrick Lee:
Right. And curation could be that you're bringing in good training and good components from other places. Some of it's like, "Hey, there are really good examples online, good examples from the various tools and vendors you're going to use."
Fredrick Lee:
You should really be mindful about the vendors that you pick, if you're using third parties to help with training. And ideally, pick vendors that actually have a background in development and a background in security. You'd be surprised at how many training vendors you can go to where nobody on their staff has either written code or actually worked on a security team.
Matias Madou:
Oh, wow. Okay.
Fredrick Lee:
Yeah.
Matias Madou:
That's a recipe for disaster.
Fredrick Lee:
Oh yeah. This is a whole other rant that Flee has about the security industry and security vendors in general.
Matias Madou:
Oh yeah.
Fredrick Lee:
Getting tools that are actually okay, where the code examples are written by [inaudible 00:16:01] users and the training is written by people that actually have backgrounds in application security, and then curating that down to, "Here's the training my developers use. Here are the modules that my developers should actually be going through."
Fredrick Lee:
And occasionally you will have to make some additional tweaks. But probably, to make that a little bit tighter: curation. Doing a better job of curation and picking good vendors up in the [inaudible 00:16:25] with you as well.
Matias Madou:
Yeah. So one quick note here: it all sounds trivial, but even for us it's really hard to find people with both security knowledge and COBOL programming knowledge to create training modules. Those people are hard to find, and once you find them, it's golden.
Fredrick Lee:
Oh yeah.
Matias Madou:
A third pitfall that you quite often mention is untuned tools. And let's face it, Flee, we were both at Fortify, and I think we were pretty much the only people that could write custom rules. We told our customers to write custom rules, but we actually knew that it was really, really hard. So I hear you say, "Hey, you have to customize your tools," but I also know the reality that, quite often, tools are really hard to customize. So how do you deal with that? On the one hand you say, "Oh, you have to do it," but at the same time, you also know for sure that it is really hard.
Fredrick Lee:
Yep. And maybe I'll go back to one of our previous answers: there's nuance there. "Untuned" and "tuning" are just broad words. And maybe this goes back to this idea of curating again.
Matias Madou:
Okay.
Fredrick Lee:
So take even a basic tool. Let's go back to our past with Fortify: even if you don't actually know how to write rules for Fortify, you can still tune Fortify and say, "Hey, I only care about these really, really high quality, high confidence, actionable results inside of Fortify." And so then it's like, "Okay, well, hey, now I'm only keeping the rules that I care about. There are only maybe five, maybe ten, so I'm not boiling the ocean anymore. These are going to be really, really targeted rules with good true positive, false positive, false negative characteristics, et cetera."
Fredrick Lee:
So it's actually really, really approachable by the developers, because really what you're after is giving developers actionable intelligence. You want to give them results that they can actually act upon and results that they can actually understand. And that's where a lot of the tuning comes into place. Obviously you can be way more advanced; there's a ton of things you can do with static analysis tools or other security tools to be even more specific to your organization. It's been my experience, and I would definitely love to get your perspective on this as well, that for a lot of companies it's that classic, quote unquote, 80/20 rule. 80% of what they do is going to be somewhat of a commodity, and it's, "Oh, okay, well, let's just curate only the results and types of rules that we care about." So if I'm in a web shop, once again using a static analysis tool, I don't need to care about buffer overflow rules or anything like that.
Fredrick Lee:
These are just things that are either going to be low value to me or, more than likely, false positives, those kinds of things. Tuning also means making sure you're not pulling in test code, for example, all these other things that can occasionally become pitfalls. And then there's [inaudible 00:19:31] as well: the pitfall is that it can reduce the confidence that developers have in the tools, because what developers see is this massive amount of results that come back, and they genuinely want to do the right thing. But the programmer sees, "Oh, this little button went red. I guess that's a bad thing. Oh, well, there's 10,000 things that are red. All 10,000 of those can't be the most important thing, so tell me which ones are actually the most important."
Fredrick Lee:
And I think without tuning tools, you do a disservice to the security team and to the developers' perception of what the tools are like, because they're just going to run away; 10,000 is just way too much to deal with. If you're a developer, 10, 20, even 30 results is like, "Okay, this makes sense, and these actually look like real results to me." There's not a lot of weird security magic or nuance in the [inaudible 00:20:20].
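As a rough illustration of that kind of curation, not something described in the interview: many static analysis tools can emit SARIF output, and a small pipeline script can keep only a short allow-list of high-confidence rules before anything reaches developers. The rule IDs and file name below are hypothetical.

```python
import json

# Hypothetical allow-list: the handful of high-confidence, actionable rules
# this organization has decided it cares about.
CURATED_RULES = {"sql-injection", "hardcoded-credentials", "path-traversal"}

def curate_sarif(path: str) -> list[dict]:
    """Return only findings whose rule ID is on the curated allow-list."""
    with open(path) as f:
        sarif = json.load(f)
    kept = []
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            if result.get("ruleId") in CURATED_RULES:
                kept.append(result)
    return kept

if __name__ == "__main__":
    findings = curate_sarif("scan-results.sarif")  # assumed scanner output file
    print(f"{len(findings)} curated findings (out of everything the tool reported)")
```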
Matias Madou:
Yeah. I think it also has a lot to do with the metrics or the objectives that we had previously. Like 10 years ago, when you were comparing static analysis solutions and one found a thousand problems and the other one found 10,000 problems, it was, "Let's go over there. This one finds 10,000 problems. That's what we need."
Matias Madou:
So I think that has shifted a little bit because we're no longer trying to find as many problems as possible, but we're trying to find really the problems that developers can fix and should fix.
Fredrick Lee:
It's just making security more pragmatic.
Matias Madou:
More pragmatic.
Fredrick Lee:
And that's what [inaudible 00:20:57]. I think a lot of people, and this is a trap that I think we fell into as security practitioners, tried to create an environment of zero risk and zero possibility of any kind of failure. That just doesn't scale. And it doesn't really recognize the reality of the world that we're in, and what it means to be an engineering organization and to act in an agile way. There are definitely things that you have to care deeply about, and you want to put a lot of emphasis and a lot of focus on those. But there are other things that are like, "Oh yes, in this really, really rare edge case, if we deploy this code in a completely different way, and somebody copies and pastes that code and then deploys it in another different way, then there might be a security issue." And that just really doesn't do anybody any favors.
Matias Madou:
Yep. So ultimately, you avoid that whole pitfall by, first of all, selecting the right tool, and then tuning it at a higher level, you can do it at the custom rules level, but at a higher level, so you get the value out of it that you're really looking for. So the second topic I would like to touch on is, hey, let's be real here. You kickstarted and improved a lot of security programs. You did it at Betfair, Twilio, NetSuite, Square, Gusto. And I also know that you're a very nice person, and if people ask for help, you do that free of charge as long as they give back to the community, because I know that you care about a couple of communities very deeply. So you do a lot for charity.
Matias Madou:
And if they give back to charity, you do that for free. So go to Flee if you need help. But with that experience that you bring, if you come into an organization, or they ask you for help, I was wondering: where do you look first? Where is the low-hanging fruit in software security programs quite often? Where are you like, "Well, if I go into an organization, I'm 75% certain that if I look under that rock, I will find something that is easy to improve and they will get a lot of value out of it"?
Fredrick Lee:
That's actually a really, really good question. Do I only get one choice, one pick?
Matias Madou:
Top three, whatever you want, whatever you do. What's the Flee magic?
Fredrick Lee:
Probably one of the very first things that I do is just go around; the low-hanging fruit is really people. I know it sounds super silly and touchy-feely. Yeah, I see your expression.
Matias Madou:
No, no, I agree. I fully agree with that.
Fredrick Lee:
Yeah. The low-hanging fruit is literally going around and asking people what they know about security and what they care about when it comes to security. Talking about security culture, I think as a culture we've had, frankly, just a really bad attitude towards developers. We think, "Oh, developers are bad and they're dumb. They don't care about security." It's the exact opposite. Developers love security. They care deeply about their products working well. They just don't always have the right information, and they don't know how to take the next steps. In every single organization I've stepped into, I've always found developers that are like, "Hey, I'm worried about this problem, and I think there might be a security concern there." They come running to you when you ask them what kind of security issues they care about and how you can help.
Fredrick Lee:
One of the most powerful things you can do as a security person when you show up at any company is, one, listen, and two, ask people how you can help. And that immediately gets people onto your side, when they recognize that, oh, you're there to help. You're there to help build. The other somewhat low-hanging fruit to go for is immediately trying to get an understanding of how software is built. What does the CI/CD pipeline look like?
Matias Madou:
Yeah.
Fredrick Lee:
So often what you can do is just give people visibility into what their software looks like. You can say, "Hey, maybe I inject a dependency checker into Jenkins," to give them an understanding like, "Hey, here are some outdated dependencies that you have that might have security vulnerabilities."
Fredrick Lee:
And it's really, really quick and easy and approachable. Or maybe you do leverage a static analysis tool that you then integrate into their CI/CD pipeline for people. Then, once again, you can actually give developers information. You don't say, "Hey, your code is bad." It's like, "Hey, I ran this tool. I looked at some stuff. This looks like it might be a concern to you. Are you interested in it?"
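A minimal sketch of that kind of pipeline hook, not discussed in detail in the interview: a small script reads a dependency scanner's report and surfaces only the findings worth a developer's attention. The report path, JSON layout, and severity threshold are assumptions made for the sketch.

```python
import json
import sys

SEVERITY_ORDER = {"low": 1, "medium": 2, "high": 3, "critical": 4}
THRESHOLD = "high"  # assumed policy: only surface high and critical findings

def gate(report_path: str) -> int:
    """Read a dependency scan report (assumed layout) and fail the build
    only when a finding meets or exceeds the severity threshold."""
    with open(report_path) as f:
        report = json.load(f)
    worth_raising = [
        f"{dep['name']} {dep['version']}: {vuln['id']} ({vuln['severity']})"
        for dep in report.get("dependencies", [])
        for vuln in dep.get("vulnerabilities", [])
        if SEVERITY_ORDER.get(vuln.get("severity", "low"), 1)
           >= SEVERITY_ORDER[THRESHOLD]
    ]
    for line in worth_raising:
        print(line)
    return 1 if worth_raising else 0

if __name__ == "__main__":
    sys.exit(gate("dependency-report.json"))  # hypothetical report file name
```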
Fredrick Lee:
It's not a stick approach. It really is like, "Hey, I'm here to help you understand these things. They're already things you actually care about." And like I said, it's super, super touchy-feely, because a lot of what we actually do in security, a lot of what actually drives security improvement, happens through people. There isn't a lot of deep, dark magic outside of just showing up, being helpful, and trying not to be a jerk. So that's probably the key bit of magic there. Yeah. And you completely lied to the audience; I am really mean, I'm not nice, don't listen.
Matias Madou:
But Flee, I think you nailed this one, in the sense that 10 years ago we always said, "Hey, developers and security, you guys have to work together," but then we only produced stuff for the security person, to be used as a stick on the developer, knocking on the developer. And I think what you're doing is the new way we need to approach security, which is saying, "Hey, how can I help you? And how can I unblock you so your stuff gets into production?" I think that's the new way we have to look at security, and all the things that you just mentioned fit perfectly in there.
Fredrick Lee:
Yeah. That's how security teams should work: when you have a good security team, security is actually accelerating developers. It is making developers quicker. You're helping them find things on their own. You're helping them have fewer errors. In a lot of cases, you're actually writing code with and for the rest of the development teams. And so the security team should be one of the foundational go-to organizations within engineering, and developers should actively want to engage with them and actually want to work with the security team. The security team should always be adding value to the developers.
Matias Madou:
Okay. Last question, Flee, and maybe a fun one for you. As you remember, 10-plus years ago, you were going to teach me how to go properly to the gym so that ultimately I would have a six-pack. Remember that discussion? As you can see, and as you know, I miserably failed. And eventually I quit, because I was trying to bench press a record 20 pounds and miserably failed. So I do not want to be like Ronnie Coleman, and I do not want to be in the 1,000-pound club, but if I want to restart, what do I do? How do I approach it?
Fredrick Lee:
Oh, wow. How do you approach it? Literally just do something.
Matias Madou:
Do-
Fredrick Lee:
Just start. It's somewhat of a trick question, and oddly enough, I even wrote a security-related talk for someone on this. One of the best things to do is to just get started and to be consistent with whatever it is you start. In your case, I would maybe suggest you take up kettlebells. You should take up kettlebells, and you should probably stop drinking as much beer. I know it's [inaudible 00:28:29] for you.
Matias Madou:
It's cocktails. It's cocktails these days.
Fredrick Lee:
Oh, now you're fancy. I guess, that's where the PhD comes in.
Matias Madou:
Well, tiki drinks. Tiki drinks.
Fredrick Lee:
Yeah. Yeah. But no, partly this is why I get so excited when I actually talk about the fitness analogy: security itself is a type of fitness discipline. Fitness, and this is super cliche, is a lifestyle. It's something you practice continuously and you need to be consistent with. And you also need to pick something that works for you, and security programs are exactly the same: your security program needs to be something that works for you and your company. Fitness needs to be something that actually works for Matias. I know some of your hobbies. Actually, there's nothing about your hobbies that's going to help you with fitness.
Fredrick Lee:
But I do know that you like metrics. You like discipline. You like things that are also easy and approachable. So part of the reason I recommend kettlebells is that kettlebells are easy and approachable. You can have one in your house, much like some of the security tools that you and I like; the tools just fit into your lifestyle. You can literally have a kettlebell right there with your mobile office. You do a couple of swings in the morning, do some swings after this interview, and actually start building up your fitness, and you just keep being consistent with it. So the fitness thing is just: you constantly practice and you constantly make improvements. And it's a lifelong journey, my friend.
Matias Madou:
Sounds good. I'll buy them. And then I'll look for your YouTube channel on how I get started.
Fredrick Lee:
Yeah. It's super easy. I recommended kettlebells because it doesn't take a lot of training to do them, and they're really easy to learn, like some of my favorite security tools.
Matias Madou:
Sounds good, tonight I'll go online and I'll buy them. Flee, thank you. Thank you very, very much for accepting to be the fifth guru on the Software Security Gurus webcast. It was a fantastic chat. Thank you very, very much.
Fredrick Lee:
Well, thank you so much for having me on, sir.