Matt Howard: Good afternoon. My name is Matt Howard. I'm the Chief Marketing Officer at Virtru. I'd like to welcome you to a quick Hash It Out conversation with my colleague, Shannon Vaughn. Um, Shannon, we saw some interesting news yesterday, where the US Department of Defense was reported in TechCrunch to have left an email server exposed to the public. Curious to get your quick reaction to that.
Shannon Vaughn: Uh, well, I would say first: frustrating, right? Frustrated here. How many times have we heard about human error causing one of these things? And I think what's most frustrating is how much data was exposed, and for how long, especially knowing that it's in support of US Special Operations Command, when we already have solutions to prevent this kind of human error.
Matt Howard: So with that in mind, let's just step back for a second and reflect real quick on what was actually reported. Apparently, it's an email server that's hosted, I think, on the low side of the Microsoft Azure cloud. We don't know exactly what type of email server; it's probably safe to assume it's an Exchange server. What we know is that the public disclosure was yesterday, February 21st, in TechCrunch. According to the articles, it was February 7th when a white hat researcher discovered the server and responsibly reported to SOCOM that this email server was publicly exposed to the web and they might want to do something about it. And I think it was about 14 days later that the fix was effectively made and then publicly reported. Does that sound about right to you?
Shannon Vaughn: Yeah, there have only been a couple of reports on it, but that's what I've read and heard.
Matt Howard: And so, I guess, one of the big questions potentially is: how long was it exposed to the public internet prior to the white hat researcher discovering it on February 7th?
Shannon Vaughn: Yeah, I think that's the big unknown. I mean, I love my white hat researchers. You know, they help scan for vulnerabilities, they notify, they do everything the right way. But it openly says in the articles that there's data dating back years. And when you talk about what data that is, they say, oh, well, we see DoD ID numbers, which is considered PII, right? Controlled unclassified information. They say, we see SF-86s, which is the clearance paperwork, right, to get your security clearance. A security clearance is not just about that individual. It's about all the people that they know, their work history, their past experiences. It has your Social, it has your home address, it has your spouse and their Social, it has your family, the people that you know. I mean, there's a lot of sensitive information that goes into an SF-86.
Matt Howard: Yeah, and I think that's kind of the point. So we should collectively, as a community, be careful not to assume that just because this server was quote-unquote unclassified, it did not contain truly sensitive information, whether that's PII or, to your point, these clearance documents. It's ultimately very sensitive information that probably existed on the server. I'm kind of curious, when we step back and think about that for a moment, and we think about our tools and our kits collectively as an industry, how much of that information do you think might have been encrypted? So if somebody were to have had access to this server and exfiltrated everything on it, do you think any of it was encrypted?
Shannon Vaughn: I don't know if it was. Should some of it have been? Yes. You know, we work at Virtru; we like encryption. But I think you and I both agree, and I think pretty much the entire company agrees, that not everything needs to be encrypted. But there are rules, there are mandates, that CUI data and FOUO data have to be protected, and encryption is one of the ways to protect them. We already have capabilities that do this, right? So you could use a DLP or DRM kind of scanning tool like we have. I mean, everybody has these lower-level capabilities, precisely to account for humans making errors, because I don't think any company has a hundred percent perfect employees, right? Humans have faults.
Matt Howard: So if we just step back and imagine, you know, in a practical sense, maybe there was a DLP in the workflow, and maybe sensitive emails that were being sent internally, because this was an internal email server, as I understand it...
Shannon Vaughn: Yep.
Matt Howard: ...might have been flagged by the DLP for encryption, in which case some of those more sensitive emails that would have been discovered on that server by anyone attempting to exfiltrate data might have been met with an encrypted blob. That would be the best case, because at least not everything would have been just exposed in the wild; at least some of it would have been encrypted. But to your point, not all of it would have been encrypted, because nobody wants to encrypt everything. That's just not the way the world really works. You want to use encryption judiciously, for the things that are truly sensitive, and to your point, it really helps to do that with some type of automation in the workflow, with a DLP. Does that sound reasonable to you?
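The workflow described here, a DLP scan that flags sensitive outgoing emails for encryption, can be sketched minimally in Python. The rule names and regexes below are hypothetical illustrations; real products (Virtru's gateway, Microsoft Purview, etc.) ship far richer pattern libraries:

```python
import re

# Hypothetical DLP rules, for illustration only; real rule sets are
# much larger and use validation beyond bare regex matches.
RULES = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DoD ID": re.compile(r"\b\d{10}\b"),
    "clearance form": re.compile(r"\bSF-?86\b", re.IGNORECASE),
}

def flag_for_encryption(body: str) -> list[str]:
    """Return the names of the rules that matched; any hit means the
    outgoing email should be routed through encryption."""
    return [name for name, rx in RULES.items() if rx.search(body)]

print(flag_for_encryption("Attached is my SF-86; DoD ID 1234567890."))
# -> ['DoD ID', 'clearance form']

# A benign message matches nothing and can be sent in the clear.
print(flag_for_encryption("Lunch at noon?"))  # -> []
```

Anything returned non-empty would be handed to the encryption step rather than relying on the sender to remember, which is exactly the "automate away human error" point above.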
Shannon Vaughn: Yeah, I think it sounds reasonable. The question is, you know, what crypto algorithm did they use? Right? I mean...
Matt Howard: Yeah, I think that's important. We should probably poke at that for just a second, because, I don't remember if it was December, but it was towards the end of last year, there was another white hat researcher who publicly disclosed that they had responsibly reported to Microsoft a vulnerability in Office Message Encryption, the algorithm used by Microsoft to encrypt emails with OME. And the researcher reported that there was a vulnerability in the cipher, because it was weak, and it was subject to someone potentially having access to a sufficient number of emails, even though they were encrypted. If you have enough emails...
Shannon Vaughn: Pattern match.
Matt Howard: Right, with enough data pattern matching, you could basically infer the content of the underlying email without having to decrypt it.
Shannon Vaughn: Yeah, exactly.
Matt Howard: When do you recall that was?
Shannon Vaughn: It was late last year; it was right after Thanksgiving. But yeah, that's exactly right. If only an adversary had a treasure trove of data that dates back years... So even if it was encrypted using OME, well, guess what? You've got some really good training data, right? You could probably build a good pattern-matching capability against that and then effectively decrypt those underlying files. And that's the best case, that's if it was encrypted at all.
Matt Howard: Right. And what's interesting about this, if anybody's followed it in the public domain, because it's been reported on at least in the technical community, is Microsoft's response to the white hat researcher who originally reported the vulnerability to them. After investigating it, they determined that it's in fact not a vulnerability, that they don't intend to patch it, and that's how it's intended to work, so they can do things like discovery and so forth. You know, it certainly is interesting to reflect on how the world goes around. But now we see ourselves here today. One of the questions that was asked back then was: how likely is it that somebody could effectively steal a huge number of emails, enough emails to do the pattern matching and infer the content of the underlying messages? Well, as we know from yesterday's news, it's probably not that unlikely.
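The OME weakness discussed here was publicly reported as Microsoft's use of AES in ECB mode, where each 16-byte block is encrypted independently, so identical plaintext blocks produce identical ciphertext blocks. A minimal stdlib-only sketch of why that leaks patterns; the keyed HMAC below is only a stand-in for a block cipher (deterministic per block, like ECB, but not a real invertible cipher), and the key is hypothetical:

```python
import hashlib
import hmac

KEY = b"hypothetical-key"  # illustration only, not a real key

def fake_block_cipher(block: bytes) -> bytes:
    # Deterministic keyed mapping of one 16-byte block, mimicking how a
    # block cipher in ECB mode maps equal inputs to equal outputs.
    # (HMAC is not invertible, so this cannot actually decrypt.)
    return hmac.new(KEY, block, hashlib.sha256).digest()[:16]

def ecb_like_encrypt(msg: bytes) -> bytes:
    # Zero-pad to a block boundary and encrypt every block independently:
    # the structural flaw that made OME ciphertexts pattern-matchable.
    padded = msg + b"\x00" * (-len(msg) % 16)
    return b"".join(fake_block_cipher(padded[i:i + 16])
                    for i in range(0, len(padded), 16))

ct = ecb_like_encrypt(b"SSN: 123-45-6789" * 2)  # same 16-byte block twice
# The repetition survives encryption: ciphertext blocks 0 and 1 match,
# so an adversary with enough ciphertexts can pattern-match content
# without ever recovering the key.
assert ct[:16] == ct[16:32]
```

The same equal-blocks-leak property holds across messages, which is why a years-deep trove of ciphertexts makes the "training data" scenario above plausible.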
Shannon Vaughn: Yeah, I think that's exactly right. And what is the data, right? That's the thing that really gets at me. You know, I'm in the Army Reserve now, but with almost 19 years in, how many times have I sent data, especially on an internal system, where I'm more willing to share information, right? So if I'm sending information to my G1, S1, J1, whoever my personnel group is, I'm going to send them my DoD ID number, my Social, my SF-86, whatever that information is. Same if I'm going to the security folks, the two shop. If I know that this is an internal-only system, well, great, hey, I'll share that and not expect any risk. Well, next thing you know: human error, no password, it's exposed, and for how long? And now me, my family, everybody I know, all that information is in my SF-86, and now we're targets because one person made a mistake.
Matt Howard: Yeah, it's a sobering situation we find ourselves in. You and I both know how hard it is to do cybersecurity, and data-centric security in particular, really well at scale. These types of incidents are all too common. I do think that as we reflect on this, we should all be careful not to revel in anybody's difficult situation. I think it's also incumbent upon us to ask really important questions about how we can do better. I'm just quickly curious: how would you do better if it were you and you could rewind the clock a bit?
Shannon Vaughn: I mean, as I said, we've got the tools; everybody's kind of got them. There are tools already in place where you can automate away a lot of these problem sets, right? So when data goes out, you should probably have a scanning tool. We have a scanning tool; Microsoft actually has a scanning tool, and I hope they were using it. If you're going to encrypt, ideally you use a better encryption module, right? We really like AES-256-GCM. But, you know, have these common capabilities implemented, and then make sure that the data that goes out is effectively being tagged or encrypted where necessary. I mean, it's nothing bigger than that, I don't believe.
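As a point of contrast with the deterministic OME cipher discussed earlier, here is a minimal sketch of the AES-256-GCM mode mentioned above, using the third-party `cryptography` package (an assumption; any library with an AEAD interface would do). A fresh random nonce per message makes encryption non-deterministic, and decryption authenticates the ciphertext as well:

```python
import os

# pip install cryptography
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 32-byte AES-256 key
aead = AESGCM(key)

# A fresh 96-bit nonce per message makes encryption non-deterministic:
# the same plaintext encrypts to a different ciphertext every time,
# so there are no repeated blocks for an adversary to pattern-match.
msg = b"SF-86 attached for in-processing"
ct1 = aead.encrypt(os.urandom(12), msg, None)
ct2 = aead.encrypt(os.urandom(12), msg, None)
assert ct1 != ct2

# Decryption also authenticates: a tampered ciphertext or wrong nonce
# raises InvalidTag instead of silently returning garbage.
nonce = os.urandom(12)
ct = aead.encrypt(nonce, msg, None)
assert aead.decrypt(nonce, ct, None) == msg
```

The operational caveat is that a nonce must never be reused with the same key; libraries leave that discipline to the caller.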
Matt Howard: Yeah. And so hygiene matters, both in terms of how you do data-centric security, with respect to things like DLP and encryption of emails. But let's not forget that hygiene very much matters with respect to whether or not you put a password on the server, which of course is a mistake that should be avoided at all costs and hopefully will not happen again. So listen, I appreciate you taking the time to compare notes on this. It's fresh off the press, so to speak, and it's always good to get perspective from someone like yourself who's an expert in the industry. So thanks for making the time.
Shannon Vaughn: Yeah, thanks Matt.