Transcript
Note: This transcript has been edited for clarity.
Mathew Schwartz: Hi. I’m Mathew Schwartz with Information Security Media Group, and I’m going to be discussing the state of the U.S. nuclear industry’s cybersecurity posture, especially in light of recent geopolitical developments. Joining me, it’s my pleasure to welcome Mark Rorabaugh, president of InfraShield. Mark, thanks so much for being here today.
Mark Rorabaugh: Thank you for having me.
Mathew Schwartz: It’s my pleasure. Securing critical infrastructure is such a topical discussion. In the wake of the Iran strikes, in particular, we’re seeing a lot of questions about the potential risk that a cyberattack on U.S. nuclear infrastructure might pose, or if it might be an attractive target. Given your experience and expertise and knowledge of networks in this sector, what would be some of the principal cybersecurity challenges that you’re seeing when it comes to keeping critical infrastructure secure, including, for example, energy, nuclear reactors, materials and waste?
Mark Rorabaugh: So I would say it’s not totally unique to the nuclear industry, but the biggest problem is resources. Because these are all commercial nuclear reactors – and inside the United States, all but a couple are privately owned – they have a very big bottom-line drive. So the question becomes: How much money do they spend on cybersecurity, and how many personnel do they have to do it?
A lot of companies are used to doing it in the IT space. They have a long history of knowing exactly how much things cost and what they have to do. But when it comes to industrial equipment especially, a lot of those things are much more difficult. Operators, as a general policy, run things until they break. They are not used to replacing things just because there’s some security update or because it’s gone “end of life.” If it’s still running and operating, you still use it. So that causes a big challenge in our industry: making sure that we’re able to help defend and protect against vulnerabilities on assets where, in some cases, even the companies that created them are no longer around.
Mathew Schwartz: Definitely I want to get into that. I mean, that’s such a common challenge that we hear in sectors that rely on operational technology: the equipment could last for 20 or 30 years. Maybe it was never meant to be network-connected, never mind internet-connected, and sometimes now it is, even if just by accident. That sounds like a security nightmare to me.
Before we get into that, the industry does have some obligations in the form of regulations. I believe the U.S. Nuclear Regulatory Commission has specific rules. Now, I know there’s always tension in the United States about how much you regulate and how much you don’t. What is the current disposition, if you will, of those regulations? Do you think they are suited to what’s required now?
Mark Rorabaugh: Well, it’s interesting. I’m one of the few people that have been on both sides of this issue. I helped create the regulations for the Nuclear Regulatory Commission originally, back in the early 2000s. As we all know, when 9/11 happened, there was a big concern about a lot of things, and that really changed the nuclear industry. So the NRC put a lot of effort into physical and, later, cybersecurity protections. Not everything was done great when we came up with those regulations – I’m speaking firsthand, in defense of myself and the other great people that worked on it with me. When you get a bunch of blank pieces of paper and are told, “Go write cybersecurity requirements for keeping all of the plants in the United States safe,” it’s pretty hard to do when all you have is those blank pieces of paper.
Some of this, a lot of people don’t realize, was even before NIST SP 800-53 came out. I mean, this was really early stuff; we had a big challenge ahead of us. There weren’t many people in cyber who knew anything about operational technology. So that was a big challenge on that end.
Likewise, the plants had spent billions of dollars – many billions – on physical security upgrades first, and now they were starting to look at, well, we’ve got to do cybersecurity upgrades as well?
Also, you mentioned Iran. Well, that was a big kickoff event. Stuxnet happened to Iran, and a lot of people at the NRC especially, but also inside the industry, said, “Uh oh, some of that kind of stuff could happen here as well. So what are we going to do about it?”
One thing I can say is that a very early decision – one not easy to come by, but agreed on by both the industry and the regulator – was to essentially unhook all of the plants from the internet. To this day, while they’re not technically air gapped – there’s communication that’s allowed to flow out of a nuclear power plant – the only way to get information into a commercial nuclear reactor is by sneakernet. Someone has to actually walk it in there. That causes a lot of extra labor and effort, but it was felt that it would dramatically reduce the number of attacks that could occur, because you can’t have real-time communication with any asset inside the nuclear power plant. So that was probably one of the biggest wins out of that.
What I will say is that from a regulatory standpoint, it’s been – and this is just a personal opinion – a mixed blessing. On one hand, the regulator helps by sometimes driving people into action. Sometimes, nuclear can be extremely slow to react to things because it’s very costly. For instance, just putting a switch into a plant, if you were to do that in a normal data center, it’s no big deal. They go install the switch and update some paperwork, and they’re done. In a nuclear power plant, you’re probably talking $150,000+ in engineering costs just to determine what’s going to happen in an earthquake, what’s going to happen with the heat load, all that sort of stuff. The costs are enormous for very small things.
So a lot of things get analysis paralysis, where they get stuck in the decision making. Sometimes a regulator can help push that through. The flip side, however, is that a lot of plants – not all plants, but a lot – essentially wait for the regulator to inform or instruct them on what they should be doing, and that is, unfortunately, sometimes a mistake. You will sometimes see things under a better cybersecurity posture on the commercial side, like the regular desktops, while you’ll see challenges with plant equipment because, again, they get scared to touch it, or it’s expensive, or they’re afraid it’ll upset operations – which is all warranted.
A typical power plant can lose over $2 million a day in revenue when it’s down. So it’s a balance. What do you take down in order to improve something, versus waiting and hoping that something doesn’t happen? Obviously, if something bad happens, it could be very expensive for a plant and, of course, for the power grid as a whole, and that’s not even getting into the safety issues.
I will say, just to let everyone know, because there’s a lot of confusion about it, that it is extremely hard in the United States to get to radiological release. So when it comes to the threat of actually injuring and killing people, while it’s theoretically possible, it would be extremely difficult. That doesn’t mean, however, that there can’t be very large impacts to the industry, starting with, like I said, just stopping the production of power. If they don’t know what went wrong, it could take up to a month to get a plant going again, so that could easily be $50 or $60 million down the drain. That’s also a double whammy: you’re paying everybody to do stuff, plus you’re not making any money during that time frame. There’s a cost to it. Then there’s reputational cost.
There are a lot of misconceptions about nuclear. I think some of that has been changing recently, where people understand that it’s not quite as terrifying. You can’t make a nuclear bomb out of a power plant; that’s nothing but Hollywood-type stuff. There are some risks associated with it, but they’re very, very heavily mitigated. Most of the incidents you see are international, and those plants haven’t been held quite to the same standard that the U.S. holds its power plants to.
Some would say U.S. power plants are held to too high a standard, because it’s very costly to run a nuclear power plant in the United States – not because of producing power, but because of all the safety measures and the backup, backup, backup systems. So what we do know is: cyberattacks could cause a lot of confusion for operators and personnel, and they could have some negative effects on the plant itself.
It is possible to destroy equipment in ways that, again, may not lead to anything like radiological release, but could cause a lot of damage and cost a lot of money if done properly. Some of those protections are just very hard to implement, as I stated before, so it’s a very big challenge. They’ve been at it a very long time, but it’s still got a long way to go, in my opinion.
Mathew Schwartz: And how does that happen? Also, I will just pause for one second and say, talking to an expert in nuclear security isn’t always something you’d expect to leave you with positive thoughts, but it’s great to hear all of these defenses and the very prescient-sounding decision – not quite air-gapping facilities, as you say, but not allowing ingress from the internet. I mean, +10 points, gold star, thank goodness, especially because we see some other industries with operational technology systems in particular, you know, SCADA systems and ICS, where that is not the case. So that’s a great foundation, I think, thankfully, for the nuclear sector.
Mark Rorabaugh: We do work in other industries besides nuclear as well. In some of those, I didn’t quite get thrown out of the room, but we started with some of the same premises. We said: you really ought to look at the most critical parts of your operations in the OT space and figure out how to keep them from being remotely connected, so that they only send information out. Take water, which has been a problem a lot more recently, as you know. That suggestion was a no-no. We said we understood a lot of the reasons why: they’re used to having very few personnel remotely operate tons of equipment across a very large area, in some cases because those systems are run by individual counties and cities all across the country. But that can create problems.
So I do think we stayed focused on the consequence. While we knew that the risk is very, very low when it comes to nuclear, because of all of the backup systems and things that are in place, the consequence is very, very high. If something goes wrong, everybody pays for it: not only the power plant, but the taxpayers and the people surrounding it. I’ll tell you, every plant I’ve been in, it is their belief that they never want to be in that position. Now, exactly how much needs to be done to ensure that is the part that comes under a lot of debate in the cybersecurity community. “How much is enough?” is the big question.
Mathew Schwartz: Well, good for you for posing difficult questions, such as: couldn’t you really just take everything offline, please, with your water plant? On that front, in terms of questions you might be posing and directions you might be exploring, we have seen some really notable evolution in the IT landscape. I mean, pick your time period; it feels like there’s always something new. For example, vulnerability management continues to get better and better. Auto-patching as well was a huge, huge improvement over the last decade or more. Also, we have things like zero trust, which, although it’s a concept, not a product, attempting to implement it can really do wonders for your defenses and also your knowledge of what you have and what’s going on.
To what extent can these approaches be adapted for the nuclear industry?
Mark Rorabaugh: So we are very much into the zero trust philosophy. That being said, it is difficult, especially with very outdated systems. But what we’ve generally encouraged the customers – or the licensees, in this case, for nuclear power plants – to do is institute zero trust whenever they’re upgrading and replacing systems.
When you go to the trouble, and you’re already going to spend many millions of dollars to replace a particular system in a power plant, go to the extra effort to do that, because you can sleep a lot better at night knowing that’s the case – and increase the monitoring while you’re at it.
A good thing is that a lot of these systems are not all on one massive network. The bad news is that sometimes you lack visibility into what’s going on in certain areas of a plant, because they just run on their own little islands over there, which could mean that if something gets in there, it may be difficult to know it. So we are encouraging them, again, with the same philosophy: unidirectional traffic monitoring. Be able to monitor what’s going on using things like taps, without having any incoming traffic, so that you can detect things even if you’re not going to be able to institute any changes. That zero trust model is not a new concept by any stretch. But in nuclear, it’s going to be a long time before it gets fully implemented.
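To make that concrete, here is a minimal sketch of receive-only monitoring off a passive tap or SPAN port. It is an illustration only: the interface name, the allowlist of plant hosts, and the alert logic are assumptions, not anything a plant or regulator prescribes, and it requires Linux and root privileges.

```python
"""Minimal sketch of passive, receive-only monitoring on a tap/SPAN port.

Hypothetical illustration: MONITOR_IFACE and KNOWN_HOSTS are assumed values.
"""
import socket
import struct

MONITOR_IFACE = "eth1"                      # assumed: fed by a passive network tap
KNOWN_HOSTS = {"10.10.1.5", "10.10.1.6"}    # assumed inventory of plant devices

def ip_src_dst(frame: bytes):
    """Return (src, dst) for plain IPv4 frames, else None."""
    if len(frame) < 34 or struct.unpack("!H", frame[12:14])[0] != 0x0800:
        return None
    return socket.inet_ntoa(frame[26:30]), socket.inet_ntoa(frame[30:34])

def main():
    # AF_PACKET receives copies of every frame on the interface; this socket
    # is never used to send, so the monitoring path stays one-way by construction.
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(0x0003))
    s.bind((MONITOR_IFACE, 0))
    while True:
        parsed = ip_src_dst(s.recv(65535))
        if parsed and parsed[0] not in KNOWN_HOSTS:
            print(f"ALERT: unexpected talker {parsed[0]} -> {parsed[1]}")

if __name__ == "__main__":
    main()
```

The point of the design is that detection never requires a path back into the plant network: the tap feeds frames out, and nothing flows in.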
I am pushing this with the newer reactors, especially the small modular reactors. I think they have a big opportunity to concentrate on that; they’re using way more modern technology as a result of being new. And I think they’ve got a good opportunity to really do that zero trust model and really separate what individuals’ duties are, and that is increasingly easier to do.
You have a lot of legacy people who have been in the nuclear industry 30 or 40 years. Way back when, you always had full access; people had complete administrative rights to do anything.
So part of what we do now is education. You talk to OT personnel and explain to them that you can separate these things now: you can have certain people who can monitor things but can’t change them, or people who can change things but can’t monitor everything.
These types of separations are important. It’s new for a lot of these workers to grasp, but it’s happening, and as you get younger people in there with more knowledge on that front, it’s becoming a little bit easier, although it is still a very big challenge. So that’s the zero trust part.
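As a toy illustration of that monitor/change separation, here is a minimal sketch. The role names and the permission model are hypothetical; real plants would map this idea onto their own access-control systems.

```python
"""Toy sketch of the monitor/change separation of duties described above.

Role names and permissions are hypothetical illustrations.
"""
from dataclasses import dataclass

PERMISSIONS = {
    "observer": {"monitor"},                # can watch, cannot change
    "maintainer": {"change"},               # can change assigned gear, no broad visibility
    "legacy_admin": {"monitor", "change"},  # the old full-access model zero trust moves away from
}

@dataclass
class User:
    name: str
    role: str

def authorize(user: User, action: str, asset: str) -> bool:
    allowed = action in PERMISSIONS.get(user.role, set())
    print(f"{user.name} ({user.role}) {action} on {asset}: "
          f"{'allowed' if allowed else 'DENIED'}")
    return allowed

if __name__ == "__main__":
    authorize(User("ops_watch", "observer"), "monitor", "feedwater PLC")
    authorize(User("ops_watch", "observer"), "change", "feedwater PLC")   # denied
    authorize(User("tech_07", "maintainer"), "change", "feedwater PLC")
```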
The other one that you mentioned is vulnerability management. That has been a huge challenge, especially in nuclear, and probably in most critical infrastructure, not counting maybe the financial markets and that kind of stuff. In most of what I’d call the traditional industrial-type markets, it’s a “run till failure” model. Take, as a perfect example, a safety system in a nuclear power plant. It goes through such rigor to get approved. It costs many, many millions of dollars to clear all the hurdles – the rough equivalent would be getting a new drug through the FDA, right? You go through all that trouble, and once you have it approved, they want to lock it down to exactly the state it’s in, because any change may force them to go through that entire process again. The problem, obviously, is that this flies in the face of cybersecurity. Your number one concern is that, usually, if you have outdated equipment, you’re not running patches and you have a bunch of outdated software, it’s going to be Swiss cheese and fairly easy to attack. So then they have to apply all these other kinds of external mitigations to try and compensate for that.
Hopefully we will see some improvements, with a good balance between the true safety functions of some of these things versus all the stuff that’s only associated with safety because it happens to have a nexus to it or monitors it. That second category would probably be better kept up to date: those components don’t have a direct safety effect, but if they were infected, they could become a jump-off point to cross over.
Defining those boundaries, I think, is something both the industry and the regulator could do a better job at: really focusing on the core things.
Lastly, I’m still a big believer in physics wherever possible: have a lot of your true safety functions rely a lot more on physics and a lot less on digital logic. Nothing against digital logic. I think sometimes the digital stuff will make better decisions quicker and can analyze a lot more information. But there’s something to be said for having a last recourse, something that trips because there’s a solid state conductor that determines it’s in an unsafe position. That’s pretty reliable. It’s pretty easy to understand from an engineering standpoint.
So I think there’s a balance there. I do think that, in the end, you’re going to see even more digital stuff come into the power plants, which is what we’re preparing everyone for. Their first reaction was to just try to keep it out. And we say: well, that’s inevitable. Sooner rather than later the manufacturers are going to stop making it; you’re not really going to find analog devices, and it’ll just be the digital stuff. In fact, digital usually works better for most things in terms of data analysis and making decisions and that sort of thing. So that’s coming, they know it’s coming, and they’re starting to wake up and take cyber a little more seriously. They can’t just keep pushing it down the line.
Mathew Schwartz: What other sorts of questions are you hearing, especially with digital transformation occurring in the equipment that is powering, if you will, the industry? What sort of questions are you getting on emerging technologies such as artificial intelligence, or third-party monitoring of these networks, which is often recommended as a best practice for keeping an eye on the kinds of threats that might be hitting you if you don’t have an in-house cyber threat intelligence team?
Mark Rorabaugh: Both of those are really good questions. Starting with the monitoring part first: there are some plants that have utilized third-party monitoring. A lot of these companies are in the Fortune 500 or Fortune 1000 – they’re pretty large companies – so a lot of them do have their own cybersecurity operations centers and that sort of thing at the corporate level.
The issue there is, a lot of those people are not very aware of what goes on in the plant. So even if they see stuff, they’re not quite sure what to make of it or how to react, and they can’t get to it. They can’t, for the obvious reasons, just say, “Well, let me go take a look at it.” They may see information, but the best they can do is contact someone at the plant to go investigate it. While that extra step is annoying, it does offer a very big bright line or barrier. We encourage everyone to maintain some of both. We think it is a good idea to send a lot of your stuff either to the CSOC within the parent organization or to third parties to monitor. We don’t think that’s a bad idea either, because it gives you external resources looking at stuff – resources that might have seen more things than the in-house staff.
However, we think it is still important that you have some nuclear-aware cybersecurity people able to look at and monitor these events, so they can say, “that looks like something really weird or suspicious,” versus, “there’s a digital acquisition card that’s just flaky and does weird stuff.”
Sometimes you have equipment that’s 20 or 30 years old and very costly to replace, and you need to go over there and kind of tap it on the side a couple of times and it goes back to work. From that standpoint, you have to have someone who’s a little bit more knowledgeable and can understand those things, and those are not the easiest people to find. They’ve got to have knowledge in the IT digital space, certainly, but they’ve got to be able to understand the OT stuff as well. There has been a huge cybersecurity personnel shortage as a whole for quite a while, but as big as that is, the need for OT-specialized people is even greater, because it’s a very narrow subsection of those people that actually do that, and not everyone is cut out to handle it.
Another thing with cybersecurity personnel is that the turnover rate is very high, because it’s a very stressful position. When you get into industrial stuff, they’re 24/7 operations in most cases; the last thing they want is any kind of hiccup. So it’s a big challenge for everyone on that front.
AI is something that every power plant, I would say, was running away from three to four years ago: “Do not use any AI whatsoever.” Within the last 12 to 24 months, I would say that has flipped dramatically. Some of the very large organizations have determined they can find really great ways of using AI to make very big cost savings, and a lot of that is operational.
As you can imagine, they’re looking for patterns – patterns in what type of equipment is most likely to fail. Can they predict it? Can they determine when they really should replace it? It’s not ideal to wait and replace stuff when it fails, but you also don’t want to replace it too early. If you can make more accurate predictions on that, that’s good. And AI tends to be fairly good at that; they’re getting some pretty good results in that realm.
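As a back-of-the-envelope illustration of that “predict the replacement window” idea, here is a sketch that fits a linear trend to made-up vibration readings and extrapolates when they would cross a failure threshold. The data, threshold and simple trend model are all assumptions for illustration; real predictive-maintenance programs use far richer models and data.

```python
"""Back-of-the-envelope sketch of predicting a replacement window.

Sensor values, threshold and the linear-trend model are made-up illustrations.
Requires Python 3.10+ for statistics.linear_regression.
"""
import statistics

# Assumed: weekly vibration readings (mm/s) trending upward as a bearing wears.
weeks = list(range(12))
vibration = [2.1, 2.2, 2.2, 2.4, 2.5, 2.7, 2.8, 3.0, 3.2, 3.3, 3.6, 3.8]
FAILURE_THRESHOLD = 6.0  # assumed level at which the bearing should be replaced

# Fit a simple linear trend to the readings.
slope, intercept = statistics.linear_regression(weeks, vibration)

# Extrapolate: solve threshold = slope * week + intercept for week.
weeks_to_threshold = (FAILURE_THRESHOLD - intercept) / slope
print(f"Trend: +{slope:.2f} mm/s per week")
print(f"Projected to hit {FAILURE_THRESHOLD} mm/s around week {weeks_to_threshold:.0f}; "
      "schedule replacement before then rather than running to failure.")
```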
There’s a lot more to be done there, and I think everyone agrees there is a lot more money to be had by utilizing it appropriately in the cybersecurity world – including us. We have artificial intelligence in some of our products, but I will say it’s still very much in its infancy in terms of exactly the best way of doing it. So far we’re largely using it for better ways of detecting or automatically handling certain events. For that type of automation we have some AI assistance, but we are, at this point, especially because we’re in cybersecurity, still clinging to making sure we have a human in the loop. We’re very big on that. A defect in your model could cause some dramatic things in cybersecurity. So we’re very big on making sure someone is there, just to confirm that things look fine, and most of the time it does do a good job. But especially as we’re moving away from more simplistic machine learning models to large language models, which is happening in cybersecurity like it is everywhere else, things like hallucinations are a risk too. It does, in fact, come up with some very strange and bizarre results, and you don’t quite know why. It’s very infrequent, though.
I would say that humans make mistakes too, and that’s the big thing that everyone’s got to remember: the AI doesn’t need to be flawless. It really just needs to be as good as or better than people are, and I’d say it’s already achieving that for quite a few of these purposes. What would take 10 or 20 people to look at could be done by one or two people reviewing the decisions the AI is making.
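Here is a minimal human-in-the-loop sketch of the pattern Rorabaugh describes: a simple detector flags outliers, and flagged events go to a review queue for an analyst rather than triggering automatic action. The event shapes, threshold and z-score detector are illustrative assumptions, not InfraShield’s actual product logic.

```python
"""Minimal human-in-the-loop sketch: the model flags, a person decides.

Events, baseline data and the z-score detector are illustrative assumptions.
"""
import statistics

def zscore_flag(history, value, threshold=3.0):
    """Flag a reading as anomalous if it sits far outside recent history."""
    mu = statistics.fmean(history)
    sigma = statistics.pstdev(history) or 1e-9  # avoid division by zero
    return abs(value - mu) / sigma > threshold

def triage(event, history, review_queue):
    if zscore_flag(history, event["value"]):
        # No automatic blocking or shutdown: a defect in the model could be
        # costly, so flagged events wait for an analyst to confirm.
        review_queue.append(event)
        print(f"queued for human review: {event}")
    else:
        print(f"auto-cleared: {event}")

if __name__ == "__main__":
    history = [50, 52, 49, 51, 50, 48, 53, 51]   # assumed baseline readings
    queue = []
    triage({"sensor": "pump-A flow", "value": 51}, history, queue)
    triage({"sensor": "pump-A flow", "value": 97}, history, queue)  # flagged, not acted on
```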
So I don’t think that adoption is going to shrink at all. I think that’s going to continue, and at some point – and I hate to say it – but I think a lot of cybersecurity will be dealing with AI against AI, just like you’re seeing drone against drone now.
Everybody has to play in that field eventually. You also asked about upcoming things or challenges they’re looking at in terms of new technology. There is a big push within industry around wireless. Right now, plants are forbidden from having virtually any wireless technology on the plant side, for the very reason of not allowing external connections. There are a lot of reasons they want to loosen that up: they have lots of information, especially with AI, to monitor all kinds of areas of the plants, and doing that would be very difficult by wiring it all up. We believe there are solutions – happy compromises, just like the unidirectional traffic that we allow out. We think we could do the same thing with wireless, where it can transmit but can’t receive anything. Those kinds of technologies are starting to be looked at, but there’s a lot of unknown, right? When the regulations were written well over a decade ago, there wasn’t much thought of that. It was basically just: don’t use wireless, and make sure you’re not connected to the internet. So, every decade or so, it’s probably about time to revisit some of that and look at what we’re giving up by being too strict on some of that stuff.
Mathew Schwartz: We’ve talked about the real bleeding edge of technology, and you mentioned Stuxnet at the very beginning of the discussion. My final question for you is on Stuxnet. The reporting on that is that the malware was introduced into an environment by sneakernet. Somebody brought it in, probably on a USB stick. The reports say it was a very sophisticated operation on multiple fronts, including having that malware designed for that very specific environment, which I hear is very difficult to do. All that aside, how are organizations in the nuclear sector doing these days when it comes to portable media? Has that vulnerability been dealt with, or is that another item on the to-do list?
Mark Rorabaugh: Well, the industry as a whole – and not just nuclear, but several industries – has had a big challenge with portable media for quite some time. We do believe there are good solutions, including some of the ones we are offering now, to really limit what can get onto – and do something to – that type of equipment.
We do view portable media as the single largest threat inside nuclear by far. Most likely, if something is successfully going to do something against plant equipment, it’s going to be well engineered, and it’s going to come in through some sort of portable media.
A big problem is that what the regulation requires, and what most of the plants have implemented, is detection of known malware. As I point out, Stuxnet wouldn’t have been detected by that, because even now most of those technologies are largely signature-based. They’re largely looking at patterns and other things that are already known.
We have, even in some of our training courses, taken existing malware, changed one or two lines that really didn’t have anything to do with the actual malware, and run it through something like VirusTotal, and it’s gone through all of the virus scanners without a hitch. So the point is, it’s not been that difficult for us to get through those layers of defenses. Again, with zero trust, we are very heavy into making sure we do whitelisting on the devices that can connect to this stuff, whitelisting on the actual types of files that can go onto it, and being able to validate that something is known good – not just that we don’t think it contains something bad.
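A minimal sketch of that “validate known good” approach: instead of scanning for known-bad signatures, every file staged onto portable media must match a cryptographic allowlist. The manifest format, the sample digest and the paths here are hypothetical illustrations, not a real product’s scheme.

```python
"""Sketch of allowlisting by cryptographic hash for portable-media files.

APPROVED_SHA256 and the usage paths are hypothetical illustrations.
"""
import hashlib
import sys
from pathlib import Path

# Assumed: a manifest of SHA-256 digests for every approved file,
# built when the vendor package was originally vetted.
APPROVED_SHA256 = {
    "3f786850e387550fdab836ed7e6dc881de23001b1a2f4b0c4747f5e3d4e1f0d8",  # example digest
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large vendor packages don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def vet(path: Path) -> bool:
    ok = sha256_of(path) in APPROVED_SHA256
    print(f"{path}: {'known good' if ok else 'REJECT (not on allowlist)'}")
    return ok

if __name__ == "__main__":
    # Usage: python vet_media.py /media/usb/update.bin
    sys.exit(0 if all(vet(Path(p)) for p in sys.argv[1:]) else 1)
```

Note the inversion relative to antivirus: an unrecognized file is rejected by default, rather than admitted because no signature matched.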
So that’s probably, I would say, the single largest problem facing the industry, and most industrial sectors: at the end of the day, a lot of the time they’re plugging something in to get something onto the equipment.
Those battles just don’t end there. I mean, it was a huge thing throughout all of nuclear to get people to stop charging their cell phones by plugging them into plant equipment. You know, I think everyone has gotten yelled at, or nearly lost their job, to the point that nobody does it anymore, but for quite a while it was just instinctual: someone would go in there and say, “Oh, I’m low on juice. Let me just plug it in; there’s a USB port. I’ll get a couple of extra battery bars on my phone.” That’s the sort of thing where we had to explain: there could be something on that phone, it could infect something, and it would be very hard to spot.
So as new technology comes in, as I say, we push for controls: if the system detects that you’ve plugged something in, it issues reminders, and they use port blockers and stuff. You have to think about it and say to yourself: Do I really want to hook this up to that? Because maybe I shouldn’t.
A port blocker doesn’t stop a real adversary, but it stops the mistakes, which is a large part of what we’re trying to accomplish with that.
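In the same spirit, here is a toy “reminder” sketch that notices newly attached USB devices and nags the user. It is Linux-only and stdlib-only; polling /sys is a crude stand-in for the udev-based agents a real deployment would use, and the paths and message are assumptions.

```python
"""Toy reminder sketch: notice new USB devices and prompt the user to think twice.

Linux-only illustration; polling /sys stands in for a proper udev listener.
"""
import time
from pathlib import Path

USB_SYSFS = Path("/sys/bus/usb/devices")

def current_devices() -> set[str]:
    return {p.name for p in USB_SYSFS.iterdir()}

def main():
    seen = current_devices()
    while True:
        time.sleep(2)
        now = current_devices()
        for dev in now - seen:
            # A port blocker stops this physically; software like this just
            # catches the honest mistake and makes the person think twice.
            print(f"REMINDER: new USB device '{dev}' detected. "
                  "Do you really want this connected to plant equipment?")
        seen = now

if __name__ == "__main__":
    main()
```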
I’d say one final thing, just to understand where we are, because a lot of the plants look to the regulator: the cybersecurity regulation is performance-based, and what that means is they don’t dictate the solution. They say: here are some requirements you have to meet, some minimal things you have to do. Now show us how you’re going to meet them.
So it’s really up to each individual plant how it achieves that goal, and that’s both a good and a bad thing. On one hand, it allows each plant to do it its own way, and I do think there are some good aspects to that, just as each plant in our country was built in a different time frame, many times by different companies. Having some sort of malware that could affect multiple plants across the country at the same time would be almost unthinkable, at least in terms of humans coming up with it, because it would be so phenomenally difficult given all the different configurations, types of equipment, upgrades and everything else. So the difference between all those plants is actually an asset, as far as I’m concerned, from a cybersecurity standpoint.
But the flip side is that they’re looking for answers from the regulator, which they generally won’t get, because the regulator is just going to say: here are the requirements, go figure it out. And that has led to a couple of things carried over from the physical side. When they originally designed the regulation for physical protections, they of course said: well, you’re not required to defend against a nation-state, because if they’re going to come in with tanks and bombs and everything else, that’s a bit much to expect any one plant to defend against. That’s what they said for the physical stuff, and they carried it forward into the cybersecurity realm. I do think that’s a mistake. Most plants, I’d say, are wise enough to understand what the threat is regardless. But because of that carve-out, some of them kind of wave their hand and say, well, we don’t really have to worry about advanced persistent threats, because it’s not our responsibility to defend against nation-states.
My argument, though, is that the original rationale was: well, heck, they’d be invading the United States, which is pretty crazy in and of itself, and by the time they did all that, our military would be taking over. But when it comes to cybersecurity, that logic doesn’t hold. Every time you have a USB stick going into the plant, as I tell some of the physical security guys, think of it like you’re letting a hand grenade in. From our perspective, that’s what’s going in there; it can damage equipment just as much as bringing in an explosive. So I think that’s the other big challenge: making sure all plants recognize they need to be defending against cyberattacks, regardless of who’s behind them, and to be realistic about it. If you ask me, you’d be absolutely insane to attack a nuclear power plant by force. But with cyber, you have a better shot at it, so I think that’s the more likely approach you’d see nowadays. I don’t think anyone’s going to be dumb enough to take on a standing force that’s armed more heavily than most military bases.
Mathew Schwartz: Fascinating. Thank you so much for bringing us up to date on the challenges that the industry is facing, some of the solutions that are happening, and also how some of the thinking still needs to change.
Mark Rorabaugh: Thank you very much for having me. I appreciate it.
Mathew Schwartz: Thank you; it’s been my pleasure. I have been speaking with Mark Rorabaugh, president of InfraShield. I’m Mathew Schwartz with ISMG. Thank you for joining us.