What Makes an Organization Cyber-Resilient? An Interview with Dr. Georgianna Shea

“Focus more on resilience than cybersecurity.” – Dr. Georgianna Shea

Published on April 02, 2025 by Rixon Technology

In this episode of the Rixon Podcast, Dr. Georgianna Shea, Chief Technologist at the Foundation for Defense of Democracies and board member of Rixon Technology, joins host Heidi Trost for a deep dive into cyber-resilience—a concept that’s quickly becoming more vital than traditional cybersecurity.

With real-world examples like the Colonial Pipeline incident and the impact of untested software updates, Dr. Shea breaks down how businesses can anticipate, withstand, recover, and adapt in the face of cyber threats. She challenges outdated notions of cybersecurity, urging leaders to refocus on mission continuity, interdependency awareness, and resilient system design.

From industry-wide ripple effects to the evolving role of quantum computing in encryption, this interview highlights actionable insights for CISOs, CTOs, and executives navigating the complexity of modern digital ecosystems.

The conversation also explores advanced strategies such as tokenization, zero-knowledge proofs, and distributed data storage—critical tools for protecting sensitive information beyond traditional encryption methods.

🛡️ Want to better understand how resilience impacts your organization’s future? Watch the full video above, or read the complete transcript below for an in-depth breakdown of everything discussed.

From zero-knowledge proofs to real-world case studies, resilience isn’t just theory—it’s strategy.

If you’re ready to take the next step in securing your systems and sustaining your mission… Resilience isn’t a buzzword—it’s your next move.

Introductions

Heidi: Well, hello, everyone, and welcome to the Rixon Podcast. I am here with Dr. Georgianna Shea, and we’re super excited to talk about cyber-resilience and what that means. So thank you, Dr. Shea, for joining me today.

Dr. Shea: Well, thank you for having me. And please, call me George.

Heidi: Okay, George. So, George is the Chief Technologist for the Foundation for Defense of Democracies (sometimes shortened as FDD), the Center on Cyber and Technology Innovation. Gosh, that is a mouthful, so apologies. Barely getting that right. She is also a board member at Rixon, and fairly recently, she served on the Cyber Physical Resilience Working Group of the President’s Council of Advisors on Science and Technology (also known as PCAST). I think you only work on things that have very long names.

Dr. Shea: Yes, and it has to have a bunch of acronyms as well.

Heidi: FDD, yeah.

Dr. Shea: Yes. Yeah. And I’ll probably mention at some point in the podcast that I was working with GRF and their BRC on the ORF and go over all those letters as well.

Heidi: It’s fitting because in cybersecurity, we just have endless acronyms, so of course, you know, you would also have these acronyms with the different organizations you work with.

What Does Cyber-Resilience Mean?

Heidi: Okay, let’s talk about cyber-resilience. What does that mean? What does cyber-resilience mean?

Dr. Shea: So, um, I look at resilience as being able to anticipate, withstand, recover from, and adapt to various cyber events.
Typically when you talk about cybersecurity, people are going to think about:

  • What do I have to do in terms of compliance?
  • What are the controls that you’re putting in?
  • What does the audit look like?
  • What are the requirements under GDPR, PCI, RMF?

You know—whatever that is—putting in the actual protections in your system.

But resilience really goes beyond just cybersecurity. It’s about merging your mission with your security strategy:

🧩 What is the mission of the organization or system?
🔁 And how do you ensure that you’re able to continue that mission—even in a degraded state or under attack?

That might mean:

  • Identifying the critical path of your processes
  • Pinpointing where you have choke points (like a single system that, if it fails, halts operations)
  • Building in redundancy to prevent total failure

It really gets back to the engineering fundamentals of the organization.
It’s not just about components—it’s about people, processes, and technologies working together.

You have to explore that critical path across all three to ensure your mission is met—even when facing degraded or adverse events.
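
The critical-path exercise Dr. Shea describes can be sketched in a few lines of code. The process and system names below are hypothetical, and a real analysis would cover people and process dependencies as well as technology; this only shows the mechanical step of finding systems that every critical process depends on:

```python
# Hypothetical mission map: each critical process -> the systems it relies on.
dependencies = {
    "take_orders":   {"web_portal", "auth_service", "db_primary"},
    "fulfil_orders": {"warehouse_app", "auth_service", "db_primary"},
    "bill_customer": {"billing_app", "auth_service", "db_primary"},
}

def choke_points(deps):
    """Systems shared by every critical process: if one fails, the mission halts."""
    return sorted(set.intersection(*deps.values()))

print(choke_points(dependencies))  # → ['auth_service', 'db_primary']
```

Anything on that list is a candidate for the redundancy Dr. Shea mentions.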


Heidi: Yeah, I liken it to kind of like the Boy Scout on a camping trip, right? Like—they’re prepared for anything.

Dr. Shea: Yeah, always be prepared. Be able to anticipate, be aware of what’s going on, and make sure you can get through it.

Why Is Cyber-Resilience Important Now?

Heidi: Yeah, resilience and cyber-resilience aren’t necessarily new terms, but in the things I’ve read recently, those words seem to come up a lot more.

Dr. Shea: Good.

Heidi: Yes. Good. Maybe it’s like when you’re thinking about buying a new car, and then suddenly you see that car everywhere. So maybe I just noticed the word resilience once, and now it’s showing up all over.
But why is that important now? Why should businesses care—and why is it coming up more and more lately?

Dr. Shea: Well, I don’t know if I’m the best judge of that because I kind of live in that space.
By trade, you’d probably put me in a cybersecurity field—but honestly? I kind of hate that word.
It doesn’t really mean anything unless you define:

🔍 What are the requirements?
🔐 What are you actually trying to secure?
🧾 What is the required outcome?

I now push for resilience over cybersecurity.

Ripple effect visualization of interconnected systems in cyber infrastructure.
Interconnected systems mean small issues can ripple across entire sectors.

📌 Whatever the mission is for your organization, how do you ensure that you can continue that mission—even when something goes wrong?

The public sector is beginning to feel the impact more because of how interconnected everything is.
Think about critical infrastructure—the interdependencies between systems are enormous.

Dr. Shea (continued):
You have things like the CrowdStrike issue that happened a couple months ago. A single piece of software that wasn’t fully tested gets deployed—and the ripple effect was huge.
Not just one or two organizations. All of critical infrastructure was touched… even my mom, calling me because ancestry.com wasn’t working.

“The Internet is broken!”

When these events start affecting everyone, they become front-and-center concerns.

Organizations must be able to anticipate, test, and build in resilience—whether the threat is a cyberattack, human error, or even a natural disaster.

Heidi: I love that. You’re totally speaking my language. So glad I asked this!

Dr. Shea: Good.

Heidi: And when you said “What does cybersecurity even mean?”—that really hit me.
It’s meaningless unless it’s grounded in your organization’s mission.
That’s the why—why businesses need to care. Your mission is your revenue stream. Whether you’re:

  • Supplying water to a city 💧
  • Manufacturing widgets ⚙️

…your entire business depends on protecting that mission objective.

Dr. Shea: Right.

Real-World Examples of Resilience

Heidi: Can you, so I gave a couple examples, but could you give, just to make it a little bit more concrete for people, like what, what exactly do you mean when you’re…

Dr. Shea: In terms of resilience? Yeah. So you mentioned that I worked on the cyber-physical systems resilience paper for the President’s Council of Advisors on Science and Technology, the PCAST. We put out a number of different recommendations in there, some high-level recommendations on resilience, and we brought up some examples of, you know, past issues like the Colonial Pipeline.

So, you know, the Colonial Pipeline is a great example: there was a cyberattack on the office systems of the Colonial Pipeline, and they then shut down access to the operational technology piece and cut the flow of fuel to the entire East Coast, not really understanding the impact of that. The thinking was, let’s just contain this ransomware attack and make sure it doesn’t spread, so we’re going to cut access here, contain it. And by the steps that were taken, the East Coast didn’t have fuel for, I forget, a couple of days, a week. It was almost a national disaster. I think people were filling Piggly Wiggly bags and plastic totes full of gasoline at the gas pump to ensure that they had it.

A person filling multiple fuel containers at a gas station during a fuel shortage crisis, illustrating the cascading impact of cyberattacks like the Colonial Pipeline incident.
The Colonial Pipeline attack caused fuel shortages across the East Coast—leading to panic-buying and supply chain disruption.

So, you know, not understanding how the systems are interconnected (to the mission, to the public, to your customers), that is a recipe for disaster. So you really have to understand what those impacts would be. And then, you know, internally, how those technologies relate to each other.

And then, you know, aside from the Colonial Pipeline, you can look at things like, like a, you know, Log4j, for example, that was a piece of code a couple years ago that was found to be exploitable, and a lot of organizations didn’t, didn’t even know if they had it. Like, does this pertain to us? We don’t really know. It’s a piece of code that’s within other pieces of code. So it’s an embedded piece of code, and if they didn’t have like a clear software bill of materials and an understanding of what their assets are, they’re spending all of their time just trying to identify: are we susceptible to this? You know, before they can even take the mitigations to go through and say, okay, we are susceptible, now let’s go through and update this, this version to the secure version.

Uh, you could look at Ukraine, not even on the cyberattack side, but if, you know, they were, uh, producing something, sending it to you, and then all of a sudden they’re at war, that disrupts your supply chain and your dependencies. So do you have alternate dependencies?

So, you know, it’s just really important to go through and, again, map out what your mission is, what I call the minimal viable objective or service for your organization. That was a recommendation in the PCAST report: identifying what is that minimal viable service or product that you have to deliver to sustain operations, and so that your customers can sustain operations.

And, you know, as I mentioned at the beginning, that, that did come from the GRF, BRC, ORF. And so I’ll explain those, those acronyms. There’s an organization, the Global Resilience Federation, which works with many of the, I think about 17 or 19 of the, the various ISACs out there, your Information Sharing and Analysis Centers, and those are your, uh, sort of belly buttons for different topic areas. So your K-12 ISAC, your Space ISAC, your Operational Technology ISAC, your Manufacturing ISAC… I call it the, um, you know, phone a friend.

So if you’re working in this industry and you, you want to talk to a similar organization, you know, securely share information, have indications and warning, uh, non-disclosure agreement kind of things. Um, you can share information with them. So it’s the phone-a-friend like, “Hey, I work at a K through 12 organization. This is what we’re seeing. These are our priorities.” Another K through 12 organization is probably going to have the exact same issues and concerns. So you can share information, but… anyway, so GRF works with a lot of those and they’ve pulled together the BRC, the Business Resilience Council, which are representatives from various sectors of critical infrastructure.

In the United States, we have 16 designated sectors of critical infrastructure. So, so when I say critical infrastructure, I just don’t mean like things that are important. I mean, the designated sectors being water, energy, the defense industrial base, healthcare sector—not gonna name all of them, but you know, those, those designated sectors. And then this, um, BRC group had invited me to work with them on developing an operational resilience framework.

So I, I, um, I liken it to a business continuity plan, but more advanced, a very advanced business continuity plan because it’s not just what are the, um, what are the risks? What are the impacts? And what do we need to do for backup? But it is identifying that minimal viable objective or service that the company has, looking at the, you know, the upstream and downstream dependencies. Who are our suppliers? What does that look like? Could there be a choke point? Do we have redundancy? Do we have a stockpile of this flux capacitor and only one person makes it? What would it look like if the supply chain is disrupted? And then also looking at our customers and what do they have to have from us in order to continue their operations?

Because as we saw with COVID, there’s so many needs, so much interdependencies and connections amongst organizations and in that supply chain of services and products that, you know, when, when COVID hit, you’re like, “Oh, we’re not getting this because some mom-and-pop shop way down the supply chain had some issue,” which then supplied a critical component to one organization, which then was a ripple effect through these major organizations.

So, you know, by understanding that, the ORF puts that into, um, you know, a business strategy for companies to better merge the business piece of it, the mission and processes with the technology. I think that was a lot. I don’t even know if I answered your question. I got, you know, okay.

Heidi: You did. You did.

Incentivizing Accountability in Leadership

Heidi: So one of the things that you talk about in the PCAST report—and maybe this is kind of what you were getting at before. One of the recommendations is to, and I’m, I’m reading this, uh, this is a quote, “develop greater industry board, CEO, and executive accountability.” Can you give some examples of what that might be and how we can incentivize that sort of accountability outside of the government just telling people that they have to?

Dr. Shea: Yeah. So on the, um, you know, it’s very—it’s a very complex issue. Um, when we talk about critical infrastructure, um, I think it’s—last I looked—80–85 percent of our critical infrastructure is privately owned. So it’s not a “the government said so, so you have to do this” or, um, cyber command put out this order and now all of the military is going to follow it. It’s not the same pattern of activity.

Um, in the defense industrial base, it’s a little easier. The army said, do it. So you’re doing it. That’s how the army works. That’s how the military works. That’s how DoD works. But when you, when you put that out to the financial sector, it’s a privately owned bank. You know, put it out to the healthcare sector—it’s a privately owned hospital. You put it out to a lot of companies. They, they don’t have the infinite, um, uh, resources that some of the other federal—I don’t say infinite, you know, not quite infinite—but they, they don’t have to worry as much. Um, they’re not handed a bucket of money. “Here’s, here’s a bucket of money for cybersecurity.” You know, that doesn’t happen in the private industry.

They’re, um, you know, utilities, for example, you—you’re charging customers, and then the money you get from those charges, you then have to put back into your company for services, payroll, other activity, developing product. And, um, cybersecurity is one of those very difficult areas to have a return on investment. Why do I… explain to me why I need to spend a million dollars on this particular thing when, um, we haven’t been hacked yet? What’s the risk? Like we’ve been fine without it. Why do we have to spend this money now?

So it’s a constantly evolving thing. It’s a very difficult argument to make and, um, you know, the, the board engagement and getting CEOs involved is, um, it is convincing them of the importance of building out these resilient strategies, investing in cybersecurity, convincing them that: there are really two types of organizations out there—those that have been compromised and those that are going to be compromised.

And convincing them that you may think that, um, you can accept this risk—and maybe they can. But, you know, understand the, you know, rippling effects of that.

You can look at the, you know, Target breach from years ago now, back in 2013. Target had a breach and that ended up being, you know, millions of dollars’ worth of cost to them.

Digital breach warning icon symbolizing major cybersecurity incidents.
The Target breach cost hundreds of millions—one weak spot can trigger massive financial fallout.

So, so the—to incentivize boards and the CEO piece, the PCAST report talked about:

  • Better engagement
  • Public-private partnerships
  • Involving the private sector more
  • Having relationships and sharing more information

On the federal side, there’s usually more access to the intelligence that’s available. We’re seeing these types of things. We’re seeing this type of nation-state attacking our organizations. The commercial sector doesn’t necessarily have that kind of insight. So by sharing that intel and the threat and what we’re seeing across other sectors—that’s, um, helpful.

And then to incentivize them, it’s really to—I guess there’s a couple ways you can do that:

One is to paint that picture of what is the impact if you don’t invest in these strategies or technologies.

Then you may be that Colonial Pipeline. You may be that organization that’s preventing the entire East Coast from having gasoline for a large period of time.

Or you may have the financial burden of recovering from ransomware attacks.

But one of the recommendations we had talked about—and I don’t know if it actually made it into the paper, I don’t remember—but it was… I don’t want to say public shaming, but… transparency.

You know, posting what—or requiring companies to show—what are they doing for that resilience and cybersecurity?

How are they meeting some of these requirements? How much are they investing? 1%, 0.5%, 0.3%, 20%?

It ranges.

And if the public is seeing that you’re not investing, then ultimately there’s going to be a big issue.

So maybe just through that public shaming, they would be motivated. I don’t think we actually put that in the paper, but it was a fun discussion.

We actually compared that to environmental standards, where companies post their carbon footprint—and because it was embarrassing, they took steps to reduce it.

So we talked about that.

But I think through:

  • Sharing of information
  • Understanding the financial impact
  • And understanding their role in the ecosystem

That should incentivize them. But they’d have to really understand what that means.

Heidi: Yeah, really painting that picture.

Dr. Shea: I kind of feel like the silver bullet for motivating CEOs, board members, and everyone else is the insurance industry.

So I think the insurance industry is going to end up being the cyber savior to the country. Once they get all their ducks in a row, get organized, and put out meaningful direction—because right now it’s a little bit disparate.

Once the insurance industry says: “We’re not going to insure you unless you do X, Y, and Z,” that’s really going to motivate companies because they’re like, “Oh, we need insurance.”

You know, in cybersecurity classes, they talk about different types of risk management. There’s risk transference, right?

“We’ll just get insurance, so who cares?”

But insurance companies aren’t insuring people anymore because the payout is massive.

The insurance industry is sort of imploding on the cyber side.

It’s not just the cost of the computer. It’s:

  • the cost of a new computer
  • retraining systems
  • ransomware payments (could be millions)
  • audits
  • third-party recovery teams

It’s not like car insurance—“crash it, replace it.” It’s exponential.

So in my mind, the insurance industry is eventually going to lead everyone to:

“This is what you really need to do.”

And we’re not going to accept just risk transferring anymore.

Heidi: Yeah, that’s really interesting. Yeah, nothing like the insurance company telling you what you have to…

Dr. Shea: Right, right. I mean, you talk about the federal government telling you what to do, right? Yeah—no, it’ll be the insurance companies.

Heidi: There you have it. The inside scoop.

The Future of Encryption Standards

Heidi: Um, I want to shift gears and talk about a recent CSO article. The title of the article was “European law enforcement breaks high-end encryption app used by suspects.” In the article you say, “CISOs should be taking note of the diminishing lifespans of current encryption standards.” Sorry for abruptly changing subjects, but it was a really interesting article, and what you explained in it was really interesting as well. So when you’re talking about the diminishing lifespan of current encryption standards, can you talk about what you mean there?

Dr. Shea: So we’ve already started on the federal side to go through and move to new algorithms for encryption. So you have basic encryption. Encryption is based on math, and it’s hard math. And with the development of quantum computing, the math is getting easier to break because the computational power of quantum computing is so advanced. So that means new algorithms are being developed based on harder math, just to keep it super simple. And there are requirements in the federal space right now to change to these new algorithms that work with the harder math, to make things more difficult ahead of the expected development of cryptographically relevant quantum computers (CRQCs), which are the computers that can break modern-day encryption.

Person writing code on a computer in a modern workspace.
Encryption requires both strategic design and secure implementation—especially in the age of quantum computing.

And in modern-day encryption right now, you look at RSA encryption. With the way classical computers work, it’s expected that it would take a billion years to break the encryption. However, with a quantum computer, that same encryption is expected to be broken within something like six minutes, which means:

🔐 No more passwords are safe. No more encryption is safe. No more data security. Everything can be broken.

So the solution is harder math to counter more computational power. In my mind it’s sort of a never-ending race of computing power vs. hard math, because we’re going to have harder math and more advanced computing power in the next 10 years, 15 years, whatever it is. So I believe you should just remove the math problem and approach things from an information-theoretically secure state, where you’re not relying on the math but on the complexity, so that regardless of how much computational power an attacker has, you still have secure data.
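
To make the “hard math” point concrete, here is a toy RSA round trip, not from the interview, with primes far too small to be secure. The private exponent falls out immediately once n is factored, which is exactly the step a cryptographically relevant quantum computer is expected to make easy:

```python
# Toy RSA with tiny primes, purely to illustrate that the security rests on
# hard math (factoring n). With numbers this small, anyone can factor n and
# recover the private key; real RSA relies on factoring being infeasible on
# classical hardware -- the assumption quantum computing undermines.
p, q = 61, 53                 # secret primes (far too small to be safe)
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)       # Euler's totient; computing it requires p and q
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent: easy ONLY if you can factor n

msg = 42
cipher = pow(msg, e, n)       # encrypt with the public key (e, n)
plain = pow(cipher, d, n)     # decrypt with the private key (d, n)
assert plain == msg
```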

An example of that is distributed data storage. So if, if I have all of my data in one database and I have, um, you know, encryption on it, if, if someone were to break into that particular database, they have access to the data, it’s now encrypted. They go through, uh, break the hard math, they have all your data. But if you take the same data, and you were to distribute it in multiple locations, so you have, um, you know, fragments of data everywhere, you would first have to go through and collect all of those pieces of data to get them together before you can then apply that quantum computing capability to go through and break the encryption.
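
The fragmentation idea can be illustrated with a minimal XOR secret-sharing sketch, an assumption for illustration only (real distributed-storage products use more sophisticated schemes, such as Shamir secret sharing). Every fragment is required to rebuild the data, and any subset short of all of them is indistinguishable from random noise, no matter how much computing power the attacker has:

```python
import os

def split(secret: bytes, n: int) -> list[bytes]:
    """Split a secret into n fragments; each alone is uniformly random."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:                              # XOR the secret with every
        last = bytes(a ^ b for a, b in zip(last, s))  # random share
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    """XOR all fragments back together to recover the secret."""
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

fragments = split(b"crown-jewel data", 4)   # store each in a different location
assert combine(fragments) == b"crown-jewel data"
```

An attacker who steals three of the four fragments holds only random bytes; there is nothing to decrypt yet.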

And so there are strategies in place, distributed storage and tokenization, which are a big piece of how to ensure the confidentiality of your data. So if the information is compromised there, they’re not getting anything. It’s a multi-layered approach to security, and you don’t have to worry about that quantum piece of it.

And I will say, if you have this conversation about quantum computing with people, the idea that comes up usually is:

“Well, it’s like 30 years out. Who cares? No big deal.”

I would really like to see a conversation among the experts in that field (and I’m not a quantum expert; I research it, I study it, but I just look at it from the cybersecurity standpoint), the people actively working on the development of those CRQCs. They have wildly different opinions based on their work and what they’re seeing in their research.

Some experts, you know, bona fide experts in that area, believe we’re going to get to that stage where we have to worry about the breaking of modern encryption within the next five years, possibly even two. But then you have an equally qualified expert that says:

“It’s like 30 years, so who cares?”

And then people will hear both of those opinions and, depending on their own position, their own bias, feel like, “30 years, so I’m not going to worry about it.” But that’s not necessarily an option anymore, because now there are steps that have been taken by the federal government to move over to the new algorithms.

You have to take some action if you’re going to align with the government and federal standards, and you need to be aware of it. So if you’re the CTO, CISO, CIO, you’re buying new equipment, developing an architecture, setting up systems, you’re going to want to look at:

🔍 What is that future standard? What are those new algorithms we have to use?

…and implement those so that you’re compatible with the system, and then know that, okay, regardless if it’s two years from now or 30 years from now, it’s still a threat. So when you buy this equipment or you develop these architectures and you have these strategies:

  • Are you planning for 30 years out?
  • Are you planning for just one year out?

Take out the math—that’s what I say.

Heidi: Okay, that makes perfect sense. And I want to drill down a little bit deeper; I’m sure you can anticipate what my next questions will be. You say strategic, you say multi-layered, and I want to unpack that a little bit. Some of the things that you say in this CSO Online article are:

“Multi-layered defenses such as tokenization, zero-knowledge proofs, distributed storage, and other technologies that protect data even if encryption is compromised.” (Click to read the full CSO Online article.)

So those are a lot of terms and you kind of sprinkled them in, you know, as you were explaining this. But I’m hoping that we can drill a little bit deeper into each one of them.

Unpacking Multi-Layered Defenses

Dr. Shea: Okay, so, so tokenization. Um, tokenization, you know, is the process of replacing the actual data with tokens. Uh, I don’t know if you remember going to Chuck E. Cheese. I, I did not. Um, I, I tried to never go to Chuck E. Cheese, but every now and then my, my sons were invited to go there. You, you don’t get… you, you don’t hand your kids, like here’s, here’s 20 bucks. You, you go to the token machine, you, you give the, uh, the money to the, the machine or the person and they then give you tokens. And then you just give your kids the tokens, they run around the, uh, arcade putting the tokens in the machine.

So they’re not actually handling the money, the sensitive currency. Um, they’re, they’re just dealing with those, uh, you know, representations of the money. So if, let’s just say the, you know, your, your child is then cornered in the ball pit, someone’s like, give me, give me all your money. It’s, it’s not really the money. It’s just the tokens. I’m sure for the child, it’s just as traumatic. They don’t get to play as many games, but the, the money itself is still protected because it was never actually in the ball pit with the kids.

So, so the tokenization strategy, um, is that, uh, is that way of ensuring that, again, if, if, if the encryption is broken, someone gets into your, uh, system, your, your, your data, uh, flows, and they’re able to, um, you know, man-in-the-middle attack it or wherever it’s being stored, get to that data. It’s, it’s not the data. It’s just a representation of the data. It’s…

Heidi: It’s the darn Chuck E. Cheese tokens!

Dr. Shea: It’s just Chuck E. Cheese tokens. Yeah.

Heidi: That would make an attacker mad.

Dr. Shea: Yeah. Yeah. So it’s like, oh, I broke into it and this is useless. It’s useless data. So the actual crown jewel, sensitive data, your PII, your PHI, all those elements that you’re trying to protect, they’re still protected.

Heidi: Awesome. I love a great analogy.

Dr. Shea: I just came up with that. I should have thought about that before. I was like, yeah, that’s, yeah. Some sticky, dirty tokens. Yeah.
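
The arcade-token analogy maps directly onto how tokenization works in software. The in-memory vault below is a hypothetical sketch; in production the vault is a separately hardened service, and downstream systems only ever see the token:

```python
import secrets

class TokenVault:
    """Minimal tokenization sketch: the real value lives only in the vault."""

    def __init__(self):
        self._vault = {}                 # token -> real value (keep locked down)

    def tokenize(self, value: str) -> str:
        token = secrets.token_hex(8)     # random; no mathematical link to value
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]        # only the vault can reverse a token

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")   # e.g. a card number (PII)
# Downstream systems store and transmit only `token`; a breach there yields
# nothing but the arcade tokens -- the real data never left the vault.
assert vault.detokenize(token) == "4111-1111-1111-1111"
```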

Heidi: Zero-knowledge proofs, so explain those to us.

Dr. Shea: So, so zero-knowledge proofs are, um, um, that kind of gets, it’s, it’s sort of a scenario-by-scenario situation in which you would use it. There’s a lot of data sharing that takes place amongst organizations, um, for audits or, um, you know, whatever, what, what maybe for compliance, you have to share information. So, so zero-knowledge proof is a way of sharing information without having to share your sensitive information. So it may not apply in all instances, but you can definitely use it in some instances.

Unpacking Multi-Layered Defenses Dr. Shea: Okay, so, so tokenization. Um, tokenization, you know, is the process of replacing the actual data with tokens. Uh, I don’t know if you remember going to Chuck E. Cheese. I, I did not. Um, I, I tried to never go to Chuck E. Cheese, but every now and then my, my sons were invited to go there. You, you don’t get... you, you don’t hand your kids, like here’s, here’s 20 bucks. You, you go to the token machine, you, you give the, uh, the money to the, the machine or the person and they then give you tokens. And then you just give your kids the tokens, they run around the, uh, arcade putting the tokens in the machine. So they’re not actually handling the money, the sensitive currency. Um, they’re, they’re just dealing with those, uh, you know, representations of the money. So if, let’s just say the, you know, your, your child is then cornered in the ball pit, someone’s like, give me, give me all your money. It’s, it’s not really the money. It’s just the tokens. I’m sure for the child, it’s just as traumatic. They don’t get to play as many games, but the, the money itself is still protected because it was never actually in the ball pit with the kids. So, so the tokenization strategy, um, is that, uh, is that way of ensuring that, again, if, if, if the encryption is broken, someone gets into your, uh, system, your, your, your data, uh, flows, and they’re able to, um, you know, man-in-the-middle attack it or wherever it’s being stored, get to that data. It’s, it’s not the data. It’s just a representation of the data. It’s... Heidi: It’s the darn Chuck E. Cheese tokens! Dr. Shea: It’s just Chuck E. Cheese tokens. Yeah. Heidi: That would make an attacker mad. Dr. Shea: Yeah. Yeah. So it’s like, oh, I broke into it and this is useless. It’s useless data. So the actual crown jewel, sensitive data, your PII, your PHI, all those elements that you’re trying to protect, they’re still protected. Heidi: Awesome. I love a great analogy. Dr. 
Shea: I just came up with that. I should have thought about that before. I was like, yeah, that’s, yeah. Some sticky, dirty tokens. Yeah. Heidi: Zero-knowledge proofs, so explain those to us. Dr. Shea: So, so zero-knowledge proofs are, um, um, that kind of gets, it’s, it’s sort of a scenario-by-scenario situation in which you would use it. There’s a lot of data sharing that takes place amongst organizations, um, for audits or, um, you know, whatever, what, what maybe for compliance, you have to share information. So, so zero-knowledge proof is a way of sharing information without having to share your sensitive information. So it may not apply in all instances, but you can definitely use it in some instances. And I’ll give you, I’ll give you an example on, on something I’m working on. So within the PCAST paper, we had recommended the, the, the standup of an organization called the, um, National Critical Infrastructure Observatory, which would act as a digital twin for critical infrastructure. So you have an understanding of the security and resilience posture of all of the critical, um, you know, systems, critical infrastructure of the United States. Because right now you don’t really have that belly button out there on, like, what is our posture? So if you develop this sort of digital twin organization, it would then have to get information from organizations. And we just talked about how 85 percent of critical infrastructure is privately owned. So, um, I don’t know a lot of private organizations, if the government says, “Hey, send me a copy of your, um, um, like your, your, like all of, all of the CVEs that you’re, like, compliant with, or your, um, like, password, simple, simple password. 
So send me a list of all your passwords so we can ensure they are at least 16 characters, with capital letters, lowercase letters, numbers, and special characters.” They’re going to say, “Yeah, no. I’m not sending you a list of my passwords. Just take my word for it that we’re good.” And the answer will be, “No, we don’t want to just take your word for it. We want some type of verification that you’re using complex passwords, basic cyber hygiene.”

So instead of the organization sending in a list of all its passwords to be verified, the two organizations, this made-up Observatory we’re promoting and the company it’s working with, would use, say, a third party that develops a proof: code that scans all of the passwords and does a simple check. Is there a capital letter? Is there a lowercase letter? Is it 16 characters? Is there a special character? Is there a number? For every password at that private company, the proof comes back yes or no, and hopefully it comes back all yeses. The receiving organization, this National Critical Infrastructure Observatory, then gets only the responses: “Yes, it meets the requirement.”

They both trust the proof. They both looked at the code and know what it checks for, so now they can both say, “Yes, you absolutely have complex passwords that meet these requirements,” without the verifier ever seeing a password. That’s the zero-knowledge part. I don’t know what any of your passwords are, but I have confidence that they meet the requirement, because we both trusted this particular proof, this code, and now I can say confidently, yes, you’re doing this.

It’s a way to exchange information from organization to organization without sharing the protected, sensitive data: your PII, your PHI, your GDPR-covered information.

Or take something as simple as wanting to know which of your systems may be affected by a new CVE that just came out. You can run scans throughout the organization, and the results, say from a Nessus scan, would list this software, this software version, affected or not. Nobody wants to expose their attack surface: “These are all the software packages I’m using.” So instead of showing that sensitive inventory, you could just come back with yes, no, yes, yes, no. It’s a simple way to have trusted verification and shared information.

Heidi: Awesome. That’s really helpful. Also, I’ve never heard anyone explain it so succinctly. I feel like I understand it a lot better.
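Dr. Shea’s token-machine analogy maps naturally onto a vault-style tokenization service. The sketch below is illustrative only; the `TokenVault` class and the `tok_` prefix are made up for this example and are not any vendor’s API. Sensitive values go in, random tokens come out, and only the vault can reverse the mapping.

```python
import secrets

class TokenVault:
    """Illustrative token vault: swaps sensitive values for random tokens.

    The mapping lives only inside the vault, so systems that handle
    tokens never touch the underlying data (the Chuck E. Cheese model).
    """

    def __init__(self):
        self._token_to_value = {}   # a hardened, access-controlled store in practice
        self._value_to_token = {}

    def tokenize(self, value: str) -> str:
        # Reuse the existing token so the same value always maps the same way.
        if value in self._value_to_token:
            return self._value_to_token[value]
        token = "tok_" + secrets.token_hex(8)  # random, carries no information
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault, behind its own access controls, can reverse the mapping.
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")   # e.g. a card number (PII)
# Downstream systems, logs, and analytics see only the token.
assert token != "4111-1111-1111-1111"
assert vault.detokenize(token) == "4111-1111-1111-1111"
```

Because the tokens are random rather than derived from the data, a stolen token table is, as Dr. Shea puts it, useless data; recovering the originals requires access to the vault itself.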
Secure data sharing without exposure—visualizing the flow of zero-knowledge proofs.
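The password-policy exchange Dr. Shea describes can be sketched as a shared, reviewable check that emits only yes/no answers. To be clear, this is a plain-Python illustration of that workflow, not a cryptographic zero-knowledge protocol (a real deployment would use a ZKP system so the verifier need not trust the prover’s execution), and every name here is hypothetical.

```python
import re

# The agreed-upon "proof": both parties have reviewed this code,
# so they trust its yes/no answers without seeing the inputs.
POLICY_CHECKS = {
    "min_16_chars": lambda p: len(p) >= 16,
    "has_upper":    lambda p: re.search(r"[A-Z]", p) is not None,
    "has_lower":    lambda p: re.search(r"[a-z]", p) is not None,
    "has_digit":    lambda p: re.search(r"[0-9]", p) is not None,
    "has_symbol":   lambda p: re.search(r"[^A-Za-z0-9]", p) is not None,
}

def attest_passwords(passwords):
    """Runs inside the private company: only booleans leave the building."""
    return [
        {name: check(pw) for name, check in POLICY_CHECKS.items()}
        for pw in passwords
    ]

def verify_attestation(results):
    """Runs at the receiving organization: sees yes/no, never a password."""
    return all(all(per_password.values()) for per_password in results)

# The company checks its own passwords locally and shares only the results.
report = attest_passwords(["Correct-Horse-Battery-9!", "weakpw"])
print(verify_attestation(report))   # the second password fails the policy
```

The same yes/no pattern covers the CVE example: run the vulnerability scan locally and share only per-system affected/not-affected flags, never the software inventory.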

Closing Thoughts

Heidi: I know that we’re at time here, so I want to say thank you so much for sharing your insights; such cool stuff that you’re working on, and I really appreciate you taking the time to unpack it all. Again, George works at FDD, and I’ll link to the PCAST report in the show notes. Any parting words for our listeners?

Dr. Shea: No, thank you for having me. And I’ll finish with what I started with: focus more on resilience than cybersecurity.

Heidi: Yeah. CISOs everywhere are like, what?

Dr. Shea: Yeah, I know. It sounds like it makes no sense. And we’re pushing everyone into the cyber field right now with STEM, and I don’t even think the word resilience comes up. So they’re told, “You’re supposed to do these things,” and I’m saying, “No, no, no; keep the big picture in mind. Implement toward the big picture and toward mission success.”

Heidi: Love it. Thank you so much.

Dr. Shea: All right. Thank you.

Ready to align resilience with your mission?
Let’s explore how your organization can anticipate, adapt, and thrive—no matter what comes next.

Disclaimer:
This transcript is provided by Rixon Technology for general informational and educational purposes only and reflects the personal views and opinions expressed during a podcast interview in April 2025. The content does not constitute professional advice, legal guidance, or official endorsement by Rixon Technology, its affiliates, or the individuals featured. Dr. Georgianna Shea and Heidi Trost, as contributors, share their insights based on their expertise; however, these statements are not intended to represent definitive solutions or guarantees. Rixon Technology, Dr. Georgianna Shea, and Heidi Trost are not liable for any actions, decisions, or consequences arising from the use of this information. Consultation with a personal legal or professional advisor should be sought where appropriate for application of information in this podcast to an individual’s own personal circumstances.

© 2025 Rixon Technology. Reproduction or distribution without prior written permission is prohibited.