
From Passwords to Quantum Threats: Securing Remote Access in a Rapidly Changing World With Neil Gad
About This Episode
Remote access sits at one of the most contested and most overlooked security boundaries in modern organizations. In this episode, Rachael Lyon and Jonathan Knepher are joined by Neil Gad, Chief Product and Technology Officer at RealVNC, to explore why secure-by-design must be built into remote access from the ground up, not bolted on after the fact. From legacy open-source tools running unencrypted on factory floors to employees quietly exfiltrating proprietary source code into large language models, Neil draws on real customer conversations to make the risks concrete and the solutions actionable.
The conversation covers the full threat landscape: the cloud-versus-on-premise debate through a post-COVID, AI-accelerated lens; why a quantum computer could render today's AES-256 encryption worthless virtually overnight; and how AI agents are reshaping the cybersecurity arms race by defeating CAPTCHAs, interacting with screens at scale and identifying vulnerabilities faster than human teams can. Neil also shares his take on the future of biometric multi-factor authentication, what hybrid infrastructure must look like as agentic workflows proliferate and why the most valuable skill for the next generation of cybersecurity professionals is not Python. It is critical thinking.

Welcome, Neil Gad
Rachael Lyon:
Welcome to the To the Point Cybersecurity Podcast. Each week, join Jonathan Knepher and Rachael Lyon to explore the latest in global cybersecurity news, trending topics, and cyber industry initiatives impacting businesses, governments, and our way of life. Now let's get to the point.
Hello everyone, welcome to this week's episode of the To the Point Podcast. I'm Rachael Lyon, here with my co-host, Jonathan Knepher. Good morning, Jonathan.
Jonathan Knepher:
Good morning, Rachael.
Rachael Lyon:
So I was doing a little reading this morning, and there was an interesting article on ransomware. Ransomware is still a really effective tool for attackers, but the article was talking about how legitimate credentials and identity are now replacing malware as the threat vector opening the door for ransomware. I thought that was an interesting angle — getting back to legitimate credentials. It feels like a blast from the past.
Jonathan Knepher:
Yeah, I think it is. Finding ways to compromise systems with valid credentials is definitely a major attack vector, and people let their credentials out there in the wild inadvertently through so many different vectors.
Rachael Lyon:
Yeah, it's old as time — harkens back to the days of writing your password, literally "password," on your monitor so everyone can see it.
Jonathan Knepher:
Oh, now everybody knows my password.
Rachael Lyon:
Well, I am so excited for today's guest. He brings over two decades of experience across technology, commercial, and operations roles. He currently serves as Chief Product and Technology Officer at RealVNC, where he is responsible for defining and delivering the company's product vision. His career began in strategy consulting at BCG and PwC, where he led value creation programs and supported M&A deals across the TMT sector and beyond. Since then, he has led technology functions at both large corporations and startups, building high-performing teams with a sharp focus on customer value. On the academic side, he holds a first-class Master of Engineering degree in mechanical engineering from Cardiff University, and has more recently taught himself Python to develop deep learning models for forecasting applications. Welcome to the show, Neil Gad.
Neil Gad:
Thank you for having me.
[02:37] Remote Access as a Security Boundary: The Secure-by-Design Imperative
Jonathan Knepher:
Hey, Neil. Let's kick this off — tell us a bit about how you'd frame remote access as a critical boundary for the cybersecurity industry today.
Neil Gad:
Sure. Remote access, by definition, is creating a way to access devices across networks between different users and different environments, which is often in direct conflict with what a cybersecurity professional is trying to do in terms of locking down access. So by definition, we have an instant problem. Working in remote access, we have to understand what the potential threats are and figure out how we can enable access while keeping it under control so that we can operate in secure environments.
Rachael Lyon:
So that seems to parlay pretty well into a secure-by-design conversation, which is always fascinating in terms of how we approach these things.
Neil Gad:
Exactly. While we design our software products in remote access for legitimate use cases, there are obviously malicious use cases — bad actors who could hijack that access for their own purposes. So design principles really have to be at the core of these kinds of software packages. That means security is not an afterthought; it has to be built from the ground up around secure design principles, and the bar has to be set really high. If you want to be credible in the remote access space, you have to have some table stakes — basic security architecture that allows you to say, "Yes, this is not going to be hijacked and taken into the wrong hands and expose your enterprise data."
Let me give you some examples. I hear all the time about industrial machines in factories, on oil rigs, that use open-source, unencrypted remote access solutions. When remote access was invented over two decades ago, it was open-source and unencrypted. Only in the last 15 years have encryption and other security wrappers been applied around it. The number of times I have conversations with customers who are still running that open-source legacy remote access — that's the first obvious threat vector.
Secondly, there's the concept of granular permissions, or role-based access control. In order to identify and keep out bad actors, you need to know who the good actors are. You need to know who has access to what, and when. It's really important that organizations can identify who has access to devices, what access they should have, what their role is, and what they're able to do. And then, after the fact, there needs to be an audit trail — who connected to which device, when, and what did they do — preferably with a screen recording of the session. Those are the table stakes of secure-by-design principles, and they should be present in all remote access solutions. But they're not always there.
Jonathan Knepher:
It's interesting that you bring up the unencrypted open-source point alongside the Telnet vulnerability — unauthenticated root access — that's come out recently. Talk more about what new risks are emerging right now and what you think people are overlooking. On the Telnet thing — who still has Telnet out there in 2026?
Neil Gad:
Right. I'm not entirely sure. There are a whole bunch of new threat vectors. Everyone's talking about AI and agentic AI and what it can do. But unfortunately, there are things that are closer to home and more common. Take what I call application sprawl: on any device, users have many different applications, different logins, different credentials, many different systems — and each of those is its own attack surface. The number of customers I speak to where that's not locked down or clearly controlled is very high. And therefore you're providing an instant potential vulnerability for a bad actor to take control.
Via this legitimate pathway, using legitimate credentials, they can take control of your data and systems. That's very common. One of the other things that comes with this is that, increasingly, in the age of LLMs and AI agents, an organization's data can be easily — or more easily — exfiltrated into an LLM. A very common scenario I see: an employee uploads a bunch of data, proprietary information — could be source code for a software vendor — into an LLM to generate more code or a summary of what they've ingested. That data contains proprietary information. So all organizations should have enterprise data protection when using these kinds of tools — meaning that data is not being sent to a cloud to train LLMs and then become accessible to other organizations. This is table stakes, but it's still far from universal.
And that's a new frontier where, in my experience, cybersecurity professionals are paying close attention, but it's not always as locked down as it should be.
Rachael Lyon:
That's a really good point. Are organizations today thinking about remote access through the lens of securing data specifically — as data moves so quickly, gets created, goes through LLMs, and flies everywhere?
Neil Gad:
I don't think so. The main way of thinking is to lock down access to applications and devices through permissions control, which lots of organizations do well. But all it takes is an unauthorized remote access application to end up on someone's machine, and all of a sudden you have a backdoor out of that organization to somewhere else. You can talk about secure-by-design principles in your primary remote access solution, but a human can install an unauthorized, unencrypted, open-source remote access application — and all of a sudden you have a channel through which data can be exfiltrated. In my experience, that's not always being considered.
You can trace when an employee installs an application they shouldn't. But by then, it may already be too late — they may have already exfiltrated data somewhere else. So there's a shorter reaction cycle that cybersecurity professionals now have to work within. The speed at which threats arrive is increasing, and therefore the vigilance required is higher.
[09:51] Cloud vs. On-Premise: Air Gaps, Backdoors, and the Hybrid Future
Jonathan Knepher:
Right, so you've brought up an interesting attack vector with uploading data to large language models. A lot of folks are also now evaluating their whole cloud versus on-premise stance across most of their infrastructure. Some recent actions in the world have taken one of the major cloud providers' locations offline, which has been newsworthy. What's your opinion on cloud-based versus on-premise, especially when it comes to securing remote access and protecting how your data flows?
Neil Gad:
Really good question. Cloud remote access is fairly secure — it includes the protections I mentioned: end-to-end encryption, granular permissions, access control. These layers do provide protection. But at the end of the day, those connections are still happening over the internet through a cloud. Even if the end-to-end data traffic is encrypted and cannot be accessed in transit, the devices at either end could still be accessed by a bad actor. Because of those organizational attack vectors — using legitimate credentials or other ways of accessing endpoints — it can effectively devalue the encryption you have. So a lot of organizations prefer to have all of their connections on their own local network, behind the firewall, with completely air-gapped infrastructure. They're essentially hiding — saying, "We're going to tuck all of our stuff behind this firewall."
Very commonly, in industrial settings where you have critical infrastructure on the IT/OT boundary, that tends to be on-premise, adhering to what's called the Purdue model — a set of guidelines around industrial control systems that have to be secured in specific ways, with all kinds of regulatory standards around them. That's usually more prevalent where you have unattended access or a mix of attended and unattended access. Where you have unattended access to machines, especially in industrial environments, that's a higher risk factor because there's no user at the other end. In those circumstances, customers tend to prefer on-premise because it aligns with their overall cybersecurity posture. Whereas attended usage — laptops like the ones we're talking through right now — those are going to be connected to the internet anyway. So it tends to fall along those two lines in terms of how organizations think about cloud versus on-premise.
But I've seen a mixture of both. On-premise is by design more secure because there's no internet connection. But cloud, with the right controls and secure-by-design principles, can also be equally effective — as long as your organization's security posture, controls, and risk management are also aligned and don't become the weakest link.
Rachael Lyon:
I like to apply the AI lens to everything — just ask Jonathan. Through COVID, everybody rushed to cloud and hybrid environments. Now, with the AI explosion, I've been hearing pockets of conversation about organizations almost going back to the past and doubling down on on-premise versus cloud. Are you seeing the same?
Neil Gad:
Yeah, similar. Over the last decade, that's certainly been true. More organizations that would have put infrastructure in the cloud have gone the other way toward on-premise — precisely for that reason. Think back 10 years ago: everything was going to be cloud-based. All applications were going to be hosted somewhere in the ether, and everyone was talking about data centers, ramping up AWS capacity, and how all of that worked. There is a definite trend in the last five years, post-COVID, with the proliferation of remote working and remote access. The risk factor has increased. I hear more from customers and the industry about a tendency to go for on-premise to provide this kind of hiding, as I call it.
Jonathan Knepher:
On this whole on-prem versus cloud element — not only the AI exfiltration angle, but what about backdoors being implemented in on-prem solutions, back channels out? How do you prevent those?
Neil Gad:
It's actually quite common. A lot of on-prem solutions — not just in remote access, but in other types of software — do have a backdoor. Could be just a single machine that can talk back to the software vendor's cloud. Usually that's for the purposes of tracking data and analytics about the customer's usage profile, or to provide software updates — downloading patches from the cloud. That's quite dangerous if you think you're fully on-premise, but you're actually able to download packages, because that package could contain a malicious payload. Some solutions have a one-way gateway outbound — especially in industrial environments. Sometimes it's hardware-controlled, with only a one-way data-out flow — essentially read-only.
That's more secure than two-way, but it still provides a channel through which an outsider can look at data inside an organization you thought was air-gapped. In my role, we spend a lot of time thinking about how to do fully on-premise with no cloud connection — which actually makes my life a lot harder. It puts more work on the customer in terms of setup, since the software isn't talking back to a cloud somewhere. But while it's harder, it is more valuable, and often essential and mission critical — particularly in industrial settings and manufacturing, where the cybersecurity posture has to be fully on-prem.
Rachael Lyon:
We've joked a bit in past conversations about critical infrastructure — just go back to the Stone Age, take everything offline, make it manual.
Neil Gad:
Well, there is a definite trend that way. But over time, I think it's going to have to go the other direction. Organizations need to be thinking about how, in an AI-enabled world with agentic workflows, there will be a dependency on a cloud somewhere down the line. What we're going to see in the next five years — or possibly sooner, given how fast everything is moving — is organizations like mine having to think harder about how to enable the same level of security that on-premise grants while having some kind of cloud connectivity and a hybrid model. I think that's going to become more important, because it's going to get harder and harder in the age of AI and agentic workflows to be fully on-premise.
[17:44] Making Secure-by-Design Real: Frameworks, Standards, and Quantum Threats
Jonathan Knepher:
Yeah, the cloud connectivity and the kind of velocity everyone's expecting has changed. But your point about secure-by-design being basically a requirement — how do we go about making that a priority? How do we get engineering teams to make that their focus?
Neil Gad:
I think it has to come from the cybersecurity organization laying down the ground rules. We have a cybersecurity team. They tell me what I can and cannot do. They set the guardrails around the architecture I'm allowed to build. Before I build anything, before anyone writes a line of code, we have to establish the boundary conditions and the parameters within which we have to work. Doing things in that order is really essential. It's no good building something and then going to the cybersecurity team and saying, "Hey, what do you think of this?" You have to do it the other way around. If organizations adopt that posture, they will be more successful at achieving secure-by-design principles — because they had to think that way before they wrote anything.
Rachael Lyon:
But there's always this tension, right? Are we slowing down innovation? Are we not moving as quickly as we need to, particularly in the age of AI? How do you have those conversations, Neil?
Neil Gad:
They're hard. My job as a product leader is to create customer value — I want seamless workflows that reduce friction for my customers. And that is always in tension with: you have to have two-factor authentication, you have to have these security guardrails that protect the customer. While it might be great to save them a bunch of clicks, you're also potentially exposing them to threat vectors. So this is the constant dialogue I have with my cybersecurity organization, who are really talented and skilled at what they do, and at working out creative solutions on how we can achieve customer value without compromising on security. I'd advise all software organizations to think that way in this space.
Jonathan Knepher:
Are there frameworks in place that help define this relationship and help organizations get past that tension?
Neil Gad:
Yes. There are frameworks like Zero Trust and Least Privilege. There are standards like NIST, which has its own software development lifecycle guidelines. These are good starting points. We like to say that we comply with various standards, as do other remote access providers. It's really important to be able to say, "Yes, we developed our software in accordance with these principles." It then becomes less of a debate — you have to comply with these frameworks, and you are recognized as an organization that can be trusted.
Rachael Lyon:
Speaking of NIST — this gets into one of my favorite topics: quantum computing. It's so far away — but is it, Neil? How should we be thinking about this in the remote access world?
Neil Gad:
It may not be that far away. Depending on who you speak to, timelines range from next week to 2035 — so definitely within the next decade, this is something all remote access vendors will have to think about. At some point in that timeframe, a quantum computer of the future is going to be able to crack all encryption as we know it. AES-256 encryption could become indefensible against a future quantum computer. We're okay now — but all financial systems, WhatsApp, and remote access solutions using this kind of encryption become vulnerable and effectively lose their enterprise value overnight if that encryption is cracked.
NIST published standards for post-quantum cryptography in 2024 that all remote access providers are looking into. But there's no guarantee these will be quantum-safe, because we don't yet know what future quantum computers will be capable of. It's the best guess available to NIST and the industry right now. We are definitely thinking about it. At some point, this shift will happen and a quantum computer will crack today's encryption — which is based on mathematics that's currently considered infeasible to solve, but won't remain so indefinitely. It's really interesting, and it's unclear exactly what to do. I haven't seen any remote access providers actually move to these NIST standards yet, because I think it may be too early. We don't know whether they'll actually solve the problem. So it's a waiting game — but you don't want to wait too long, because it might be too late.
Rachael Lyon:
Exactly.
[23:16] The AI Agentic Arms Race: The New Battleground
Jonathan Knepher:
So with all of these threats on the horizon — quantum computing, AI threats, political unrest — what's the most important to be focusing on right now?
Neil Gad:
Right now, my organization is spending a lot of time thinking about AI and agentic workflows. Lots of organizations are adopting AI agents to automate business processes, and that's creating a new way that remote access providers have to think about how agents interact with computers. AI agents are actually really good at checking the box that says "I'm not a robot." They're getting really good at interacting with screens. In the last decade, there's been a wide proliferation of endpoint management, remote management, and monitoring software platforms that remediate issues with devices at scale — "deploy this patch to these 1,000 devices." Those scale processes have proliferated. But you still need a human in the loop to look at what's on a screen at some point, because it provides richer information. Sooner or later, agents are going to be used at scale to do this in place of humans. They're going to get really good at interpreting on-screen information, because user interfaces are the common currency of all devices — not all applications have common APIs. So AI agents have to get good at interacting with screens that are designed to be interacted with.
What we're going to see is a couple of things. First, the volume of agents increasing massively. Instead of being limited by human technicians, you can scale up workflows with thousands of agents looking at screens and interacting with applications simultaneously. That creates different threats in terms of keeping pace with what's happening on all those devices across your organization — because it's no longer limited by the number of humans watching. Secondly, AI agents are getting really good at understanding software vulnerabilities. We use AI in our own cybersecurity team to build agents that find vulnerabilities in our source code. If we can do that, bad actors can too. And the speed at which they can identify and exploit vulnerabilities in software and applications is also going to increase.
From both sides, there is an AI agentic arms race underway — in terms of both the volume of AI agents interacting with computers, and the ability to exploit vulnerabilities, some of which are human in origin. Those two things are going to create a bigger headache for cybersecurity professionals. That is the new battleground we're facing in remote access — and I'm sure in wider cybersecurity more broadly.
Rachael Lyon:
How should organizations meaningfully manage this? The scale and speed you're describing is astronomical. How do you even get ahead of it?
Neil Gad:
I think some safeguards are table stakes: enterprise data protection when using AI agents and LLMs, limiting access to sandbox environments, locking down networks, and sub-compartmentalizing your assets, infrastructure, and data. Those are meaningful controls that can limit the impact and proliferation of AI agents into organizations. And retaining human-in-the-loop interactions is really important. While it's great to automate workflows and have AI agents that increase productivity, there is a quality control issue — and it's really important to have humans at various checkpoints to make sure what's coming out is meaningful, meets quality standards, and doesn't introduce other vulnerabilities.
Rachael Lyon:
I was reading an article — I may be getting the name wrong — Ironclaw or something like that. Because you give AI agents all this access, it was looking at more of a virtualized environment, right? To try to manage some of that. What are your thoughts on that approach?
Neil Gad:
This whole direction of travel is an inevitability. As organizations and leaders in the cybersecurity world, we have to be thinking about how we retain controls that don't just let the AI loose. I think that is the way organizations are going to be successful in their use of AI. Or unsuccessful — hopefully not.
Jonathan Knepher:
Go for it, take the left-hand turn.
[29:10] Authentication, the Future of Remote Access, and Next-Gen Talent
Rachael Lyon:
Coming back to authentication — this is one of my favorite topics, because multi-factor authentication drives me crazy. The phone required to authenticate is always in the car or something. I'm curious what the future looks like. Obviously you need it, and it has saved my life many times. But I just want it to be a little less intensive in terms of steps and effort.
Neil Gad:
I don't think MFA is going away. I think it is a key defense — especially biometric multi-factor authentication, where you need your face. In a world where credentials alone are not enough, and where an AI agent can easily behave like a human when interacting with a device, there's no substitute for your biometrics. You're the human. You have to retain some control. So I don't think there's a way around it.
Rachael Lyon:
So you're saying there are probably more steps coming — instead of two or three, we're talking five, six, seven, just to really make sure you are who you are?
Neil Gad:
I'm not sure of the exact number or what it's going to look like. But I can tell you this: the "I'm not a robot" checkbox is no longer a defense. We need a new version of that — one that genuinely verifies whether there's a human present. Websites have to defend themselves by rate-limiting the number of calls made to them to avoid agents scraping them, and so on. It's going to be an arms race. I do think MFA protects against credential sprawl, application sprawl, and leaky defenses in terms of understanding who your employees are and who's still around. MFA is here to stay, and I think it will get tougher in terms of more prevalent biometric authentication.
Rachael Lyon:
So what do the next five to ten years look like for remote access — in terms of evolution and integration into how organizations secure data and infrastructure?
Neil Gad:
I think the security stakes are going to get higher. Having some kind of hybrid on-premise/cloud setup that provides the security of an on-premise product while also allowing some kind of connectivity for AI agentic workflows and LLM access — solving that problem is going to be key. And you have these two opposing vectors: the ability to interact with devices at scale without looking at a screen to manage them, but also the increase in agentic interaction with screens. These two things are going to be tough problems to solve. How do you optimize a remote access product to be used by AI agents in a secure way? That, I think, is a really fascinating question for remote access providers — and one that my team and I are thinking about hard.
Rachael Lyon:
I'm excited for the future — a little scared, as I think a lot of us are. It's moving so quickly and evolving so fast, and it's hard to keep up with the cracks in the system at that kind of velocity.
Neil Gad:
Yeah. I think it's really important that the cybersecurity professionals out there know that their role is going to become ever more powerful. My team is going to become more and more essential as they have to advise me and other professionals on how to work around the increasing threats from AI. I am confident that we are very secure — we have lots of security by design ingrained in the way we do things, as I'm sure lots of remote access providers do. As long as that can keep pace with development, I think remote access is here to stay, will become more powerful, and will keep up with these threats and provide the assurance that customers need.
Rachael Lyon:
Wonderful. I like to end our podcast on a more personal note. One of the things I always think about is the next wave of talent — those who are going to help us move forward and solve a lot of these problems. For those getting ready to embark on a professional career, I hear a lot of "What skills do I need?" and "What should I be thinking about to start contributing to this industry?" What's your perspective?
Neil Gad:
It's really interesting. It's a tough market if you're a graduate going into the job market right now. I think it's moved beyond being able to code Python or having a specific technical skill. What's really valuable is critical thinking and empathy. These interpersonal skills are amplified in a world where many of what were considered core skills two decades ago are becoming increasingly solved problems. The thing that AI is not good at is empathy, critical thinking, and orchestration — a big-picture worldview across multiple things. You tell an AI agent something, and all it has is the context you've given it in a prompt. It doesn't have real-world experience of how something is going to work in practice. It's just responding to prompts. Even with wide arrays of AI agents given lots of additional context, it's never going to be as good as a human who has the lived experience of how something is going to work in the hands of real people.
I think critical thinking, empathy, and interpersonal skills are going to become amplified. The new-world skill is learning how to use AI tools as knowledge inputs, and then applying that to real-world situations.
Rachael Lyon:
I like that perspective. You're 100% right. Lots to think about. Neil, thank you so much for joining us and sharing these wonderful insights. Do you have any other parting comments you'd like to share with our audience?
Neil Gad:
No, just thank you for having me.
Rachael Lyon:
Awesome. Thank you. And to all of our listeners out there, thanks again for joining us for another great conversation. Jonathan is going to do the drum roll — please.
Jonathan Knepher:
Please smash that subscribe button, and you'll get a fresh episode every single Tuesday.
Rachael Lyon:
Until next time, everybody — stay secure.
About Our Guest

Neil Gad, Chief Product and Technology Officer at RealVNC
With 20 years' experience in technology, commercial, and operations roles, Neil was appointed Chief Product & Technology Officer to define and deliver RealVNC's product vision.
Neil has a background in strategy consulting from BCG and PwC, leading value creation programmes and supporting M&A deals across TMT and many other sectors. He has since led tech functions in both large corporates and startups, with a proven track record of delivery built on creating winning teams that collaborate effectively and focus on customer value.
Neil holds an MEng in mechanical engineering from Cardiff University, graduating with first-class honours, and has more recently taught himself Python to build deep learning models for forecasting applications.
Check out Neil's LinkedIn and RealVNC's Website