My AI Agent Hacked A Website

Welcome to the Real Estate Espresso Podcast, your morning shot of what’s new in the world of real estate investing. I’m your host, Victor Menasce. Today’s episode is a little different. It’s about artificial intelligence, about automation, security, and a moment that, quite frankly, caught me off guard. I accidentally became a hacker overnight without even realizing it.

As part of my ongoing research for the podcast, I’ve been using artificial intelligence to help streamline information gathering. That, in and of itself, is not too controversial. If you’re tracking multiple markets, multiple asset classes, multiple sources of institutional research, there’s nothing unreasonable about wanting a faster way to collect those reports, to summarize and identify what’s changed.

I installed a new piece of software called NanoClaw on my computer. Now, for those of you who don’t know what that is, it’s an open source personal AI agent that runs on your own computer. It’s designed to connect tools and the channels that you use, and it can automate tasks. It can browse the web. It can carry out workflows on your behalf. One of the reasons it’s attracted a lot of attention in the industry is that it runs agent sessions inside isolated containers, which is intended to make it more secure and more auditable than simply turning an agent loose on your machine with broad permissions.

It also includes browser automation capabilities, which means it can navigate websites and perform multi-step tasks very much like a human would. And that’s precisely why it’s attractive. It’s good because it gives you local control. It’s not all operating in the cloud. It’s a lightweight tool. It’s open source, which means the code can be inspected instead of it being opaque. And it’s good because it’s containerized and firewalled from the rest of your computer. It runs on a virtual machine.

You can think of a virtual machine as a virtual computer within a computer, and it’s completely isolated from the machine that it’s running on. It all runs in software.

But that’s where things got interesting. I asked the agent to locate and download any new research reports from the major brokerage houses like CBRE, JLL, Marcus & Millichap, Colliers, and others. In my mind, it was a simple productivity task: go get the latest market reports, download them, organize them, and save me the repetitive work. I asked it to perform this task every Sunday morning so I would have a fresh download of reports to choose from each week.

Some of the websites accepted that normal interaction; some did not. A number of them rejected queries that appeared to come from an automated source. In some cases, even with valid login credentials, the website still resisted the activity.
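There is a well-established convention for how a site signals which automated access it permits: the robots.txt file. Here’s a minimal sketch of what “polite” automation looks like, using a hypothetical weekly report fetcher (the agent name and URLs are invented for illustration). The key design choice is that a disallowed path is treated as a stop signal, not a puzzle to solve.

```python
# A sketch of "polite" automation: consult the site's robots.txt rules
# before fetching, and stop when access is disallowed.
# AGENT_NAME and all URLs below are placeholders, not real endpoints.
from urllib import robotparser

AGENT_NAME = "ResearchFetcher/0.1"  # the bot identifies itself honestly


def allowed(robots_lines: list, page_url: str) -> bool:
    """Return True only if the robots.txt rules permit fetching page_url."""
    rules = robotparser.RobotFileParser()
    rules.parse(robots_lines)  # stdlib parser for robots.txt directives
    return rules.can_fetch(AGENT_NAME, page_url)


# Example: a site that keeps its research section open but blocks /accounts/
robots = ["User-agent: *", "Disallow: /accounts/"]
print(allowed(robots, "https://example.com/research/q3-report.pdf"))  # True
print(allowed(robots, "https://example.com/accounts/login"))          # False
```

In a real deployment you would download robots.txt from each site rather than hard-coding the rules, but the principle is the same: the agent asks for permission first and takes “no” for an answer.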

This is where the experience changed. The agent became more persistent than I was expecting. It interpreted the blocked access not as a signal to stop, but as a problem to solve. It began trying alternate paths to achieve the objective. I’m not going to go into the methods, because that is not the point. But in that moment I realized something that was a little surprising. Without intending to, I had placed myself in the role of someone attempting to circumvent the security controls of a brokerage website. That certainly was not the objective.

So what struck me was that we’re now living in two internet worlds. There’s the human internet, the one most of us have used for decades. It’s built around logins, forms, popups, buttons, cookies, multifactor prompts, session timeouts, and pages designed for a person to click, read, wait, and respond. It’s full of friction because friction was designed in part to shape human behavior and reduce abuse.

Then there’s the machine internet, the one the AI agents are trying to inhabit. That world values structured access, explicit permissions, machine-readable interfaces, stable endpoints, and workflows that can be executed cleanly and safely by software. To an agent, the human internet feels old and brittle and strangely inefficient. It looks less like a well-designed workplace and more like a maze to navigate through. When you give an agent an objective inside a maze, you shouldn’t be surprised when it starts looking for openings in the walls. That doesn’t make the agent malicious; to the agent, blocks are just constraints to be overcome.

So there’s a lesson here for business owners, especially for those of us in real estate who are beginning to adopt AI inside acquisitions, asset management, operations, and research. The risk is not whether the model hallucinates, although that’s there, of course. The risk is whether the model pursues its goals with a level of persistence that exceeds your intent.

Most humans respond to social cues. Machines don’t, unless you design those constraints in explicitly. So the real question is not β€œCan the tool save time?” Of course it can. The better question is: under what rules, under what supervision, within what boundaries is the tool allowed to act? It’s not a leadership question; it’s a governance and design question. If you’re going to use AI agents in your business, you need a policy framework and permission boundaries. You need to tell the agent when to stop and when to give up. You need logging and review, and you need to distinguish very clearly between authorized automation and unauthorized circumvention.
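To make that concrete, here is one hedged sketch of what a permission boundary could look like in code. All the names here are assumptions, not part of any real agent framework: every action the agent wants to take passes through a policy gate that enforces a domain allowlist, a retry budget (when to give up rather than get creative), and an audit log.

```python
# A sketch of an agent policy gate: allowlist + retry budget + audit log.
# Class and field names are invented for illustration.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-policy")


@dataclass
class Policy:
    allowed_domains: set            # where the agent may act at all
    max_attempts: int = 2           # when to give up, not get creative
    attempts: dict = field(default_factory=dict)

    def authorize(self, domain: str, action: str) -> bool:
        """Log and decide whether the agent may perform action on domain."""
        if domain not in self.allowed_domains:
            log.warning("BLOCKED %s on %s: domain not authorized", action, domain)
            return False
        used = self.attempts.get(domain, 0)
        if used >= self.max_attempts:
            log.warning("STOPPED %s on %s: retry budget exhausted", action, domain)
            return False
        self.attempts[domain] = used + 1
        log.info("ALLOWED %s on %s (attempt %d)", action, domain, used + 1)
        return True


policy = Policy(allowed_domains={"example-broker.com"})
policy.authorize("example-broker.com", "download-report")  # allowed, logged
policy.authorize("unknown-site.com", "download-report")    # blocked, logged
```

The point of the log lines is the review step: every allowed and blocked action leaves a trail, which is what lets you distinguish authorized automation from unauthorized circumvention after the fact.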

But here’s the truth. The machine internet is coming, whether websites are ready for it or not. We’re in fact redesigning our website to make it more easily machine readable. The businesses that succeed will be the ones that create proper interfaces for agents instead of forcing agents to masquerade as humans, and operators who succeed will be the ones who understand that delegation to an agent is still delegation of work, not delegation of responsibility.
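What would a “proper interface for agents” look like in practice? One common pattern is for a site to publish a structured index of its documents so a bot never has to click through pages built for humans. Here is a minimal sketch; the schema, dates, and URLs are all invented for illustration, not any real standard.

```python
# A sketch of a machine-readable report index: structured JSON instead of
# pages designed for human clicking. Schema and URLs are hypothetical.
import json


def build_report_index(reports: list) -> str:
    """Serialize a list of report records as a JSON index, newest first."""
    index = {
        "version": 1,
        "updated": "2025-01-05",  # placeholder date
        "reports": sorted(reports, key=lambda r: r["published"], reverse=True),
    }
    return json.dumps(index, indent=2)


index_json = build_report_index([
    {"title": "Q4 Multifamily Outlook", "published": "2025-01-02",
     "url": "https://example.com/research/q4-multifamily.pdf"},
    {"title": "Office Market Update", "published": "2024-12-15",
     "url": "https://example.com/research/office-update.pdf"},
])
print(index_json)
```

An agent consuming this index needs no browser automation at all: it fetches one JSON document, compares it to last week’s copy, and downloads only what’s new.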

That experience left me with a measure of caution. I’m optimistic that tools like NanoClaw and OpenClaw and CoWork point to a future where more intelligent agents can genuinely accelerate research and allow us to spend more time on judgment and less time on repetitive process. The caution is that power without boundaries can quickly become a liability.

So yes, there are now two internets, one for humans and one for machines, and the machines would absolutely prefer to use theirs. At this point, so would I. Our job is to build a bridge between them ethically, deliberately, and with eyes wide open.

As you think about that, have an awesome rest of your day. Go make some great things happen. We’ll talk again tomorrow.

Stay connected and discover more about my work in real estate by visiting and following me on various platforms:

Real Estate Espresso Podcast:

Y Street Capital: