Europe’s AI Act and Real Estate

Welcome to the Real Estate Espresso Podcast, your morning shot of what’s new in the world of real estate investing. I’m your host, Victor Menasce.

We’re talking about the 2024 AI Act in the European Union. Now, this is a real estate podcast, but there’s no part of the modern world that’s insulated from advances in artificial intelligence, or from their negative consequences.

We’re going to focus on what it means for real estate investors who own property, especially in Europe, and in particular if you use any smart building technologies like security cameras, access controls, tenant screening, or building automation.

Now, I’ve long held the opinion that European legislation can serve as a canary in the coal mine for what might happen elsewhere in the world. So we’re looking at Europe’s AI Act in that context.

The big idea is simple. Europe is drawing a bright line, a regulatory line, between AI that is low consequence, meaning it’s benign in its impact, and AI that can materially affect people’s rights, their safety, and access to services.

As a property owner, you might not be building AI, but you could still be the one deploying it, maybe even without knowing it, and that carries some responsibilities with it.

The European AI Act entered into force in August of 2024. It phases in over time. The European Commission summarizes that prohibited AI practices and AI literacy obligations started applying in February of last year, with other obligations phasing in through August of this year and beyond. This is not a someday topic; it’s already here in parts, and its scope is going to keep broadening.

The Act breaks risk down into four main buckets:

1. Prohibited practices, meaning you simply cannot use them in the European Union, with very narrow exceptions in very limited contexts.
2. High-risk systems, meaning you can use them, but only with significant controls, with documentation, with oversight and transparency.
3. Systems with transparency obligations, where users have to be informed in specific ways.
4. Everything else, where the compliance burden is lower, but you still need to be responsible.

Now, real estate touches people’s lives directly. Housing access, physical security, workplace surveillance, and access to essential services are all areas where regulators have gotten serious.

Probably top of the list is facial recognition. Many security cameras and access control systems today ship with these features enabled. There are a few different use cases that people often lump together as facial recognition, but the regulatory impact depends heavily on which one you’re actually doing.

Now, case number one is simple video recording with no identification. This would be a standard closed-circuit camera that records footage for later review. It’s not an AI system per se; it’s just keeping digital recordings of the video stream. Your primary legal exposure here is not going to come from the AI Act.

Number two is face detection without unique identification. Some systems detect that a face is present; they might count faces or blur them. That can still raise questions, but it might not trigger the stricter biometric identification rules if you’re not trying to identify anyone.

And then number three is biometric identification or verification, meaning you’re matching someone’s face against a database, and that’s where things get real. In the AI Act, biometric-related use cases are explicitly singled out. Annex III of the Act lists the categories of high-risk AI, and biometrics is one of the areas on that list.

Separately, the Act prohibits certain biometric practices. For example, you can’t categorize people to infer sensitive attributes. It places strict limits on real-time remote biometric identification in publicly accessible spaces for law enforcement. As a property owner, you’re typically not law enforcement, so the law enforcement section is not your permission slip.

The relevant question is: Are you deploying biometric identification in a way that falls into that high-risk category, and are you handling that biometric data lawfully?

So here’s a practical takeaway. If you use facial recognition to grant access to a building, a gym, or a coworking space, you’re operating in a category that regulators view as sensitive. Even if the AI Act does not outright prohibit your setup, it can push you into a higher compliance category, and European privacy law already imposes immediate constraints because biometric data falls into a special category of personal data.

The AI Act uses the term β€œdeployer” for the party using the system in the real world. It lays out the obligations for deployers of high-risk AI systems, including using the system according to its instructions, ensuring human oversight, monitoring performance and keeping logs, as well as cooperating with authorities.

Translated to property ownership, that means if your building system is considered high risk, you need an operational discipline that feels more like running regulated infrastructure than installing a gadget. You’ve got to be thinking policies, not products. Who has access to the data? How long is it retained? How are alerts transmitted, and how are they reviewed? What happens when the system flags someone incorrectly? And how do you document the oversight?

Many owners might assume that the vendor who built the system handles the compliance, and that’s only partially true. The provider has obligations, but so does the party owning and operating the system. If you are the one deciding to turn on the facial recognition, deciding where the cameras are pointed, and deciding how to use the outputs, you are the responsible party in that chain.

If you own a property through a special purpose vehicle or you have a third-party property manager, you still need to know who the deployer is in practice and how the contracts allocate responsibility. Regulators are going to look for the accountable operator, not just the org chart.

So there are some practical implications for real estate investors.

Number one, you have to do a building AI inventory. List every system that uses analytics, identification, scoring, or automated decision making, including security, access control, leasing automation, tenant screening, maintenance triage, and staff monitoring.

Number two, you want to separate security from identification. If you can meet your security objective without identifying individuals, you reduce your risks dramatically.

Number three, treat biometrics as a board-level decision, not an IT feature. If you go there, you need a defensible necessity argument, you need strong governance, and you need alignment with privacy laws.

Number four, you want to update your vendor and property management contracts. You’ve got to require documentation, audit rights, very clear roles, data processing terms, and a plan for regulatory requests.

And then number five, you need to budget for compliance as part of your capital expenditure. A cheap camera system can become expensive if you have to retrofit governance after a complaint.

So the punchline is this: The European AI Act is not anti-technology. It’s anti-careless deployment in high-consequence settings. Real estate is high consequence because it governs access and safety. And if you’re using AI in your buildings, especially biometric identification, you need to operate with the mindset of a fiduciary and a systems operator, not just someone who bought a gadget.

In the meantime, as you think about that, have an awesome rest of your day. Go make some great things happen, and we’ll talk to you again tomorrow.

Stay connected and discover more about my work in real estate by visiting and following me on various platforms:

Real Estate Espresso Podcast:

Y Street Capital: