Constructed Futures

Dev Amratia: Pioneering Artificial Intelligence for Construction Risk at nPlan

Episode Summary

Construction is famously full of risk - whether it's timing, partners, materials, or any number of other factors. nPlan are pioneering the use of artificial intelligence to assess risk, and also to generate and assess project options that minimize risk and the uncertainty it creates. They do this through analysis of over 500,000 project schedules and outcomes to create best-in-class AI models. Dev Amratia, CEO of nPlan, talks through how that works, and also what it could mean for the industry going forward.

Episode Notes

Reach Dev Amratia at: https://www.linkedin.com/in/devamratia/

Learn more about nPlan: https://www.nplan.io/

Episode Transcription

Dev Amratia

[00:00:00] Hugh Seaton: Welcome to constructed futures. I'm Hugh Seaton. Today I'm here with Dev Amratia, co-founder and CEO of nPlan. Dev, welcome to the podcast. 

[00:00:11] Dev Amratia: Thanks, Hugh. Thanks for having me. 

[00:00:14] Hugh Seaton: So Dev I'd like to start with what nPlan does. Let's give everybody some context for the rest of the conversation.

[00:00:21] Dev Amratia: Sure. So nPlan is a company that forecasts the outcomes of construction projects. We do that by using machine learning to understand how previous construction projects performed, by reading their schedule files: what they plan to do, and then what actually happened. And then using that knowledge and context, we're able to generate hyper-realistic forecasts of what will probably happen on projects that are in the planning and execution phases. So we can say some really simple things... and really simple yet impactful things like, “Hey, how long is my project probably going to take,” we can provide the answer to that. What might go wrong with my project?

And what opportunities might I have to improve my project, the third clearly being the most important one, I hope. And all of that, Hugh, packages up into this offering that our clients see us as an assurance partner...they call on us to provide them with assurance.

[00:01:26] Hugh Seaton: That's interesting. And your clients are who? Are they general contractors? Are they owners? Are they insurers? Who generally sees the most value? 

[00:01:33] Dev Amratia: It's contractors and owner-operators across the world, or more precisely, in eight different countries around the world.

But the folks that receive the most amount of value are the folks that do the most complex projects. 

[00:01:44] Hugh Seaton: Well, that's an interesting one there because I would have thought that from a data standpoint, you would start with simpler things that you could get a lot of examples of. How do you guys think about projects?

Because, you know, it's almost a truism in construction, right? Every project is its own little snowflake, but that of course isn't true either. So how do you look at a project so that you can get enough data to train a model and make predictions like that?

[00:02:12] Dev Amratia: Yeah, great question. So I love this snowflake example and I'll really build on it.

I'll go so far as to say that two identical towers with the same design, built opposite each other, with construction started on the same day, will have different outcomes. Maybe tower one finishes a month before tower two, and the reason was maybe that the welding crew was just a bit better skilled on one tower than the other. So the gist of your question, if I were to paraphrase it and make it even harder for me to answer, is: given that the variability is enormous, how does a machine forecast? I guess I'll break this answer up into two parts.

One is the data piece, and then I'll explain how the data piece leads to forecasting. On the data piece: this is a very data-hungry problem to solve. That means we have to process hundreds of thousands of data points and schedule files in order for nPlan to even work, right? That's why it took nPlan three years from the time we started to the time we could first demonstrate the system working. To give you some real numbers and a sense of scale here: nPlan has now processed just north of half a million projects.

These half a million projects represent 1.2 trillion US dollars in capital that has been deployed around the world. This is the largest single collection of schedule data held anywhere on the planet. So it's a lot of projects, right? In simpler words, across all of that data, the function of the machine learning algorithm is to determine: are there patterns inside all of that that might be relevant to a future project?

And it'll do that in an unsupervised learning method, where a new project schedule comes in and says, hey, I'm planning on building this thing with this sequence. And our algorithms will go hunt inside all of that past schedule data to find which sequences, across 500,000 other projects, might be relevant and help us forecast this one.
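What Dev describes, matching a new schedule against history by its sequence rather than by a human label like "hospital" or "casino", can be sketched very loosely in Python. Everything here is a toy stand-in (the embedding, the distance measure, the example schedules are all invented for illustration); nPlan's actual models are not public.

```python
import numpy as np

def embed_schedule(durations):
    """Toy embedding: summary statistics of a schedule's activity durations.
    (A real system would learn representations from the schedule graph.)"""
    d = np.asarray(durations, dtype=float)
    return np.array([d.mean(), d.std(), d.sum(), len(d)])

def rank_similar(new_schedule, history):
    """Rank historical schedules by distance to the new one in embedding space."""
    target = embed_schedule(new_schedule)
    dists = [np.linalg.norm(embed_schedule(h) - target) for h in history]
    return np.argsort(dists)  # indices of history, most similar first

history = [
    [10, 12, 30, 5],   # e.g. hospital foundations sequence (activity durations)
    [11, 12, 28, 6],   # casino foundations: different building, similar sequence
    [100, 200, 300],   # refinery: very different sequence
]
ranked = rank_similar([10, 13, 29, 5], history)
```

The point of the sketch is that the hospital and casino end up close together even though no one labeled them as related, which is the "thousands of dimensions" similarity Dev describes next.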

But the trick is... well, it's not a trick, it's an artifact of the nature of the problem: everything that involves time and the future is probabilistic. What I mean by that, in simple terms: if I ask you, Hugh, how long it will take you to drive home from work (maybe you work from home these days), how long would it take you to drive home on Christmas Eve? What you might say is, well, I have experience with that, I've done it 500 times already, but on Christmas Eve I kind of feel like it would be between 40 and 55 minutes. Right? Which is longer than the normal 30 to 40 minutes.

Now, what you've just done is provide a probabilistic range, based on your experience of driving that journey on Christmas Eve. I hope you're not working on Christmas Eve, but if you are, you will execute that forecast, meaning you execute your plan, and you will then correct yourself, right? 

So you'll say, ah, actually I did the thing and it took me an hour and a half to get home. And you've created a learning loop inside your brain, right? So if I ask you the same question again next year, you'll say, oh, it's more like an hour and a half. 

But fundamentally what you're doing is that you are producing a forecast and you're forecasting using a range of outcomes.

So how long is it going to take to build this refinery? The answer is it could take between three years and five and a half years, with the most probable range in this section. So what we're effectively doing is never saying that, based on past data, precisely this will happen. We say: based on past data, there's a probability that this might happen.

And that in itself allows us to demonstrate uncertainty in these systems. What uncertainty means is when the algorithms don't know what's going on. Say you're trying to build a spaceport, and we probably haven't built that many spaceports in the world today. So yes, nPlan has 500,000 project schedules already, but we probably don't have too many spaceports in the system, right? In which case, the nPlan forecasting system will look at this new schedule coming in and say, hey, I'm going to produce a wider range of probabilistic outcomes here. It could take between one and eleven years to build that spaceport, because I'm uncertain about its future, because I don't have enough schedule data to back that up.

And that, in a way, is what we like to call algorithmic humility. Something you wish more people in the industry had, that's right. 
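The widening of the forecast range when comparable data is thin, the algorithmic humility Dev mentions, can be illustrated with a toy sketch. The percentile band, the inflation factor, and the example numbers are all invented for illustration, not nPlan's method.

```python
import numpy as np

def forecast_duration(similar_outcomes):
    """Probabilistic duration forecast from outcomes of comparable projects.
    With few comparables, the band is inflated to express uncertainty."""
    outcomes = np.asarray(similar_outcomes, dtype=float)
    lo, hi = np.percentile(outcomes, [10, 90])
    inflate = 1.0 + 2.0 / np.sqrt(len(outcomes))  # toy "humility" factor
    mid, half = outcomes.mean(), (hi - lo) / 2 * inflate
    return mid - half, mid + half

towers = [12, 13, 12, 14, 13, 12, 13, 14, 12, 13]  # months; many comparables
spaceports = [36, 84]                               # months; almost none
```

Running both through the same function gives a tight band for the towers and a very wide one for the spaceports: same algorithm, different confidence.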

[00:07:17] Hugh Seaton: I love that. And I'd like to ask a couple of things. How sensitive are you to which subset of that 500,000 a project gets compared against? So for example, are you coding what goes in with a classification of building type? Or is it saying, no, we're just going to look at the sequence, and it'll match what it matches to? 

[00:07:40] Dev Amratia: Very much the latter, and the reason it's important we go with the latter, which is an unsupervised system, is because the patterns that you and I might recognize, like, you and I will say, oh, that's a building and that's a building, so they must be the same or similar, are a very crude way of recognizing similarities. Because the similarities and differences can come in thousands of dimensions. And they'll often happen in the minutiae, right inside the activities. Say maybe the way the activity sequence in this hospital's foundations has some relationship to this casino and its foundations, and you and I will be like, what, how could that be?

Because it's sitting at such a granular level of detail, you and I just don't have the time and mental capacity. Oh, I'm assuming your mental capacity is similar to mine. I certainly can't look through all of that excruciating level of detail and figure that kind of thing out on my own.

[00:08:40] Hugh Seaton: And I want to call out a word you've been using, for folks that might not spend as much time in the AI world. Unsupervised learning is less commonly used because it needs big numbers, and there's a lot more to it than that. Most AI is what they call supervised learning, where there's a big, big bucket of labeled data.

And you're kind of telling the computer what the patterns are; you're telling it to "go find patterns like the ones I'm showing you." What you're describing is unsupervised learning. It's often actually what happens before supervised learning, but that's a longer podcast. What you're saying is that the complexity of these projects is such that there are going to be points that matter that aren't obvious based on how you might label them.

It could be that every time there's, you know, concrete before welding of stainless steel, that tends to be an area that causes delays. I'm making that up, but you wouldn't know that if you eyeballed it as a human. But being able to hold together some millions of data points, and graph them and play with them, in a way that only an AI system could, it's going to find patterns that, to your point earlier, just aren't accessible to normal people.

[00:09:51] Dev Amratia: Correct. Yep, that's it. The power of unleashing yourself from the dispositions your own brain has about what a pattern is, right? It's extremely powerful, not just in the construction industry. I would say in almost all walks of life there are certain tools available to mankind today that actually allow us to see the world in a different light.

And this is one of them. 

[00:10:15] Hugh Seaton: And when people talk about AI in construction, a lot of the time when people say AI, what they really mean is they're pulling in an API from Google that lets them do voice, or they're doing something that actually isn't really AI, but it looks whizzy, so they sell that.

But what's interesting is what you're describing here is actually the real promise of AI in construction, which is gathering truly Herculean amounts of data and then running algorithms through it to let it teach you, to let it discover patterns that are just too big or too subtle for the human mind to hold.

I mean, you can think about working memory or various other reasons why the mind can't handle that much at once; it doesn't matter. It's 500,000 of anything, even if it was only one data point per project, and it's clearly not. That's really exciting. 

[00:11:04] Dev Amratia: Yeah, it is. And I think the important thing there, Hugh, is fundamentally the way you can think of these systems, right?

We got a lot of questions from our clients in the early days, like, yeah, but everyone is doing AI. And I'm not here to say nPlan does better AI than the others. In the end, it doesn't matter. All that matters is that you solve a big problem, right? And how you get there.

I mean, cool, if you find an easy way to get there, good for you. Sadly, I did not find the easiest way to get there. But the nicest description we've heard of our systems is: imagine you're the world's most experienced planner and scheduler, right? You've seen 500,000 projects, and then you're being called onto the next project because you're such an incredible human being. You just have this aura of knowledge inside you. That's a pretty nice way of describing a person. Can we apply that nice, warming description to a machine? And that pretty much encapsulates what intelligence is, right?

It's the ability to provide new insight based on information that is somewhat complex for the human mind to infer. 

[00:12:21] Hugh Seaton: It's funny, when you were describing it earlier, that was the analogy I was thinking of: this is like the intuition that an experienced person would gather if they really had two things. One is breadth, but the other one is retaining a level of detail.

We tend to abstract as humans, to the point where you're going to miss some of these things. And that's where I think computers do what we can't: they don't need to abstract quite so much, so some of these more subtle patterns still show up. Yeah, very cool. So that actually leads me to my next question, which is, we've talked so far about projects in toto, the entire project.

How do you think about going deeper than that? And by that I mean, you know, month by month, or trade by trade? And I'm assuming this is the kind of thing that grows over time, but how do you look at a project below the whole-project level, into either time or into subsets?

[00:13:18] Dev Amratia: Yeah. So, in the typical workflow, we're usually engaged on a project pretty early in its life cycle. So they're in the planning stages: contractors, perhaps, bidding on work, trying to give themselves assurance that what they're planning or proposing to execute isn't completely ludicrous.

And perhaps signing their balance sheet away on the... so the system works at the total project level and analyzes all the activities that make up the project. Nothing is excluded: even if planting the flower bed is the last thing you're going to do on the project, and it's a garden, that doesn't matter, we'll still analyze it.

And tell you you've got too much duration, too much risk inside these components. For the project teams we work with, the first thing we produce is a probabilistic duration function, which basically says, here's the likelihood and here's the duration.

And it usually looks like a bell-shaped curve, though it doesn't always. And then underneath it, we'll say, and here are the activities you're going to execute that are contributing the most to this forecast. And that can be positive or negative: positive meaning, if you do these things you could accelerate the project; negative meaning, if you do these things you could decelerate, or in other words delay, the project. And then the project teams go through that, based on time sequences, and figure out, all right, it looks like there's a high chance we're going to get delayed on our permits. And this is actually where the AI systems we've built draw the line, right? We cannot tell the project what to do. So if a permit is going to get delayed, we say: customer, this is now where you've got to put your hats on and think, what am I going to do about that?

Right? Can I do something about that? Can I mitigate this risk? So think of it as helping everyone focus their minds on the most impactful things they can take decisions on. 
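The idea of surfacing the activities that contribute most to a forecast, permits before flower beds, can be illustrated with a small sketch. The slip figures and activity names here are entirely hypothetical, standing in for what a trained model would estimate.

```python
# Hypothetical mean historical slip, in days, per activity type.
HISTORICAL_SLIP = {
    "permits":        45.0,
    "foundations":    12.0,
    "steel_erection":  8.0,
    "flower_bed":      0.5,  # even the last flower bed gets analyzed
}

def top_risk_drivers(activities, n=2):
    """Rank a project's activities by expected contribution to delay."""
    scored = [(a, HISTORICAL_SLIP.get(a, 0.0)) for a in activities]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)[:n]

drivers = top_risk_drivers(["flower_bed", "permits", "foundations"])
```

The output is the short list the project team then acts on: the model flags where the risk concentrates, and the humans decide what to do about it.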

[00:15:26] Hugh Seaton: Almost assigning a better, more accurate value to choice A versus choice B. It's still a probability, but a human doing it was only ever applying probability anyway. They may not call it that, but you're thinking, well, this could happen, that's right, for some number of projects, you know, maybe 15. So that's really interesting. 

[00:15:49] Dev Amratia: The whole system then rolls back up again.

So if you've done this at the activity level, you then roll it back up. If you're a contractor trying to win work from a client, you'll roll it back up and say to the client, hey, look, we may not be the cheapest or the fastest contractor bidding in this competitive tender.

But we are the ones that can prove we can get it done, so they can offer certainty. And I love that model, because it creates a premium in the construction market which is differentiated. It says: I'm not cheap, I'm not fast, but I'm certain. And there are clients and owners in the world that will pay for certainty over who's cheaper and who's faster.

Not everyone, but there are a few, and my hope is that more of them emerge. And if you flip the equation around, if you're an owner-operator, you'll be looking at all these bids figuring out, hey, which of these contractors is telling me the truth? Which one is pulling my leg? What's actually going to happen to this asset that I'm about to invest a couple of billion dollars to go build?

[00:16:55] Hugh Seaton: That's really interesting. And implied in what you're saying is that uncertainty is expensive. And given that property developers very often have pretty strong financial acumen, at least some of them do, presumably they can say: just by knowing with 10% more accuracy when I'm going to have to disburse cash, I can deploy that cash better.

Or I can say the likelihood of my needing $50 million at the end that I didn't expect is lower. So what's the cost of capital, and what money does that save me? Really a cool way of looking at it. So one of the things you hear about in AI, as it has progressed, is people asking: okay, that's a black box, how can we make it not a black box? Some models, and some ways of building models, allow you to look at what's driving decisions. In the case of a process like construction, that might be really interesting, right?

Because you might be discovering, wow, there are lever points that we didn't know about. You know what I mean? There are combinations of activities or combinations of events, and maybe that's interrogating the data too much, but I'm curious if that's something you've been able to do or look at. 

[00:18:08] Dev Amratia: Yes. So many of our clients do this thing called the what-if analysis. Because it's a machine, it doesn't care how many times you ask it the same question, right? You could upload 20 schedules, or 20 versions of the same project in different schedules, and say, what if I build it this way? And what if I did it that way?

And what if I did it this other way? Tell me what's going to happen, right? So this is like a what-if sensitivity analysis. I should also say the limitation of these systems is that they can't explain to you why they're saying what they're saying. The answer is: because past data says so. So ultimately, where folks find the most value in these systems is learning, okay, if I do this, then past data is telling me this is the most probable outcome.

And then playing with that to figure out what's the best way I could do this.

That opacity of these algorithms, fun fact, is also a security feature. But in the workspace, when people are using these tools, they need to feel comfortable: hey, I need to understand how this thing is thinking, right? And the best way to do that is to give them a sandpit. They play inside the sandpit with as many scenarios as they want to run, and they get a feeling for what this algorithm likes and doesn't like when they put in different sequences of work. 
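The what-if sandpit Dev describes amounts to scoring many candidate schedules with one fixed forecasting function and comparing the results. A toy version, with invented slip figures standing in for a learned model:

```python
# Hypothetical expected slip, in days, per activity type.
SLIP = {"permits": 45, "in_situ_concrete": 20, "prefab": 5}

def toy_forecast(schedule):
    """Stand-in for a learned model: planned days plus expected slip."""
    return sum(days + SLIP.get(name, 0) for name, days in schedule)

def what_if(variants):
    """Score every candidate schedule with the same forecast, best first."""
    return sorted(variants.items(), key=lambda kv: toy_forecast(kv[1]))

variants = {
    "as_planned":    [("permits", 30), ("in_situ_concrete", 90)],
    "prefab_option": [("permits", 30), ("prefab", 70)],
}
ranking = what_if(variants)
```

Because the scoring function never changes, running twenty variants through it is cheap, and the ranking itself becomes the "feeling" for what the model likes and doesn't like.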

[00:19:43] Hugh Seaton: That's really great. And obviously, as people are thinking about how to execute, and different sequencing and so on, it's just another great tool. And I come back to what you said earlier in the conversation, where you likened this to the most experienced project engineer or project manager imaginable.

And if you were to say, hey, if we did this, then that, they may be able to give you an idea. And in this case it's much, much more detailed. Really cool. 

So, as we're coming toward the end here, I want to take us to where you see all this going. I mean, when we were doing our pregame, you shared a really big vision that goes beyond what we've just discussed.

Talk to me a little bit about where you see this going and how you see it impacting markets that are broader than just construction. 

[00:20:28] Dev Amratia: Yeah. So the vision of nPlan is to see a world that is no longer limited by its risk appetite. And what does that mean, right? Today, when we, as a broad collective of governments, private investors, individual investors, owners, contractors...

...when we, as the collective industry, look at new projects and new investments that are required, there's this deep shuddering effect that runs into the spines of the individuals thinking through: how much is it going to cost? How long is it going to take? Basically thinking through the economics of that investment.

You know, it literally is the reason we collectively don't build stuff, right? We're like, I don't know how bad it's going to be if I try and build another railway system through central London, or any metropolitan city for that matter. Whereas if you dissect that problem statement and think about what we do at nPlan today: we provide a systematic method, the algorithm, to quantify exactly how much risk there is, with no biases, just using historical data to say, this is where you're headed.

So using that model, one of the ways in which we see the world changing is this: because we can quantify risk, we should be able to price it. And if we can price it, we should be able to sell it. 

That simple three-line equation means that we at nPlan are launching an insurance product at the start of next year. This insurance product is called project overrun insurance, which is as simple as perhaps the title suggests, yet is completely governed by the fact that nPlan produces forecasts. The tail risk of that forecast gets bought in the insurance market, so that the investors in those projects get cover for disastrous outcomes. 
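The pricing logic behind overrun cover can be sketched as the expected cost of the forecast's tail beyond the planned finish. The sample values, the day cost, and the loading factor here are invented for illustration; real insurance pricing is far more involved than this.

```python
import numpy as np

def overrun_premium(duration_samples, planned, day_cost, loading=1.2):
    """Toy premium: expected cost of the forecast's tail beyond the
    planned finish, times an insurer's loading factor."""
    samples = np.asarray(duration_samples, dtype=float)
    overrun_days = np.clip(samples - planned, 0, None)  # tail only
    return overrun_days.mean() * day_cost * loading

# Hypothetical samples (days) drawn from a project duration forecast.
samples = [340, 360, 365, 370, 380, 420, 500]
premium = overrun_premium(samples, planned=365, day_cost=10_000)
```

The better the forecast quantifies the tail, the more fairly that cover can be priced, which is exactly the "quantify it, price it, sell it" chain above.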

Hugh, we could have a whole other podcast that goes into how that insurance product might work, the dynamics and so forth. But for now, let's just leave it at this exciting development in insurance.

And from there, what happens after that? If we're able to price risk and sell risk into an insurance market, that's the new product, then why don't we as nPlan become the investor, right? If we're so good at pricing something, how do you just put your money where... because ultimately we all know the validity of any project outcome, right?

So if someone says, I want to build this railway, or I want to build this casino, and it's going to cost this much and take this long, we can quantify that risk and actually put a price to it. And we have that as, you could call it, our advantage, an advantage I know almost every infrastructure investor in the world searches for today.

Can I price this a little bit better than the other guy that also has cash on the table as an investor? That's how you make money, right? That's the basis of banking, I would suppose. Then that becomes our opportunity as a business, right? We can price risk more fairly and more competitively than anyone else in the market, because we're the ones who can quantify risk systematically.

[00:23:53] Hugh Seaton: That's really exciting. 

[00:23:54] Dev Amratia: The way capital is seen in the markets can become much tighter. And that's ultimately what unlocks the wealth, right? So you're saying you don't need to worry that much about that metropolitan railway system going through the city, because we can tell you what's probably going to happen to it.

Right, and put the money behind it to back it up. 

[00:24:15] Hugh Seaton: So there's two elements to this. One of them is it's sort of the natural extension of actuarial science, right? And the next one, though, is it's just another level of capital allocation: you can tell people the impacts of different allocation decisions in a more nuanced and accurate way than they are able to now. I mean, I just made your simple statement into a complicated one... 

[00:24:39] Dev Amratia: No, no, you're right on. That's exactly it, really well said. 

[00:24:43] Hugh Seaton: That's awesome. Well, Dev, thank you so much for walking through, kind of soup to nuts, what you're doing now and how you're viewing the world.

I'm super excited. I'm going to take you up on that second podcast down the road. So thank you. 

[00:24:56] Dev Amratia: Not at all. Thank you, Hugh. Thanks for having me.