Deploying AI at scale in one of the world's biggest fintechs

Intimate Q&A with Manik Surtani

Redwood Sessions are where we host a world-class expert for an intimate Q&A with our members.

Last Monday, Redwood hosted Manik Surtani, Head of Open Source at Block (Square, Cash App, Afterpay), and co-founder of the Agentic AI Foundation. It was a private session with our founder community, but Manik was kind enough to let us publish some of his wisdom here.

Redwood: You've spent your career in open source. What's made you start speaking publicly about your concerns with AI?

Manik: I'm very much a techno-optimist. I know some of my writing may make me sound pessimistic about the future, and in certain areas I am, but broadly I'm a techno-optimist. I think AI is going to be very beneficial for society, for everybody, provided everyone gets access to it. And that's the little asterisk next to that.

There's a big risk that all of those benefits only accrue to a few companies, and that could be very dangerous. If you follow the thread of AI displacement all the way through, you end up where the value of human labour goes to zero. That can be a great thing - for example, one potential outcome is we're in a utopia where all of our needs are taken care of. But that only happens if the gains from AI are shared broadly. The risk is that broad sharing doesn't happen, and all that economic benefit gets concentrated in the hands of very few people. That's the path we're currently on, and that's dangerous.

The parallel I like to draw is with social media. Social media could have been a really valuable tool for everybody, but it didn't become that. Instead it concentrated into very few companies with no transparency, no portability, no real governance. No way to vote with your feet (try leaving Facebook and taking your network with you!). And if that happens with AI, I think we're all in big trouble.

Redwood: You have written about AI decoupling the formation of capital from the need for labour, and how this may hollow out the middle class if allowed to proceed unchecked. On the other hand, if I listen to someone like Marc Andreessen, he tends to be all rainbows and Skittles when it comes to creative disruption. Are you and Marc both right, but just in the sense that given a long enough time frame, technological disruption is always a net positive?

Manik: I don't think he's wrong necessarily. That is a possible outcome. It's just not the only possible outcome. Our current trajectory is not the rainbows and Skittles outcome, unfortunately, and we need to work to make sure it becomes that. We're not going to get that outcome for free. It's not going to automatically happen.

The concentration, lack of transparency, and lock-in I talk about around social media didn't come from a bunch of Machiavellian evil geniuses plotting that outcome. It happened because the standard corporate incentive of making money compounds in that direction. And I think the same thing will happen with AI. We have to intentionally fight that.

Redwood: What's the answer then, and how do open source and open protocols fit in?

Manik: The other example I like to talk about is the internet itself, which ended up becoming a wonderful level playing field. Open standards, open protocols. Anyone can build another email client or a new browser and participate. If you don't like the one you have, you can leave it. You can leave Gmail and take your history with you. There's also a lot of self-sovereignty there. If you don't like working with a particular country because you don't trust the corporations there, you can use services elsewhere. Or run your own. That is a powerful lever that keeps everyone honest, and that's what we need for AI.

The agentic layer is where I'm focusing right now. Though, taking a step back, the agentic layer alone is not actually enough. We need open standards and open protocols throughout the entire stack, all the way from the hardware level to models, to how models are trained and built, to how they're served, to how we apply them to real-world use cases. That last one is the agentic layer. It's a hard problem to solve and I don't think we're going to get all the way there in one go. The agentic layer is where the playing field is still pretty open right now, and everyone's jostling for position.

MCP is one example where people are pushing for an open standard, because right now there is no clear winner in agentic AI. If there is an open standard, everyone gets a chance to become that winner in future. And this is where we're seeing OpenAI jumping in, Anthropic, Google, Microsoft, AWS. That's exactly what I want. I want the people who would otherwise be out there creating monopolies to start pushing for open standards, interoperability and portability, and now they're all starting to do that. Once they start working with each other on that layer, we can start exploring the other layers as well, little by little.

That's why I co-founded the Agentic AI Foundation. We launched in December with 50 companies. It's now four months old and there are 170 companies, and it's only just getting started.

Redwood: How is Block actually deploying AI internally?

Manik: We deploy it literally everywhere. We started using AI a lot in customer support. A lot of Cash App's customer support is already through a chat interface in the app, so it was very easy to introduce AI to take on the most basic queries. A prime candidate for automating. The next biggest area after that was engineering, with Goose.

But all of that is still low-hanging fruit. That's the obvious stuff, making the existing machine a little bit faster. We're currently at the stage of taking a step back and rethinking how we run the business as an AI-native company.

Jack Dorsey has been writing about the business as an intelligence in itself. If the primary role of an org structure is to manage information flow, then the concept of an org chart was based on how many people a human can effectively manage. So you have all these hierarchies and layers, and the more layers you have, the slower the information flow. But now you're in a world where you don't necessarily need that structure. If all information flow can go through an AI, an intelligence layer notifying who needs to know about what and taking action, then you've squashed the org chart and information flow becomes much faster.

We're currently rolling this out, one step at a time. It was relatively easy for us because we're a remote-first company, so everything is already electronic and machine-readable. And it's incredible to see it working.

Redwood: What about the model economics at scale?

Manik: We have several thousand engineers at Block, and it's not just engineers that use AI. Even non-engineering teams use it a lot. They mostly use Claude Opus - the model resonating most with everyone at the moment. We spend a lot on it, but that's all right - it's still bounded.

Where it becomes a problem is with our consumer-facing apps that have AI embedded, with tens of millions of monthly actives and every incentive to grow that number. Those AI features cannot use Claude. Several thousand employees using Claude is okay, but 75 million users? The numbers just don't work. We use open source models in those cases, and it works incredibly well. We're within 5 to 10% of the performance of frontier models.

A lot of American companies are doing this. Major Wall Street banks are using Chinese open source models. They don't talk about it very much, but that's exactly what they're doing. It's really hard to be cost-effective at scale otherwise.

Redwood: Has the way you build teams changed?

Manik: The same roles still exist. How they work has changed. We're basically vibe coding at scale. It changes what an engineer does, how an engineer works. But it also changes how non-engineering teams work. Lots of people who've been blocked on engineering to build a tool don't need to wait for that anymore. Go do it yourself. And people are doing that. Non-engineers are picking up that work and just running with it, which again just compresses the time to ship anything. That's really powerful to see.

Redwood: How do you handle code quality when everything is moving faster?

Manik: Right now there's a first pass of an AI review before it gets passed on to a human reviewer. That catches a lot of the basic stuff, but I think that needs to be pushed further back, to the developer actually submitting the pull request. Even if AI wrote most of the code, if you're the one pushing the commit through, you're on the hook for it if something goes wrong. So it's in your best interest to make sure the quality is up to scratch.

We're currently experimenting with a few approaches to review early and often, even before issuing a pull request. An AI-based review runs on your local machine and informs you of anything you need to clean up before you push it out to a human reviewer. The most obvious review questions get handled before a human reviewer gets to look at it.

Redwood: How are engineers sitting with all these changes emotionally?

Manik: It's a real mixed bag, and genuinely unpredictable. People who I thought would be really digging their heels in are not. And vice versa: people who I thought would really lean in are instead saying you can't trust this stuff. The ones who lean in are the ones who are going to succeed, because if they don't lean in, someone else will, and they'll race ahead.

Redwood: What's your call to action for founders?

Manik: Choose open standards. Choose open frameworks where you have a choice, and don't go proprietary if you can. If your systems or your business can tolerate it, pick an open model. But also push on your service providers for open standards. Lean on whoever you work with for open protocols, open frameworks. That's really important in the long arc. Even if the open stuff isn't as good right now, by pushing on it, that's how we're going to get there in the end. And contribute! Join the foundation, the working groups, get involved shaping what open standards for agentic AI will look like. Now's the time.

Where ambitious founders thrive.

Apply for Redwood today and we’ll schedule a meeting to learn more about your company.


Redwood is a private network for Australia's top founders.

© Redwood 2026
