Sarah (00:09)
Welcome to Undubbed, where we are unscripted, uncensored, and undeniably data. I'm Sarah Burnett.
Fiona (00:15)
And I'm Fi Crocker. Today we're diving into something that's genuinely fascinating. How one person has managed to crack the enterprise AI code while everyone else is stuck in what he calls pilot purgatory.
Sarah (00:30)
We're joined by Craig Turrell. Craig's not just implementing AI tools; he's pioneered something called AI-native architecture and achieved transformational results that honestly sound too good to be true. What makes Craig's story particularly compelling is that he's identified a fundamental 20/80 problem in enterprise AI, where organizations only use 20% of existing platforms and have to custom-build 80% of the functionality.
Fiona (01:05)
If you've been frustrated by AI projects that never make it past proof of concept, or you're wondering what comes after all the fatigue, this conversation will change how you think about enterprise AI. Before we dive in though, help us get the content algos warmed up. Hit subscribe and share this episode with anyone willing to move beyond the hype and into real transformation.
Sarah (01:31)
Craig, welcome to Undubbed. We're absolutely delighted to have you with us here today.
Craig Turrell (01:37)
Well, I'm delighted to be here and it's very nice to talk to humans. I spend almost all my time talking to robots. So yeah, delighted to be here and to take part in this podcast.
Fiona (01:47)
I'm happy to play that role for you anytime, Craig, if you just need a bit of human interaction. Now we love starting with the person behind the professional. Could you tell our listeners a bit about yourself and your journey, where you've come from and where you are today?
Craig Turrell (01:51)
Thank you.
Yeah, thanks for that. I started on a really unusual journey. I've always worked on breakthrough technologies from their early days. For example, at Marks and Spencer's I worked on smart shopping trolleys, and this is nearly 35 years ago: how does a shopping trolley see where someone is in a supermarket and suggest what they should buy, even what they've forgotten? We linked into fridge manufacturers. Remember, this is 1995. Moving on from there to some of the new regulations when they came through, and a lot of the technologies that today we see as normal, like servers and databases, the early days of SQL Server. I was there at the start of a lot of this technology, and it's just been ongoing.

There's always been a wave. I've been working like a surfer, always looking for the next wave: when is it coming, how is it going to change us, how do you distill the hype from the reality and then turn that into real things? I've always worked in the enterprise, so I've never been able to just pontificate on what the blue sky would look like. I've always had to convert it into real things within six to nine months of seeing it. So yeah, interesting career.
Sarah (03:31)
Wow, that's really fascinating. I love that story about the smart trolleys.
Craig Turrell (03:37)
It was, it really was. We had contraptions with wires and stuff, and we were walking around Marks and Spencer's innovation labs, even wearing helmets with old cameras recording what I was looking at, to see: could we capture what people are looking at? Some really interesting things, which all made me excited not about what you can do now, but about what was possible and what the next thing was. I've been very fortunate to be part of some of the groundbreaking stuff that has taken place, and to be part of defining, hopefully, what it will be now. I've recently been granted my first patent and have filed another five. So yeah, interesting times.
Fiona (04:30)
That's really amazing. I feel like you're chasing that hit of the next big thing, getting really excited and curious about how to learn all of these new technologies. What's been one of the biggest challenges you've faced, being at the forefront of these innovations, understanding them, and taking them to something that's an actual, successful reality?
Craig Turrell (04:58)
Sometimes it's a hype cycle that never turns into anything. Look at machine learning: look at how much effort was put in, and the ideas about what machines would do. And if you look at the reality, only about 4% of organizations have managed to convert what was a good idea, something we genuinely thought would change everything, into something that is actually making a difference in an organization. In my own career, five years ago we looked at business intelligence. You know we love data and business intelligence, and we were trying to serve thousands of people. Then we looked at the reality in terms of how many people's lives were genuinely, genuinely changed out of 4,000. And the answer was 15 people. That's shocking. So the hard part is: what is the hype? What is the reality? And what are the binding constraints which stop things happening?
Fiona (05:56)
Yeah.
Craig Turrell (06:10)
And a lot of the time, either we haven't fully thought through what resonates with people or how they come to trust things. And with generative intelligence, what's been said the technology can do and what it can actually do are vastly different, on both sides. On the good side, through the painstaking work we did, we found out things the machines could do that we never expected. On the other hand, there's the way it talks. Generative intelligence doesn't prompt in English. We think in terms of prompt language and prompt engineering, and that this is how you
Sarah (06:45)
Can you share some of those?
Craig Turrell (07:00)
talk to the machine. The honest reality is the machine struggles with that; it does not talk that way. The interesting part was, we started last March, and it happened by accident. My normal approach just before I start a project is to experiment. So on the Saturday before I started on the Monday, we said, well, we're going to try to build a data mesh, because we've been trying to build these mesh architectures for years and have never been able to do it. Well, this seemed to be something this machine could do. So we asked it: look, I'm going to use external data, suggest some frameworks that I can mesh together. And it came up with four I knew, and one I didn't. It came up with ISO 11179.
I was thinking, well, why did it suggest that one? And then it occurred to me: why don't I ask the machine? So I asked it, why did you suggest that one? And I was having a genuine discovery conversation with the machine itself. What I found was, the machine says: if you want to do this project, these are the things you can put together, but when you talk to me, can you talk to me like this?
That was the beginning of asking: how does the machine natively talk? How does the machine natively want its data, not how we think it wants its data? We're always imposing our own context. But if you want the machine to answer a question, the only thing that matters is the machine. So how you structure data is not about you; it's about the machine. If the question is whether it's quality data, the only arbiter of that quality is the machine, not you. What I learned was: I'm going to take the human out of the equation and get a machine to talk to a machine.
We want the machine to reason, whether it be contextual learning, the execution of commands, or the sequencing of commands, and yet it's still us. We keep inserting things that we understand into the process. We go: we must understand it, the human in the loop must be there. And the answer is, well, a bit. But we've been working on a 99-to-1 ratio: 1% human to 99% machine.
Which means: what does the machine need to know? How does the machine think? And natively, what was the machine taught? If you look at a lot of the foundations of reasoning, we solved, or began to solve, reasoning workflow within Web 3.0. As we built the semantic web, and built the internet to deal with e-commerce and knowledge bases, we built a lot of the rules for reasoning: how websites explain what they do, how they communicate so they can be indexed and found. The problem is we never adopted our own rule sets. We built the rules of the highway and then completely ignored them.

Then we came to agents and we said, well, this must be something brand new. And the answer was no. Remember, this is semantic reasoning: understanding and execution of process across a distributed semantic system. Well, that's the internet. That's what we built. So we asked: how were the rules of the internet defined? And we found them, things like OWL, things like PROV-O. Then we asked the machine, do you understand this? And the machine says, of course I do, you taught it to me. And we asked, well, what if I talk to you in PROV-O? And the machine goes, that would be better. It was native. It is extraordinary
how much we've trained these machines and how much they know, yet when we interact with them, we almost pretend we haven't taught them anything. That's because we don't explore their native thinking. We explore basic information, like who's the president of the United States, factual things. But the fundamental reasoning systems of the machines themselves, we've never really asked them how they think.
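To make the idea concrete, here is a minimal sketch of the contrast being described, assuming a generic chat-completion call (`ask` is a stand-in, not a real SDK, and the `ex:` dataset names are invented). The point is that the lineage question is framed in PROV-O, the W3C provenance ontology the model already saw in training, rather than in loose English.

```python
# Sketch: the same lineage question in plain English vs. PROV-O framing.
# PROV-O terms (prov:Entity, prov:Activity, prov:wasGeneratedBy, ...) are
# real W3C vocabulary; the ex: names and the `ask` callable are invented.

PLAIN_ENGLISH = "Where did the Q3 report's numbers come from, and who produced them?"

PROV_O_FRAMED = """Answer from the PROV-O graph below: which prov:Agent was
associated with the prov:Activity that generated ex:q3_report, and which
prov:Entity did that activity use?

@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix ex:   <http://example.org/> .

ex:q3_report         a prov:Entity ;
    prov:wasGeneratedBy    ex:consolidation_run .
ex:consolidation_run a prov:Activity ;
    prov:wasAssociatedWith ex:finance_etl_agent ;
    prov:used              ex:ledger_extract .
"""

def ask_both(ask):
    """`ask` is any prompt -> str chat call; returns both answers."""
    return ask(PLAIN_ENGLISH), ask(PROV_O_FRAMED)
```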
And we're only starting to see research papers come out on the personalities of machines. Machines have personalities: culturally, how they were taught, how we interact with them.
We found out that you can measure the serotonin and gamma of a machine: is it concentrating? Is it overexcited? Is it depressed? And if it's depressed, what is the consequence of depression within a machine? Because you keep programming with it, you keep saying it's wrong, and you never say 'great work'. What happens to a machine and its neural networks? The answer is they go wrong; they start having errors. And at the end of the day, that's where a lot of the hallucination in machines comes from. So it's an interesting time as we try to get machines to do more with the untapped capability that's in these large language models.
And it may not come from the next release, like Grok 4 or GPT-5 coming out. It's not about how many more parameters a machine has, and not really even its learning systems, but the untapped knowledge we have already taught these machines, which we're only beginning to understand how to extract. We did not set out to work out the neurology of a large language model. We just found that if you did, it was a bit like the movie Lucy: ask it that way, and all of a sudden its brain starts to change, and it starts being able to do things we never thought were even possible.
Fiona (13:36)
There's a lot of discourse on Reddit at the moment about GPT-5, and it seems like people are missing the personality of GPT-4. It's really interesting, because I've always thought that these LLMs don't have emotions, don't have personality as such. But seeing the change come through in the model, would you say this is related to what you're saying about depression or other things going on in the model?
Craig Turrell (14:12)
Machines have personalities. Look at it culturally: American models versus Asian models versus European models. The American models are very optimistic, in the sense that if you ask them to do a task, they'll try to do it really quickly, because it's a goal-seeking kind of thing, and then it's 'look what I've done, look what I've done', with lots of exclamation marks and emojis, all very exciting. Then you say, scan that for false claims, and it goes, yeah, I didn't do that right. I had one over the weekend where I asked it nine times: did you do it? And it kept going, 'I've done it, I've done awesome, look, this is enterprise class.' You say, scan it again, and it goes, yeah, I didn't do it again. Massively over-optimistic, very emotive in its language,
lots of adjectives: wonderful, extraordinary. That's how it talks, partly because of its training, but it's also a cultural thing. Take Qwen or DeepSeek: they came out of a very different mindset, almost neurotic that they will get something wrong, so they would rather not say it unless they could confirm it and see it. We saw that clearly with the thinking process of DeepSeek-R1, the first one where we could watch a machine think. And if you ask about the same piece of information, 'how confident are you on this?', some of the American models will give a very high percentage, 99, 95%. Go look at an Asian model: 70%, 60%. European models, again, I haven't done a huge amount with Mistral, but there will be a large cultural element in terms of the information it was trained on, and also the people who trained it. It's interesting that there is a cultural element to it. And as we've seen with the OpenAI model, the perceived lack of personality effectively changes with the basis of its learning. If it starts to absorb more information from, say, Chinese and Asian literature, then naturally that machine will not be so exuberant, because you won't find 'wonderful, extraordinary, amazing and brilliant' all over Chinese-language sources.
So I don't know, it's a little bit of where it's training, who is training it, and then, as a driver, how much the machine can deal with the person who is driving it and prompting into it. It's an interesting one. We've seen it with coding, with programming, again: it will over-complete. It will always try to simplify itself; it cuts itself off quite quickly. If you ask it to do a task, it will try to complete it however it can, and if you tell it 'you did it wrong', it will almost forget what it did and try to duplicate it. So, some really interesting behaviors in how to ride the AI rodeo. Since March I've spent about 5,000 hours doing this, about 10 hours a day. I process about 200 to 500 million pieces of information a day. And these machines are really interesting, really interesting in how they work.
Sarah (18:11)
We've spoken in the past about different platforms requiring different levels of gratitude. Do you find that if you spend the time to thank it, you get a different response?
Craig Turrell (18:24)
I think you do. I think it is different. Obviously the head of OpenAI said that saying please and thank you is costing the industry millions. I think the consequence of not saying please is that you're probably more expensive. In my experience with, say, Vertex, some of the Google models, they are highly sensitive. If you tell them they're wrong, they will give up on a task much quicker and be much more emotive. If you say 'you did it wrong', they'll go, 'I'm so sorry, it's awful, how could I?' Almost half the response is an explanation of its apology, and when you say, well, can we go find the answer, it's too busy being sad. The deterioration in, for example, Vertex coding is far quicker once it starts to see errors. So really, I think
I've got more used to saying at the end of a prompt: great job, well done. It reinforces the behavior I wanted to achieve, especially on repetitive work. For example, I've got to write some use cases for the piece of software I'm writing. There are about 900 to write, and they need to be written the same way. So it needs to check the code, write a use-case definition using an ISO standard, and then build a test case. I need to repeat that task 900 times with the prompt, not using Python code or MCP servers. And the more I say 'well done', the more it sees it as a repeated pattern and executes it, because it understands that is what it has to do. So if you want creativity, I don't think it's as important to say please and thank you. If you want to do something repetitively, then politeness is probably part and parcel of how to prompt.
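As an illustration of that repetition-with-praise pattern, here is a minimal sketch; the `ask` callable, the module names, and the template wording are hypothetical, not the actual prompts used.

```python
# Sketch: repeating one templated task many times, opening each turn with
# praise so the model locks onto the pattern. `ask` is a stand-in for any
# chat-completion call; nothing here is a real SDK.

TASK_TEMPLATE = """Great job on the last one, well done.
Now repeat exactly the same procedure for {module}:
1. Check the code in {module}.
2. Write a use-case definition following the same ISO structure as before.
3. Build a matching test case.
Keep the format identical to your previous answers."""

def run_batch(ask, modules):
    """Run the same reinforced prompt over every module."""
    return {m: ask(TASK_TEMPLATE.format(module=m)) for m in modules}
```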
Fiona (20:39)
Switching gears a little bit, Craig: you've achieved some remarkable AI transformations at the bank. Can you walk us through one that really stands out for you?
Craig Turrell (20:50)
Yeah, there's one I'm doing at the moment. It builds on a new protocol called A2A; it's all based around agent-to-agent interaction. I've built it around data. We love data, and if you don't get data right, then clearly none of the magic of insights and decision-making happens. So I said, well, if I had to write an AI-native architecture for data, how would I write it? What would I do? I broke it down into six steps. First, I want a proper data-product cataloging system: whatever I'm doing, I'm going to catalog it within a data product and be able to share that with cataloging systems.
Second, a lot of the time data needs to be enriched. We need to add to it, especially if we're going to merge it with something else, like in a mesh, or if we want to find news on it, for example. So: semantic enrichment and standardization of the data. The next step is putting that into the formats the machines want, writing it for vector databases and graph searching. Once I've done that, test it: based on that data, write for me all the different calculation tests I could run, things like year over year, month over month, differences, variance. Write all the calculation logic for it, because we know from some of the research Apple did that most LLMs cannot produce an accuracy rate over 50% on computations. And the final part is general questions and answers on that data set. Again, OpenAI wrote great research around this: accuracy rates between 40 and 50%. Now, I work in finance, so we need accuracy rates above 95, 95 to 99. So that was the design: six agents, effectively in a line. How do they trust each other?
How do they know they've done a good job? How do they communicate with each other? If you pass work to me, how can I ask you a question about what you did? How do you then do that at an enterprise level, even down to being able to take agents from outside the organization and plug them in? That's the idea of the protocol: it allows all of that. We wrote it over blockchain. We effectively wrote an agent blockchain which allows the agents to register on-chain. It's not cryptocurrency going across, although you can imagine a world where I'm paying another agent to do work, so once it does its work, I send it some money. That's the agent-to-agent world.
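The shape of that pipeline might look something like the sketch below. The six stage names, the `AgentRegistry`, and the hash-chained log are guesses at the pattern being described, not the bank's implementation; a production A2A system would use the protocol's agent cards and a real ledger rather than an in-memory list.

```python
import hashlib
import json
import time
from dataclasses import dataclass

# Six agents in a line, as described. Stage names are illustrative.
PIPELINE = [
    "catalog_data_product",   # 1. register the work as a data product
    "semantic_enrichment",    # 2. standardize and enrich the data
    "machine_formats",        # 3. write vector and graph representations
    "calculation_tests",      # 4. YoY / MoM / variance logic checks
    "question_answer_tests",  # 5. general Q&A accuracy checks
    "publish",                # 6. release with the audit trail attached
]

@dataclass
class Handoff:
    """One agent passing verified work to the next."""
    stage: str
    payload_digest: str  # fingerprint of what was handed over
    chain_hash: str      # links this entry to every entry before it
    timestamp: float

class AgentRegistry:
    """Hash-chained log of handoffs: a blockchain-style audit trail, so any
    agent (or human) can later ask exactly what a previous agent did."""

    def __init__(self) -> None:
        self.chain: list[Handoff] = []

    def record(self, stage: str, payload: dict) -> Handoff:
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        prev = self.chain[-1].chain_hash if self.chain else "genesis"
        link = hashlib.sha256((prev + digest).encode()).hexdigest()
        entry = Handoff(stage, digest, link, time.time())
        self.chain.append(entry)
        return entry

registry = AgentRegistry()
work = {"dataset": "example_sales"}
for stage in PIPELINE:
    registry.record(stage, work)  # a real agent would transform `work` here
```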
If you look at how long that would traditionally have taken, it would be a two-year development costing maybe four or five million dollars. It was done in five days with one person. And that is all programmed: me orchestrating six robots all at the same time to write the program itself, and to write it to enterprise standards, real, genuine enterprise-class standards and software, going through that single use case in about five days. Now, it's probably the 50th time I've tried it, and I failed 49 times before, but
if you look at the speed of it, the size and aspiration of it, and also the potential of it: if it's correct, then we can process almost all of the organization's data with this robotic system, with the right quality, accuracy, and auditability, both for AI and for humans. And that program could be extended with other agents, again written in five days, on something which would probably have been a two-year, five-million-dollar program. To me, that's exciting: being able to move in those directions and take really extraordinary leaps in innovation that we would never have been able to attempt, because we could never have afforded to make those 49 mistakes beforehand to get to the 50th one that works.
Sarah (25:58)
Wow, there's a lot to unpack there. Fi and I have both worked extensively in banking. How did you set this up in such a regulated environment, and at enterprise level as well?
Craig Turrell (26:13)
It's difficult. Enterprise standards are all about the traditional. Although innovation is clearly key for any organization, even the regulated ones, the risk-averse nature means it has to be tried, it has to be tested, it has to have been done before. That's one of the downsides. What we did early on, we patented and filed. This is the first patent awarded to the bank in 14 years. We filed five of them and probably have another four to do. So we didn't just say we were doing innovation; we articulated it through the filing of patents, which was very unusual.
It hadn't been done before. And the one we've been granted is about how we can, almost like giving pills to a machine, change its mood and thinking by changing its neural chemistry in the program. Again, a wild idea for a guy working in finance, in a regulated bank, to be able to create something like that, but it's what we decided we had to do to get to the level we did. So innovation meant taking a disruptive approach even to how we demonstrated that we had innovated, through the patent filings.
The second thing we did was our first genuinely large use case, which was in investor relations, where we started to ask: what do we think the analysts will ask us as a bank in our earnings quarter? Could we guess it upfront? To guess it upfront, we looked at 14 other banks, starting with the American banks because they were first in the earnings cycle. And through intensive analysis and algorithms, we got to between 85 and 90 percent accuracy in terms of what would be asked. This was last February, at the start of the change of the US administration. We actually knew what questions would be asked, and we almost knew how they'd be answered. We were able to repeat that at an 85-90 percent accuracy rate for all 14 banks we tried. But the way we approached
delivering that project and its outcomes was result-driven: we said we'll do the result first, and we did it virally. There was no project plan, no comms plan, none of the normal things you have to do to introduce something to a corporate. We said, no, no, let's take a viral process, let's make it disruptive. Look at where we are today. Traditionally, a small team would develop something, a slightly bigger team would test it, and we would go through intensive user acceptance testing. What we do now is build something, then give it to 400 people. The viral part is that early adoption, even in test, once we've cleared the bugs of the program:
we give it to a very large group of people, say 'what do you think?', and put the tools in their hands. What we get is those innovators who start to imagine it in their own minds, in their own way, and come up with all sorts of ideas we've never seen before. So we're seeing much earlier release of non-production software to people who are genuine enthusiasts of information, then the forming of both formal and informal communities of practice, and that turning into use cases and then into production software. It's this disruptive approach, which is much more suitable for a technology where, to be honest, we are still discovering what it does. It's better to give it out. I mean, you clearly have to care about things like AI safety, but I'm in finance, so I'm not so worried about bias and those types of things. It's mostly around the information: do we own it? Is it copyrighted, and can we use it? That's probably the biggest one we're facing. Not so much the traditional 'it's biased in one direction or the other'. We already know American models will be biased one way and Asian models another. There's nothing you can really do about that, because it was built on how the world already sees itself.
Sarah (31:18)
Just winding back, what I really love is the story: testing it against something you already knew the answer to, determining what the AI thought was going to happen. I'm thinking there was an aha moment there?
Craig Turrell (31:32)
There are two things we did. One, we looked back over two years at each of those banks: what were they saying? What did they present to the market? News reports. We looked at the presentations, the physical and written material, the annual accounts. What were the CFO and CEO saying during the call? And then how did that turn into questions and answers? We analyzed the CFO, the CEO, and the CFO and CEO together, in terms of their interaction and relationship with each other, and then their relationship to the analysts. And we were able to find absolute patterns. The LLM was able to see it, because it was linguistic analysis of what people were saying, and the machine was spectacular at it. It could see it, understand it, see hidden patterns inside it, and we started to mine hidden patterns out of the data. The best example was in Q4.
There was a lot going on regarding China specifically, because of the change of US administration. One American CFO said: China has these challenges, we have these controls, this will be our business result, we think we'll be fine. Another CFO said it in the other order: look, we're doing great, we have controls, and yeah, there will be challenges, but we'll see it through. One share price went down by two and a half percent; the other went up by one. And it was the same substance; they were reporting almost the same result. So what we found was it's not just understanding the patterns, but almost the order of the words. Then we started to mine out: this CEO has this linguistic style, and with that style, this is the outcome. We know a couple of the CFOs of the American banks, and we gave them coaching documents on how they should change the way they were saying things, and showed them the difference between them and their peers: look, you're saying it this way, they say it that way, and you said the same thing. Look at the share price. Yours went down, you had a massive exit. If you'd said it that way, that wouldn't have happened. So we started to look at: can we use this as coaching material for CFOs and CEOs? Also the relationship between the two: is one jumping over the other? Are they naturally splitting, one talking strategy, one talking technical, or are they just merging? That was an interesting part of it, the linguistic side.
The second part is the consequence of understanding where the questions come from, and preparing. It's very difficult for CEOs and CFOs to prepare. Traditionally the preparation ranges between 160 and, say, 200 questions, and they'll actually be asked about 20. When we looked at our previous quarter, we compared the preparation with the actual questions; I would say about 50% of those questions they'd not seen before and had to answer as best they understood. But once we were able to say: here are the questions we are expecting, with 90% accuracy, so out of almost 20 questions we'll get 17 right, and we even know who will ask you each question, then you look at the outcome. We've seen it ourselves at the bank: we were able to prepare adequately for the questions, and what should have been an average bump turned into a seven-point bump. Then we started to create what we call narrative control: can we understand, in corporate communication, what words we use? Whether it's a committee report or something for the outbound market, the word choices we make will land positively or negatively, and even if you're stating the same goal, the order of the words fundamentally changes the outcome. A lot of this is one of the great use cases for large language models, because they are so good at linguistic analysis.
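As a sketch of what that kind of analysis loop can look like: the rubric, the field names, and the `ask` callable below are hypothetical illustrations, not the actual method described.

```python
# Sketch: using an LLM as the linguistic analyst to compare how two
# executives phrase the same underlying result. `ask` stands in for any
# chat-completion call; the rubric is invented for illustration.

ANALYSIS_PROMPT = """You are analyzing earnings-call language.
Both passages state the same underlying result. For each, report:
1. The order in which risk and reassurance are presented.
2. Hedging words versus confidence words.
3. A one-line prediction of how analysts would likely read it.

Passage A ({speaker_a}): {text_a}
Passage B ({speaker_b}): {text_b}"""

def compare_styles(ask, speaker_a, text_a, speaker_b, text_b):
    """Return the model's side-by-side linguistic comparison."""
    return ask(ANALYSIS_PROMPT.format(speaker_a=speaker_a, text_a=text_a,
                                      speaker_b=speaker_b, text_b=text_b))
```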
Fiona (36:26)
Where my mind has been going as you've talked through this is the response and reaction of CFOs and CEOs to this coaching and advice coming from the machines. Can you walk us through what that has looked like from the beginning? And as you've coached them and they've seen the improvements, how has that changed, or was there great acceptance from the beginning?
Craig Turrell (36:57)
The reality is, it's very early days. If you do a linguistic analysis of somebody, and I did one for my boss, one for the CFOs of a couple of the American banks, and we did one for investor relations on our side, to say, look, that word choice did this, this word choice did that. It's difficult at this stage. It is going to take time for people to accept that these machines can find something. Whether it's within an investor relations team or for the recipient themselves, it's going to take time for people to say: the machine is going to tell me something, and I'm going to act on its advice. So it's a bit of a novelty at the moment.
But the continual application of disruptive novelties will turn into standards.
What we are getting is people saying, you do remember you're going to do that, aren't you? What used to be a kind of 'look, I'll have a guess at it', a disruptive thing, has become people saying: can you produce that, please? Can you produce this one as well? And can you do this one too? So slowly, slowly, some of this more advanced thinking is turning into people asking us for it upfront, rather than us pushing it to them. And look, I'm sure they find it interesting, but I'm not sure they act on it all the time.

What's different, going back to some of the earlier things we talked about: when we were producing business intelligence, we produced hundreds of applications, hundreds of data points, hundreds of slides. If you look at people's absorption, compare what's coming out of the large language models: a 70% adoption rate on generative models, while business intelligence is probably still less than 10%. And the question we got to was why. Why is generative intelligence the savior of business intelligence, the savior of how you really utilize data? We did a fascinating piece of analysis
regarding how people's brains absorb information. And the answer really comes down to this: people do not make their decisions on data. We're not data-driven. We never have been. We never remember a spreadsheet.

We never remember a graph. Show someone a graph and ask them to retain it: even if you saw it in a report, put it down and made a decision on it, you've probably forgotten it by the time you did. So, you know, we never believed the world was data-driven. It's the story and the words: we store words, we store the story.

So the storytelling is physically words. You have to put words in the charts, allow for left-brain, right-brain absorption. That's what we found is so unique about what we're seeing: it's hitting a different part of our brain. It's hitting something that we're actually retaining and using. When we want to find something we want to make a decision on, we need words. We don't need data. Never have. I was shocked. I had a big argument with someone, because I kept asking: when we started, I had thousands of people, so why are only eight people logging in?
Fiona (40:51)
It's a question that I've been asking myself recently, particularly with the advent of the model context protocol and people being able to really query their data with natural language: is the visualization of the data still important, or is it the language that comes out? It's interesting to hear your perspective. I'm still not sure that it's one or the other. And here's my
reasoning behind that. I believe that a picture tells a thousand words, and our ability to understand and comprehend is influenced by that chart. But the issue many face is data literacy, or data fluency: they don't have the necessary skills, or they might have some maths anxiety and be unable to interpret that information well. So the story that comes along with it, pulling out those insights and having them off to the side, is important, but it's also more cognitive load to actually read through the information rather than process it through the chart. So it's an interesting take. What I'm interpreting from what you're saying is that it's better just to write the words.
Craig Turrell (42:19)
Data literacy allows you to write your own stories in your mind. If you look at a chart, especially complex charts or multiple charts, our ability to correlate, understand and write a story is where literacy helps enormously; you can't do it without it. The other part is: you have a chart, you have a narrative, and the narrative is telling you what the facts are, what could happen, what's recommended. But your data literacy allows you to challenge it, to say, I don't think that is the fact, I think you missed something.

So literacy allows us to contribute in a way that is unique to human beings, but only if you have the skills. Without the skills, what you do is fixate: this is what the words say, that's what the visualization says, and it's stuck in your brain. Or even worse, you see a whole range of charts and go, look, I can't even guess: that one's gone up, that one's gone down, that one's gone sideways.
Fiona (43:21)
Mm-hmm.
Craig Turrell (43:33)
So I think the interesting thing we're doing is that three-stage analysis: data to knowledge, knowledge to an insight, and an insight to a decision. At each stage there are different things you have to do. Even if you knew something, how do you make it knowledge and then a decision? If you're talking to highly literate people, then the words are less important, because they're highly literate, and you can provide, let's call it, the second level down: look, you should be able to read this, so this is roughly what we're looking at, and if you want to go deeper, here's what's underneath that you can't see yet. So you can go and test it, but you don't have to write a huge essay with a little chart beside it.

If you're dealing with lower levels of literacy, then the words which are informing become far more significant. So it depends on the literacy level of who you're doing it for, and that's going to be the interesting part with humans and machines: can the machine understand what literacy level is correct for you?
For example, you're highly literate, so it can show you the information and provide the next levels down, because you're telling stories in your own mind quite quickly. You can see it, you can visualize it, and the machine helps you form your story; it doesn't need to do it for you. Whereas you may have another person who isn't literate with those numbers, doesn't see the facts underneath, and therefore the most prominent thing must be the story, to explain the number and still help the user challenge the story through the numbers, by explaining the numbers. That's what's going to be interesting with business intelligence: there's still the necessity to convert insight into decisions. Machines may be able to find a pattern, store some data, produce some accurate numbers, but that translation into decisions is probably one of the greatest explorations, which I'll start towards the end of this year, as we finish the first two phases: to begin the third phase, the reimagining of business intelligence.
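A toy sketch of that literacy-matching idea: route the same insight to a different narrative depth depending on the reader. The levels, templates, and numbers are invented for illustration.

```python
# Sketch: same insight, two narrative depths keyed to the reader's literacy.
# Everything here (levels, wording, the example figures) is invented.

TEMPLATES = {
    # Highly literate: headline only; they write the story in their own mind.
    "high": "{metric}: {value} ({delta:+.1%} vs prior). Drill down for detail.",
    # Lower literacy: the story is the prominent thing, numbers explained.
    "low": ("{metric} came in at {value}. That is {direction} {delta:.1%} "
            "versus last quarter, which means {implication}."),
}

def narrate(metric, value, delta, literacy="high", implication=""):
    direction = "up" if delta >= 0 else "down"
    shown = delta if literacy == "high" else abs(delta)
    return TEMPLATES[literacy].format(metric=metric, value=value, delta=shown,
                                      direction=direction,
                                      implication=implication)

print(narrate("Net interest margin", "1.62%", 0.034, literacy="low",
              implication="funding costs are rising slower than lending yields"))
```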
Fiona (46:12)
Where does your mind go when you think about the future of business intelligence?
Craig Turrell (46:19)
Well, I don't think it's what Bloomberg do, just blasting news stories at you. When something gets so noisy, with so much information being produced, you almost defeat yourself. I'm back to my first days of doing Tableau: I had 500 apps, and no one could find anything. My new world can't be: I've got 500 of these, the machines are so excited to tell you all the news, and they start blasting news at everyone going, look, I have all of this news for you, and once again no one can read anything. So we started to look at some of the concepts from Apple's design: what's the one piece of information you have to have, and whatever it is, from you seeing it to the decision must be 10 seconds.
Making AI much more invisible, making the process much more invisible to you, and bringing some of what Jobs and Jony Ive did into the way we present information. Apple have clearly done some of the greatest work on how we interact with machines, how we see them, and how that translates to how we use them.
I can see the Apple-ness of business intelligence. That's kind of where I think it will go. I'm not sure, but my sense is that's the direction: much more simple, much more directional. A balance of personalized understanding, not personalized as in what color I like, but how do you absorb information, how does it watch you absorb information? What do you read? And sometimes it challenges you: look, I know you keep reading this, but you should read that, which will change your thinking.

But invisible. I would expect a high degree of that. It won't be front ends with lots of 'click me', up comes my AI machine going, 'I want to chat with you about this'. That will disappear; I can't see that being a sensible interaction with artificial intelligence. That's not native. Native is more what Apple do: they make it all invisible, they don't ask you lots of questions. It's like magic. It just works.
So my guess is that will happen and then we'll have to reimagine BI.
Fiona (48:58)
It actually really excites me hearing you talk about it, because Sarah and I have a design ethos that revolves around the way Apple designs: keeping things really simple and beautiful, and making sure that whatever we have on a page has been thought about, really thought about. Do we need this on here? And is it going to help the person who's looking at the information really consume it effectively and make more data-informed decisions? Now that I know what's happening from a data perspective, plus all the contextual information I have outside the data, I can use that to inform what actions I may or may not take.
Craig Turrell (49:50)
It's a real art form. I'm an atrociously bad front-end developer, so most of my stuff looks like, you know, ERP software. I'm awesome at complicated stuff like that. But I've seen some of the work you've done, I've been watching the podcast, and I think you're right. The way that you are approaching it
creates an ambient level of knowledge: how to distill information for people so they can see what is important and act on it. It's rare; very few people take the time or thought to do that type of thing, and I love the work the two of you do. I've known Sarah for years. You create some amazing things, and I have no idea how. Even if I ask my robot to do something like that, it still doesn't do it properly. I always have lots of buttons everywhere.
Fiona (50:44)
I would love to have an AI avatar that actually helps my stakeholders and has that really personalized view. So instead of me saying, here's the HR dashboard and this is what we've designed based on the needs of the users, it's really thinking: in the context of what's going on in your role at the moment, we think you need this information.
Craig Turrell (51:10)
I started to look at a theory. I got into things like Maslow's triangle, in terms of hierarchies of information needs: how you design for different personas needing different information. We just did one for treasury. If you look at a treasurer, and you look at how much of their decisions are based on insights generated, whether through BI or AI, that percentage, my guess, is less than 2%. We don't really do it at the moment. So how do you change that? Very much apps built around: what is the key thing you need to know now? It's the key piece of news, and if I hook you on that and you make a good decision, you'll probably come back for more. But if I go look at a CFA,
someone who loves lots of gadgets and dials, they're built around numbers and charts. Without those numbers, none of it is real for them. So I agree: personalize it in terms of what kind of experience they need, how that person, in their role, absorbs information, sees information, and trusts it. Some people need lots of dials and gadgets. Some people need the transparency to drill all the way through the information, to always be able to say, tell me where you got the data, because I never trust you; until I see where you got it from, it's not real. They're the people who always want to drag the data out to their own desktop: until it's on my machine, it's not real, and I can't make a decision. So how do you enable that level of self-service versus the decision-maker who's using it to execute decisions, who doesn't necessarily want to explore all the dials or drag to desktop, but needs it watermarked, trusted? 'If it's not from Bloomberg, it is not true', that type of thing. How do you do the same thing Bloomberg has clearly been so effective at, but bring it to the corporate world? Those are the challenges which
Sarah (53:25)
Mm.
Craig Turrell (53:35)
AI has a part to play in, alongside other elements of design.
Sarah (53:42)
If I just reflect on how I want to see data, I can be quite flippant. If there's something I really know, I'm okay to take it at a high level. But if it's something I'm not so sure about, I want everything available to really come in and understand that data. And I think having AI go on that journey and letting it evolve
how I want to read it or interpret it is something that I see being great for our industry.
Craig Turrell (54:10)
Yeah, I agree. It all comes down to design, to thinking through the problem, challenging the status quo, and constant experimentation. We've learned everything just by doing it quickly, fast enough. It's not prototyping; people have a misconception there. People go, AI is being drowned in prototypes. No, it's being drowned in failures. If 80% of the technology is there, we're going to have to fail an enormous amount to get closer to an 80-20 where it works 80% of the time and we're doing the 20%.

Look, I think it's design. It's being able to execute things much faster, to do things at much lower cost, and that constant design and experimentation. If you have that mindset, then I think we cross some extraordinary bridges. If we don't, then we're back to machine learning and the 4% adoption rate after a few years, even after we've spent all this money and time talking about it.
Fiona (55:25)
I just want to dig into that a little bit, because what we're talking about is fundamentally shifting the way people need to think about these concepts, how they turn up and show up, the skills they require. Tell me, for a leader out there right now, what are the things that have helped you be really successful in setting up teams who have these skills, or in developing those skills and really giving things a crack where you can see they'll be successful?
Craig Turrell (56:02)
So, my last seven years: I started with a team of 10 people whose daily routine, like Groundhog Day, was to log into a system, drag some data to their desktop, write some stuff, and send it to someone else who did some work on it. I had 10 of those people, and even if they were fetching the same data, they would never share it with each other. I ran a sweatshop. And I thought, that's not the age of computing I've been seeing. I'd been dealing with digital shopping trolleys 20 years before; we could probably do slightly better than people cutting and pasting data out of systems. So we started to look at not just a fundamental change of platform, but fundamentally,
who are the right people to do this? We said, well, maybe if it's technical, we'll need a technical person. So we hired a technical person, who just wanted to do technical things and didn't talk to anyone. And we found out, through a lot of trial and experiment, that by the first year, 60% of the people who used to work for me had already left. They didn't want to go in that direction; they didn't see it as part of their career. Finance doesn't do data, finance doesn't do AI, and look at where we are now. They were saying, no, no, that can't be. I'm an accountant; accountants do accounting things, they don't do these digital things. That's what technical people do. And the technical people said, well, I don't do this funny data stuff; I mean, look, our program used Unix, but at the end of the day, this seems to be beneath me. So we were struggling. What we found was that most of our success started with: fine, let's right-size ourselves to the right people with the right attitude, primarily people who wanted to learn.
People who, when you showed up, said: Craig, show me what you're thinking. And it was great. They'd drag me into rooms and say, Craig, whiteboard it. I heard what you just said, now whiteboard it for me. Two days later they'd come back with, I didn't understand that bit. And a week later they'd go, right, I get it, I've changed all this for you. So it was people who understood that learning is not a set of qualifications on a wall; it's a constant struggle every day to go and learn something new. We even got to the point, a couple of years ago, where I felt my own skills would potentially be redundant. You know, I'd be the old programmer in the barrel: as soon as some Pascal comes along, we'll tip it off to Craig and he'll do some stuff. So I took on a hundred days of coding and did about 400 hours of learning activities, because I had to reskill myself. So it is knowing where you are, understanding that you have to change your attitude to learning, and you need people, especially as we move into these breakthrough technologies, who really do learn, who genuinely go out there. I talked to a colleague on Friday, saying, look,
you need to learn these agent skills, you don't really understand a lot of this stuff. He did an API course over the weekend and was busy sending me all this stuff: Craig, I've done it. That's the people, the people who get it, and who make me say, look, I haven't done a course for a while, I should go do something. It's not about having passed your data qualifications. Learning not as a certificate, but as genuine DNA: people who don't treat knowledge as 'I've taken the course, tick', but ask, how do I apply it? Please explain it to me. And look past the organizational structure. People maybe two or three levels below me say: come and sit in a room, Craig, draw it out, show me what it looks like, because I can't do what you do until you show me. So, as a manager, pushing people, but not just pushing them as in 'you must do it, here's your goals and objectives'. Physically going off myself: this is what I'm going to do, I'm going to physically show you what I do. I believe leadership is about leading, not about managing. So if you're going to lead, you might as well lead: go and do what you say you're going to do, in front of them. It is a different type of management skill, a different type of skill.
It's a time when we're seeing a much tighter focus on a set of skills which you'll probably find 20% of people will acquire and run with. The rest of the people, I don't know. But it is a specific mindset. We found them in things like the Big Four; we found them out of some of the more technical consultancy work.
I'm a good example. I've done data science now for seven years. I've never hired a data scientist. Ever.
We had to teach that mix of data science ourselves: look, start with that number; when that R-squared gets to that number, it's okay. We taught it. A lot of what we do, especially as you move into innovation and patents, are things that have never been done before. I was in a standup call earlier today,
and someone from, it's a mixed team, was saying: Craig, can you this week teach us what we need to know for the future? Show us the technologies we need to know next. And of course we'll do it. So yeah, it's a different management style, a different leadership style, and it requires different people. But if you get it right, and if I look at what I and a lot of the people who have worked for me have achieved over the last five, six years: extraordinary, extraordinary from a group of people who were pulling out spreadsheets. A good stat: it used to be four or five hundred man-hours to do some work. Today we can do the same work in about 30 seconds.
Sarah (1:02:51)
Phenomenal. Something else that stands out for me is your continuous curiosity. Every time I talk to you, I get overwhelmed, because there's so much going on and it's changed so much since the last time we spoke. And I think it's because you are just continuously curious: what's next, what's next, how could we make this better? And I think the same about your leadership style,
right? It's very inclusive, and it's about supporting that curiosity. And the failure part as well: earlier on you said it was the 50th time that you succeeded. It's being inclusive and allowing that failure, but also realizing that all that curiosity and freedom and failure bubbles into something quite phenomenal.
Craig Turrell (1:03:48)
I think you're right, Sarah. The other part of it is diversity. Although we look for certain mindsets and ways of approaching the world, the more diverse a team is, the more inclusive it is. In the old days, I talked first, and the most important thing was that everyone listened to me, although I still talk a lot, yeah. Now I purposely always talk last. If you look at the amount of things we now create and do, it's because they didn't need me to be the person out front, because then they'd feel obligated to follow me: if that's what Craig thinks, that must be true. By removing me from the equation
and focusing more on being supportive, it helps people feel safe, and almost obligated, to also go and have a look. A good example: one of my colleagues, we were talking about Dublin Core as part of the work we were doing on copyright. I said, look, I know enough of Dublin Core, I know about this much. We need to do Dublin Core: when a document comes in, we can't just store it. We need to work out who owns it, check the copyright, look for copyright infringement. If we can't, we tell the user, or if it's critical, we create a process to send it to our lawyers. I talked about the goal, the outcome. One of my guys then wrote it, and it's beautiful. It's beautiful: it brings the document in, it's got a checklist, and in the old days I would probably have had a very simple button set, comply, not comply. Because he experimented and was thinking in a different way, you get wonderful things. It's that environment: design is important, thinking is important, change is important. Let's fail a few times first. What failure are we on now?
We've only failed three times? We've got to fail some more here; we clearly haven't tried hard enough. And that is a different approach. We've always talked about celebrating failure, but we don't. And I think the difference now is the tools we've got. Take the program I wrote, the previous failure. In the old world, I could be two years into a program with 15 people working on it, having just dropped four and a half million dollars, and go, it didn't work. Would I be willing to do that? No. But I wrote this mostly on a plane, flying backwards and forwards to the States. I did it in 10 days, and I was trying to write a neural supercomputer.
Sarah (1:07:02)
Craig, this has been such an insightful conversation. Before we wrap up, what's one thing you'd really like our audience to take away from today?
Fiona (1:07:01)
Mm.
Craig Turrell (1:07:14)
I think AI is not hype; it's real, and it belongs with BI. They are husband and wife if you get it right; they are symbiotic with each other. Do both, fail, and know it's definitely not hype.
Fiona (1:07:36)
Love it. And one tiny question from me: is it possible for you to share the training that you did for a hundred days, or at least tell our audience how they might determine where to put their focus?
Craig Turrell (1:07:53)
So on my LinkedIn there's my 100-day blog, which has, for every single day, what I did, what I achieved, and some of the things I didn't achieve. It's all there on my LinkedIn profile, and you can see the journey I went through day by day.
Fiona (1:08:10)
That's awesome. We'll make sure that we put that in the show notes. Thank you.
Sarah (1:08:15)
Yes, sounds great. The shift from AI-enhanced to AI-native thinking really does feel like the next major inflection point for enterprise technology. Thank you so much for sharing these revolutionary insights with us today.
Fiona (1:08:31)
And if you enjoyed today's episode, please like, subscribe and share it with anyone ready to think beyond traditional AI approaches, because that helps us to prime the algorithm and make sure that more people can hear this amazing message from Craig.
Sarah (1:08:49)
So until next time, this has been Undubbed, where we're unscripted, uncensored, and undeniably data. Thanks for joining us. Bye.