CCAB Ethical Leadership Podcast

Can we use AI ethically?

Season 1, Episode 1


Artificial intelligence is rapidly transforming the world of finance and accountancy. But as organisations explore its potential, an equally important question arises: can AI be used ethically?

From data privacy issues to bias and hallucination, artificial intelligence poses a lot of difficult questions for accountants and business leaders. Is it even possible to use AI ethically? How do we mitigate the things we can't control?

Host Tom Parker is joined by several experts in ethics to discuss all of the issues raised by AI and how to address them.  

Links:

Guests:

Professor Susan Smith, Professor of Accounting at University College London and Chair of the CCAB Ethics Group

Alistair Brisbourne, Head of Technology Research within the Policy and Insights team at ACCA

Conor Flanagan, Chair of the CAI Technology Committee

Dr Giles Cuthbert, former Managing Director of the Chartered Bankers Institute, who holds a doctorate in AI and ethics


Host: Tom Parker

Producer: Natalie Chisholm

Episode recorded: 4 December 2025

An AHack Creative and First Touch production for CCAB



Tom Parker:

Welcome to the CCAB Ethical Leadership Podcast. I'm Tom Parker, and in the first of this two-part special on the implications of AI, we'll be looking at the ethical challenges of AI, from data privacy issues to biases and hallucinations. AI poses a lot of difficult questions for accountants and business leaders. So how do we address these concerns? Can we use AI ethically, and how do we mitigate the things we can't control? Joining me in the studio today to discuss this is Professor Susan Smith, Professor of Accounting at University College London and Chair of the CCAB Ethics Group.

Professor Susan Smith:

Great to be here.

Tom Parker:

And also joining me in the studio is Alistair Brisbourne, Head of Technology Research within the Policy and Insights team at ACCA.

Alistair Brisbourne:

Pleasure to be here, Tom.

Tom Parker:

And coming in remotely, we have Conor Flanagan, chair of the CAI Technology Committee.

Conor Flanagan:

Hello all, delighted to be here today.

Tom Parker:

And Dr. Giles Cuthbert, former Managing Director of the Chartered Bankers Institute, who holds a doctorate in AI and ethics.

Dr Giles Cuthbert:

Hi, delighted to be here.

Tom Parker:

First off, uh, I want to set the scene a little bit. Alistair, let's start with you. How prevalent is AI use in the accounting and finance sector? Uh, what are people, you know, using it for?

Alistair Brisbourne:

Well, it's a surprisingly difficult question to answer, in the sense of the extent to which it's being used. We have all of these adoption statistics that we've probably seen headlines of, where sort of 70, 80, 90% of organizations are using AI. And there's truth to that, in the sense that they might be using it in marketing, et cetera. In finance and accounting it may be slightly more limited in terms of the extent to which it's been fully implemented in business processes. There's certainly a lot of experimentation and trialing going on, but when we run surveys on this, and we've run a few, it's between sort of a quarter and a third of organizations that have fully implemented AI in accounting and finance work processes. Now, the gray area there is that there are lots of people, and individuals in particular, using things like generative AI tools that are maybe not being counted. As I said, there's a lot of experimentation going on which is maybe not being counted in that, because we sort of set a high bar in terms of asking about full implementation. But what that does say is that there's lots of focus on the potential of AI. We know in terms of the investment, and this is something else we asked in our Smart Alliance research last year, the extent to which organizations were investing in AI. And we found on average that was around 2 million pounds over the last 12 months. And that counts from very small organizations through to some much larger organizations, so some are spending upwards of 20 million and some are spending much less than that, right? But clearly organizations are putting their money where their mouth is, so to speak. They see the potential, and so they are investing heavily, and we'd expect those numbers to change quite rapidly. There's one other gray area, which is that there is AI implemented in a lot of software solutions as well that is maybe being missed in some of those statistics. So it's there, it's prevalent. It might still seem like only a minority of organizations to some, but it's growing quite rapidly.

Tom Parker:

And then what are the individuals actually using it for? How are they using it in different ways within the organization?

Alistair Brisbourne:

Yeah, so the most prevalent use cases are just for things like data analysis and reporting, for FP&A, so financial planning and analysis purposes: for forecasting, for scenario planning, lots of uses. Accounts payable automation is a really big one, where the use of RPA, robotic process automation, with machine learning has massively improved the capabilities there, and so you are seeing vastly better forms of automation in terms of accounts payable and accounts receivable. And then you're seeing things like office productivity more generally popping up in some of the statistics, which wasn't there a few years ago when we asked, but that's where generative AI is kind of sneaking through, people using copilots and other digital assistants. So that's also up there; I think it was around a quarter to a third of organizations were using AI just generally for office productivity purposes, which sort of says: we're not quite sure yet, but we think it's gonna make us a bit more efficient. And then there's of course things like fraud detection, anomaly detection, which have been around for a long time. I mean, you look in financial services, they've been using it since probably the late eighties, some form of basic machine learning at least, for things like anomaly detection. So those are probably the most prevalent ones. And then you're seeing some other cases, say smaller practitioners more likely to be using it for things like tax preparation and tax filing, and that's probably becoming more prevalent in other areas as well. Compliance monitoring, stuff like that. There's lots of opportunity, I guess, is the thing.
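
To make the anomaly detection idea concrete, here is a minimal, hypothetical sketch of the kind of rule such systems grew out of: a robust median-based check that flags payments far outside the usual range. The data, function name and threshold are illustrative, not any vendor's product; modern systems use trained models rather than a fixed rule.

```python
# A minimal sketch of rule-based anomaly detection on payment amounts.
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Return indices of payments far outside the typical range."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)  # median absolute deviation
    if mad == 0:
        return []
    # 0.6745 rescales the MAD so the score behaves like a z-score.
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

payments = [120.00, 95.50, 110.00, 130.25, 99.00, 12500.00]
print(flag_anomalies(payments))  # -> [5]: the outlier invoice gets flagged
```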

Tom Parker:

Yeah, absolutely. Well, with that context from Alistair, Giles, I want to get stuck into the ethics conversation. What are the biggest ethical hurdles for adopting AI?

Dr Giles Cuthbert:

I think the first thing I'd bring in is really that there's an absence of a sense of responsibility when we start to look at the world of AI. There's this view that somehow we can have a sort of ethical bypass. Sometimes when we're looking at AI, we see this manifesting itself in terms of how people behave online. If you think about a commercial organization, how a bot talks to a customer: we would never allow a human to talk to a customer like that, and indeed, if they did, we'd probably fire them. Yet somehow it's okay when it's a bot. So I would say, always remember that we might talk about artificial intelligence, but it's very real and it's operating in the real world. And it's also not terribly intelligent. It follows commands, it looks for goals. So don't set expectations which it will never be able to deliver. Always remember it's domain specific: it's very, very good at doing one specific role, and it starts to have problems when it works across domains. But at the heart of all of this, remember that you as an individual are responsible here. You can't blame the computer. You can't blame the AI. This is about decisions that you as an individual take to use a piece of technology. And you may see the world through that technology and see people as data points within that technological output, but you are responsible for what you are doing, and always remember that.

Tom Parker:

I want to go into one of the specific areas a little bit more, so I'm gonna stick with you, Giles. It's around data security, because AI is only as good as the data that is fed to it. So can you give us some discussion around data security and some of the ethics around that?

Dr Giles Cuthbert:

Well, it's very troubling really, from the outset. If you look at, for example, generative AI and the way it gathers its data, we know that that data is full of bias from the society in which it was gathered originally. But also, when you think about how data is scraped from the internet, often this is simply misuse of other people's IP and is not ethical in any way. It's another example of: you wouldn't do it in the real world, but it's happening in the world of AI. And when you start to think about the sort of questions people are asking large language models, you're giving an awful lot of detail potentially about your company to a public source, if you use publicly available systems. As we've seen, people can start to identify your company's strategy from the sort of questions you are asking. We've seen examples of executives saying, I'm looking to expand in India, can you write me a strategy? Well, hey, all your competitors suddenly know what your next move is. So we have to be very cautious about the tools we choose to use. And I suspect Conor will have some significant views on that.

Tom Parker:

Well, actually I was gonna go to Susan next, because regarding data and biases and some of the issues around that: what does good governance look like, and how do organizations adopt it?

Professor Susan Smith:

Well, I think good governance is a moving feast, and it needs to adapt as these tools adapt. The governance should look at what the purpose of the tools is and how people are using them, and make sure that they're equipped to understand the intended use within the organization and any safeguards that they need to put in place when they're using those tools. So for example, we've heard about risks around potential client data or internal data: is it permitted to load it into a publicly available tool? Probably not. So it's about being mindful, but also making sure that employees are aware and are appropriately trained.

Tom Parker:

And Conor, did you have any points that you wanted to come in on here?

Conor Flanagan:

I suppose the big thing from my point of view would be that concept of shadow AI as well, and that clients, customers or your employees will ultimately use these things. And we're seeing that more and more. So, to Giles's point there, you can take confidential contracts and put them into publicly available ChatGPT, not knowing where that's going, not knowing how the model deals with that. Or, as a company, you can put guardrails in place so that you can allow your employees to take advantage of this technology, but within a controlled environment, so that information isn't then shared externally and isn't visible to who knows what, or to the model, or to other bodies. So I think it's that concept of making this technology available within your organization and making sure we take advantage of it, but doing so in a controlled way. And what we're hearing anecdotally as well is that a lot of employees are even starting to use things now like mobile phones or other apps to record meetings and take minutes, because the organization maybe isn't paying for Copilot within Teams or other similar controlled apps. So if they are doing that, you potentially have confidential information leaving the company tenant, going off to a private device, and then going to who knows what or who knows where. So I think the integrity point and the security point are massive in that sense: if you don't control it, what we're starting to see is they will likely use it anyway, and then they will likely use it in an uncontrolled environment, which is extremely dangerous and something that we should want to avoid.
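
As a rough illustration of the kind of guardrail Conor describes, here is a minimal, hypothetical sketch of a screening layer that blocks a prompt before it ever reaches an external model. The patterns and the internal ID format are invented for the example; a real deployment would use a proper data-loss-prevention service.

```python
# A minimal sketch of a prompt guardrail: refuse to send text that looks
# like it contains confidential identifiers to a public AI service.
import re

BLOCKED_PATTERNS = {
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "client ref": re.compile(r"\bCLIENT-\d{6}\b"),  # hypothetical internal ID format
}

def screen_prompt(prompt: str) -> str:
    """Raise if the prompt appears to contain confidential data."""
    hits = [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(prompt)]
    if hits:
        raise ValueError(f"Prompt blocked: possible confidential data ({', '.join(hits)})")
    return prompt

print(screen_prompt("Summarise our month-end close process"))  # passes
# screen_prompt("Email j.doe@client.com about CLIENT-004217")  # would raise
```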

Alistair Brisbourne:

If I may just add one point on the governance as well: one of the reasons why having that foundation in place is so important right now is because of the state we are in with where the technology is and where the software vendors are in terms of integrating these capabilities. We've had reports from members, for example, where they've had enforced updates from their respective software provider which integrated some kind of generative AI capability before they'd had the ability to actually do things like label their documents from a security and privacy point of view. So they've inadvertently given a generative AI model access to documents that they wouldn't have chosen to, and they've had to backtrack. So again, being proactive in that regard is so important, because we are seeing these things come through.
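
The document-labelling point lends itself to a small sketch too. This hypothetical snippet gates documents out of an AI assistant's index unless they carry an explicitly permitted sensitivity label; the label names and fields are assumptions, not any particular vendor's schema. Note that unlabelled documents fail closed, which is exactly the step the members Alistair mentions didn't get to do in time.

```python
# A minimal sketch: only documents with an explicitly allowed sensitivity
# label are exposed to a generative AI assistant; unlabelled docs fail closed.
ALLOWED_FOR_AI = {"public", "internal"}

documents = [
    {"name": "marketing-brief.docx", "label": "public"},
    {"name": "board-minutes.docx", "label": "confidential"},
    {"name": "old-contract.pdf", "label": None},  # never labelled
]

indexable = [d["name"] for d in documents if d["label"] in ALLOWED_FOR_AI]
print(indexable)  # -> ['marketing-brief.docx']
```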

Tom Parker:

Conor, I wanted to come back to you as well and look a bit at one of the particular terms that is sort of du jour at the moment, which is agentic AI. And I think maybe we need to break down the differences between AI here. We've got, you know, AI that has been around as a term since the sort of fifties and sixties; we've got generative AI; we've got agentic AI. For a lot of people out there that are getting confused by some of these terms, maybe we can break those down a bit, and then perhaps look at some of the specific challenges around agentic AI, 'cause that does bring in much more ethical consideration.

Conor Flanagan:

So generative AI is your classic ChatGPT. It will give suggestions: it might suggest reconciliations from an accountancy point of view, it might suggest how-tos, it might summarize documents, but it won't necessarily complete an action on behalf of the employee. That's really when you're stepping into the zone of agentic AI: it's where it's now not only suggesting but also potentially doing. So that's potentially responding to a customer email, that's potentially posting a purchase invoice, that's potentially responding to an email about, say, sales, saying yes, you can place an order, it'll be this price. So it's completing actions on behalf of employees. And I think it's very interesting recently how the larger organizations are starting to almost see agents as employees. So agents are being set up now with security profiles, with roles, with barriers within the organization, which I think is very interesting, but equally potentially challenging ethically, because up until now we've always had this argument, or this discussion, that there'll always be a human in the loop. And I think that potentially doesn't fly when we're now moving towards agents, purely from an audit trail point of view. I've noticed recently that any transactions posted to your systems by an agent are now not tagged with a user ID; they're tagged with an agent ID. So that raises a lot of interesting questions around responsibility, around ownership, around who actually stands over that. If the organization is seeing the agent as an employee, and ultimately the transaction is being tagged or posted by that agent, where does culpability sit? Where does liability sit? I don't think we've fully bottomed out those discussions yet, and I think that's really where our conversations are gonna go over the next few years: around responsibility, around how we can keep the humans in the loop. Because we obviously want to push forward with this technology, we wanna push forward with innovation and investment, but we wanna do so in a controlled way. We wanna make sure that we as humans are still ultimately responsible for the numbers that we're maybe publishing or, as accountants, the numbers we're sharing with management internally, while taking advantage of these automations as well. So it's trying to find that fine line between advancing technology, but also standing over the numbers and ensuring good integrity and good governance.
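
One way to picture the audit-trail question Conor raises is a record that captures both the agent that posted a transaction and the named human who stands over it. This is a hypothetical sketch; the field names and ID formats are invented for illustration.

```python
# A minimal sketch of an audit-trail record that keeps a named human
# accountable even when an agent, not a user, posts the transaction.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class PostedTransaction:
    amount: float
    account: str
    posted_by: str          # e.g. "user:jsmith" or "agent:ap-bot-01"
    accountable_owner: str  # the human who stands over an agent's postings
    posted_at: datetime

entry = PostedTransaction(
    amount=1250.00,
    account="5001-accounts-payable",
    posted_by="agent:ap-bot-01",
    accountable_owner="user:jsmith",  # culpability stays with a named person
    posted_at=datetime.now(timezone.utc),
)
print(entry.posted_by, "accountable:", entry.accountable_owner)
```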

Tom Parker:

I want to build out on that point, Giles, because if we are looking at the human in the loop, and we're looking at accuracy models here whereby AI is becoming more accurate than individuals, it's, again, as I said before, only as good as the data that feeds it. Having the human there to check some of those results and bring human experience to some of the data that's coming out: how important is that?

Dr Giles Cuthbert:

If I could also respond to one of Conor's points there about the role of technological advancement. There have been a lot of times in human history where we've assumed technology and moving forward is good, but as we saw with the case of using radiation for cooking in the 1920s and thirties, it was horrifically disastrous and caused serious illness and death. So we do need to approach with caution the assumption that what's new is always good. But getting on to that question: when you think about an awful lot of the technologies we use, as we get more used to them, as they become more reliable, as we find other ways to audit them, we don't necessarily always need a human in the loop. You know, if you think about the mechanisms within a car, there isn't really a human in the loop on a regular day-to-day basis. So actually, if you think about it in terms of governance, it could be the case that, yes, you need to regularly audit these activities and you need to have good guardrails and a framework supporting it, but do you actually need a human? The question which perhaps concerns me more is where this can be misleading to clients or customers: where they assume they're working with a human, they have every reason to believe a human has given consideration to this negotiation or to this contract, they put their emotions and their trust in the hands of a human, and at no point are they informed that this is not a human. So I think that's where transparency becomes absolutely critical: that it is very clear when you are talking to a human. Now, I've seen a number of firms these days that are very clear about when you're talking to a bot and when you're talking to a real person. I also know some companies who say you're talking to a real person when you're talking to a bot. But I think if we have transparency about these things, people will put their faith in machines. People go to work on the DLR, where there's no human in the loop driving it. So I don't think the human is necessary, but I think there needs to be an honesty around it, and I think we need to ensure there is a good audit of that system.

Tom Parker:

Susan, I want to bring you in here as well, because Giles mentioned that you don't have humans within the car mechanisms at the moment, and you're right, we don't. We don't worry about flying at 40,000 feet in a long metal tube now, because we've trusted the airline industry for years. But very early on it was probably quite scary, and the people that invented it were the ones that were flying the plane. That industry has been regulated for quite a while and there are those checks and balances within that system. Where are we there with AI? And perhaps you can build on Giles's point of having these agents that you may or may not know whether you're talking to.

Professor Susan Smith:

I think, if we take the case of audit, the client contracts with the audit firm to do certain work. And it is absolutely fine that they rely on technology to do some of the audit, but ultimately it is a human who's signing off the audit, and they need to be accountable for the work that's undertaken and ensure that it's being done in the way that is appropriate for the audit, so that they understand what work is being done by AI, for example, and any potential bias, any potential issues that may result from reliance on AI.

Tom Parker:

When employing that human in the loop there, and that trust: do you think there is going to be a time in the next, you know, year, 18 months, two years, five years, whereby it will get to a point, as Giles is saying, where you won't have that within the audit side of it and you will just be trusting a bot?

Professor Susan Smith:

I think there is potential to trust a bot. However, that needs to be proven. The process of governance around the bot and its limitations need to be tested, and regularly over time, to make sure that that reliance is still justified at any individual point in time.

Tom Parker:

Alistair, do you have any points, uh, you wanna raise on this?

Alistair Brisbourne:

Just quickly on Giles's point, because he raises a really interesting question about maybe not needing a human in the loop in every instance, and it's a very valid point. 'Cause if you think about using algorithms in fraud detection, right, we've all probably had that message from our bank when we're abroad traveling or something like that, saying your debit card has been frozen because we've identified a fraudulent transaction. That's a great example of us just massively scaling a decision that at one point was down to human judgment, because someone would've had to say, look, this transaction looks a bit fishy, it doesn't fit with the others. So we've created massive efficiencies; we've been able to scale that kind of judgment. We don't necessarily need a human in the loop there, except at the end, in the decision as to what to actually do and how to unfreeze the card, et cetera. So in those very narrow applications where you have a bespoke trained solution, that's great, but when we're talking about general uses, it gets a little bit more iffy. And so we talk about trusting artificial intelligence. I'll be frank, I don't know what that means, because trust is a social concept, right? It's about human interaction. It's about the practices and behaviors that we exhibit, the institutions we've built so that we can trust people that we don't know. That's what we need to be thinking about when it comes to AI. That means both having the right technical practices in place to make sure that we are monitoring the AI models that we are using, checking them for drift, auditing the data that's being used as an input to train the models and how we're using the outputs from AI models, and training ourselves and our staff on how to use these models effectively. Those are the things we trust in. I don't just trust this table not to collapse if I stand on it; I trust that whoever built this table had the competence and used the right materials to make sure I could sit on it without it collapsing. Or maybe not, I don't know.
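
For the "checking them for drift" step Alistair mentions, here is a minimal, hypothetical sketch of one common approach: comparing the distribution of live inputs against a training-time baseline with a population-stability-style score. The data and the 0.25 alert threshold are illustrative conventions, not a mandated standard.

```python
# A minimal sketch of input-drift monitoring: compare recent feature values
# against the training baseline using a population-stability-style index.
from math import log

def psi(baseline, recent, bins=10):
    """Drift score between two numeric samples; higher means more drift."""
    lo, hi = min(baseline), max(baseline)
    step = (hi - lo) / bins or 1.0

    def share(sample, b):
        left, right = lo + b * step, lo + (b + 1) * step
        n = sum(1 for x in sample
                if left <= x < right or (b == bins - 1 and x >= right))
        return max(n / len(sample), 1e-6)  # floor avoids log(0)

    return sum((share(recent, b) - share(baseline, b))
               * log(share(recent, b) / share(baseline, b))
               for b in range(bins))

train_amounts = [100 + i for i in range(200)]  # what the model was trained on
live_amounts = [220 + i for i in range(200)]   # what it now sees in production
print(f"PSI = {psi(train_amounts, live_amounts):.2f}")  # > 0.25: investigate drift
```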

Tom Parker:

Let's not try it now. No, but I get your point. And I wanna broaden this trust topic out, because it is verging into the future, and perhaps closer than we all want it to be, about whether we know whether you are talking to a bot or not, as Giles said. In fact, Anthropic, I think, recently hired an AI welfare researcher that is specifically looking at the welfare of the artificial intelligence. It's not the welfare of the humans that are using it, we already have those in place. Mm-hmm. But it's the welfare of the artificial intelligence, so much so that it is bringing to the forefront conversations about having this moral status for artificial intelligence. And GPT-4.5, I think earlier in March this year, passed a Turing test. You know, there are gonna be a lot of these questions about trusting the information that is coming out of there, and whether that human is in the loop or not. Conor, maybe I can ask you: are you sort of scared at the pace at which this is happening? Are we ready for these sorts of conversations to be had from an ethical point of view? And, fundamentally, do we have the guardrails in place at the moment that are ready for this speed that we're traveling at?

Conor Flanagan:

Yeah, I think two points on that. One: I think if you looked back 12 or 18 months and you asked what industries would be most impacted by AI or automation, you would probably put accountancy high up that list. The reality is the change is much slower than that. I think if you look at what people do today versus what they did 12 months ago, it's largely in line; it hasn't swept through the industry in the way people think. And I think that's because we've got a much higher bar for integrity and data quality, and AI maybe just isn't at the level of consistency yet. So in things like, say, marketing or creative spaces, 80 or 90% accuracy maybe is okay, but when you're dealing with finances and numbers and taxes, 80 or 90%, when you extrapolate that over maybe millions of clients, isn't at the level where it's accurate enough yet. So I think the level of change maybe hasn't been at the pace that people think it might be. And one important thing around that is that concept of professional integrity and professional critical thinking. I think that we as accountants or as finance professionals need to maintain that professional judgment, so that we can critically analyze the data that comes out. My fear around that, potentially, is that where AI is showing a lot of advancement is in the more junior roles. And those more junior roles are where you learn a lot of those professional skepticism style skills: it's where you tot up accounts, it's where you take minutes, it's where you do stocktakes. It's where you do the more menial, tedious style tasks, but through that you learn a lot of basic skills; you learn a lot of "check and double-check your numbers". And if those tasks are now being completed by agents or bots, it does then ask a question around training. It does then ask a question around, are we sure that we're still training people to the same level of professionalism? Which we are, but are they still learning those on-the-job skills? Maybe not. So do we then need to tweak how we're training people? Do we need to look at how we're ensuring we maintain that level of professional standard, if they maybe aren't going in at the same junior level? I think there are ways we can do it; I think there are loads of exciting ways, by using things like augmented reality for things like audit tests or checks like that. But I think it is something that we do need to be aware of: that if there are junior roles that maybe aren't going to be there longer term, which is what it probably looks like, we need to think of the skills that would've been developed there that maybe will no longer be developed, and how we can ensure that those skills are still in our professionals, and that we don't just naively take the data that comes out of the chatbot. 'Cause that is the huge risk: that we lose the ability to be critical of the data coming out. Maintaining that critical eye, being able to critically analyze the data, is the key skill and the key place where we as professionals, I think, can add value.
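
To put Conor's accuracy-at-scale point in numbers, a quick back-of-the-envelope sum; the client count is invented for illustration.

```python
# Why "90% accurate" is not good enough at scale: even a small error rate
# turns into an enormous pile of mistakes to find and fix.
clients = 2_000_000          # hypothetical client base
accuracy = 0.90
errors = clients * (1 - accuracy)
print(f"{errors:,.0f} erroneous outputs to catch")  # -> 200,000
```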

Professor Susan Smith:

I think you're absolutely right there. That's a fundamental risk around relying on AI: whether this potential skills gap could open. You can't question the output of these tools unless you know what it should look like, and you have those basic building blocks under your belt, so that you understand what it should be and how it should work. It is really, really important. I'm sure we'll have new ways of getting people to the point where they have the basics, but in the short term it's a potential risk for the profession that people are entering at mid-levels, or what would've been mid-levels, without some of the underpinning knowledge and experience which you build through doing some of those lower-level, or more repetitive, tasks which AI will now be doing.

Tom Parker:

In fact, we are gonna be focusing on the impact of AI on jobs and culture and the workforce in episode two, so I'm gonna leave that to our second episode, but it's really important that we touched on it here. Giles, I'm actually gonna go back to the question that I was posing to Conor, which is around the speed of AI moving towards this moral status. I think you led off earlier in the podcast around this, so I'd like to get your views on the speed of where we are with this, the sort of conversations around moral status, and perhaps whether the regulations and guardrails are catching up to the speed at which these AI models are being developed.

Dr Giles Cuthbert:

I'd start answering that by saying, similar to Conor, that things are not moving as quickly as people think. The latest version of ChatGPT had less accuracy than the previous one, because what's been happening is that as AI has been generating more and more dubious answers, those have been going back out onto the web; the latest scrape happens, and a lot of those inaccuracies are coming through and being amplified as they're picked up. There are some very reputable sites where you can no longer find the accurate date of birth of, for example, an artist, because it's been got wrong on the internet so many times. Likewise, the recent Apple research found that large language models collapse when faced with relatively basic logic puzzles. So starting from the point that I think things aren't necessarily moving that quickly, I think it becomes very hard to then say, okay, we will give moral status to something which is essentially a very fast calculating machine. I mean, much as I love her, I don't give my dog independent moral status. If my dog bites someone, they'll sue me, not my dog. And she can do a lot of amazing things, so I'm very unconvinced why a calculating machine should have moral status when my dog doesn't. So I am a skeptic at this point, crucially because I think we have to recognize there is something special about being human. You know, we live in a socially constructed world where, through our experiences, through our emotions, through our relationships, we have learned to make what are sometimes actually very complex emotional judgments. And we know in a business context there are going to be emotional impacts, serious financial impacts, for people around some of those very hard decisions we have to take. So the notion that that gets reduced down to a simple calculation, which is inevitably what AI would have to do, I find concerning.

Tom Parker:

Conor, you mentioned it earlier and it kind of flicked something in my mind. When you said that some of these bots would be seen as employees, does that mean they'll be taxed like employees, seeing as we are talking about accountancy here? What's the future looking like there?

Conor Flanagan:

I think if that means I could be taxed less, I'm all for it.

Tom Parker:

Brilliant. Okay. Uh, I was expecting a slightly longer answer than that, but no, that's it.

Professor Susan Smith:

We'd need to put them on the payroll first.

Alistair Brisbourne:

Yes. Okay. I mean, this is one of the challenges with agents, though: they do compete for the payroll budget, because they cost a heck of a lot more than the traditional IT budget would cover. And so there is that potential that we do start treating them like part of the workforce, just by virtue of the fact that that's where the budget's coming from.

Tom Parker:

Well, look, I wanna go into final thoughts, because we've covered a lot here, and let's maybe get a quick 30 seconds from each of you. What are your final thoughts when it comes to this sort of big ethical question around AI? Where are we now, and perhaps where are we gonna be in, you know, the next couple of years? Are you excited about where we are with AI, and happy with where we are from the ethical point of view? Uh, let's start with you, Alistair.

Alistair Brisbourne:

Definitely very excited about AI. I think it offers tremendous potential, but I think everything that's been discussed today highlights that there are some very real hurdles and constraints that we need to think about, and the complexity here is that they don't all arise from one source, right? Some of it is because of the technical construction of the models we're talking about, the sort of black box nature: we can't explain it, we can't reproduce outputs, so we can't kind of justify or explain outcomes. Some of it's down to the data used to train them, or how we're using output data. Some of it's due to our individual misuse of models. So we need to understand that whole ecosystem. But we do also need to just fall back on existing things like our ethical principles, because they are still well and truly fit for purpose; we just need to think about how they apply in this new context, where how we're doing things has changed or is changing.

Tom Parker:

Susan?

Professor Susan Smith:

The fundamental principles hold true. We need to be very mindful of how we're applying them, but they will hold true over time. We need to understand the tools we're using; we can't just outsource responsibility. We need to learn as we go along and make sure that we are transparent about how we're using data, that we have the learning and knowledge to use the tools that we're putting data into, and that we're aware of some of their shortfalls. It's always tempting to think, well, it's come out of some sort of technological tool, it must be right. But as we know, that's not always the case, and we really need to fall back on our core skills of professional skepticism and judgment, and use those in a slightly new way.

Tom Parker:

Giles?

Dr Giles Cuthbert:

I think what I'd add is, despite some of what I may have said, I am very excited about the prospect. But we just need to pace ourselves, as we would with any new experience. We need to go into it with wisdom and caution and care. And, as I say, for me, at the heart of it is always remembering that no matter how big this technological divide becomes between us and our clients and customers, there are humans on the other end of that line, and we need to stay acutely aware of that client relationship at all times.

Tom Parker:

And finally, Conor.

Conor Flanagan:

As I said, it hasn't come as fast as we think, but I think it is on the verge of taking off. I think a lot of clients over the last six, 10, 12 months have been taking time to sort out their data, their security and all of that stuff, and once that foundation is laid, I think it's the perfect base for AI to push on and automate a lot of those tasks. And as the saying goes, an accountant that knows AI and can embrace AI will always be better than an accountant that does not know AI and is not embracing AI. So I think we have to see it as another tool in our toolbox. We have to embrace it, and then, by knowing it, by knowing what its dangers are, we can put the guardrails around it. Ignoring it isn't gonna make the change go away; we'll be caught on the wrong side of history. So I think it's very important that we do embrace the change and see its benefits, but see its benefits with a critical eye as well.

Tom Parker:

Well, what a thought-provoking conversation we've all had today. This brings us to the end of the episode. I wish we had more time, but I want to thank all of our guests. Professor Susan Smith.

Professor Susan Smith:

Thank you.

Tom Parker:

Uh, Alistair Brisbourne.

Alistair Brisbourne:

Thank you.

Tom Parker:

Conor Flanagan.

Conor Flanagan:

Thank you very much.

Tom Parker:

And Dr. Giles Cuthbert.

Dr Giles Cuthbert:

Thank you. It's been a pleasure.

Tom Parker:

In the second part of this special, we'll look at the cultural implications of AI and how we maintain a strong ethical company culture as AI use increases. The CCAB has also released a joint statement on AI, which outlines how to tackle some of these considerations in more detail. You'll find a link to that and other resources in the show notes. Thanks everyone for listening. Goodbye for now.