Karl Escritt, CEO of Like Digital & Partners, explores the economic, ethical, and creative implications of AI for your business.
Once the realm of sci-fi films and nihilistic novels, AI (artificial intelligence) has firmly entered our everyday vernacular over the past few years. Whether it’s watercooler debates about ChatGPT coming for your copywriting job, or the very real fear generated by Diary of a CEO’s interview with Mo Gawdat, former Chief Business Officer at Google – who claims that AI poses a bigger threat to the world than climate change – there’s no escaping the prevalence of AI.
While AI is not a new concept (the term ‘artificial intelligence’ first came into common usage in the 1950s), it has recently experienced a meteoric rise in terms of both functionality and accessibility. Once the domain of an exclusive club of scientists, mathematicians, and philosophers, AI is now open to anyone with access to the internet, bringing myriad possibilities and pitfalls to our fingertips.
And it shows no signs of slowing down… According to a recent article in the Harvard Gazette, global business spend on AI services will top US$50 billion this year and is expected to reach US$110 billion in 2024.
Benefits of AI for businesses
When used correctly, there’s no denying that AI can bring substantial benefits to businesses. Key advantages include increased efficiency and streamlining of processes; optimization of logistics, workflow and resources; enhanced productivity; better decision making and data analysis; and error avoidance, as you remove the human element – all of which can lead to improved customer satisfaction and measurable cost savings.
According to the PwC 2022 AI Business Survey, 86% of CEOs view AI as an essential aspect of their operations, as leaders leverage “data, cloud and analytics for a bigger payoff.” Of the 1,000 survey respondents, the highest success lies with those businesses that are taking a holistic approach to AI integration. Rather than focusing on one goal at a time, these AI leaders are implementing core competencies in three areas simultaneously: business transformation, enhanced decision-making, and modernised systems and processes.
The report notes that these ‘AI leaders’ are twice as likely to report “substantial value from AI initiatives to improve productivity, decision-making, customer experience, product and service innovation and employee experience” as those businesses that are taking a piecemeal approach.
In a recent article in the Harvard Business Review, Tsedal Neeley noted that AI technology is not simply a tool for making us work faster – it is a set of systems for collaboration that can identify patterns the human eye cannot see, providing data-based insights, analysis, predictions and suggestions. If you start to view AI as a way to improve business from a bird’s eye view, you can begin to harness its full potential.
The ethics and accountability of AI
For businesses today, it’s not a matter of if they will adopt AI within their workplace, but rather, when. OpenAI, Microsoft, and Nvidia have brought AI within reach of everyone from students writing their term papers to top-tier organisations, but with this come the entirely uncharted waters of ethics and accountability – and the question of whether the robots really are coming for our jobs. As AI technology advances at a pace that laws and legislators cannot keep up with, the responsibility for ethics and accountability currently lies with each individual user.
As AI can process data at a rate far faster than anything we’ve seen before, transparency becomes vital in terms of how the data is sourced, analysed, and implemented. Neeley cites large language models (LLMs) such as OpenAI’s ChatGPT or Microsoft’s Bing, which are “trained on massive datasets of books, webpages, and documents scraped from across the internet — OpenAI’s LLM was trained using 175 billion parameters”. This opens AI up to unintentional bias, as it is trained on datasets that may not be representative of the global population. To avoid this bias being embedded into AI systems, developers are being called on to include more diversity in their datasets and in the make-up of their teams. Only then can we ensure AI is truly for all.
Many AI algorithms are self-learning, constantly evolving and refining their output, which, when left unchecked, could lead to the perpetuation of harmful bias, the spread of misinformation, privacy violations, security breaches, and even harm to the environment, Neeley notes. Before sharing sensitive information with an AI programme, business owners should ensure privacy and end-to-end security are embedded into the design of the programme. For those processing sensitive details, such as employee information, businesses should consider hiring a privacy expert for their team.
The human element
Much of the general discussion around AI is whether these systems will replace humans in the workforce. While there are some areas where AI far exceeds human capabilities, particularly in medical and tactical military fields where its precision and insight are light years ahead, in many instances AI output is missing that much-desired human element. Whether it’s emotion and empathy in copywriting, originality in design, or personalised risk assessment in finance, AI can’t replace the individual nuances of the human workforce… yet.
What comes next?
In terms of future-proofing the world from the potential pitfalls of AI, global players are recognising the need for a united front. The European Union is in the process of drafting its Artificial Intelligence Act, which proposes three tiers of risk categories: unacceptable risk (such as the social scoring employed by the Chinese government – and famously parodied in Black Mirror); high-risk (such as CV-scanning tools, which should be subject to specific legal requirements); and those that are neither banned nor high-risk, which could be left unregulated.
Globally, a collection of non-profits and research institutes such as the Partnership on Artificial Intelligence, the Institute for Human-Centered Artificial Intelligence, and the Responsible AI Initiative are establishing their own ethical standards, guiding companies in the use of AI to protect consumers and employees.
And while Elon Musk previously called for a pause in the creation of AI digital minds, along with Steve Wozniak, the co-founder of Apple, and Emad Mostaque, who founded London-based Stability AI, he now feels that ship has sailed. A co-founder of OpenAI, Musk believes the way to avoid what he describes as a “Terminator future” is to create an AI programme that is at least as smart as humans. In announcing his superintelligence venture xAI on Twitter in July, he said: “From an AI safety standpoint … a maximally curious AI, one that is trying to understand the universe, is I think going to be pro-humanity.”
The only thing that’s certain is that AI is here to stay – now we humans just need to stay in control.