
AI and Responsibility

Everyone’s talking about AI these days. Between the legal battles over the monetisation of data to train machine-learning models, the frequent and long-lasting strikes by workers around the globe to protect their jobs, and their likenesses, from AI, the calls for AI-specific legislation, and the use of AI in military applications, there has never been a greater need to dive deeper into the question of AI and the social responsibility that comes with it.

 The concept of Intelligence Artificielle Oblige, if you’ll pardon my French. 

The Replaceable Human 

AI, in its current form, is narrow. It can only perform specialised tasks, albeit at superhuman efficiencies the average human simply cannot compete with. The range of tasks AI can handle has, however, expanded rapidly over the last five years, to the point that Google’s LaMDA has reportedly produced responses indistinguishable from a human’s (the bar set by the Turing Test) in generative marketing, spurring discussions of a new classification framework for AI intelligence. While experts say a truly functional general AI is decades away, it makes one think: how long until what I bring to the table gets replicated as well? How long until I become replaceable?

One of the biggest fears for the working class today is the machine replacing the common man. Yesterday, the assembly line; today, the writer, actor and coder; tomorrow? The doctor, the lawyer, the pilot? What next, the politician? Will we let an AI embezzle our money in place of a human?

That was a joke; politicians would filibuster before allowing that to happen. But quite frankly, the issue will come to a head soon enough, and we as a society will need to figure out the boundaries of AI and the direction we want to move in, and fast.

Making the Right Call on AI Use 

The financial incentive of replacing humans with AI is attractive at first glance. Complete automation can be, and has been, the correct solution for some things: intense computation and calculation, the storage and retrieval of data, and unskilled physical labour, busy work in general. However, using AI often comes with a cost, whether to social responsibility (laying off workers), quality of service (the limitations of AI) or intangibles (an impersonal relationship with customers).

In the scenario of machine-operated call centres or robo-calling, the experience on the customer’s end tends to be one of frustration. Between going in circles, a limited ability to understand complex situations, dropped calls, incorrect inputs and, most importantly, never speaking to an actual human, the overall experience tends to be negative. AI, at least at its current level, cannot replace humans where human interaction is required.

A more acceptable solution is to use generative AI as a complement to call agents instead: the model drafts a list of suggested responses, so less skilled agents receive on-the-job training while more skilled agents see a boost in productivity. Companies keep their improved efficiencies and their personal touch, workers keep their jobs, and customers are happy to speak to a real person. Win-win-win.
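To make the idea concrete, here is a minimal Python sketch of that agent-assist pattern. It is purely illustrative: the function names are hypothetical, and suggest_responses stands in for whatever generative model a company might actually call. The point is that the model only proposes replies, while the human agent always makes the final choice.

from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str          # candidate reply drafted by the model
    confidence: float  # the model's own estimate, shown to the agent as a hint

def suggest_responses(transcript: list[str], n: int = 3) -> list[Suggestion]:
    """Stand-in for a call to a generative model (e.g. a chat-completion API).
    A real deployment would send the conversation so far to the model and
    parse its candidate replies; here we return canned examples."""
    return [
        Suggestion("I understand, let me look into that order for you right away.", 0.91),
        Suggestion("Could you confirm the email address on the account?", 0.74),
        Suggestion("I can escalate this to our billing team if you prefer.", 0.62),
    ][:n]

def handle_turn(transcript: list[str]) -> str:
    """Show the suggestions to the agent; the agent picks one, edits it, or ignores them."""
    suggestions = suggest_responses(transcript)
    for i, s in enumerate(suggestions, 1):
        print(f"[{i}] ({s.confidence:.0%}) {s.text}")
    choice = input("Pick a number, or type your own reply: ").strip()
    if choice.isdigit() and 1 <= int(choice) <= len(suggestions):
        return suggestions[int(choice) - 1].text
    return choice  # the agent's own words always take priority

if __name__ == "__main__":
    reply = handle_turn(["Customer: My invoice was charged twice this month."])
    print("Agent reply:", reply)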

This is the direction I believe the utilisation of AI should head in. It’s important for businesses to keep up with the latest technology, but it’s equally important for them to figure out exactly how to adapt it into their operations to bring about the right kind of change. New technology should spur leadership to evaluate, and think clearly about, the optimal balance between profitability and social responsibility, rather than rushing into adoption.

Think outside the box: try to find the best way to introduce innovation and make use of your workers at the same time. At each stage, weigh the suitability of the new technology against that of a human; what you come up with may well trump a broad sweep of replacement, for all parties involved.

Optimising Employability for AI

Since we have an inkling of how AI may develop, it makes sense to diversify and upskill workers in tasks that AI cannot replicate in the near future. AI still struggles to make the right decisions in complex situations, to create elaborate strategies or solutions without a wealth of data or a reference template to draw on, and to ideate on a new frontier. It still lacks the human touch and empathy; everything it does is a mimicry of the real thing. These are the areas we should focus on when preparing our workers for the inevitable adoption of AI into our day-to-day.

As an afterthought, perhaps AI itself should be given a new direction. Rather than developing AI to replicate human resources, its development and design could pivot fully to augmenting existing human capabilities instead; it would certainly save us a ton of headaches when it comes to the social, moral and ethical responsibility of things.

 But that’s just wishful thinking, I suppose. 
