A speaker at Mobile World Congress (MWC) 2017 predicted that there would be more robots than humans by 2027. It was clear from 2018’s event that AI adoption is rapidly gaining pace, with the technology more evident than ever before. The NTT DOCOMO stand drew huge crowds with its fascinating demonstration of live Japanese calligraphy performed by a 5G-enabled robot, and the company used the opportunity to announce plans to launch the technology commercially by 2020.
Arguably, one of the most pioneering AI developments in recent years is the growth of self-driving, or autonomous, vehicles, which some claim could substantially reduce road accident rates. In fact, driverless cars could be on our roads sooner than we think, with the likes of DOCOMO also showcasing an autonomous vehicle equipped with 4K cameras, sensors and digital signage. In January, Uber and Volkswagen announced a new partnership with Nvidia to make further gains in the autonomous vehicle industry; their self-driving vehicles will use Nvidia technology to make split-second driving decisions and could be on the market as early as 2020. Supporting this, in last year’s autumn budget the Chancellor, Philip Hammond, revealed plans to get self-driving cars onto UK roads by the start of the new decade.
Not everyone is so optimistic.
Just this month, Elon Musk described AI as ‘more dangerous than nuclear weapons’ and called for a regulatory body to oversee this new super intelligence. Motoring mogul Jeremy Clarkson also warned of the dangers, saying he could have been killed when carrying out a 50-mile test drive of an autonomous car on the M4. He revealed that the vehicle made two serious mistakes, arguing that the technology was ‘miles away’ from being safe enough for the UK’s roads.
These are just some of the examples raising serious questions about the ability of AI to replace all human decision-making and whether humans will still need to set the rules and priorities.
The Driverless Trolley Problem dilemma – deontological versus utilitarian morality
However, AI could play an important role in situations where humans face a moral dilemma. A classic example is the trolley problem, a thought experiment introduced in the 1960s. It examines how the human mind is conflicted between two moral perspectives: the deontological approach (where morality is based on a set of rules) and the utilitarian approach (where an action is assessed on its net benefits and costs). Applied to a car crash scenario, the two approaches diverge: a deontological rule such as ‘never deliberately sacrifice a passenger’ would forbid killing one person even if five others would die as a result, whereas a utilitarian calculation would accept killing one passenger if it meant five others could be saved.
This podcast by Radiolab offers a fascinating insight into the Driverless Trolley Problem dilemma and the concerns over AI’s impact on human behaviour.
Proponents of AI say the technology could resolve this dilemma by adopting the utilitarian approach so that all decision-making is based on minimising loss to life in the event of a crash. However, this would depend on what the rules and priorities are and who will set them – humans or robots? For example, if AI-led cars were instructed to protect passengers at all costs, would they save one passenger in the vehicle even if it meant killing five pedestrians? Or if cars were programmed to save the maximum number of lives, would someone be comfortable buying an automated car knowing that, in a split-second decision, it’s more likely to sacrifice them as the single passenger?
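To make the contrast concrete, here is a deliberately simplified sketch of the two decision policies described above. It is illustrative only: the function names, scenario and numbers are invented for this post, not taken from any real autonomous-vehicle system.

```python
# Toy sketch (illustrative only): two hypothetical decision policies for the
# driverless trolley problem. All names and numbers are made up.

def utilitarian_choice(passengers_at_risk, pedestrians_at_risk):
    """Minimise total loss of life, regardless of who is inside the car."""
    if passengers_at_risk <= pedestrians_at_risk:
        return "sacrifice_passengers"
    return "sacrifice_pedestrians"

def passenger_first_choice(passengers_at_risk, pedestrians_at_risk):
    """Rule-based policy: protect the car's occupants at all costs."""
    return "sacrifice_pedestrians"

# One passenger versus five pedestrians: the two policies disagree.
print(utilitarian_choice(1, 5))      # sacrifice_passengers
print(passenger_first_choice(1, 5))  # sacrifice_pedestrians
```

The point of the sketch is that the ‘ethics’ of the car reduce to whichever rule its makers encode, which is exactly why the question of who sets the rules matters.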
How could AI adopt the utilitarian approach in other areas?
In our June 2017 blog, we looked at whether AI could end the workplace diversity struggle by reducing employers’ unconscious bias in the candidate selection process. Already, employers are using AI to encourage more utilitarian hiring practices. For example, Unilever uses algorithmic assessments to compare a candidate’s performance to a predetermined profile. Why? Because if HR staff asked the questions, they could (unconsciously) attach their own cultural or personal values to the candidate selection process, introducing bias.
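A minimal sketch of the idea of scoring every candidate against the same predetermined profile might look like the following. This is an assumed illustration using cosine similarity, not a description of Unilever’s actual system; the traits, scores and candidate names are invented.

```python
# Toy sketch (assumed, not a real hiring system): rank candidates by how
# closely their scores match a predetermined "ideal" profile, so everyone
# is measured against the same yardstick rather than an interviewer's hunch.

from math import sqrt

def similarity(candidate, profile):
    """Cosine similarity between a candidate's scores and the target profile."""
    dot = sum(c * p for c, p in zip(candidate, profile))
    norm = sqrt(sum(c * c for c in candidate)) * sqrt(sum(p * p for p in profile))
    return dot / norm

ideal = [0.9, 0.7, 0.8]  # e.g. problem-solving, teamwork, communication
candidates = {"A": [0.8, 0.6, 0.9], "B": [0.3, 0.9, 0.2]}

ranked = sorted(candidates,
                key=lambda name: similarity(candidates[name], ideal),
                reverse=True)
print(ranked)  # ['A', 'B'] - candidate closest to the profile comes first
```

Of course, such a system only removes bias from the comparison step; any bias baked into the ‘ideal’ profile itself would simply be applied consistently.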
Could this apply to customer experience marketing, too?
The Content Marketing Institute says AI could help marketers create higher-quality content that can be mapped to behaviour or intent. For example, biometric sensors could measure both the physical and behavioural features of a person, enabling brands to target their marketing more precisely. Alexavier Guzman, a senior full-stack developer at Forbes, referred to a sensor-equipped LED screen at a bus stop that could detect what waiting passengers are wearing and holding, then create tailored AI-generated content in response. So, the screen might advertise sports clothing to someone wearing sportswear, or a coffee shop to someone holding a coffee cup. Yes, this might seem like an intrusion or invasion of privacy, but don’t forget that Google has been tracking your online behaviour for years to create targeted ads.
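In its simplest form, the bus-stop example is just a mapping from detected attributes to adverts. The sketch below is hypothetical (no real product or API is being described); the triggers and ad copy are invented to show the idea.

```python
# Toy sketch (hypothetical): a rule-based version of the sensor-equipped
# bus-stop screen, mapping detected attributes to tailored adverts.

AD_RULES = {
    "sportswear": "New running shoes - 20% off today",
    "coffee_cup": "Fresh roast around the corner",
}
DEFAULT_AD = "Visit our city centre stores"

def choose_ad(detected_attributes):
    """Return the first ad whose trigger matches a detected attribute."""
    for trigger, ad in AD_RULES.items():
        if trigger in detected_attributes:
            return ad
    return DEFAULT_AD

print(choose_ad({"sportswear", "headphones"}))  # running-shoes ad
print(choose_ad({"umbrella"}))                  # default ad
```

A production system would of course replace the hand-written rules with a learned model, but the privacy question raised above applies either way.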
[Chart: fastest-growing marketing technologies currently in use. Source: Salesforce, State of Marketing, 2017]
While AI certainly brings a whole new meaning and potential to marketing automation, it’s not yet clear whether it will ever completely replace human emotive bias. When we recently developed a thought leadership and go-to-market proposition with professional services company PA Consulting, we quickly realised that technology and hyper-connectivity have made customers more empowered than ever (the Customer 4.0 revolution). But much like the ‘butterfly effect’, Customer 4.0 demonstrates that one unexpected interaction, or a negative review from a total stranger in our network of influencers, can trigger an entirely different ‘customer journey’ from the one AI had so carefully predicted.
So, while there’s no doubt that AI continues to unleash an explosion of new ways to engage and deliver personalised customer experiences, as it tracks and learns from our every movement, it remains to be seen whether AI will eventually substitute for those natural, but completely unpredictable, human behaviours.
Something to carefully consider when you’re trying to develop your next ‘programmatic’ customer engagement campaign perhaps?
JPC are a strategic communications agency who ‘Make the complex, compellingly simple.’ We specialise in Building Brands, Winning Bids, ABM, Sales Enablement and Creating Experiences. Please contact Claire Carsberg for more information; email@example.com