AI confronts us with critical issues, and the resulting choices need to be addressed. In this article, we draw on key messages from our recent book Beyond Genuine Stupidity – Ensuring AI Serves Humanity to highlight five of the most critical issues.
Technological unemployment and new jobs
The AI technology vendors are struggling to hold a consistent line. On one hand, they are selling the return-on-investment case, which is predicated on headcount reductions. As this has become contentious, the new line is that AI will free people from routine tasks for more creative work and problem solving. Will employers follow that path? The evidence suggests most are opting for cost base reduction.
The challenge for governments is to model a range of scenarios, including extreme ones. They can then start assessing the tax implications of different unemployment levels, explore policy options, and identify the actions worth taking now because they hold value under all scenarios.
Reskilling and education
Generally, provision for retraining and lifelong learning is woeful. However, facilities exist in schools and colleges, and there is no shortage of trainers. Exponential change requires an exponential increase in retraining; the cost of inaction will be higher unemployment, rising mental health issues, and skill shortages.
For young children the bulk of the jobs they’ll do probably don’t exist yet. We need to equip them with the skills to take up new opportunities: greater emphasis on social and collaborative skills, conflict resolution, problem solving, scenario thinking, and accelerated learning.
Guaranteed basic incomes
There will inevitably be employment casualties from automation. How will people afford goods and services if they no longer have jobs? Many argue for provision of a guaranteed basic income (GBI). Countries including Canada, Finland, India and Namibia have been experimenting with different GBI models.
Governments will need to collaborate on different experiments and assess the impacts on funding costs, economic activity, the shadow economy, social wellbeing, crime, domestic violence, and mental health.
Robot Taxes and other options
As AI and other disruptive technologies are introduced, many issues will arise from the choices employers make. Will they retain the staff freed up by technology or release them? If unemployment costs rise, or GBI schemes are introduced, who will pay for them? One option is a “robot tax”, where firms pay a higher rate of tax on the profits derived from increased automation.
Opponents of GBI schemes and robot taxes have yet to offer substantive alternative policies. Two suggested options are:
- The notion of a total employment responsibility. If your prior year business turnover was one millionth of national GDP, you’d be responsible for ensuring the employment of one millionth of the workforce.
- Deferred redundancy. Workers stay on your payroll at full pay until they find another job.
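The first option is a simple proportional rule: a firm's employment obligation scales with its share of national GDP. A minimal sketch of that arithmetic, using entirely hypothetical figures, might look like this:

```python
# Illustrative sketch of the proposed "total employment responsibility"
# rule: a firm whose prior-year turnover equals a given fraction of
# national GDP would be responsible for employing the same fraction
# of the national workforce. All numbers below are hypothetical.

def employment_responsibility(turnover, national_gdp, workforce):
    """Number of workers a firm would be responsible for employing."""
    share_of_gdp = turnover / national_gdp
    return share_of_gdp * workforce

# Example: a firm with $2bn turnover in a $2tn economy whose
# workforce is 30 million people.
required = employment_responsibility(2e9, 2e12, 30_000_000)
print(required)  # one millionth-scale logic: 0.001 of 30m = 30000.0
```

The attraction of the rule is its transparency: a firm's obligation can be computed from two published figures, though in practice definitions of turnover and workforce would need careful specification.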
It is easy to oppose such ideas but large employers and governments need to think now about policy alternatives for a world possibly needing a smaller workforce.
Ethics, governance, and ownership of AI
Is AI too important to leave its evolution to the private sector? Voluntary ethical charters are starting to emerge to govern the development and application of AI and robotics.
The challenge is that AI is recognised as a critical future technology by leading industrial nations such as China, Korea, Taiwan, and the USA. It is an economic battleground – ethics may not be a prime consideration in the race for AI superpower status.
In response, there is a growing argument for state regulation and oversight of AI. This would probably require the capabilities of a regulatory AI to conduct such a governance role, as in the relatively near future the capabilities and reasoning of most AIs are likely to outstrip humans' ability to monitor them.
Given all these challenges, there is also an argument being made for governments to nationalise the ownership of AI intellectual property and then licence it back to the firms that deploy it. In this way, governments could regulate the deployment more effectively, and raise revenues to cover the expected social costs. Such moves are likely to prove hugely unpopular with some, while others will argue they are the inevitable consequence of technologies that could ultimately be beyond human oversight and control.
AI is advancing so rapidly that it has far outstripped the ability of governments, businesses, and individuals to identify the potential impacts, assess the possible implications, and test potential solutions. A genuinely stupid strategy would be hoping the problem vanishes, never arises, or is magically resolved by market forces. A better option is to start seriously assessing the most radical possible outcomes, developing policy options for the worst-case scenarios, and implementing now the actions we know will be of value whichever scenario eventually emerges.
ABOUT THE AUTHORS
Rohit Talwar, Steve Wells, Alexandra Whittington, April Koury, and Helena Calle are futurists with Fast Future – a professional foresight firm specializing in delivering keynote speeches, executive education, research, and consulting on the emerging future and the impacts of change for global clients. Fast Future publishes books from leading future thinkers around the world, exploring how developments such as AI, robotics, exponential technologies, and disruptive thinking could impact individuals, societies, businesses, and governments and create the trillion-dollar sectors of the future. Fast Future has a particular focus on ensuring these advances are harnessed to unleash individual potential and enable a very human future.