Is the robot apocalypse a myth?


A robot apocalypse or uprising is a scenario in which artificial intelligence (AI) or robots become self-sufficient and effectively take control of the earth. This may sound like a science fiction movie; however, considering the rapid technological advancement in AI, it raises serious questions. Elon Musk, CEO of Tesla and SpaceX, once said, "AI could pose existential threat to humankind if not regulated properly." According to Google co-founder Larry Page, "AI could bring economic benefits. When we have computers that can do more and more jobs, it's going to change how we think about work." These are two contrasting views. Will AI be good or bad? I believe that we, the people, institutions, governments, data scientists and experts in our fields of study, hold the answers, since AI is created by humans.

Why now? According to Elon Musk, this is due to the "exponential increase in hardware capabilities and software talent that is going to AI". To add to this point, the boom in smart devices, IoT, social media and big data contributes to the equation. The convergence of digital technologies makes AI more efficient and effective. Autonomous vehicles, or self-driving cars, for example, use a host of sensors, IoT, cameras, connectivity, cloud computing, data, autocruise functionality and more to enable AI.

Will AI destroy white-collar jobs, or enhance the productivity of workers without replacing them? Will it create economic benefits for all, or will it widen the inequality gap? Will it create new jobs and take away the old ones? For some people, AI is just another buzzword or technology innovation; others see it as a disruptive technology. There's a lot of excitement and uncertainty around AI. With uncertainty come fear and resistance to change. Either way, change is inevitable.

What is AI and does it matter?

Artificial Intelligence (AI)

AI has been around for decades; the term was coined in the summer of 1956 at the Dartmouth conference of scientists and mathematicians. AI is lines of code, a computer program trained to mimic cognitive capabilities that humans use, for instance thinking, learning and responding. Think of it as supercomputer hardware and software, or a program, that relies on data (terabytes of pictures, for example) and algorithms to analyse, identify patterns, reason and make predictions better and faster than humans. AI is now part of our day-to-day life. To name a few examples of AI applications:

  • Alexa, the voice-controlled virtual assistant developed by Amazon, first used in the Amazon Echo and now living in the Amazon Echo Dot smart speakers.

  • Siri, the speech-recognition application or virtual assistant that is part of Apple's iPhone, iPad, etc. The assistant uses voice queries and a natural-language user interface to answer questions, make recommendations, and perform actions by delegating requests to a set of internet services. All this processing happens in the cloud, in real time. It's amazing how fast technology is advancing.

  • Google's search engine: if you want to talk to the nameless search-bot, you just say, "Hey Google". Google Translate is another AI-trained algorithm.

  • Microsoft is using its digital assistant, Cortana, to allow users to speak in any of twenty languages and have the results appear as text in up to sixty different languages.

No matter how good AI is at recognizing known patterns and at searching for and retrieving information, it is not yet good at logical reasoning, creativity, identifying new patterns, or social and emotional sensing. Within AI, there's a smaller category called machine learning.

Machine learning

Machine learning refers to algorithms that enable computer software to improve its performance over time as it consumes more data. A good example of this in application is Netflix. According to the New York Times, "Netflix is commissioning original content because it knows what people want before they do." Netflix uses machine-learning algorithms, big data, the cloud, micro-services, connectivity and data insights to understand viewers' preferences and then recommend specific content based on those preferences. In most cases, and based on my own experience as a Netflix subscriber, these recommendations are accurate. Netflix also produces its own digital content based on data insights. This makes the overall Netflix on-demand TV experience personal and exciting; I like it.
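To make the idea concrete, here is a minimal sketch of the principle behind such recommendations: find the viewer most similar to you and suggest what they liked that you haven't seen. All the names and ratings below are made up for illustration, and Netflix's real system is of course vastly more sophisticated.

```python
import math

# Hypothetical viewing data: each user's ratings (1-5) for a few titles.
ratings = {
    "alice": {"Drama A": 5, "Sci-Fi B": 1, "Comedy C": 4},
    "bob":   {"Drama A": 4, "Sci-Fi B": 2, "Comedy C": 5, "Drama D": 5},
    "carol": {"Sci-Fi B": 5, "Drama A": 1},
}

def similarity(u, v):
    """Cosine similarity over the titles both users have rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[t] * v[t] for t in shared)
    norm_u = math.sqrt(sum(u[t] ** 2 for t in shared))
    norm_v = math.sqrt(sum(v[t] ** 2 for t in shared))
    return dot / (norm_u * norm_v)

def recommend(user, ratings):
    """Suggest titles the most similar other user rated highly."""
    others = [name for name in ratings if name != user]
    best = max(others, key=lambda name: similarity(ratings[user], ratings[name]))
    seen = set(ratings[user])
    return [t for t, score in ratings[best].items()
            if t not in seen and score >= 4]

print(recommend("alice", ratings))  # → ['Drama D']
```

The more ratings the system collects, the sharper the similarity estimates become, which is exactly the "improves as it consumes more data" property described above.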

More effective machine learning tools require better computational power and lots of data. Within machine learning, there’s a smaller subcategory called deep learning.

Deep learning

Deep learning is a subset of machine learning: an algorithm that learns from data on its own and gets better over time. It's inspired by how the human brain functions. Almost every deep-learning product uses supervised learning, meaning it is trained with labelled data. Deep-learning-powered applications include speech recognition, translation services, etc. Can deep learning trigger the singularity? The singularity is the hypothesized moment when super-intelligent machines (AI) start improving themselves without human involvement, triggering a runaway cycle that leaves disadvantaged humans ever further in the dust, with terrifying consequences.
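The "trained with labelled data" idea can be sketched with the smallest possible building block: a single artificial neuron (a perceptron) learning the logical AND function from labelled examples. Real deep learning stacks millions of such units into many layers, but the principle is the same: nudge the weights whenever the prediction disagrees with the label.

```python
# Labelled training data: (inputs, correct answer) for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # one weight per input
b = 0.0          # bias term
lr = 0.1         # learning rate

def predict(x):
    """Fire (1) if the weighted sum of inputs crosses the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):
    for x, label in data:
        error = label - predict(x)   # compare prediction to the label
        w[0] += lr * error * x[0]    # nudge weights toward the answer
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

After a handful of passes over the labelled data, the neuron answers every example correctly, which is supervised learning in miniature.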

NVIDIA’s DGX-1 was dubbed the world’s first deep-learning supercomputer in 2016, an exponential improvement in processing power. Using deep-learning techniques, Google DeepMind created AlphaGo, the system that defeated the world champion of Go, the ancient Chinese board game; this is considered one of the greatest AI achievements. So how did this happen? AlphaGo learned how to play Go essentially through self-play, self-training and observing professional games. This is referred to as unsupervised learning, where computers teach themselves and make sense of the world almost entirely on their own, like a child. Kids at a young age learn mostly through observation and practicing what they observe or are taught. I believe this is the next evolution of AI, and it is already here.

These are high-end AI tools. The entry-level AI is Robotic Process Automation (RPA). RPA programs are software robots, also known as synthetic workers: they mimic exactly what a human does, and they are good at automating the repetitive, manual processes and tasks found in white-collar jobs.
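To illustrate the idea (not any particular RPA product such as UiPath or Blue Prism, which drive real enterprise applications and GUIs), here is a toy sketch of a script doing a repetitive clerical task a person would otherwise repeat by hand: filling a standard letter template from a list of records. All names and figures below are made up.

```python
# A standard letter a clerk would otherwise retype for every customer.
template = (
    "Dear {name},\n"
    "Your invoice {invoice} for {amount} is due on {due}.\n"
)

# Hypothetical records, e.g. exported from an accounting system.
records = [
    {"name": "A. Smith", "invoice": "INV-001",
     "amount": "$120.00", "due": "2020-06-01"},
    {"name": "B. Jones", "invoice": "INV-002",
     "amount": "$75.50", "due": "2020-06-15"},
]

# The "software robot": apply the same steps to every record.
letters = [template.format(**record) for record in records]

print(len(letters), "letters generated")
print(letters[0])
```

The value is not sophistication but tirelessness: the same steps, applied identically to record one and record ten thousand, which is exactly the kind of task RPA targets.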

What’s next?

The future of automation

Will AI destroy jobs and widen the inequality gap, or will it create economic benefits and enhance the productivity of workers without replacing them? Let's keep in mind that AI is not yet good at identifying new patterns or even predicting the unknown (the future), and it may take decades to get there. The fact remains that we cannot predict the future; however, I think we can be better prepared for it.

In his book The Globotics Upheaval: Globalization, Robotics and the Future of Work, Richard Baldwin (2019) says that AI (machine learning) is affecting the world of work in radically new ways. This new form of computer cognition is changing realities. It is creating new forms of automation that will replace millions of humans whose jobs were, until the twenty-first century, sheltered by the fact that computers couldn't handle such tasks; now they can. He argues that "white-collar robots are not really taking whole occupations; they are taking some of the activities that make up part of many occupations." Robots can take over some of your tasks, but not all. This means that you'll be more productive (which may mean fewer people like you are needed for the job), but robots won't eliminate your occupation. He continues: "after all, most occupations involve at least some tasks that require a real person. Yet white-collar robots will reduce the headcount. It is just a matter of arithmetic." He provides a good example: "Tractors automated some farm chores, but they did not eliminate farming as an occupation."

According to Brigette Tasha Hyacinth (2017), in her book The Future of Leadership: Rise of Automation, Robotics and Artificial Intelligence, the relationship between new technologies and jobs is complex. New technologies enable better-quality products and services at more affordable prices, but they also increase efficiency, which can lead to a reduction in jobs. While repetitive jobs are always thought to be first on the chopping block, automation will impact many tasks of white-collar jobs as well. The challenge is that people may not be able or willing to adapt and learn so they can move on to new jobs while their current positions are taken away by this technology, which could eventually leave many workers displaced. All of us have some tendency to become comfortable and complacent with things simply because they are familiar to us. When a new way of doing something is thrust upon us, our natural instinct is often to resist for no other reason than that it is different. By accepting change, you put yourself on more solid footing in dealing with the unexpected. Hyacinth suggests that we "must adapt, simply adapt. Think of adaptability as a habit, something to practice."

In closing, history has shown that new technologies and automation take away some jobs but also create new opportunities. Are we ready to learn new skills and take advantage of these new opportunities? If we are unable to embrace change and to adopt and adapt to emerging digital technologies, we will be left behind in this technological transformation, and it will be harder for us to deal with the implications of AI. This transformation, the fourth industrial revolution, should benefit all of us, not just a few. It is up to us to shape the future.

#RoboticsandAutomation

© 2017 - 2020 Buhlebenkosi Consulting

All rights reserved 
