In November 2018, the Joint Research Centre, the European Commission’s science and knowledge service, released a report called “The Impact of Artificial Intelligence on Learning, Teaching, and Education”. The report is aimed at policymakers and contains important information for anyone studying AI’s potential influence on the future intersection of society, education, and the workplace.
The obvious elephant in the room for education leaders is the demand that the modern educational system cultivate competencies that allow people to be contributing members of the economic sphere, in the face of the prospective automation of lower-level job roles.
Most of us have heard high-level speculation about “robots replacing the workforce” and/or watched TED talks such as those in the TED Playlist on Artificial Intelligence. This report, however, takes the hard questions, such as “Which occupations or skills will become obsolete, and what will be valuable skills in a world where AI is widely used?”, and places them in a context that suggests how they could actually be addressed, in a way I have not encountered before.
According to Tuomi (2018), “AI can enable new ways of learning, teaching, and education, and it may also change the society in ways that pose new challenges for educational institutions. It may amplify skill differences and polarize jobs, or it may equalize opportunities for learning. The use of AI in education may generate insights on how learning happens, and it can change the way learning is assessed” (p. 5).
The author dedicates the bulk of the report to an explanation of the history, types, and impact of technological enablement of AI/ML as related to models of learning, psychology, and society.
For me, a real eye-opener came midway through the paper. Recent studies, notably Frey and Osborne (2013), have taken a task-biased rather than a skill-biased approach to assessing possible automation. The distinction is subtle (I had to reread it several times), but once it clicked, it changed how I think about the problem. Job-skill automation is about much more than jobs lost to ‘robots’; it may require fully rethinking the description of a role in any occupation and its associated competencies.
According to Tuomi (2018), “In skill-biased models, jobs that do not require educated, experienced, and skilled workers are susceptible to automation. In such models, computers are expected to be used mainly for tasks requiring limited skill. It becomes natural to assume that to avoid unemployment people need more and higher-level education” (p. 23).
The above is the framing I have seen popularized in the media. A task-biased analysis (Frey and Osborne, 2013), however, is rarely talked about, and it reveals something deeper: a reassessment of occupations in which many daily tasks may be susceptible to automation, even if the job role as a whole requires higher cognitive activity. Check out Table 1 on page 23 about middle school teacher task automation!
This is the most grounded explanation of a position I had already fully bought into: we cannot effectively focus just on work-related skills; instead, we need to build competency-acquisition programs that enable lifelong learning.
I suggest you read the more detailed argument presented in the report, including some notes on how the scarcity of AI/ML talent limits progress, the importance of “no AI without UI”, and the last 10 pages on the impact of AI, which I am not walking you through in this post.
In general, I am fascinated, because only a few people are on the leading edge of getting far enough ahead of this to minimize the level of disruption that occurred during industrialization. There are many nuances to the possible impact, from both expansion and contraction points of view. Before reading this, I had not considered all the ethical implications.
“In supervised AI learning models, the possible choice outcomes need to be provided to the system before it starts to learn. This means that the world becomes described in closed terms, based on predefined interests and categories” (Tuomi, 2018, p. 36).
Tuomi (2018) further states:
“When AI systems predict our acts using historical data averaged over a large number of other persons, AI systems cannot understand people who make true choices or who break out from historical patterns of behavior. AI can therefore also limit the domain where humans can express their agency” (p. 36).
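The “closed terms” point in the quotes above can be made concrete with a toy sketch. The data, labels, and nearest-neighbour rule below are my own hypothetical illustration, not anything from the report: a supervised model memorises labelled examples and can only ever answer with one of the labels defined before training, so a learner who does something genuinely novel is still forced into a predefined category.

```python
# Hypothetical illustration of supervised learning's closed-world outcomes.
# A 1-nearest-neighbour "model": training is just memorising labelled points.

def train_1nn(examples):
    """Store (score, label) pairs; the label set is fixed from here on."""
    return list(examples)

def predict(model, x):
    """Return the label of the closest training point -- never a new label."""
    return min(model, key=lambda pair: abs(pair[0] - x))[1]

# Outcomes are decided up front: only "pass" or "fail" exist for this system.
model = train_1nn([(35, "fail"), (48, "fail"), (72, "pass"), (90, "pass")])

print(predict(model, 68))  # closest point is 72, so forced into "pass"
print(predict(model, 40))  # closest point is 35, so forced into "fail"
```

However surprising or original a new learner’s behaviour is, the system’s answer is drawn from the predefined label set, which is exactly the limit on agency Tuomi describes.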
It almost seems imperative that we widen the net for acquiring personal data for learning and educational purposes beyond the data primarily owned by corporations. At the end of the day, I hope we find the ability to cooperate well enough to preserve learners’ freedom of agency.