Opinion: Exploring Generative AI’s potential

In this piece, FA Editorial Board member Rocky Tung and his associate explore the potential impact of AI across education and work.

At an industry social this summer, John Paul and I discussed the prospects for an increasingly technologically driven future and the place of many of our sacred cultural and economic cows within it. Among the issues that caught our attention were the rise of tokenisation and its impact on museum-grade art pieces; the crypto winter and its impact on Hong Kong’s push for the development of a digital asset ecosystem; and how ChatGPT might impact the future of work.

At the table were influencers and professionals from a range of fields, and while we agreed to disagree on certain matters, we reached a clear consensus on technology: that emerging Artificial Intelligence (AI) is an area of unexpected, continuous, and significant development – one that has the potential to complement or replace humans in areas that define our social prerogative.

Within this space, one of the most exciting developments for researchers and students alike is the emergence of Generative AI (genAI) – among which ChatGPT and Bard are arguably its most famous (or infamous) instantiations. In a broader context, genAI refers to a type of machine learning (ML) that creates new content such as text, images, or music, based on an existing database. In the last few months alone, key opinion leaders (KOLs) and technologists have sounded the alarm on genAI’s potential impact across a broad spectrum of industries, not least the worlds of work and education.

This article serves as a non-conclusive, stock-taking exercise: it offers a bird’s eye perspective on existing considerations around genAI and its impact on some of the activities we view as distinctively human, and it addresses some of the ethical concerns that arise from the application of AI.

Use of genAI in education

Geoffrey Alphonso, CEO of Alef Education, a digital learning platform, wrote in Forbes in May that “[t]he potential applications of generative AI in the education sector are endless, with personalised learning content being one of many possibilities floating around the market.” He proposed several uses for generative AI in education, including generating questions appropriate to students’ current levels of achievement; personalised study plans based on strengths and weaknesses; and the creation of engaging and interactive learning activities to help students understand complex concepts.

One of the most significant ways in which genAI is poised to transform education is through its provision of new and innovative ways for students to learn. It can, for instance, create personalised learning experiences for different students, catering to their individual needs and learning styles. Through analysing collected data on student behaviour, performance, and preferences, genAI's algorithms can create customised learning modules and activities that are tailored to a student’s unique strengths and weaknesses. This has the potential to enhance educational achievement, create a greater sense of connectedness between students and their learning materials, and stoke a deeper passion for learning – all without the traditional kinds of investment in human capital required to elicit similar improvements in the past.

GenAI is also set to create educational resources and materials that are, at least, new in form. Through its ability to create pictures and videos, relevant algorithms can be used to generate high-quality educational videos, animations, and illustrations that are designed to explain complex concepts in an easily digestible manner. Furthermore, interactive content can be produced to make educational materials more accessible and engaging for students, regardless of their learning style or background.

Concurrently, genAI has the potential to make education more accessible for people who face challenges due to circumstance, such as those who have special educational needs (SEN). In such instances, genAI's algorithms can be used to create text-to-speech software that is more natural and expressive, making it easier for people with visual impairments or reading difficulties to access educational content. Similarly, it can be used to create sign language avatars that can help produce educational videos and other forms of content that are more accessible for people who are deaf or hard of hearing. If educational materials can be curated and adapted to a wide variety of learning levels and SENs, learners can enhance their passion for learning and their educational achievements with speed and efficiency. YouTube already provides automatically generated captions for videos, and there is existing dictation software that provides speech-to-text (and vice versa) facilities for dyslexic students. GenAI promises to enhance the pace, cost, and precision of these AI-generated educational accommodations.

Notwithstanding the many positive contributions that genAI may bring, its adoption in education also raises several ethical concerns. One of the biggest is the potential for relevant algorithms to perpetuate biases and inequalities that exist in current society. Such concerns are by no means limited to the usage of genAI, but its influence could magnify these across wider networks. For example, if the algorithms used to create educational content are based on biased data sets, this could result in discriminatory outcomes that will see prolific citation and adoption by users of the same database, perpetuating biases and knowledge based on flawed information.

Stefania Giannini, Unesco’s assistant director-general for Education, has expressed a clear need for “AI models and applications that claim to have educational utility [to] be examined” according to criteria including accuracy of content, age appropriateness, relevance of pedagogical methods, and cultural and social suitability – which encompass checks to protect against bias. She concludes that it is “rather remarkable that [genAI] has largely bypassed scrutiny of this sort to date.” In short, it is important to ensure that genAI is developed and deployed in a way that is fair and unbiased and takes into account the diverse needs and backgrounds of all students.

Some of the questions going forward ought to be: whose data sets are genAI systems being trained on, what biases do they manifest, and how can these biases be tackled, both in their own right and in their potential to shape the educational materials they generate?

Generative AI in work

In addition to its potential impact on education, genAI also has the potential to transform the space of work. One of the most significant ways in which it can do this is by automating repetitive or tedious tasks, freeing up human workers to focus on more creative and complex endeavours. For example, genAI has capacity to automate tasks such as data entry, transcription, and image tagging, which can be time-consuming and error-prone when conducted manually.

The McKinsey Global Institute observes that “[a]utomation, from industrial robots to automated document processing systems, continues to be the biggest factor in changing the demand for various occupations.” More specifically, the institute’s July 2023 report on Generative AI and the future of work in America points out that genAI is “both accelerating automation and extending it to an entirely new set of occupations” alongside other structural factors affecting labour demand.

Kevin Scott, Microsoft’s chief technology officer, has expressed optimism over genAI’s potential impact on work, arguing that it will unleash creativity, make coding more accessible, unlock faster iteration, and make work more enjoyable. In May 2023 alone, genAI-related job postings increased by 20% as companies looked to enhance their growth and productivity by leveraging these emerging technologies. The World Economic Forum (WEF) has noted that genAI has the potential to disrupt several industries, including advertising, art, design, and entertainment, writing that “while there are valid concerns about the impact of AI on the job market, there are also many potential benefits that could positively impact workers and the economy.”

Although one may expect genAI to replace only repetitious, less skill-based, and more labour-intensive manual work, it is projected to be used to improve the accuracy and efficiency of tasks that require a high degree of precision or attention to detail. For example, relevant algorithms can be used to analyse medical images or financial data, identifying patterns and anomalies that may be missed by human workers – and indeed, AI “red teams” are already in use at Microsoft, Google, NVIDIA, and OpenAI.

Furthermore, genAI is set to have a big impact on the creation of new products and services, from designs for products – such as clothing or furniture, to the creation of artwork, based on customer preferences and feedback. With the right set of data inputs, genAI can be used to develop products and design marketing campaigns that are tailored to specific audiences.

However, this can also have a dark side: film studios have turned to AI to generate new scripts and film ideas, a concern exemplified by the SAG-AFTRA strike in 2023. Creatives are worried that studios’ demands for consistent profits and their insistence on lowering costs will lead to genuine creativity being squeezed out of the picture in favour of AI-generated projects which use data to infer what viewers want to see.

While the aggregate demand for products and services remains unknown, it is likely that this demand will ebb and flow, resulting in a rotating – rather than exclusively shifting – demand curve. Viewers have already responded negatively to straight-to-streaming movies and TV shows which are laser-focussed on appealing to algorithmically driven demographics, meaning that there could very well be renewed interest in distinctively human-designed content, analogous to the ways in which filmmakers and stars such as Christopher Nolan and Tom Cruise rely on practical effects rather than CGI.

Similar to its impact on education, the use of genAI in work also raises ethical concerns. Among other elements, a key consideration is the potential for genAI to replace human workers, leading to possible large-scale job displacement – what the economist John Maynard Keynes termed “technological unemployment” in the 1930s. Despite the potential for genAI to replace certain functions of high-income professionals (e.g. medical doctors), its impact on the less-skilled working population could be more abrupt and far-reaching, which could result in the intensification of economic inequality. For instance, tech-employment scholars Carl Frey and Michael Osborne have predicted that 47% of US jobs are at high risk of automation “in a decade or two”. Thus, for policymakers, it is important to ensure that the benefits of genAI are distributed fairly, and that workers are provided with the training, support, and reskilling opportunities to transition to new roles and industries.

Ethical considerations

There are several ethical considerations that need to be taken into account when developing and deploying genAI systems. In an interview with the Financial Times, Gita Gopinath, the IMF’s first deputy managing director, said, “[w]e need governments, we need institutions and we need policymakers to move quickly on all fronts, in terms of regulation, but also in terms of preparing for probably substantial disruptions in labour markets.”

In March, the Deloitte team elaborated on its Trustworthy AI Framework report, which focusses on four risk factors across generative AI: uncertainty, explainability, bias, and environmental impact. Meanwhile, the Harvard Business Review revealed findings indicating that 79% of senior IT leaders are concerned about the potential for security risks as a result of AI, while 73% are concerned about biased outcomes. It concluded that organisations “need a clear and actionable framework for how to use genAI” and to align their genAI goals with business functions, including sales, marketing, commerce, service, and IT jobs, and that a set of guidelines for the ethical advancement of genAI is also necessary.

The need for transparency and accountability is among the most important elements driving the ethical development and deployment of AI. While it is debatable whether general users are able to understand fully the specifics of its backend technology, it is important for genAI developers and users to be transparent about how the technology works; what data is used to train its algorithms; and how calculated decisions are made. This would ensure that the people and entities involved in the technology’s development are prepared for potential scrutiny in the case of unethical pursuits. Such an approach could help build trust and ensure that genAI is being rolled out in a responsible and ethical manner.

Essentially, all AI algorithms will be skewed if they are trained on data reflecting societal biases. To ensure that there is no spillover effect of false information, the need for fairness and non-discrimination is also of utmost importance. Data used to train genAI algorithms needs to be representative and diverse, and developers will need to ensure that algorithms are programmed to avoid perpetuating – or even expanding – existing biases and inequalities.

Privacy and security are also core ethical considerations. The technology has the capacity to create “deep fake” images and videos, which can be used to spread disinformation and manipulate targeted groups or even the general public. It is thus important to establish ethical guidelines and regulations to ensure that genAI is used in a responsible manner that protects individuals' privacy and security.

While regulation tends to lag the development of the technology it governs, ethical considerations around the impact of genAI on society as a whole need to be given a higher level of importance. As the well-known Diffusion of Innovation Theory suggests, the adoption of genAI is set to pick up over time. While the technology possesses much potential to create new industries and job opportunities, it can also lead to job displacement and economic inequality. The signing of open letters by KOLs such as Elon Musk signals the importance of an appropriate framework.

It is important to ensure that the benefits of genAI are distributed fairly and that society as a whole benefits from the technology. And it is up to humans to “hybridise” themselves with AI to experience the enhancements it offers in the workplace. Actor-Network Theory is ever more relevant in the 21st-century office, where AI is no longer merely a “tool” that human actors “use”, nor a machine that independently generates output.* The proposed increases in productivity and output are the product of interaction between humans and AI, relying upon the ways in which each interacts with the other. Those familiar with ChatGPT may already experience this mutual dependence. The ideas generated are the product of human ideas, but human ideas can be products of generated ideas; we learn from them and they learn from us. It is through human-AI interaction that curiosity and creativity – increasingly rare qualities – can flourish and achieve symbiosis.

Looking ahead

This article represents our first attempt to organise independent thoughts on genAI and its possible implications across the future of education and work. We shall remain watchful of relevant developments, given that these are neither linear nor static.

GenAI has the potential to transform education and work in numerous ways. By creating personalised learning experiences, developing interactive educational content, and automating routine tasks, genAI can improve the quality and efficiency of education and work, while also creating new job opportunities and industries. However, it is important to address the potential for biases and inequalities, job displacement, privacy concerns, and ethical considerations, to ensure that the benefits of genAI are distributed fairly and that the technology is used in a responsible and ethical manner.

As the technology continues to advance, it will be important to continue to monitor its impact on education and work, and to adapt to the changing needs of society, while also ensuring that ethical considerations are taken into account. Its implementation in the workplace is especially important, not only for ethical considerations, but also to ensure the optimal utilisation of AI to best experience its potential benefits.

While we hope that policymakers attend to genAI’s relevant development, we are not proponents of its tight regulation, given that this is impractical and could be detrimental to the development of a burgeoning technology. Truth be told, with the many burning issues concerning different economies, setting up a framework that aims to monitor, facilitate, guide and manage the development of genAI is unlikely to be at the top of the current policy agenda. Moreover, it is premature to call for a specific government to be more restrictive on genAI research and development (R&D), for fear this could hinder its overall competitiveness alongside other peer economies. However, with promising potential surrounding how genAI could influence our work and education systems – and essentially, society’s future – enabling aspiring entrepreneurs and groundbreaking researchers to develop such technologies arbitrarily could prove the biggest mistake.

*Bruno Latour (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.

Disclaimer: This article is a product of the authors. Views and information provided are of the authors and do not represent the stance of their affiliated organisations, institutions, or employers. This article does not serve as investment advice.

© Haymarket Media Limited. All rights reserved.