Ethics in the Age of AI: The Human and Moral Impact of AI
Generative AI and LLMs have ever-expanding use cases, but they also have the potential for misuse, raising fundamental questions regarding AI's human and ethical impact.
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Enterprise AI: The Emerging Landscape of Knowledge Engineering.
Over the last few years, artificial intelligence (AI), especially generative AI, has permeated every sector, growing smarter, faster, and increasingly ubiquitous. Generative AI can deliver competitive advantage, and the large language models (LLMs) that underpin it, along with their ever-expanding use cases, have evolved faster than almost any technology before them. But AI also has the potential for misuse, raising fundamental questions about its human and ethical impact:
- How can AI be regulated in a way that fosters innovation while safeguarding society against poor design or misuse?
- What constitutes good use?
- How should AI be designed and deployed?
These questions demand immediate answers as AI's influence continues to spread. This article offers food for thought and some advice for the way forward.
Can You Regulate When the Horse Has Already Bolted?
AI is not a problem for tomorrow; it's already here, and the horse has well and truly bolted. OpenAI's ChatGPT set a record for the fastest-growing user base, gaining 100 million monthly active users within two months of launch. According to OpenAI, "Millions of developers and more than 92% of Fortune 500 are building on our products today."
Many startups and enterprises use the technology for good, such as helping people who stutter speak more clearly, detecting landmines, and designing personalized medicine. However, AI can also cause harm: misidentifying suspects, defaming journalists, breaching artistic copyright, and powering deepfakes that have been used to steal millions. Furthermore, the datasets used to train the LLMs behind these systems can be gender or racially biased or contain illegal images. AI regulation must therefore address existing problems while anticipating future ones, which will keep evolving as LLMs enable new use cases across industries, many of which we never thought possible. The latter is no easy task, and today's AI has created entirely new business opportunities and economic advantages that will make enterprises resistant to change.
But change is possible, as the GDPR in Europe demonstrates, especially since compliance failure results in fines proportional to a business's turnover, scaled by factors such as intent, damage mitigation, and cooperation with authorities.
Attempts to Regulate AI
Regarding governance, Europe has passed the Artificial Intelligence Act, which aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability by regulating AI systems according to their potential risk and level of impact. In the United States, the President's Executive Order (EO) on AI invokes the Defense Production Act, which "will require tech companies to let the government know when they train an AI model using a significant amount of computing power." The EO also charges multiple agencies, including NIST, with producing guidelines and taking other actions to advance the safe, secure, and trustworthy development and use of AI.
The UK hosted the first global AI Safety Summit in autumn 2023 and is building an AI governance framework that embraces the transformative benefits of AI while remaining able to address emerging risks. India has yet to implement specific AI regulations.
We don't know what these regulations will mean in practice, as they have yet to be tested in law. However, there are multiple active litigations against generative AI companies. There is currently a civil action against Microsoft, GitHub, and OpenAI claiming that, "By training their AI systems on public GitHub repositories … [they] violated the legal rights of a vast number of creators who posted code under certain open-source licenses on GitHub." Writer Sarah Silverman has a similar claim against Meta and OpenAI for alleged copyright infringement.
This is very different from having legislation that requires responsible AI from the design phase, with financial and legal penalties for companies whose AI breaches regulations. Until regulations are tested with enough heft to disincentivize the creation and use of unethical AI, such as deepfakes and racially biased systems, I predict many David-versus-Goliath cases in which the onus falls on the individuals harmed, who must take on tech behemoths and spend years in court.
AI Challenges in Practice
Generative AI can help a business work better and faster than its competitors, but it can also breach regulations like the GDPR, leak company secrets, and break customer confidentiality. Most people fail to understand that ChatGPT retains user input to further train its models. Thus, confidential data and competitive secrets are no longer private and are effectively up for grabs by OpenAI's models. Multiple studies show employees uploading sensitive data, including personally identifiable information (PII), to OpenAI's ChatGPT platform; the amount of sensitive data uploaded by employees increased by 60% between March and April 2023 alone.
Salesforce surveyed over 14,000 workers across 14 countries and found that 28% use generative AI at work, over half of them without formal employer approval. In 2023, engineers at Samsung's semiconductor arm entered confidential data into ChatGPT, including source code for a new program and internal meeting notes. In response, Samsung is developing its own AI models for internal use and restricting employee prompts to a 1024-byte limit.
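As a rough illustration of what such a restriction could look like in practice, the sketch below enforces a byte limit and redacts obvious PII before a prompt ever leaves the company. The patterns, limit, and function names are illustrative assumptions, not a description of Samsung's actual controls or a substitute for a proper data loss prevention tool.

```python
import re

MAX_PROMPT_BYTES = 1024  # assumed limit, mirroring the restriction described above

# Naive example patterns for common PII; a real deployment would rely on a dedicated DLP service.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def guard_prompt(prompt: str) -> str:
    """Redact obvious PII and reject prompts that exceed the approved byte limit."""
    redacted = prompt
    for label, pattern in PII_PATTERNS.items():
        redacted = pattern.sub(f"[REDACTED {label.upper()}]", redacted)
    if len(redacted.encode("utf-8")) > MAX_PROMPT_BYTES:
        raise ValueError("Prompt exceeds the approved size limit; summarize or split it first.")
    return redacted

# Only the sanitized prompt, never the raw text, would be forwarded to an external LLM provider.
safe_prompt = guard_prompt("Summarize the meeting notes for jane.doe@example.com before Friday.")
print(safe_prompt)
```

Running it on the sample input prints the prompt with the email address replaced by a [REDACTED EMAIL] placeholder.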
The Lack of Recourse
There's also the issue of how AI is used as part of an enterprise's service offerings, ostensibly to increase efficiency and reduce manual tasks. For example, decision-making AI in the enterprise can choose one potential employee over another in recruitment or predict a tenant's ability to pay rent in housing software.
Companies can't simply blame bad outcomes on AI. There must be a human overseer who addresses any potential or identified issues generated by the use of AI. Companies also must create effective channels for users to report concerns and give feedback about decisions made by AI systems such as chatbots. Clear policies and training are also necessary to hold employees accountable for responsible AI use and to establish consequences for unethical behavior.
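One hedged sketch of how such oversight might be wired into a system: route automated decisions through a human review queue whenever they are high impact or low confidence, and give affected users a recorded channel to contest them. The threshold, data model, and function names below are assumptions made for illustration only.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Decision:
    subject_id: str           # the person the decision affects
    outcome: str              # e.g., "shortlist", "reject", "approve_tenancy"
    model_confidence: float   # score reported by the model
    reviewer: Optional[str] = None
    appeal_notes: List[str] = field(default_factory=list)

REVIEW_THRESHOLD = 0.9  # assumed cutoff: anything less confident goes to a human

def route_decision(decision: Decision, high_impact: bool) -> str:
    """Send high-impact or low-confidence decisions to a human reviewer instead of auto-applying them."""
    if high_impact or decision.model_confidence < REVIEW_THRESHOLD:
        return "human_review_queue"
    return "auto_applied_with_audit_log"

def record_appeal(decision: Decision, message: str) -> None:
    """A user-facing channel for contesting an AI-assisted decision."""
    decision.appeal_notes.append(message)

# A hiring recommendation about a real person is always treated as high impact.
candidate = Decision(subject_id="applicant-42", outcome="reject", model_confidence=0.97)
print(route_decision(candidate, high_impact=True))  # -> human_review_queue
```

The point is not the specific threshold but that a person, not the model, signs off on outcomes that affect people.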
AI Innovation vs. Regulation
Governments are constantly trying to balance the regulation of AI against technological advancement, and the deeper you delve into the issue, the clearer the need for human oversight becomes.
Unplanned Obsolescence
There's plenty of talk about AI making tasks easier and reducing pain points at work, but what happens to telemarketers, data clerks, copywriters, and others who find their roles obsolete because AI can do the work faster and better?
I don't believe programs like universal basic income will provide adequate financial security for those whose jobs are replaced by AI. Nor do all displaced people want to transition to physical roles like senior care or childcare. We need a focus on upskilling and reskilling workers to ensure they have the necessary skills to continue meaningful employment of their choosing.
Sovereignty and Competition
A pervasive challenge is the dominance of the handful of large companies responsible for most AI tools, especially where smaller companies and governments build products on top of their models and open-source AI. What if open models become proprietary, or their providers raise prices, so that startups can no longer afford to create commercial products, preventing large-scale innovation? This is hugely problematic because it means smaller companies cannot compete equitably in the market.
There's also the question of sovereignty. Most LLMs originate in the US, meaning the data they generate is more likely to embed North American perspectives. This geographical skew creates a real risk that one region's biases and cultural nuances will heavily shape users' understanding of the world, increasing the chance of algorithmic bias, cultural insensitivity, and, ultimately, inaccuracies for users seeking information or completing tasks outside the dominant data landscape. International companies in particular have the opportunity to ensure that LLMs draw on diverse data with global perspectives, and open-source collaboration, which already has the necessary frameworks, is an effective way to foster this.
Creating custom LLMs is no easy task at the infrastructure level; it's expensive, especially when you factor in the cost of talent, hardware, and compute power. GPUs power AI training and inference workloads, but they've been in short supply since the COVID-19 pandemic, with supply earmarked for 2024 already sold out. Some countries are buying up GPUs; the UK is planning to spend $126.3 million (£100 million) to purchase AI chips. This will leave fewer resources for less prosperous nations.
Intentionally fostering innovation between developed and developing nations is crucial to facilitate knowledge sharing, more equitable resource allocation, and joint development efforts. It also requires targeted funding and support for local infrastructure.
What Does Accountability Really Mean?
Company accountability for unethical AI, whether by design, deployment, or unintentional misuse, is complex, especially as we have yet to see the net result of AI regulations in practice. Accountability involves detecting and measuring the impact of unethical AI and determining the appropriate penalties. Existing regulations in industries such as financial services and healthcare are likely to help establish parameters, but each industry needs to predict and respond to its own unique challenges. For example, the World Health Organization suggests liability rules so that users harmed by an LLM in healthcare are adequately compensated or have other forms of redress, and so that the burden of proof on those users is reduced.
We're only just getting started, and companies that commit to ethical AI from their earliest use cases will be able to adapt more easily and quickly to whatever regulations arrive over the coming months and years.
The Way Forward
Ethical AI in practice involves intentionality, ongoing commitment to design auditing, and an environment willing to look at the risks associated with AI. Companies that embed this commitment throughout their organization will succeed.
Active Ethical AI in the Workplace
The last few years have seen companies like X and Google shrink their responsible AI teams. A dedicated team or role can assist with proactive risk management, building a transparent culture, and employee training. However, an AI ethicist or a responsible AI team only works if they hold a place in the company hierarchy from which they can drive and influence bottom-line business decisions alongside business managers, developers, and the C-suite. Otherwise, the role is simply public relations spin.
There's also the temptation to treat ethics as someone else's problem once a dedicated person or team is hired. Assigning ethics to a single individual or team can create a false sense of security and erode broader responsibility across the organization, especially if it comes at the expense of embedding responsible AI from the earliest design phase and treating it as a valuable asset to the company's brand.
Evolving Policies and Practices
Creating an AI policy is useful, but it needs to be embedded in your company's practices rather than simply shared to keep investors happy. Ultimately, companies that want to practice responsible, ethical AI need this commitment woven into their DNA, much like a security-first approach. That means active, working AI policies that are adaptable, align with innovation, and spread responsibility throughout the workplace.
For example, companies like Microsoft highlight key factors in what ethical AI should look like (one way to turn them into working checks is sketched after the list), encompassing:
- Fairness
- Reliability and safety
- Privacy and security
- Inclusiveness
- Transparency
- Accountability
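To make principles like these "active" rather than a shelf document, a team could encode each one as a release check that runs before a model ships. The metrics, thresholds, and names below are purely illustrative assumptions about what such a gate might measure, not an established standard.

```python
# Hypothetical pre-release gate mapping policy principles to concrete, automatable checks.
POLICY_CHECKS = {
    "fairness": lambda r: r["demographic_parity_gap"] < 0.05,
    "reliability_and_safety": lambda r: r["safety_eval_pass_rate"] >= 0.99,
    "privacy_and_security": lambda r: r["pii_leak_count"] == 0,
    "inclusiveness": lambda r: r["locales_tested"] >= 3,
    "transparency": lambda r: bool(r.get("model_card_url")),
    "accountability": lambda r: bool(r.get("named_owner")),
}

def release_gate(report: dict) -> list:
    """Return the policy principles that a release candidate fails."""
    return [name for name, check in POLICY_CHECKS.items() if not check(report)]

failures = release_gate({
    "demographic_parity_gap": 0.03,
    "safety_eval_pass_rate": 0.995,
    "pii_leak_count": 0,
    "locales_tested": 5,
    "model_card_url": "https://example.com/model-card",
    "named_owner": "ml-platform-team",
})
print(failures or "all policy checks passed")
```

A gate like this also gives a responsible AI role something concrete to own: a failing check blocks the release rather than producing a report nobody reads.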
Choosing Ethical Tools
Companies can also weatherproof themselves by committing to using tools and services focused on ethical AI. Some examples include:
- Researchers from Stanford's Center for Research on Foundation Models have developed the Foundation Model Transparency Index to assess the transparency of foundation model developers.
- Fairly Trained offers certifications for generative AI companies that get consent for the training data they use.
- daios helps developers fine-tune LLMs with ethical values that users control, creating a feedback loop between users, data, and companies.
- Last year, Aligned AI worked on making AI more "human" and robust against goal misgeneralization, becoming the first to surpass a key benchmark called CoinRun by teaching an AI to "think" in human-like concepts.
Conclusion
AI is complex, and ultimately, this article poses as many questions as answers. When tech capabilities, use cases, and repercussions are ever-evolving, continual discussion and an actionable commitment to ethics are vital. A company that commits to ethical AI in its early iterations weatherproofs itself against incoming regulations and possible penalties for AI misuse. But most importantly, committing to ethical AI protects a company's identity and competitive advantage.
Resources:
- "Artists take new shot at Stability, Midjourney in updated copyright lawsuit" by Blake Brittain, 2023
- GitHub Copilot litigation by Matthew Butterick, 2022
- "ChatGPT sets record for fastest-growing user base - analyst note" by Krystal Hu, 2023
- "UK to spend £100m in global race to produce AI chips" by Anna Isaac, 2023
- "Nvidia's Best AI Chips Sold Out Until 2024, Says Leading Cloud GPU Provider" by Tae Kim, 2023
- "Samsung workers made a major error by using ChatGPT" by Lewis Maddison, 2023
- "Biden Administration to implement new AI regulations on tech companies" by Duncan Riley, 2024
- "More than Half of Generative AI Adopters Use Unapproved Tools at Work," Salesforce, 2023
- "Nvidia's AI Chip Supplies Will Be Insufficient This Year, Foxconn’s Chairman Says" by Shi Yi, 2024
- "Sarah Silverman Sues OpenAI and Meta Over Copyright Infringement" by Zachary Small, 2023
- "Hackers Steal $25 Million by Deepfaking Finance Boss" by Victor Tangermann, 2024
- "Identifying and Eliminating CSAM in Generative ML Training Data and Models" by David Thiel, 2023
- "Implicit Bias in Large Language Models: Experimental Proof and Implications for Education" by Melissa Warr, Nicole Jakubczyk Oster, and Roger Isaac, 2023