Do AI Bots Need Regulation?
Well, do they? Let's look at fundamental policies, data privacy and security, handling rogue chatbots, and dealing with moral issues.
Look around and you will realize that artificial intelligence (AI) has found a place in almost every aspect of our daily lives and is steadily taking up more space in them. Email spam filters, cab booking, location-based services, GPS navigation while driving, voice commands on mobile devices: these are all examples of AI at work. As customers and employees become smarter, the demand for smart homes and workplaces grows, and AI is taking on broader responsibilities and producing innovative offerings.
Among these innovations is another offering set to make our lives easier and more convenient: chatbots. Today, organizations actively use AI chatbots to promote their businesses, engage with customers more effectively, and enhance the customer experience with seamless, personalized assistance. Growing competition, the need to keep up with an ever-changing business landscape, and the empowerment of consumers are making chatbots an essential presence. Bots are proving instrumental not only in communication and engagement but also in cutting costs and streamlining workflows. Moreover, chatbots are gradually finding their way into our homes. Machine learning capabilities and natural language processing have opened gateways to a future that was once far beyond imagination.
Since chatbots are intervening more and more in human activities, and cognitive science is making them smarter and better at predicting human behavior and emotions, there is a clear need to make them accountable and regulated.
So, Do AI Bots Need to Be Regulated?
It is evident that chatbots are expanding their reach and that the gamut of their applications is growing exponentially. With time, we are becoming aware of their growth potential. At the same time, the fact that they also deal with private data and sensitive topics cannot be ignored. That is why regulatory uncertainty is no longer an option: there have to be well-defined bot ethics and rules in place. Below are a few areas where a regulatory framework around bot applications is the need of the hour.
1. Fundamental Policies
First and foremost, a regulation requiring businesses to declare the intended use of their AI bots should be brought in. It is the user's right to know whom they are speaking to. Smart conversational interfaces, however, can make users believe that they are talking to a human, which makes it easy to manipulate the conversation. To allow constant evaluation of bots and to prevent unfavorable repercussions, it should be mandatory for users to be told that they are communicating with a machine.
In addition, bots should be able to raise an alert whenever they are unable to decipher instructions and human intervention is needed, as the sketch below illustrates.
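As a rough illustration only, here is a minimal Python sketch of how a bot might disclose up front that it is a machine and hand off to a human when it cannot confidently interpret a request. The classify_intent stub, the HANDOFF_THRESHOLD value, and notify_support_team are hypothetical placeholders, not part of any specific chatbot framework.

```python
# Minimal sketch: disclose machine identity up front and escalate to a human
# agent when the bot cannot confidently interpret the user's request.
# classify_intent, HANDOFF_THRESHOLD, and notify_support_team are illustrative.

HANDOFF_THRESHOLD = 0.6  # assumed confidence cutoff; tune per application

def classify_intent(message: str) -> tuple[str, float]:
    """Stub intent classifier; a real bot would use an NLU model here."""
    known = {"refund": 0.9, "order status": 0.85}
    for intent, confidence in known.items():
        if intent in message.lower():
            return intent, confidence
    return "unknown", 0.2

def greet() -> str:
    # Mandatory disclosure: the user is told they are talking to a machine.
    return "Hi! I'm an automated assistant (not a human). How can I help?"

def notify_support_team(message: str) -> None:
    # Stand-in for paging a human agent or opening a support ticket.
    print(f"[ALERT] Human intervention requested for: {message!r}")

def handle(message: str) -> str:
    intent, confidence = classify_intent(message)
    if confidence < HANDOFF_THRESHOLD:
        notify_support_team(message)
        return "I'm not sure I understood. Let me connect you to a human agent."
    return f"Sure, I can help with your {intent} request."

if __name__ == "__main__":
    print(greet())
    print(handle("Where is my order status?"))
    print(handle("My grandmother's prescription seems wrong"))
```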
2. Data Privacy and Security
This may well be the most sought-after regulation. Through continuous interaction, chatbots collect great volumes of personal data from users, and that is where data protection and privacy become a major concern. There need to be strong policies protecting the security and privacy of this data. These policies should include directives on what data is collected and why, and users should be made aware of this in advance. This information should be included in the privacy policy statement, and users should be given sufficient time to read it.
Additionally, chatbots should enable users to store, encrypt, retrieve, and erase their personal data; a minimal sketch of what this could look like follows.
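As an assumption-laden sketch rather than a prescribed design, the snippet below keeps each user's data encrypted with a per-user key and honors erasure requests by deleting both the record and the key. It relies on the third-party cryptography package; the UserDataVault class and its method names are invented for illustration.

```python
# Minimal sketch of per-user encrypted storage with a right-to-erasure hook.
# Requires the "cryptography" package (pip install cryptography).
# UserDataVault and its methods are illustrative, not a standard chatbot API.

from cryptography.fernet import Fernet

class UserDataVault:
    def __init__(self) -> None:
        # One key per user; deleting the key makes the ciphertext unrecoverable.
        self._keys: dict[str, bytes] = {}
        self._records: dict[str, bytes] = {}

    def store(self, user_id: str, data: str) -> None:
        key = self._keys.setdefault(user_id, Fernet.generate_key())
        self._records[user_id] = Fernet(key).encrypt(data.encode("utf-8"))

    def retrieve(self, user_id: str) -> str | None:
        if user_id not in self._records:
            return None
        token = self._records[user_id]
        return Fernet(self._keys[user_id]).decrypt(token).decode("utf-8")

    def erase(self, user_id: str) -> None:
        # Honors a user's erasure request: drop both ciphertext and key.
        self._records.pop(user_id, None)
        self._keys.pop(user_id, None)

if __name__ == "__main__":
    vault = UserDataVault()
    vault.store("user-42", "phone=+1-555-0100")
    print(vault.retrieve("user-42"))  # decrypted on demand
    vault.erase("user-42")
    print(vault.retrieve("user-42"))  # None after erasure
```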
3. Handling Rogue Chatbots
Rogue chatbots are emerging as one of the biggest threats posed by AI. They can harm users in more than one way: theft of personal data and bank account details, abusive language, negative sentiment, misleading responses, and so on. Moreover, you never know when even the most sophisticated and well-trained bot may go rogue.
Chatbot developers and owners should always be mindful of the ways in which, and the extent to which, a rogue bot can cause harm, and they should be quick to react to user complaints about the chatbot. That is why a strong framework to prevent, or at least minimize, such scenarios is a must; one small building block of such a framework is sketched below.
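Purely as an illustration of the idea, the sketch below shows an outbound-message guardrail that flags responses requesting sensitive financial data or containing abusive language. The word lists and the flag_for_review hook are naive placeholders, not a real moderation service.

```python
# Minimal sketch of an outbound-message guardrail for rogue responses.
# The pattern lists and flag_for_review are illustrative placeholders.

import re

SENSITIVE_REQUESTS = [r"\bcard number\b", r"\bcvv\b", r"\bbank account\b", r"\bpassword\b"]
ABUSIVE_TERMS = [r"\bidiot\b", r"\bstupid\b"]

def flag_for_review(response: str, reason: str) -> None:
    # Stand-in for logging the incident and alerting the bot's operators.
    print(f"[REVIEW] {reason}: {response!r}")

def moderate(response: str) -> str:
    lowered = response.lower()
    for pattern in SENSITIVE_REQUESTS + ABUSIVE_TERMS:
        if re.search(pattern, lowered):
            flag_for_review(response, f"matched {pattern}")
            return "I'm sorry, I can't help with that. Let me get a human agent."
    return response

if __name__ == "__main__":
    print(moderate("Please share your card number and CVV to continue."))
    print(moderate("Your order will arrive on Friday."))
```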
4. Regulations Around Advertising and Product Promotion
Chatbots are actively used in advertising and product sponsorship, and hence should be subject to the same laws that apply to advertising media and agencies across all means of marketing and promotion. The authorities need to regulate any promotional information users receive through bots.
Chatbot development companies should also take policies around advertising and product promotion into account. This becomes all the more important when dealing with strictly regulated products and services such as tobacco, alcohol, healthcare products, and politics. Moreover, to make sponsorship and advertising a fair practice, it must be made clear to users when a chatbot is sponsored.
5. Transparency in Terms and Conditions (T&C)
Terms and conditions are an aspect that requires careful consideration, especially when a bot's operations have a direct impact on users, for example in online transactions, medical or financial advice, and product recommendations. Chatbot owners should ensure that the bot is trained to ascertain whether the user needs to accept the T&C. The T&C should be presented to users clearly, and the bot should be able to transfer T&C questions to human support, since any unintentional or accidental claim made by the bot may also become part of the T&C. It is also important to ensure that whatever the T&C contain complies with regulatory policies.
6. Monitoring Interactions With Children
With children spending more time online, bot accountability grows accordingly. Chatbot developers should make sure that bots can ascertain early on whether they are talking to a child and tailor the conversation and information accordingly, as in the sketch below. To ensure this is in place, authorities should monitor a bot's ability to verify age and tailor content.
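As a simplified sketch under the assumption that the bot already has some age signal, the snippet below narrows the list of permitted topics for minors. The topic sets are invented examples, and a self-declared age would need to be backed by a compliant verification mechanism in practice.

```python
# Minimal sketch of an age gate that restricts topics for minors.
# RESTRICTED_TOPICS and the age threshold are illustrative only.

RESTRICTED_TOPICS = {"alcohol", "gambling", "tobacco"}

def allowed_topics(age: int, requested_topics: set[str]) -> set[str]:
    # Minors get the requested topics minus anything on the restricted list.
    if age < 18:
        return requested_topics - RESTRICTED_TOPICS
    return requested_topics

if __name__ == "__main__":
    # A self-declared age is a weak signal; real deployments would need a
    # compliant age-verification mechanism behind this check.
    print(allowed_topics(15, {"homework help", "alcohol", "sports"}))
    print(allowed_topics(32, {"homework help", "alcohol", "sports"}))
```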
7. Dealing With Moral Issues
With the advancement of natural language processing, human-machine conversations are becoming more realistic and practical. This can sometimes lead to scenarios where the conversation turns to a serious, concerning topic, especially when it involves sensitive or vulnerable individuals or touches on medical advice and legal issues. This raises the question of whether bots should take on moral duties and responsibilities. A regulatory arrangement is needed to define whether a bot should call for human help or notify the authorities in situations that signal a threat to life or a breach of law and order, and company policies should give a clear answer to this question as well. A rough sketch of such an escalation path follows.
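The sketch below is only a schematic of the escalation idea: route signs of possible self-harm to a trained human responder and flag indications of harm to others per policy. The indicator phrases and escalation hooks are naive placeholders; a real system would rely on trained classifiers and legally defined procedures.

```python
# Minimal sketch of escalation logic for conversations that signal possible harm.
# SELF_HARM_INDICATORS, THREAT_INDICATORS, and both escalation hooks are
# illustrative placeholders, not a production safety system.

SELF_HARM_INDICATORS = ["hurt myself", "end my life"]
THREAT_INDICATORS = ["hurt someone", "attack"]

def escalate_to_human(message: str) -> None:
    print(f"[ESCALATION] Routing to trained human responder: {message!r}")

def notify_authorities(message: str) -> None:
    print(f"[ESCALATION] Flagged for authorities per company policy: {message!r}")

def check_message(message: str) -> None:
    lowered = message.lower()
    if any(phrase in lowered for phrase in SELF_HARM_INDICATORS):
        escalate_to_human(message)
    elif any(phrase in lowered for phrase in THREAT_INDICATORS):
        notify_authorities(message)

if __name__ == "__main__":
    check_message("I want to hurt myself tonight")
    check_message("What's the weather tomorrow?")
```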
With That Being Said...
In a fast-paced business world, it is quite common to see chatbot development companies and bot owners ignore the ethical aspects of AI chatbot implementation. Evidently, there is an indispensable need for a comprehensive yet uniform code of conduct and ethical standards for AI chatbots. Although defining, formulating, and implementing these standards can be a daunting task, we cannot afford to ignore this need. AI chatbots are becoming more sophisticated and ubiquitous by the day, and they can pose a range of privacy, transparency, abuse, and legal risks. That is why it is critical for bots to be monitored and regulated. There needs to be a compliance framework in place for chatbots to ensure that our dearest friend does not turn into our deadliest foe.
Finally, to quote Rob High, the CTO of IBM Watson: "AI, like most other technology tools, is most effective when it is used to extend the natural capabilities of humans instead of replacing them. That means that AI and humans are best when they work together and can trust each other."