Responsible AI Practices are the Need of the Hour in a Hyper-Generative AI World
Updated on

Responsible AI Practices are the Need of the Hour in a Hyper-Generative AI World

Generative AI has been in existence for a while and practitioners have been experimenting with and applying them for various use cases over the years. The recent manifestation of generative AI as ChatGPT captured the imagination of a large section of society and has the potential to reimagine the way we have adopted and applied AI models so far.

GPT models are now available on Azure platforms, providing a scalable and secure ecosystem for the adoption and implementation of use cases of generative models. Google also released Bard, its response to ChatGPT, and other players are also going to release their models soon.

2023 will be a year of generative models and we can expect to see intense rivalry among technology players, each outsmarting the other. While the euphoria around this technology is high, it also poses risks. If the risks are not addressed, they can create systemic risks for practitioners and their organizations. There is a need to build guardrails that protect the creators and consumers of AI solutions driven by Generative AI models.

While regulations are around the corner, more innovations in Responsible AI practices are the need of the hour and these need to be a step ahead of innovations in Generative models.

Responsible AI Best Practices

Following Responsible AI practices can help manage risks and protect the creators and consumers of generative models.

Valid and Reliable

Ensure the System is Valid and Reliable

There is a need to assess the validity and reliability of generative models through continuous monitoring and regular audits to certify that the system is performing as intended. There is a need to reimagine the maker-checker function and its associated processes. While generative models would increasingly take the role of a maker, the onus of verifying the same and approving them will reside with the checker (human being).

Since generative models work on the principles of generating the most probable word after a word, the most probable sentence after a sentence, and the most probable paragraph after paragraph, they are susceptible to producing non-factual or hallucinated output as well.

Since the models have been trained using large amounts of data harvested through web scraping, there are bound to be accusations of plagiarism and copyright violations.

The recent debacle of Google Bard’s live demo during its launch reemphasizes the importance of robust QA processes. The checker would need to validate the veracity of such output and ensure compliance with processes, policies, and standards. Current risk frameworks would need to be enhanced to incorporate new working methods.

We are getting fast into a new world order where there is a need to add a disclaimer: Created by generative models; verified, edited, and approved by a human being!
Safety of Consumers

Ensure the Safety of Consumers

The generative AI system should not cause subliminal manipulation resulting in physical or psychological harm that endangers human life, health, property, or the environment. Special care should be taken when the consumers of such systems are children or mentally challenged or marginalized sections of society.

Secure and Resilient

Ensure Systems are Secure and Resilient

Like other technology systems, Generative AI systems need to ensure confidentiality, integrity, and availability of protection mechanisms that prevent unauthorized access and use. Applications like ChatGPT use principles of reinforcement learning where the bot learns based on user feedback.

We have seen examples where users can manipulate and convince the bot to accept a wrong answer through manipulative prompts. The models are susceptible to other types of adversarial attacks that can compromise the resiliency of the systems. Through simulated attacks and defense mechanisms, this can be addressed to an extent.

Fairness

Ensure Fairness

The large corpus of training data on which generative models are trained has inherent biases or discriminatory viewpoints on various dimensions and the models are likely to reproduce the content that reflects the same. While there is a need to weed out such data from the training corpus itself, checks and balances are required while using the model to ensure there is no bias or that bias is managed to the maximum extent possible. This needs to be a continuous process with a closed-loop feedback mechanism to constantly improve.

Privacy

Ensure Privacy

Generative models provide us with the opportunity to directly use them by applying zero-shot learning principles or fine-tuning them through few-shot learning. The data that would be input for such training has to be sanitized and anonymized if it contains any Personal Identifiable Information (PII).

Since past conversations are stored for future reference, such data should be encrypted and secured so that it does not fall into the hands of hackers who are constantly on the prowl to break through the systems and take advantage of them.

Explainable and Interpretable

Make the Models Explainable and Interpretable

Explainability refers to the representation of the logic and mechanisms in the algorithm’s operation and Interpretability refers to the inherent details of AI systems’ output in the context of its use case. Both are equally important. Generative models need to provide transparency on the underlying training data and how the output was derived, and also provide references to the output for the results it provides. They also need to provide details on relevance and confidence scores for the answers so that users can take informed decisions.

Accountability and Transparency

Ensure Accountability and Transparency

Transparency and accountability are fundamental to creating trust around AI systems. When users interact with generative AI bots, it is important to provide details of the bot, including its capabilities, and alert them that they are interacting with an AI system and not a human being. Generative systems could pose various types of risks to consumers, so there has to be a clear accountability structure and associated roles and responsibilities defined for monitoring and managing such risks.

Explore Generative Models with Course5 AI Labs

Course5 Intelligence applies generative AI models to a vast spectrum of industry use cases by leveraging capabilities spanning text summarization, topic generation, Q&A, conversation engines, text generation, entity extraction, sentiment analysis, emotion detection, etc. Our ethical and Responsible AI practices are strongly focused on creating an ecosystem of “AI for Good.” Contact us to reimagine your AI journey with generative AI models.


Jayachandran Ramachandran
Jayachandran Ramachandran
Jayachandran has over 25 years of industry experience and is an AI thought leader, consultant, design thinker, inventor, and speaker at industry forums with extensive...
Read More