The Roadmap for AI Technologies to Remain Relevant and be Mainstream

Advances in AI are happening at a rapid pace, and AI is being adopted more widely across use cases to solve complex problems. AI is augmenting human capability better than ever before, and consumers are experiencing the benefits. The current level of AI maturity is termed “Narrow AI”: a system focuses on a single, specific task, learns patterns from the underlying data, and delivers results that are comparable to, and in some instances exceed, human capabilities.

Examples deployed so far include face recognition, chatbots, recommendation systems, language translation, and summarization. These are task-specific models that work well only for the scenario they were built for. While “Narrow AI” is seeing wider industry adoption, research is underway on the next stage of AI: AGI (Artificial General Intelligence), also called General AI or Strong AI.

AGI refers to systems that can think, learn, comprehend, and work across more than one task, approaching human-level intelligence. There are also discussions around “Super AI”, systems that would demonstrate capabilities beyond human beings. The AI roadmap stretches across the next few decades. While AI technology advances and the focus stays on the functional side, it is important to adhere to a few guiding principles on the non-functional side for wider acceptance of the technology and successful adoption of AI without inhibitions in society at large.

The following are some important guiding principles for the future of AI technologies to remain relevant and mainstream. AI must –

  1. Be trustworthy
  2. Be scalable
  3. Quantify model uncertainty and associated risks
  4. Create value

1. AI must be trustworthy

Robustness –
Deep learning applications such as computer vision, autonomous driving, machine translation, text classification, and speech recognition are everywhere these days, but many of them are not very robust. The models are vulnerable to even minor perturbations: random noise, natural variations, shifts in model parameters, shifts in data distribution, and data poisoning. Although measuring the robustness of models is challenging, it would be valuable to have a metric that can be assigned to each model.

IBM recently defined CLEVER (Cross-Lipschitz Extreme Value for Network Robustness), a metric for measuring the robustness of deep neural networks. The CLEVER score is attack-agnostic, and a higher score indicates that the network is likely to be less vulnerable to adversarial examples.

There are many other methods, such as Fast-Lin, CROWN, CNN-Cert, and PROVEN, for verifying neural network robustness. These scores provide a threshold and tolerance level beyond which the model can be assumed to behave inconsistently. A robustness score and certification would be essential for consumers to have confidence and trust in AI models.
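
The sketch below is not the CLEVER metric or a certified bound; it is a minimal, empirical probe of the same idea: check how stable a model's predictions are when its inputs receive small random perturbations. The dataset, noise scale, and thresholds are illustrative assumptions.

```python
# Minimal perturbation-stability probe (illustrative, not a certified robustness metric):
# measure how often predictions flip under small Gaussian noise on the inputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def perturbation_stability(model, X, sigma=0.05, n_trials=20, seed=0):
    """Fraction of predictions that stay unchanged under Gaussian noise of scale sigma."""
    rng = np.random.default_rng(seed)
    base = model.predict(X)
    stable = np.zeros(len(X))
    for _ in range(n_trials):
        X_noisy = X + rng.normal(0.0, sigma, size=X.shape)
        stable += (model.predict(X_noisy) == base)
    return (stable / n_trials).mean()

print(f"Prediction stability under noise: {perturbation_stability(model, X):.3f}")
```

A model whose stability drops sharply as the noise scale grows is a candidate for the more rigorous verification methods named above.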

Fairness –
AI algorithms should ensure fairness at both the individual and the group level. Individual fairness means that two individuals with similar inputs to the model receive similar outcomes. Group fairness means that the algorithm produces comparable outcomes across groups that share a sensitive attribute. Any deviation in fairness should automatically trigger rigorous testing on the edge cases. Open-source tools such as Google’s What-If Tool, IBM AI Fairness 360, and Microsoft Fairlearn help measure model fairness and detect bias. It is not only algorithmic fairness that counts; the whole business process must ensure fairness. Documenting scenarios where fairness could be compromised is important for assessing the associated risk.
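
As a minimal illustration of one group-fairness check, the sketch below computes the demographic parity difference, the gap in positive-prediction rates between two groups. The predictions and group labels are hypothetical; toolkits such as Fairlearn and AI Fairness 360 provide richer, production-grade versions of this and many other metrics.

```python
# Minimal group-fairness check: demographic parity difference,
# i.e. the gap in positive-outcome rates between groups of a sensitive attribute.
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates across groups of a binary sensitive attribute."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

# Hypothetical predictions and group membership, for illustration only
y_pred    = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
sensitive = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(f"Demographic parity difference: {demographic_parity_difference(y_pred, sensitive):.2f}")
```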

Explainability –
Explainability has been an active area of research, and many techniques now make AI models globally and locally explainable; for example, LIME, SHAP, Grad-CAM, saliency maps, and self-explaining neural networks are applied to provide model explainability. While these methods focus on which input features matter most for a particular decision, they still fall short of end-user expectations.

Explainability should be domain-adapted, placed in the context of the end user, and expressed in terms familiar to them. With advances in NLG techniques, explanations should be persona-oriented and provided in natural language that end users can easily interpret.
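
The sketch below shows the starting point of this pipeline, assuming the shap package is installed: raw per-feature SHAP attributions for a tree model. Translating these numbers into the persona-oriented, natural-language explanations argued for above is the additional step that still has to be built; the dataset and model here are illustrative choices.

```python
# Minimal SHAP attribution sketch for a tree ensemble (assumes `pip install shap`).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)                # efficient explainer for tree models
shap_values = explainer.shap_values(data.data[:5])   # one attribution per feature per record

# Sign and magnitude show how each feature pushed the prediction for the first record.
for name, value in zip(data.feature_names, shap_values[0]):
    print(f"{name:>6}: {value:+.2f}")
```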

Privacy, Security, and Safety –
When it comes to the future of AI, privacy, security, and safety are critical to strengthening trust in AI applications, and they must be an integral part of model development. Many secure and privacy-preserving techniques exist to implement this, such as differential privacy, federated learning, zero-knowledge proofs, homomorphic encryption, and secure multi-party computation. A regular process to assess the models’ vulnerability to cybersecurity threats must also be put in place.
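
To make one of these techniques concrete, the sketch below shows the Laplace mechanism from differential privacy: noise calibrated to a query’s sensitivity is added before a statistic is released. The records, epsilon, and sensitivity values are illustrative assumptions, not recommended settings.

```python
# Minimal Laplace-mechanism sketch: release a count with noise scaled to sensitivity / epsilon.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return true_value plus Laplace noise with scale sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

ages = np.array([34, 45, 29, 61, 50, 38])        # hypothetical sensitive records
true_count = (ages > 40).sum()                    # query: how many people are over 40?
private_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
print(f"True count: {true_count}, privately released count: {private_count:.1f}")
```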

2. AI must be scalable

Neural nets, which are fundamental to deep learning, have been quite successful at discovering patterns in data and producing results. However, they fare poorly when the domain setting changes, require a lot of data, and consume a lot of computing resources.

Numerous studies have observed that the human brain consumes around 20 watts of power, whereas a deep learning system attempting to mimic the human brain consumes thousands of watts, which is disproportionately higher.

For AI to be scalable, novel approaches must be explored such as:

  • New architectures that can build models with less data, yet operate at population scale
  • A shift from correlation-based machine learning to causal learning
  • Sustainable AI that consumes fewer resources
  • Models that integrate easily with existing and new business processes

Continuous learning models –
Model building should not be a one-time process. Just as humans become obsolete without continuous learning, so do AI models. Learning on the fly as new scenarios emerge should be built in by design. Instead of retraining AI models on a predefined schedule, an innovative approach is to design them to learn continuously and update themselves whenever there is a mismatch between the model world and the real world. The network’s job is to monitor changes in the real world, predict outcomes, validate them, and trigger new learning when required. Beyond this, the model verification process, its underlying assumptions, and any changes to them must be checked constantly.
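
A minimal sketch of the “mismatch between the model world and the real world” trigger is shown below: compare a feature’s live distribution with its training distribution using a Kolmogorov-Smirnov test and flag retraining when they diverge. The distributions and the 0.05 significance threshold are illustrative assumptions; production systems would monitor many features and model outputs together.

```python
# Minimal drift-detection sketch: trigger retraining when live data no longer matches training data.
import numpy as np
from scipy.stats import ks_2samp

def needs_retraining(train_feature, live_feature, p_threshold=0.05):
    """Return True when the live distribution differs significantly from the training distribution."""
    result = ks_2samp(train_feature, live_feature)
    return result.pvalue < p_threshold

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=5_000)          # distribution seen at training time
live = rng.normal(0.6, 1.0, size=1_000)           # shifted distribution arriving in production

if needs_retraining(train, live):
    print("Drift detected: trigger model retraining and re-validation.")
```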

3. AI must quantify model uncertainty and associated risks

AI model performance metrics are predominantly statistical measures that require expertise to interpret. These measures must be explained in ways the business user understands in order to gain acceptance. While model accuracy and other metrics are important, the level of uncertainty in the model also needs to be measured to create confidence. Uncertainty can stem from the data (aleatoric) or from the model itself (epistemic). Measuring uncertainty is the hard part, as it requires extensive validation to ascertain.

Recently, IBM released the Uncertainty Quantification 360 (UQ360) open-source toolkit to give developers and data scientists a guideline and process to quantify, evaluate, improve, and communicate the uncertainty of machine learning models. The consequences of a wrong decision by the model must be ascertained before the model is accepted and deployed in the real world. The prediction probabilities and confidence levels of AI models help establish the chance of error and the risks that follow. If AI models can not only predict the outcome but also, based on the level of uncertainty, predict the consequences of a wrong prediction, users can make an informed decision to accept or reject it.

The model output may be certain or uncertain. In either case, it is prudent to disclose the confidence level of AI predictions. Being transparent is always better.
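
One simple way to expose such a confidence level is ensemble disagreement, sketched below: train several models on bootstrap samples and report the spread of their predictions alongside the mean. This is an illustrative approximation of epistemic uncertainty, not the UQ360 methodology; the dataset, ensemble size, and tree depth are assumptions.

```python
# Minimal epistemic-uncertainty sketch via a bootstrap ensemble: wider spread = lower confidence.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)
rng = np.random.default_rng(0)

models = []
for _ in range(20):                               # 20 models, each on a bootstrap resample
    idx = rng.integers(0, len(X), size=len(X))
    models.append(DecisionTreeRegressor(max_depth=4).fit(X[idx], y[idx]))

preds = np.stack([m.predict(X[:5]) for m in models])   # ensemble predictions for 5 records
mean, std = preds.mean(axis=0), preds.std(axis=0)

for i, (m, s) in enumerate(zip(mean, std)):
    print(f"Record {i}: prediction {m:.1f} +/- {s:.1f}")
```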

4. AI must create value

The purpose of any technology is to make things easier, automate processes, improve productivity, save costs, generate new streams of revenue, and create delightful, enriching experiences for users. The value created must be quantified and communicated continuously.

While data engineers and AI scientists build the data pipeline for sourcing, ingestion, processing, modeling, testing, deployment, and monitoring, an additional step should be added that seamlessly computes and reports the business value created by the AI application.

Having this information would help assess the return on investment and the viability of AI solutions on an ongoing basis.

Win with AI
Trust is fundamental to the success of any mission involving humans. Since AI is striving to develop human-like capabilities and beyond, it is important to create and nurture trust in these systems. For example, patients trust doctors based on their qualifications, experience, specialization, success rate, feedback, and so on; a qualitative or quantitative rating is attributed to them. We likewise rate machines for their energy efficiency. In the same way, AI applications can be given a composite score that combines their robustness, fairness, explainability, security, privacy, safety, scalability, and risks. Such a score would help users evaluate the maturity of applications and adopt them.

A trustworthy, scalable, value-creating AI model with quantified risks would stand the test of time and make the journey toward the future of AI smoother, with fewer headwinds and less turbulence.


Jayachandran Ramachandran
Jayachandran has over 25 years of industry experience and is an AI thought leader, consultant, design thinker, inventor, and speaker at industry forums with extensive...