Artificial Intelligence and Machine Learning
Machine Learning is Ready for Prime Time
A wave of emerging algorithms, massive data availability and accessibility, and the commoditization of infrastructure is driving the democratization of AI, simplifying the complexity of model building, training, and deployment. Large platform players will play a critical role in this movement. Natural Language Processing (NLP) will be employed in many aspects of business processes. Deep Learning is emerging from academic research and is ready for business use cases. Enterprise ML initiatives will come out of pilot mode and become mainstream. Overall, machine learning is becoming a primetime reality in enterprises.
Analyst research predicts that by 2022, one in five workers will rely on AI to do some portion of their jobs. In the last couple of years, we have witnessed machine learning evaluation initiatives turning into mainstream projects and becoming an intrinsic part of enterprise automation roadmaps. Enterprises are investing in more practical uses of machine learning in areas such as human augmentation (automating complex processes and providing intelligence to aid human decisions), customer support and engagement, employee engagement, talent sourcing, and sales and marketing.
Alongside this increasing demand, there is a shortage of qualified data science talent and growing complexity in the algorithms themselves. These factors are the driving force behind the democratization of AI. Cloud providers like Amazon and Google are providing the means to quickly build and deploy ML models. Automated machine learning (which automates exploratory analysis, feature engineering and selection, and the building and training of multiple models in parallel) is becoming more prevalent. Google recently released a service called Cloud AutoML that enables developers with limited ML expertise to train high-quality models. Niche companies like DataRobot efficiently automate the process of building many ML models from open-source algorithms for a given dataset and target variable.
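The core idea behind these automated-ML services can be sketched in a few lines: given a dataset and a target variable, fit several candidate models and keep the one with the best cross-validated score. This is a minimal illustration of the model-search step only (real platforms also automate feature engineering, tuning, and deployment); the candidate models and dataset here are arbitrary choices, not what any particular vendor uses.

```python
# Minimal sketch of automated model selection: train several candidate
# models on the same data and keep the best by cross-validated accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Candidate algorithms chosen for illustration only.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Score each candidate with 5-fold cross-validation and pick the winner.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)
print(best_name, round(scores[best_name], 3))
```

Commercial AutoML tools extend this loop with hyperparameter search, ensembling, and leaderboard-style reporting, but the search-and-compare pattern is the same.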
We also see a pattern of common use cases across enterprises in horizontal areas like unstructured text insights, image analysis, and speech-to-text, where enterprises are seeking ready-made ML models. On the other hand, AI start-ups in the financial services, healthcare, and legal domains are mushrooming to target niche problems within a domain. AWS Marketplace recently began offering ML models for a tailored selection of use cases. Overall, the industry is slowly gravitating towards a common ML solutions marketplace.
In terms of technology, two areas will be prominent in 2019 and beyond. The first is NLP and text analytics. While chatbots and voice assistants have become commonplace in customer support and engagement, enterprises are realizing the need to go beyond the hype of chatbots, understand the complexities of human-machine interaction, and employ NLP in every aspect of customer experience. Healthcare providers are trying to find hidden details in nursing notes that can affect patient care plans, or seeking to decipher patient sentiment from patient-nurse conversations. Banks are using NLP to assess the digital footprint of customers to feed into credit score calculations. Pharmaceutical companies are building NLP query interfaces for complex pharmaceutical regulations. Understanding how human language works, and how to employ it in business flows, is becoming a key priority for practitioners of machine learning.
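A common building block behind use cases like mining nursing notes or conversations is simple text classification. The sketch below, with an invented four-note corpus purely for illustration, shows the bag-of-words approach: vectorize free text and train a classifier to label it. Production NLP pipelines are far richer (entity extraction, negation handling, domain vocabularies), but this conveys the basic mechanics.

```python
# Illustrative sketch (not a clinical tool): classify short free-text
# notes by sentiment using TF-IDF features and logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical corpus, invented here for demonstration only.
notes = [
    "patient resting comfortably, no complaints",
    "patient anxious and reports severe pain",
    "calm and cooperative, good appetite",
    "distressed, refused medication, pain worsening",
]
labels = ["positive", "negative", "positive", "negative"]

# Pipeline: raw text -> TF-IDF vectors -> linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, labels)

prediction = model.predict(["patient reports pain and distress"])[0]
print(prediction)
```

Real deployments would train on thousands of labeled notes and typically layer on clinical NLP resources, but the vectorize-then-classify pattern is the standard starting point.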
The second major technology that will matter in 2019 is deep learning. It is primarily being leveraged in areas such as medical diagnostics, digital pathology, image and speech processing, and drug discovery. This is bolstered by a more open approach and a willingness of regulators to approve the use of AI in clinical and diagnostic settings. In the US, the FDA recently approved decision-support software that uses ML to assist neurovascular specialists in their diagnoses. Other examples include the use of deep-learning-based image classification in astronomy for galaxy classification, in life sciences for micro-organism classification, and in auto insurance for car damage inspection.
Explainable AI, which unlocks the black box of the underlying algorithm and explains the decision or prediction it makes, will become key to the adoption of AI and ML. This is especially critical in healthcare, where physicians' adoption of AI is heavily dependent on explainability. Tools such as IBM OpenScale attempt to address the explainability of the underlying algorithm, demonstrate data bias (if any), and decode decision-making as required for regulatory compliance.
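One simple, widely used route to explainability is measuring which input features actually drive a model's predictions. The sketch below uses permutation importance from scikit-learn on a random forest; this is a generic technique chosen for illustration, not how any particular commercial tool (such as IBM OpenScale, which also covers bias detection and compliance reporting) works internally.

```python
# Hedged sketch of one explainability technique: permutation importance
# ranks features by how much shuffling each one degrades test accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature column in turn and measure the accuracy drop.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=5, random_state=0)

# Rank features: the larger the drop, the more the model relies on them.
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:3]:
    print(f"{name}: {score:.3f}")
```

An explanation of this form ("the model's prediction rests mostly on these measurements") is the kind of evidence a physician or regulator can interrogate, which is why feature-attribution methods feature prominently in explainable-AI tooling.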
Democratization and mainstream usage of ML, however, opens a Pandora's box of questions about ethics and responsible AI. Not everything that ML can do should be allowed to happen unchecked and unregulated. Additionally, because these outcomes can reflect biases in the underlying data, it is important to have checks and balances on them. The Public Policy Technology and Law Commission in the UK is considering the regulation of AI. We predict that 2019 will see more constructive work in this area to define a regulatory framework through joint efforts between industry and government.
Key technologies and trends:

- Explainable AI
- Automated ML
- Deep Learning
- MLaaS platforms (AWS SageMaker, MS Azure ML, Google Cloud ML)
- Chatbots
- Responsible AI
- Advanced Image Analytics
- NLP
- Classical ML Algorithms
- ML Marketplace
- AI-based Medical Diagnostics
- For faster time to market, adopt ML-as-a-Service platforms and ML marketplace models before building and training models from scratch.
- Look beyond chatbots and incorporate NLP in every aspect of customer experience.
- Evaluate the explainability of algorithms to improve adoption of black-box models.