Writing a good ML application that actually delivers tangible value within your business is hard. Wouldn't it be great if you could instantly add an expert to your team to help you? If that sounds like your problem, you should meet Oscar!
Do you feel left behind?

There's plenty of research showing that companies that haven't implemented AI to improve their decision-making are being left behind by their better-informed peers. A 2021 Gartner® Application Innovation Implementation Survey found that, although 84% of respondents are using AI, 41% are making only limited use of it.*
Mastering AI needs new skills

Moreover, it's getting very hard to find engineers with the ML skills needed. The same 2021 Gartner survey found, "almost 60% of organisations have either no software engineers or less than 10% of their software engineers trained in ML skills. This indicates a significant gap in skills needed for developing applications with ML or AI models incorporated into them."
Throughout 2025, the lack of AI skills will continue to be the No. 1 challenge for enterprises looking to succeed in their AI initiatives. What are these rare new skills?
Data Ops: Aggregation of raw data sources, metadata extraction and management, master data management, data governance, automated [data pipeline] source control;
ML Modelling: Hypothesis generation for actionable insights, data exploration, feature engineering, algorithm development, model development, application and integration;
ML Ops: Decision [signal] integration, continuous integration, continuous deployment, continuous training, explainability, business value realisation.
Up until now, the barrier to mastering these data science and data engineering disciplines in order to deliver real value has been very high. If you're one of the companies catching up, the good news is that, using Oscar, you can bootstrap the development of these skills in-house. In this blog, we'll zoom into how Oscar can help with the ML modelling skills gap.
Oscar is here to help!

Our UX design approach with Oscar is not to hide away the complexity of ML model development, but to present tools that enable any user, regardless of skill level, to develop an ML application. With Oscar, the user can focus their efforts on solving their problem, rather than on learning to navigate the complexities.
Oscar's Workbench presents two views to the user: the first is a "canvas" UX where non-experts are guided by Oscar's recommendations towards the right ML models and the best configurations. The second (which we're working hard on at the moment) will implement a familiar "Pythonic" UX where experts are fast-tracked through models and configurations by Oscar's recommendations.
Using AI to build AI
Complex ML models can be very complex indeed. Even the schema of an arbitrarily large dataset can be too big to fit into a human brain. Put the two together and you get ML model development that is so hard, you need a team of people to build just one model. But wait: the whole point of AI is to reduce complexity, so why not use AI to build AI? That's exactly the promise of Oscar Recommends, the Decision Intelligence system being built into Oscar to guide users through the complexity of ML model development. Oscar Recommends surfaces our developed expertise: (a) fundamental AI algorithmic building blocks; (b) strategies, protocols and best practices for using these building blocks to develop explainable, accurate and reliable models; and (c) a compute fabric that executes these models at scale.

Not automatically unexplainable

If you were so inclined, you could build a system that completely automates the process of building ML models from start to finish. Such AutoML techniques promise to take the [well-formed] source data and your target, then choose the ML model with the best accuracy, sensitivity, precision or other metric. The most common approach in AutoML is brute force: create a bunch of models, train each of them, then choose the one with the best numbers. We've deliberately chosen not to take this path, not just because it's more computationally expensive, but mainly because it risks removing insight from the process. A lack of understanding, not only of how a model makes a prediction but also of how the model was built in the first place, adds too much business risk through too little explainability.
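As an aside, the brute-force selection loop can be sketched in a few lines of Python. This is purely illustrative: the candidate "models" and the error metric below are trivial placeholders, not Oscar's (or any AutoML vendor's) algorithms.

```python
# Illustrative sketch of brute-force AutoML: fit several candidate
# models, score each on held-out data, keep the one with the best
# numbers. The "models" here are deliberately trivial placeholders.

def mean_model(train_y):
    """Predict the training mean for every input row."""
    mean = sum(train_y) / len(train_y)
    return lambda x: mean

def last_value_model(train_y):
    """Predict the last training value for every input row."""
    last = train_y[-1]
    return lambda x: last

def mse(model, xs, ys):
    """Mean squared error of a fitted model on held-out data."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(ys)

def brute_force_select(candidates, train_y, test_x, test_y):
    """Fit every candidate, score it, return (name, model, error)."""
    fitted = [(name, fit(train_y)) for name, fit in candidates]
    scored = [(name, m, mse(m, test_x, test_y)) for name, m in fitted]
    return min(scored, key=lambda t: t[2])  # lowest error wins

candidates = [("mean", mean_model), ("last", last_value_model)]
name, model, score = brute_force_select(
    candidates, train_y=[1.0, 2.0, 3.0], test_x=[0, 1], test_y=[2.0, 2.0]
)
```

Note how nothing in the loop explains *why* the winner won; that opacity is exactly the business risk described above.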
Instead, we're focusing on new techniques that improve your ability to understand your data. Understanding where the insights come from is what makes a model explainable. We think explainability is a key requirement of all enterprise ML models and will become a regulatory mandate in the near future, so it's important to get this right.
Massive Analytic's key innovation in AI is the development of Artificial Precognition (see https://www.massiveanalytic.com/artificial-precognition). Artificial Precognition has been tested to deliver exceptional predictive accuracy: in healthcare, predicting Alzheimer's disease onset correctly 91% of the time in non-symptomatic patients; predicting car insurance claims correctly 98% of the time; and reducing traffic delays by 30 minutes in central London. As well as being a toolset of algorithms, Artificial Precognition is also a methodology of coarse-tuning followed by fine-tuning, as part of a data science Observation - Interpretation - Prediction - Action loop. Artificial Precognition, as implemented in Oscar, is able to find additional features in your dataset that can yield better prediction metrics.
How Artificial Precognition fits into the ML workflow

Here's how Artificial Precognition fills the skills gap. As we saw earlier, ML models and model deployments are very complex. For every configuration parameter, there is a choice to be made and a cost to be understood. Oscar helps the user understand their choices in two ways: 1) by providing comprehensive help text that supports each activity in the Workbench, pitched at the level of both Business User and Data Scientist; and 2) by providing our ground-breaking Artificial Precognition algorithm and methodology.
Artificial Precognition is discussed more fully elsewhere, but here's the short version: the model development workflow in Oscar splits into two sequential parts, coarse-tuning and fine-tuning. Coarse-tuning analyses the full (or fuller) dataset to determine all possible features which might contribute to solving the AI problem. Sometimes the resulting coarse-tuned model is enough, but often a further fine-tuning step is needed. Fine-tuning may use only the subset of features that most strongly contribute to solving the AI problem, or use algorithms that are more computationally efficient and can be executed in real time.
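The coarse-to-fine shape of that workflow can be sketched generically: score every candidate feature against the target, then carry only the strongest contributors forward. Using absolute correlation as the relevance score below is an illustrative stand-in, not Artificial Precognition itself.

```python
# Hedged sketch of a coarse-to-fine feature workflow: coarse-tuning
# scores every candidate feature against the target; fine-tuning then
# keeps only the strongest contributors. Correlation is a placeholder
# relevance score, not Oscar's actual method.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def coarse_tune(features, target):
    """Score every feature column against the target."""
    return {name: abs(pearson(col, target)) for name, col in features.items()}

def fine_tune_subset(scores, k=2):
    """Keep only the k features that most strongly contribute."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

features = {
    "age":   [20, 30, 40, 50],
    "noise": [1, -1, 1, -1],
    "spend": [10, 22, 29, 41],
}
target = [100, 150, 200, 250]
kept = fine_tune_subset(coarse_tune(features, target))  # drops "noise"
```

The point of the split survives even in this toy: the expensive, exhaustive pass happens once, and the fine-tuning pass works on a much smaller feature set.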
Artificial Precognition is able to unearth unconventional binnings in the dataset, indicating a new cognised feature that is consistent with, but less certainly implied by, the data. By using these additional cognised features we can train potentially more accurate models, perhaps using less data. Oscar will recommend cognised features as part of the coarse-tuning step of the ML workflow.
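To make "a new feature derived from an unconventional binning" concrete, here is a minimal sketch. The bin edges and the income column are invented for the example; how Artificial Precognition actually discovers its binnings is not shown here.

```python
# Illustrative sketch only: deriving a new "cognised" feature by
# binning a raw column. The bin edges are invented for this example,
# not produced by Artificial Precognition.

def bin_feature(values, edges):
    """Map each raw value to the index of the bin it falls into."""
    def bin_of(v):
        for i, edge in enumerate(edges):
            if v < edge:
                return i
        return len(edges)
    return [bin_of(v) for v in values]

raw_income = [12_000, 48_000, 51_000, 260_000]
# An "unconventional" binning: boundaries placed where the data
# suggests a consistent pattern, rather than evenly spaced quantiles.
cognised = bin_feature(raw_income, edges=[20_000, 50_000, 100_000])
# The new column can now be appended to the training set alongside
# the raw features when training the coarse-tuned model.
```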
Over the course of our research and development of Artificial Precognition, we've found ML contexts that give particularly good results. This prior learning is codified as decision rules. When Oscar finds that the context for the ML work undertaken matches one of these Artificial Precognition contexts, Oscar will rank Artificial Precognition more highly. Otherwise, Oscar will recommend alternative ML models (or algorithms) and associated hyperparameter values. These recommendations are expertly curated by our in-house data scientists.
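One simple way to codify prior learning as decision rules is a list of (predicate, recommendation) pairs, where matching rules are ranked first. This is a hedged sketch of the general pattern; the rule contents, context keys and fallback below are invented placeholders, not Oscar's curated rules.

```python
# Hedged sketch of context-matching decision rules: prior learning is
# codified as (predicate, recommendation) pairs, and recommendations
# whose predicate matches the current context are ranked first. All
# rule contents here are invented placeholders.

RULES = [
    (lambda ctx: ctx["task"] == "forecast" and ctx["rows"] > 10_000,
     "Artificial Precognition"),
    (lambda ctx: ctx["task"] == "classify",
     "gradient-boosted trees (hypothetical alternative)"),
]
DEFAULT = "expert-curated baseline model"

def recommend(context):
    """Return recommendations for a context, matched rules first."""
    matched = [rec for pred, rec in RULES if pred(context)]
    return matched or [DEFAULT]

recs = recommend({"task": "forecast", "rows": 50_000})
```

Because the rules are data rather than code paths, new contexts discovered by the data science team can be added without touching the recommendation logic itself.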
After coarse-tuning comes fine-tuning. Oscar can apply a number of analysis types and statistical and ML models, some native to Spark MLlib, others imported by the user. The choice depends on the problem being solved. The expert rules governing which approach to use in fine-tuning are curated by the user.
We've seen how Oscar can help navigate the complexity of ML model development, primarily through its implementation of Artificial Precognition. Business Users and Data Scientists alike will benefit from Oscar's ability to surface unconventional insights from your data, without needing to master all the skills needed to do so.
The development of Oscar's ML modelling capabilities is our current focus, but we're not stopping there. Our ambition is to take this approach into ML Ops and Data Ops as well, so that Oscar becomes a more complete team member.
Gartner, "Survey Analysis: AI Adoption Spans Software Engineering and Organizational Boundaries", Van Baker, Benoit Lheureux, November 25, 2021.
*Total number of respondents: 109. Question: To what degree are software engineering teams in your organization incorporating AI and ML into application and software development?
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.