MLR Loxz Scoring Methodology

Chen Song | 8/27/2021

Table of Contents

1. Introduction
2. Scoring Methodology
3. Data Summary
4. Survey Data Summary
5. Conclusion
6. Contributors


Introduction

Machine learning has revolutionized how the fastest growing companies conduct business. As the world becomes more digital, reliance on machine learning deepens across all global industries. However, this ubiquitous adoption comes with an equal amount of uncertainty in execution. For organizations to actualize the cognitive ability and the economic efficiency promised by machine learning, they must have an understanding of their readiness to deploy machine learning models. Loxz Digital addresses this uncertainty by generating a machine learning (ML) readiness score for every organization that takes our free ML readiness survey!

By taking our assessment, your organization can learn about its technical maturity and ensure it meets the requisites for successfully launching ML models, all while gaining important business intelligence that can help prevent it from being eclipsed by its leading competitors.

The Loxz Digital survey was designed by domain experts, including the Loxz Digital data science team, to provide immediate insight into your ML strengths and weaknesses. The data source of this report is first-party respondent data from the beta versions of the ML Readiness Survey conducted by Loxz Digital and the corresponding ML readiness score of each survey respondent. This report provides insights into the relationships between ML scores, ML roles, industries, the number of data professionals in organizations, and perspectives about model risk and quality.

This 16-page report provides a quick overview of how the Loxz Digital ML readiness survey and scoring reveal essential insights and help identify your current ML role.

- Prepared by Chen Song, Data Scientist, Loxz Digital Group

Scoring Methodology

What is our scoring methodology?

The ML readiness score is designed to indicate how mature an enterprise is in its capacity for machine learning. Upon taking our survey, you are provided with a label of (1) Observer, (2) Performer, (3) Innovator, or (4) Leader based on your responses.

This categorization not only conveys the overall assessment of your organization's ML readiness but is accompanied by specific practices you can adopt to take your machine learning models to the next level! Taking our ML readiness survey not only helps organizations understand their strengths and weaknesses, but also provides insight into where improvements can be prioritized.

Identify and label scoring questions

The ML readiness survey covers a variety of robust questions which are factored into our scoring methodology, used for survey refinement, and ultimately provide your organization with important recommendations.

Your organization’s ML readiness score is calculated based on a sophisticated and proprietary scoring algorithm which has been vetted by industry experts. Questions that are meaningful to industry success help ensure that your machine learning readiness score accurately reflects your firm's readiness to successfully undertake machine learning projects!

ML readiness scoring methodology 2.0

The Loxz Digital survey is not just a survey; it is a diagnostic assessment that uses a tightly vetted answer-key system to increase accuracy while reducing bias.

To establish an accurate score, our team has developed an answer code for each option of every designated scoring item. This answer code represents the organization's machine learning maturity within a particular dimension proven to be vital to success in ML, such as an organization's awareness of the importance of machine learning development or its ability to deploy a machine learning model effectively.

The answer code for each question consists of a list of integers (ranging from 0 to 12), one per option within each question, weighted by the relative complexity of having a given resource or completing a given task. The lowest possible code for each item is always 0, allowing us to gauge your machine learning maturity throughout the entire life cycle. The codes increment by one, with a higher answer code indicating a higher level of maturity.

Weights established by our panel of experts are assigned to each question and used to calculate the overall score.

Weighting items involves our domain experts assigning a value to each question. The weight is a numerical value ranging from 1 (least relevant) to 5 (most relevant). The weights are then averaged across the domain experts' scores and applied to the overall machine learning readiness score. This overarching method effectively reduces both the potential bias introduced while survey takers fill out the survey and the potential bias in the survey questions and options.
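As a minimal sketch of how such a weighted scheme could work in practice (the answer codes, expert ratings, and 0-100 normalization below are illustrative assumptions, not the proprietary Loxz algorithm):

```python
# Illustrative sketch of a weighted survey-scoring scheme.
# Answer codes, expert ratings, and the 0-100 normalization are
# hypothetical; the actual Loxz algorithm is proprietary.

def question_weight(expert_ratings):
    """Average the 1-5 relevance ratings assigned by the domain experts."""
    return sum(expert_ratings) / len(expert_ratings)

def readiness_score(responses, answer_codes, expert_ratings):
    """Weighted sum of answer codes, normalized to a 0-100 scale.

    responses:      {question_id: index of the option the respondent chose}
    answer_codes:   {question_id: [code_for_option_0, code_for_option_1, ...]}
    expert_ratings: {question_id: [rating_expert_1, rating_expert_2, ...]}
    """
    total, max_total = 0.0, 0.0
    for qid, option in responses.items():
        w = question_weight(expert_ratings[qid])
        total += w * answer_codes[qid][option]        # points earned
        max_total += w * max(answer_codes[qid])       # points possible
    return 100.0 * total / max_total

# Example: two scoring questions with hypothetical codes and ratings
codes = {"q1": [0, 1, 2, 3], "q2": [0, 1, 2]}
ratings = {"q1": [5, 4, 5], "q2": [3, 2, 3]}
score = readiness_score({"q1": 3, "q2": 1}, codes, ratings)
```

Because each item's minimum code is 0 and each weight is an average of the expert ratings, the resulting score always falls on a comparable 0-100 scale regardless of how many questions a survey version contains.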

Future work

We are currently revamping our instrument to sub-categorize machine learning readiness into five psychometrically robust dimensions:

  • Data Preparedness
  • Model Development
  • Model Deployment
  • Model Monitoring
  • Business Value

Importantly, these composite scores will help you identify how you can bolster your machine learning readiness and target recommendations for your firm. Future iterations will consist of newly adapted batteries of items designed to succinctly target your organization’s ability to deliver a machine learning model!
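One way such composite sub-scores could be assembled (a hypothetical sketch, assuming each scoring question is tagged with exactly one of the five dimensions; the tags and numbers below are illustrative):

```python
# Hypothetical sketch of per-dimension composite scoring: each scoring
# question is tagged with one of the five dimensions, and a sub-score is
# the normalized weighted result within that dimension alone.
from collections import defaultdict

# The five dimensions named in the report
DIMENSIONS = ["Data Preparedness", "Model Development", "Model Deployment",
              "Model Monitoring", "Business Value"]

def sub_scores(item_scores, item_max, item_dimension):
    """Return {dimension: 0-100 composite} from per-item weighted results.

    item_scores:    {question_id: weighted points earned}
    item_max:       {question_id: weighted points possible}
    item_dimension: {question_id: dimension name}
    """
    earned = defaultdict(float)
    possible = defaultdict(float)
    for qid, pts in item_scores.items():
        dim = item_dimension[qid]
        earned[dim] += pts
        possible[dim] += item_max[qid]
    return {d: 100.0 * earned[d] / possible[d] for d in earned}

# Example with two of the five dimensions represented
scores = sub_scores(
    {"q1": 9.0, "q2": 4.0},          # hypothetical weighted points earned
    {"q1": 12.0, "q2": 8.0},         # hypothetical weighted points possible
    {"q1": "Data Preparedness", "q2": "Model Deployment"},
)
```

Normalizing each dimension independently means a firm could score high on Data Preparedness while scoring low on Model Deployment, which is exactly the kind of targeted diagnosis the sub-categorical scores are meant to surface.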

Data Summary

"Even the smallest of companies have data! Smart companies use their data efficiently and effectively to solve business problems. Indeed, more than 70% of all companies surveyed indicated that they are actively implementing ML tools and strategies."

Survey results (Figure 1) show that most companies surveyed (37.57%) were Performers, indicating that they are actively implementing and adopting ML tools. 29.94% of companies were Leaders in their respective industries, establishing the next wave of trends across the global workforce. 28.84% of all companies were Observers, classified as those taking a wait-and-see approach to machine learning. Only 3.65% of companies were Innovators, currently exploring and developing machine learning in their businesses.

Figure 1. ML Readiness among all respondents

Survey Data Summary

"Innovators take risks when building models, and Performers with a lower overall score have excellent upside potential."

Results (Figure 2) indicate that Innovators, relative to other classifications, have the highest average ML readiness score (67.60), well above the sample average (56.57).

Observers received an average score of only 16.19, far behind Performers. Our results indicate that Innovators have a higher average ML readiness score than Leaders because Innovators are more open to experimenting with models and taking risks, demonstrating a proactive stance toward both model-centric and data-centric approaches. More importantly, Innovators are more open to risk when it comes to adopting ML. This may be because Innovators are more apt to deploy a model that shows higher performance in the proof-of-concept stage. Performers with a lower score appear to have excellent upside potential.

Figure 2. ML Roles and ML readiness scores

"ML innovators hold a more open attitude to risks, and ML performers seek a balance between risks and retaining qualities. However, ML leaders are more conservative regarding the risks, and 100% of observers show conservative instincts. "

Product quality, experimentation, and risk are key elements that all organizations must evaluate when adopting machine learning. While Innovators show interest in and willingness to take risks, Leaders focus on quality and reproducibility. For example, 67% of ML Innovators indicate that experimentation and risk are promoted even if this causes failures to deploy models, but only 10% of ML Leaders report that experimentation and risk are encouraged.

Performers seek a balance between retaining quality and reproducibility and taking risks while experimenting with models. Specifically, 22.85% of Performers report that quality and reproducibility are priorities, and 31.43% show interest in and willingness to take risks.

Notably, 34.28% of Performers report an even mix of taking risks and retaining quality and reproducibility. Observers show no interest in risk-taking and experimentation: 100% of Observers report that quality and reproducibility are their top priority.

Figure 3. Perspectives on quality, experimentation, and risk across ML roles

"Our ML Readiness Score varies across industries. For example, there is almost a 41 point gap between the financial and banking industries and the energy and utilities space. "

The average ML readiness score varies widely across industries (Figure 4) for many reasons, which we will address in future publications. While companies in the Finance and Banking industry earn the highest ML readiness scores and companies in the E-commerce and Retail industry earn the lowest, there are breakout stars in every industry. Nevertheless, these findings are indicative of trends observed across industries.

For example, in 2020, the worldwide pandemic prompted lockdowns that boosted the adoption of machine learning initiatives, particularly for companies in banking and finance. As the pandemic drove adoption, customers became more reliant on remote services that leverage machine learning to efficiently and effectively enhance access, services, and scalability.

However, industries such as Energy and Utilities, Transportation and Logistics, and Education have been far slower to react to these needs and lag far behind in their ML readiness scores. Further, a lack of ML resources, including coveted data scientists and machine learning professionals, coupled with low data quality, is a significant driver that often prevents these industries from adopting machine learning techniques.

Figure 4. ML readiness average score vs. Industry

"Our ML readiness scores highly relate to the number of data professionals in the organization and how robust machine learning implementation is as a differentiator in their organization."

As illustrated in Figure 5, there is a positive relationship between ML readiness and how highly an organization values machine learning. Our results indicate that companies that rate machine learning higher in importance (5 out of 5) scored 28.49 points higher than those that ranked it slightly lower (4 out of 5), indicating that the Loxz ML readiness score is highly related to perceived organizational importance.

However, there is no positive correlation between ML readiness scores and the number of data professionals in the organization. As shown in Figure 6, organizations with 100-500 data professionals earn the highest average ML readiness scores, 8.88 points higher than organizations with more than 500 data professionals. One reason could be that these giant companies have fixed business and machine learning models and are therefore sometimes unwilling to experiment with new machine learning algorithms.

Figure 5. ML readiness average score vs. Importance of machine learning
Figure 6. ML readiness average score vs. Number of data professionals

"Those who report ML as being core to their products generally earned higher ML readiness scores. Similarly, the higher interpretability of the ML to the products, the higher the ML readiness score. "

As shown in Figure 7, organizations that identify ML as core to their products are the highest-scoring companies. Further, companies that identify ML as interpretable in their products earn higher average scores than organizations that perceive ML as a “BlackBox”. Conversely, organizations that mark ML as a feature rather than core to the product earn lower scores in general.

However, the score difference between organizations that view ML as interpretable and a feature of the product and organizations that mark ML as core to the product is less than 4 points. The score difference between organizations that mark ML as a BlackBox and a feature of the product and organizations that mark ML as core to the product is 7.88 points.

Figure 7. ML readiness average score vs. ML roles in products


Conclusion

The ML readiness score provides a quick and direct assessment of an organization's machine learning maturity and readiness. While the score relates to many different factors, including the organization's size, industry, role of ML, number of data professionals, and the blueprint the organization hopes to achieve with ML, our survey provides the most accurate way for an organization to objectively understand its ability to deliver on machine learning objectives.

As noted in this report, the scoring methodology effectively reduces the potential biases from survey answers and survey options, and it incorporates multiple factors into the model to make ML readiness scores as accurate as possible.

From the analysis of the score and the relevant factors, we see great potential for ML development among ML Performers and among healthcare and e-commerce organizations. The average score difference between ML Performers and ML Leaders is tiny, and ML Performers account for the largest proportion of the four segments. The high average ML readiness scores of the E-commerce, Finance and Banking, and Healthcare industries are a product of the times we find ourselves in.

The development of artificial intelligence and network effects, including the rise of the digital operating model, has created a boom in e-commerce, and global pandemic lockdowns boosted AI and ML adoption in the Finance and Banking and Healthcare industries.

In the Q3 report, to be published in early October, sub-categorical scores covering different perspectives of ML readiness will be available. With targeted sub-categorical scores, you will gain more detailed and refined insights into ML maturity and readiness across the entire ML lifecycle, from data preparedness to model monitoring, including data preparation, model development, and model deployment. Sub-categorical scoring aims to provide a more specific and targeted diagnosis of ML maturity at each stage of the ML lifecycle and to help you identify potential opportunities and risks throughout it.

Future work will be geared toward refining this assessment and providing companies with more detailed and nuanced business intelligence and recommendations. We will also dive deeper into business value and model inventory. To learn more about your organization's machine learning readiness, contact Loxz today!


Loxz Digital Group is a Machine Learning Collective located in Berkeley, CA. Established in December of 2020, our focus is on building accurate machine learning models with diverse ensemble techniques for private and government entities.

We have partnered with esteemed organizations such as AWS, Splice Machine, and TurboSBIR to help us build machine learning models efficiently and coordinate with government entities as a gateway for the commercialization of our products and services. Collectively, the current assembled team has over 40 years of ML experience, housing 9 data scientists, all located in the United States and Canada. The data acquired from this survey is exclusively first-party data.


Contributors

Chen Song, Data Scientist

Lead Contributor, Author

Abhishek Santra, Ph.D.

Scoring Methodology

Ara Baghdassarian


Yiming Zhang, Data Science Lead

Scoring Methodology

Yumi Koyanagi, Designer

Report Designer

Keira Wang, Designer

Charts Designer

Justin Chase

Survey Methodology