With rapid advancements in AI technology likely to have a major impact on how society functions in the near future (from how we work to how we get from A to B), many governments around the world are developing ethical standards to ensure the technology is not abused.
Data Ethics is an emerging field that has attempted to fill the gap at the confluence of these developments. However, it relies heavily on insights drawn from multiple disciplines (such as IT, legal, auditing and data science) and has arguably yet to provide practical tools that help product developers navigate the needs of both the business and the consumer. Existing ethical theories and models often rely on eliciting individual values, virtues, personal maxims and principles, which are then composited and prioritised by product development team members. In doing so, they assume a degree of universality that does not exist, particularly in relation to novel uses of data and AI-driven applications. Without some form of external validation of something as intrinsic to individuals as personal ethics and values, significant reputational risk is likely to accrue during the product development process.
Quantifying Ethical Product Choices
Given the above, a study was designed to demonstrate an approach to identifying and quantifying the ethical priorities of consumers within the context of data-driven AI products.
Respondents were presented with two hypothetical car insurance products that use AI to track and monitor driver behaviour and usage in real time. In the case study scenarios, premium discounts were offered to drivers exhibiting desired practices. Respondents were then asked to rank product features (each embodying a defined ethical priority) by their relative importance to their purchase intention.
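Ranked preferences like these can be aggregated into relative-importance scores in several ways. As an illustration only (the study's actual method, feature names and data are not shown here), the sketch below uses a simple Borda-style count, where a feature earns more points the higher each respondent ranks it:

```python
from collections import defaultdict

def borda_importance(rankings):
    """Convert per-respondent feature rankings (best first) into
    normalised importance scores summing to 1."""
    scores = defaultdict(float)
    for ranking in rankings:
        n = len(ranking)
        for position, feature in enumerate(ranking):
            # The top-ranked feature earns n-1 points, the last earns 0.
            scores[feature] += n - 1 - position
    total = sum(scores.values())
    return {feature: points / total for feature, points in scores.items()}

# Three illustrative respondents ranking four hypothetical ethical features.
rankings = [
    ["Fair Processing", "Transparent Processing", "Data Minimisation", "Consent"],
    ["Fair Processing", "Data Minimisation", "Transparent Processing", "Consent"],
    ["Transparent Processing", "Fair Processing", "Consent", "Data Minimisation"],
]
importance = borda_importance(rankings)
```

In this toy data, Fair Processing outranks Transparent Processing overall, mirroring the kind of ordering the study surfaced; real analyses would typically use more robust choice-modelling techniques on a full sample.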
The study was completed in 2023. All responses were collected online and screened for US car ownership, yielding a sample of 268 qualified responses. Sampling was balanced on gender and age against the US Census, and respondents were representative of all regions of the USA.
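Balancing a sample against census quotas amounts to comparing each stratum's share of responses with its target proportion. A minimal sketch of that check, using made-up age bands and target proportions (not actual US Census figures or the study's real strata):

```python
def quota_gaps(sample_counts, census_props):
    """Return, per stratum, the gap between the sample's share and the
    target census proportion (positive = over-represented)."""
    total = sum(sample_counts.values())
    return {
        stratum: sample_counts[stratum] / total - census_props[stratum]
        for stratum in census_props
    }

# Hypothetical distribution of the 268 qualified responses across age bands.
sample = {"18-34": 80, "35-54": 100, "55+": 88}
# Hypothetical census-derived target proportions for each band.
targets = {"18-34": 0.30, "35-54": 0.37, "55+": 0.33}

gaps = quota_gaps(sample, targets)
```

A near-zero gap for every stratum indicates the sample is well balanced; large gaps would prompt re-weighting or further recruitment in the under-represented bands.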
What did we find?
A couple of surprising findings came out of the study.
Firstly, Fair Processing, rather than Transparent Processing, was key to consumer purchase decision making. The study defined each as follows:
- Fair Processing - whether the information collected by the insurer will be fairly assessed
- Transparent Processing - whether the insurer will be transparent about how your personal information will be used
This seemed quite at odds with the large wave of regulatory action around the world to embed transparent processing, in the form of 'Explainable AI', into law. It makes a lot of sense, however, once you step back and consider it within the context of the decision being made.
Fairness is a loaded concept, but it connotes an outcome that has been impartially weighed against all available information. Transparency, on the other hand, is focused on explaining what is being done with your personal information, and such an explanation may or may not prove useful. In the study, for example, the insurer could provide a great deal of information on how it calculates premium discounts; that information quickly becomes meaningless if respondents lack the inclination or aptitude to understand what is presented. From a purchase decision-making perspective, fairness is therefore much more attractive: it is a shortcut to an optimal outcome.
A second interesting finding was the overwhelming interest in purchasing the hypothetical AI car insurance products presented. Not surprisingly, many respondents expressed concerns about their personal privacy, but this was balanced by an equal number of suggestions on how the products could be improved or extended. What the study clearly demonstrates is that consumer ethical decisions should not be viewed in isolation, particularly when it comes to complex data-driven products. Such decisions are pragmatic in nature, weighing numerous risks, benefits and factors.
Organisations looking to better manage their data ethics risk should understand that consumer product evaluations are multifaceted, with context playing an important role. Strategic decisions that affect reputational risk should, where possible, be validated by realistic modelling of consumer choices. Relying on intent metrics collected in isolation, or on the intuition of individual team members, is unlikely to reflect real-world perceptions or choices. As the findings of our study show, quantitative approaches are available that provide practical, objective evidence to help minimise such risks.
Chris Tia - Principal, Lean Prototype Machine
Certified Machine Learning, SOC, CIPP/E+US, CIPT, ISO27001, NIST CSF, e-Discovery Professional
B. Science (Computer Science), B. Laws (Hons), M. Commerce, Grad Dip (Legal Practice)