Article

Liability for Incorrect Client Personalization in the Distribution of Consumer Insurance

by Piotr Tereszkiewicz * and Katarzyna Południak-Gierz
Private Law Department, Faculty of Law and Administration, Jagiellonian University, Olszewskiego 2, 31-007 Kraków, Poland
* Author to whom correspondence should be addressed.
Submission received: 1 March 2021 / Revised: 22 April 2021 / Accepted: 23 April 2021 / Published: 1 May 2021
(This article belongs to the Special Issue Risk in Contemporary Management)

Abstract: The use of personalization mechanisms should allow the insurance distributor to reduce exploration costs and adjust the offered insurance product to the needs, features, and situation of each individual client. This study examines how liability should be allocated when the personalization of an insurance product does not result in the client's choice of an optimal product. First, we identify the typical uses of new technologies allowing for the adjustment of insurance contracts. Second, we analyze the interplay between their application and the legal obligations of insurance product distributors. Subsequently, the paper discusses the scope of factors for which the insurance distributor is liable when using personalizing tools in contacts with clients. We submit that offering the online personalization of insurance products ought to be regarded as equivalent to providing advice under Art. 2, Sec. 1, Point 15 of the European Union Insurance Distribution Directive (IDD). From the consumer's perspective, our analysis makes the case for the insurance distributor's liability for the mispersonalization of an insurance contract.

1. Introduction: New Technologies in the Insurance Sector

Any technical revolution necessarily brings about legal change. The growing importance of new technologies has given rise to complex new questions in insurance law as well. Currently, one of the crucial concerns in insurance law is how insurers deploy modern technological tools to explore the needs of prospective clients (consumers) and the risks those clients present. Online tools enable insurance providers to profile prospective clients in order to personalize the content (insurance offers) subsequently presented to them, that is, to adjust that content to the individual characteristics, needs, and situation of the person to whom it is addressed (Południak-Gierz 2020a, p. 1010). In contrast to customization, in this process the person whose data serve as the point of reference for the adjustment remains passive, while the tailoring of the content is performed for them by a system (Sundar and Marathe 2010, pp. 300–2). In consumer insurance distribution, personalization may take various forms, two of which are crucial for this analysis. First, personalization mechanisms may be applied at the precontractual stage to ensure that offers reach only the target audience (e.g., clients who may be interested in a given product or who may become profitable customers), at the time these persons are most prone to conclude a contract, and in a manner and form that matches their individual preferences. Second, personalization tools can be used during contract formation to place appropriate clauses into the text of the contract being generated for an individual person.
Personalization in the distribution of consumer insurance products gives rise to a number of legal questions at different levels that should be taken into consideration when designing and applying the law. From the business perspective, InsurTech may change the business's exposure to regulatory, governance, fraud, model, data, asset-side, cybersecurity, and IT risks (Nicoletti 2017, pp. 230–33; Bruce et al. 2018, p. 12). From the perspective of the consumer, a number of risks arise (Gal and Elkin-Koren 2017, p. 322; Meyers 2018). First, there is a risk of discrimination: high-risk customers could face a growing degree of exclusion from insurance coverage (Gillis and Spiess 2019; on health insurance: McFall 2019, pp. 60–61). Second, there is a risk of increased information asymmetry. Consumers tend to be unaware of the different personalization functions, i.e., what data are included, how they are processed, what the outcomes of this processing are, and, finally, how it affects their situation, e.g., how the insurance contract they concluded was modified by this process (Południak-Gierz 2020a). Third, undue influence on the decision-making autonomy of the consumer poses another relevant risk. The personalization of the content presented to a potential customer means that the time, form, and manner of concluding a contract are chosen to target the consumer at the moment and in the way that make them most prone to enter into such an agreement. Because this technology allows the exploitation of the consumer's biases and weaknesses, it may strongly limit their ability to make a free and informed choice about whether to conclude a contract (Gal 2018, p. 89; Wagner and Eidenmüller 2018, p. 583).
From a socioeconomic perspective, it is claimed that the use of big data in insurance brings about the twilight of solidarity, understood as sharing payment responsibility for the occurrence of different risks (Prainsack and Van Hoyweghen 2020; McFall 2019, p. 70). The aim of this paper is to tackle one aspect of the use of personalization mechanisms in the daily practice of insurance distributors (that is, all classes of entities that sell insurance products to insurance clients, Cappiello 2018, p. 23) that has not yet been explicitly addressed by legal scholars, namely, the distribution of the risks of the improper personalization of an insurance contract. Two research questions are addressed. First, when personalization mechanisms applied by insurers function improperly, how should the risks resulting from their imperfection be allocated? Second, if one assumes that insurance distributors are obligated to provide advice to each prospective client, how does this duty to advise influence the outcome of this risk allocation? In other words, we examine the relevance of the insurance distributors' duty to advise their clients for the allocation of risks resulting from the malfunctioning of any personalizing tools they deploy.
The use of personalization mechanisms should allow an insurance distributor to adjust the insurance products they offer to the needs, characteristics, and situation of an individual client (or a class of clients, depending on the method of personalization applied). In reality, the result of personalization in a given case may diverge from the outcome one would regard as appropriate: an insurance product offered may not cover the risk the client wants to have covered, or the coverage may be offered in return for a premium that is significantly higher than those charged by other providers on the relevant market (cf. Baker and Dellaert 2019, pp. 18–20, regarding robo-advisors). Thus, the crucial question is who should bear liability when the insurance contract concluded is not appropriate from the consumer's perspective. The implications may be considerable since, in most cases, the inadequacy of an insurance product translates into either a lack of insurance coverage of certain risks, incomplete (partial) coverage of a specific risk, or excessive (unnecessary) insurance coverage (Tereszkiewicz 2015, pp. 295–96). The difficulty in providing a clear answer to this question lies in the fact that the reasons for suboptimal outcomes of personalization processes can be diverse. The most typical reasons include the following: the personalization mechanism provided by a professional operator might be defective; the mechanism might function properly, but its configuration might be flawed; the mechanism and the configuration might work correctly, but the data gathered might not be adequate or sufficient for its objective; and, finally, the objective of the personalization might be set in a manner that diverges from what contractual fairness (honesty) would require.
While this paper examines the liability of insurance distributors, our analysis draws significantly on consumer law scholarship. Specifically, we consider the interplay between the protection of the interests of consumers who face personalization while seeking insurance products and the scope of liability of insurance distributors for breaches of pre-contractual duties toward clients. This will enable the identification of the main risks related to the application of personalization tools in the insurance sector, based on which a workable test of insurance distributor liability in cases of the suboptimal personalization of an insurance contract can be proposed. The main objective of the research is to establish whether a possible diversification of risks should lead to a situation in which different entities are liable for the different factors causing the mispersonalization of an insurance contract.
Our paper focuses on general consumer insurance (Loacker 2015, pp. 9–10). This choice has two major implications. First, insurance-based investment products (IBIPs) remain outside the scope of this paper due to their specific nature and the partly different legal regime to which they have become subject under the European Union Insurance Distribution Directive (Directive (EU) 2016/97 of the European Parliament and of the Council of 20 January 2016 on insurance distribution, hereafter: IDD). Insurance-based investment products should preferably be the topic of a separate paper, in which a representative range of personalization tools used in the distribution of these products would be examined. Second, this paper focuses on insurance products purchased by consumers who are acting outside their trade or profession. At present, the application of numerous protective legal rules depends on whether the insurance client is acting for a private (nonprofessional) purpose. National legislation on insurance law increasingly adopts a specific treatment of consumer insurance (e.g., the Consumer Insurance (Disclosure and Representations) Act 2012 in the UK; certain provisions in German and Polish insurance laws apply only to consumer transactions).

2. Structure of the Analysis: Materials and Methods

The first part of the study is devoted to exploring the typical uses of new technologies allowing for the adjustment of insurance contracts (policies) to the needs, characteristics, and situation of an individual client or a class of clients. To this end, a mosaic approach is applied, since the analysis covers theoretical scholarship on InsurTech, a selection of current online insurance offers, and the position papers of various international expert bodies (BEUC 2017; OECD 2017, 2020; EIOPA 2018). After outlining the application of new technologies in the insurance sector, we investigate the general stance of insurance law toward the personalization of consumer insurance contracts. First, the impact of the above-mentioned business practices on the insurance distributor's duty to provide advice as set out in the IDD is discussed. The findings of the analysis are presented, together with their implications. Using the analytical method, we interpret (construe) the provisions of the IDD and other applicable legal instruments that determine the content of the insurance distributor–client relationship. The analysis enables one to establish the vision that a particular legal system (e.g., that of EU law) has of distributing consumer insurance products. Second, drawing on research on the granularization of contracts, we provide an overview of the main risks associated with the use of this kind of tool for the personalization of insurance contracts. For each typical group of factors behind the improper results of the personalization process (referred to as scenarios), the findings of the investigation are shown and discussed separately, and consequently, the entity liable for the mispersonalization caused by each type of risk is identified, considering the general rules on contractual liability applicable to consumer insurance contracts. Finally, the results are summarized and the reasons for adopting the proposed manner of attributing liability for mispersonalization are explained.

3. Market Overview

Data-driven technologies, allowing for big data analysis and the application of artificial intelligence (AI) mechanisms (Borselli 2018, pp. 41–43; Łańcucki 2020, pp. 8–10), have become a multitool in the insurance sector (Senousy et al. 2018, pp. 40–41; for a general overview see: Łańcucki 2019, pp. 9–15). A typical example of big data use is a usage-based insurance model. For instance, in Poland, the insurer ERGO HESTIA makes use of the data gathered by the application Yanosik (a traffic alert system, https://yanosik.pl/ (accessed on 27 April 2021)) on a potential client’s driving style and skills for the personalization of car insurance. Using AI tools, the Polish insurer WARTA runs two pilot projects. The first one uses a virtual adjuster (image recognition) for the estimation of the costs of vehicle repair. In the second project, a call center consultant gathers data from vehicle accident victims.
AI is used for the purposes of indirect marketing, customer assistance, monitoring of client behavior (e.g., driver monitoring), claim predictions (by using telematics and geospatial data), claim processing (e.g., fraud detection, automatic execution), and insurance market analytics. It has become possible, inter alia, to gather data in a cost- and time-efficient manner, to perform a more in-depth analysis of the consumer's situation, and to optimize insurance offers (e.g., by price personalization). AI engines automate the process of data gathering either by actively entering into contact with (potential) clients in order to extract data or by monitoring their activities. It follows that, with the application of new technological tools, insurance distributors are also capable of fulfilling the duty to provide advice to prospective clients (Tereszkiewicz 2013b, 2020; Nicoletti 2017, p. 218). Yet, the deployment of these new technologies gives rise to new risks for consumers and legal risks for insurance distributors, who face uncertainty as to how their conduct will be evaluated.

3.1. New Manner of Fulfilling the Duty to Advise: Analysis, Results, and Discussion

To begin, one should consider how the application of new technologies—especially those used for personalizing the content presented to a potential client—may affect the liability of the professional party (on how the principle of “know your customer” affects the general conduct of the insurance business, see Tereszkiewicz 2013b, p. 240; Cousy 2017, pp. 45–48). It should be underscored that these mechanisms can be used by different classes of operators in the insurance distribution chain, including insurance agents, brokers, and insurers (Cappiello 2018, p. 56). As a result, comparable challenges appear regardless of who takes part in the process of insurance distribution and sale. It is submitted that there is a strong case in favor of imposing on business parties a robust duty to advise a prospective client. Specifically, this means both strengthening the insurance broker’s duty to advise the client, and—provided that it is the insurer that uses the personalization technique—extending this duty to insurers as well.
Big data technology and AI-based mechanisms reduce the exploration effort necessary for the insurance distributor to get to know the customer (on "know your customer" duties see Tereszkiewicz 2015, pp. 297–99). Big data enables the extraction and processing of an unprecedented amount of data on a (potential) client with such accuracy that, in the end, the person using these tools may know more about the consumer than the consumer themselves (Tereszkiewicz 2020, p. 131). This phenomenon is especially easy to notice when common misperceptions people hold about themselves are considered. A powerful example comes from research on cognitive biases and the manner in which they may be used during the process of price personalization (Bar-Gill 2019, pp. 217–54). The following illustration may be useful. Adam, who is about to purchase a gym pass, is in fact likely to use the pass two to three times a month. However, he estimates the frequency of his gym visits too optimistically; he is convinced that he will go at least twice a week. As a result, his preference-based willingness to pay is significantly lower than his misperception-based willingness to pay, and knowledge of this difference can easily be abused by an entrepreneur when setting the price of a gym pass. Another example is the IKEA effect: consumers who were involved in creating a product tend to regard their own amateur creations as having a value similar to that of professional ones, so if a consumer is given the opportunity to customize a product, they are likely to value it more (see, e.g., the "Share a Coke" campaign).
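The pricing logic behind this example can be made concrete with a minimal sketch. All figures and variable names below are invented for the Adam illustration; the sketch is not drawn from any cited study and only shows how a price anchored to misperception-based willingness to pay can exceed a preference-based benchmark.

```python
# Hypothetical illustration of misperception-based price personalization.
# All figures are invented for the example of Adam and his gym pass.

ACTUAL_VISITS_PER_MONTH = 2.5        # what Adam's data suggest he will actually do
BELIEVED_VISITS_PER_MONTH = 8        # what Adam optimistically believes he will do
VALUE_PER_VISIT = 10.0               # Adam's subjective value of one visit (EUR)

# Willingness to pay derived from actual preferences vs. from misperception.
preference_based_wtp = ACTUAL_VISITS_PER_MONTH * VALUE_PER_VISIT       # 25.0
misperception_based_wtp = BELIEVED_VISITS_PER_MONTH * VALUE_PER_VISIT  # 80.0

# A seller that knows Adam's bias can price close to the inflated figure.
personalized_price = 0.95 * misperception_based_wtp                    # 76.0

print(f"Preference-based WTP:    {preference_based_wtp:.2f}")
print(f"Misperception-based WTP: {misperception_based_wtp:.2f}")
print(f"Personalized price:      {personalized_price:.2f}")
```

The gap between the two willingness-to-pay figures is precisely the margin that knowledge of the consumer's bias allows the provider to capture.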
The EU lawmaker appears to move consistently toward imposing on financial providers a broad duty to advise their clients (Tereszkiewicz 2020, p. 142). In the field of EU insurance law, significant advances toward protecting consumers against the most common risks of the misselling of insurance products were made by Directive 2002/92/EC on insurance mediation (hereafter: IMD). Significantly, Article 12 (3) of the IMD introduced into EU insurance law the insurance intermediary's duty to explore the potential policyholder's needs (Moloney 2010, p. 254, described it as "a quasi-know-your-client requirement"). The above-mentioned IMD provision imposed on an insurance intermediary a duty, on the one hand, to specify the demands and the needs of a prospective policyholder with a view to a specific contract, and, on the other hand, to specify the underlying reasons for any advice the intermediary gives to a customer. The language of Article 12 (3) IMD was very broad and did not provide detailed guidance on the precise extent of the intermediary's duty. Yet, this allowed national lawmakers (i.e., legislators, courts, and oversight bodies) to "modulate details regarding the intermediary's duties according to the complexity of the insurance contract proposed" to the consumer (cf. Article 12 (3) IMD). Enacted after the financial crisis of 2008, the IDD considerably extended the level of protection offered by the IMD (De Maesschalck 2017; Marano 2019). In what is a significant novelty in EU insurance law (Cousy 2017, p. 48), insurance distributors have become subject to an overarching general duty to act honestly, fairly, and professionally in accordance with the best interests of their customers. This duty, set forth in Article 17 (1) IDD, mirrors the equivalent duty introduced first in MiFID I (Directive 2004/39/EC of 21 April 2004 on markets in financial instruments, referred to as MiFID I, meanwhile repealed by Directive 2014/65/EU of 15 May 2014 on markets in financial instruments, referred to as MiFID II). It lays down a general standard of conduct for insurance distributors that should guide the interpretation of the more specific duties provided by the IDD. In what is a significant extension of the scope of consumer protection, the IDD applies to certain activities conducted through price comparison websites (Marano 2019). Specifically, as Article 2(1) provides, the IDD applies to persons whose activity consists of the provision of information on one or more contracts of insurance in response to criteria selected by the customer, whether via a website or other media, or the provision of a ranking of insurance products or a discount on the price of an insurance contract when the customer is able to directly or indirectly conclude an insurance contract at the end of the process. While the IDD lays down no specific rules applicable only to comparison websites, its general principles and standards deal with most of the issues arising out of the comparison websites' status as insurance intermediaries (Marano 2019, p. 304). In our view, this is a significant improvement of the IMD standard, which will be subject to further regulatory developments in the future. In practice, most comparison websites use hyperlinks that transfer the insurance client to websites deployed by different distributors. As a result of using comparison websites, clients searching for insurance products may easily end up being subject to personalization by insurance distributors.
While it is arguable that IDD does not explicitly impose on insurers a duty to advise their clients, it does extend precontractual individualized duties of exploration to all insurance product distributors: Under Article 20 Sec. 1 IDD, insurance distributors are obligated to “specify demands and needs of a customer based on information obtained from them, and shall provide that customer with objective information about the insurance product in a comprehensible form to allow that customer to make an informed decision” (about whether they want to purchase an insurance product from this operator). This provision should be read in the light of Recital 44 IDD, which requires that any insurance product proposed to the customer always be consistent with the customer’s demands and needs.
What is more, insurance products are becoming more volatile nowadays, making it more difficult for a consumer to grasp the difference between the products available from various providers, let alone to understand their essence and risks (Tereszkiewicz 2013b, p. 237). As a result, the information and skill asymmetry between an individual client and an insurance distributor as to the possibility of assessing which product is the most suitable for that client has increased significantly compared with the age of conventional (nondigital) insurance distribution. Not only is the consumer virtually incapable of correctly assessing the adequacy of insurance products in the light of their needs, but the insurance distributor also tends to be better informed about the characteristics of a potential client than the client themselves. This makes models in which it is the primary duty of the client (prospective policyholder) to obtain information on their own insurance needs (e.g., the German model, about which see Cousy 2012a; Tereszkiewicz 2013a) utterly unsuited to the new reality of digital insurance distribution.
With respect to digital insurance distribution, we submit that the consumer's consent to profiling by an insurance distributor ought to be considered a request for advice addressed to that insurance distributor. This, in certain national insurance laws (e.g., Germany, Poland, the UK), could suffice to trigger an obligation on the part of the broker/insurer to explore the client's insurance needs. Further, one should carefully analyze the scope of the client's consent in this regard; the client might be consenting to the further processing of their personal data, thereby "paying" with their personal data for the additional service of recommending to them the contract that is most appropriate for their needs and situation (Elvy 2017, part II).
Finally, it needs to be emphasized that the personalization of content in the online environment has been presented as a tool for increasing trust and changing the relationship between the entrepreneur and the consumer (Meyers 2018, p. 169, on the focus on assisting a policyholder by means of a driving skills development program; Południak-Gierz 2020a). For the insurance sector, this means that technologically empowered distributors of insurance (including the underwriters themselves) may successfully strive to change the way they are perceived on the market so that they make good on their promise of becoming "trusted advisors helping customers anticipate, navigate and eliminate the unique risks they face in a changing world" (so claimed by providers of personalizing mechanisms, https://earnix.com/wp-content/uploads/2019/11/The-Age-of-Insurance-Personalization-1.pdf (accessed on 27 April 2021), p. 5; in this vein, Tereszkiewicz 2020, p. 131).
When a personalizing tool selects or personalizes the product for the consumer, a question arises as to whether performing this functionality as such should be regarded as an act of providing advice regarding an insurance product under Art. 2 Sec. 1 Point 15 IDD. As soon as an insurance distributor starts offering personalized products, that is, products adjusted to the needs and the situation of a customer ("tailor-made" products) rather than merely roughly matching their profile, this conduct should be considered giving advice. The major reason in support of this view is that when an insurance distributor undertakes the personalization of the content, the balance of expectations and corresponding duties between the parties shifts. The insurance distributor has a much broader knowledge about both the products they offer and the features of a client (including a client's biases); the insurance distributor is capable of selecting the product that fully meets the needs of that client. It is not only the information asymmetry between the parties that deepens. At the same time, the vulnerability of the client increases because an additional reason for their possible lack of critical scrutiny (mental alertness) appears, namely, the trust in the profiler (Południak-Gierz 2020a). If a consumer consents to the processing of their personal data, they do so for a well-defined purpose: to obtain an offer that is indeed tailored to their needs and situation. Given the information asymmetry between the parties, the consumer voluntarily agrees to disclose a significant amount of data, which implies that the other party should reciprocate the trust placed in it and act accordingly.
In conclusion, the personalization process should be considered "the provision of personal recommendation" and thus should be subject to the requirements set forth by the IDD. This means that the customer should be informed whether "advice is provided on the basis of a fair and personal analysis," i.e., whether the advice is provided on the basis of an analysis of a sufficiently large number of insurance contracts available on the market, and in accordance with professional criteria (cf. Recital 47, Art. 19, Sec. 1, letter c; Art. 20, Sec. 3 IDD). The technological system deployed by the insurance distributor should process sufficient personal data so as to identify the customer's demands and needs (Recital 44, Art. 20, Sec. 1 IDD). Furthermore, information on the personalization process and its assumptions should be provided to the consumer, e.g., explaining why the price diverges from the standard one or why the insurance coverage was modified (Recital 45, Art. 20, Sec. 1 sentence 3 IDD). Finally, the personalizing program deployed by the insurance distributor should be sufficiently smart to appropriately classify the products in its database as to their functionality and target group so that it correctly matches the product to the needs and situation of a potential customer (Recital 55 IDD).
The position on the insurance distributor’s duties toward clients that we advocate in this paper has significant ramifications for the conduct of insurance business in the digital environment.
Assuming that all insurance distributors are obligated to advise their prospective clients to the extent defined above, they should design their digital infrastructure in a manner that enables the fulfillment of that obligation. This means that once they start offering and selling products online, they should design their websites in such a manner that a consumer is advised on the product they aim to purchase in accordance with the requirements set forth by the IDD. From a technical perspective, an insurance distributor can, in principle, fulfill this requirement in three manners. First, the insurance distributor can undertake measures to profile every consumer that accesses their website. As soon as a consumer decides to explore the offers available to them or to launch the insurance personalization tool available on that website, the insurance distributor acquires the data necessary to appropriately adjust the offer or to warn the potential client that the selected option is inappropriate or suboptimal from the perspective of their interests. However, the main difficulty of this model is that the data obtained in such a manner would rarely suffice for adequate personalization. Second, the insurance distributor may use lengthy online forms that the consumer would need to fill out in order to obtain access to a personalized offer. Third, an insurance distributor may collaborate with third-party operators that already collect and store data, including data that may be crucial for the purpose of personalizing insurance offers. In this model, the insurance distributor may limit themselves to requesting that consumers consent to their data being processed also for the purpose of an insurance offer made by an indicated third-party operator. If the law imposed on all insurance distributors a duty to advise their clients, those distributors would, in consequence, have to personalize their offers as well.
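A minimal sketch of these three data-acquisition routes follows. The field names, the required-attribute set, and the sufficiency check are hypothetical, chosen only to illustrate the design choice and why behavioural data alone (the first route) will often fall short of advice-grade personalization.

```python
from typing import Dict

# Hypothetical minimum set of attributes needed for advice-grade personalization
# of a simple consumer insurance product.
REQUIRED_FIELDS = {"age", "risk_to_cover", "coverage_period", "claims_history"}

def merge_profiles(*sources: Dict[str, object]) -> Dict[str, object]:
    """Combine data from on-site behaviour, questionnaires, and (consented) third parties."""
    profile: Dict[str, object] = {}
    for source in sources:
        profile.update(source)
    return profile

def sufficient_for_advice(profile: Dict[str, object]) -> bool:
    """True only if all required attributes are known about the client."""
    return REQUIRED_FIELDS.issubset(profile)

behavioural = {"pages_viewed": 7, "risk_to_cover": "travel"}   # route 1: on-site profiling
questionnaire = {"age": 34, "coverage_period": "14 days"}      # route 2: lengthy online form
third_party = {"claims_history": []}                           # route 3: consented external data

print(sufficient_for_advice(merge_profiles(behavioural)))                              # False
print(sufficient_for_advice(merge_profiles(behavioural, questionnaire, third_party)))  # True
```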
Additionally, one should also consider a different legal position, under which insurance distributors have to advise their clients only under narrowly defined circumstances (e.g., only with respect to certain insurance products or only in cases in which certain financial risks may materialize). Such a legal position would result in a need for an inquiry into when the use of new technologies by insurance distributors may be considered offering advice to clients. The typical and the strongest case is the personalization of an offer of an insurance product. Once the insurance distributor starts personalizing the content of offers of insurance products, they should be considered to be giving advice to prospective policyholders. Thus, provisions on advising on insurance products specified in IDD should become applicable. This invites the conclusion that whenever an insurance distributor uses personalization mechanisms that enable them to tailor an offer of insurance product to a particular client’s needs, they then assume a duty to advise their client on the product’s suitability.

3.2. Allocation of Risks Resulting from the Application of Personalizing Tools: Analysis, Results, and a Discussion of Typical Scenarios

Personalization mechanisms within the insurance sector allow businesses to offer consumers tailored recommendations based on their interests, lifestyle, and behavior, which, in principle, should, on the one hand, maximize the distributor's sales and profits and, on the other, inspire clients' trust. From the consumer's perspective, personalization means effortless access to better-adjusted content (including offers corresponding with their needs) as well as premiums and discounts (e.g., behavior-based pricing). Clearly, despite its benefits for insurers and consumers, the application of personalizing tools by insurers may lead to undesired outcomes. From the consumer's perspective, these are typically inadequate insurance coverage (certain risks not covered), double insurance of the same risk, or overpriced insurance protection (Loacker 2015, p. 28). The frequency of these outcomes is likely to increase owing to consumers' lack of expertise in using the technological tools deployed by insurance distributors. Our study thus needs to address the question of how risks resulting from the use of personalizing tools should be allocated.
As a matter of principle, it could be argued that, regardless of the nature and technical details of the personalization tools applied by the professional party, the negative consequences resulting from a failure to fulfill the duty to specify the demands and needs of customers and the underlying reasons for any advice on a particular insurance product should burden the insurance distributor who used a personalizing tool in their interaction with the consumer. We draw upon the underlying principle of the IDD: "According to the approach toward client protection that underlies the IDD, consumers should benefit from the same level of protection despite the differences between distribution channels." (Cappiello 2018, p. 24; Tereszkiewicz 2020, p. 140; cf. Recitals 6 and 8 IDD). It follows that it should not matter whether the insurance contract was concluded via a third-party website or whether the insurance was distributed by an agent, broker, "bancassurance" operator, travel agent, car rental company, or directly by the underwriting insurance undertaking itself (the list of entities qualified as insurance distributors is included in Recital 5 IDD). Should this principle be interpreted to mean that, regardless of who is profiling the client, errors at the personalization stage, including those made while adjusting an insurance product to the individual needs of the client, should have the same effects? Before we offer a conclusive answer to this question, a brief overview of the typical classes of reasons for mispersonalization should be provided.

3.2.1. Defectiveness of the Personalizing Mechanism Provided by Another Professional Entity

The improper result of the personalization of an insurance product may be caused by the fact that the mechanism used for this process is defective. Typically, it may simply mismatch the product because the factors taken into account during the process are not correctly balanced, important factors are neglected, or irrelevant ones are included in the algorithm, which distorts the personalization outcome (e.g., assumptions are made on the basis of single interactions or purchases). Another possibility is that the segmentation is not sufficiently granular, and consequently, offers sent to a customer only roughly match their profile (Sitecore and Vanson Bourne study 2017, for an overview see: https://www.sitecore.com/company/news-events/press-releases/2017/10/new-study-reveals-brands-fail-to-use-customer-data-to-deliver-personalized-digital-experiences (accessed on 27 April 2021); Gartner Research 2018). The defectiveness of the personalizing tool may also manifest itself in the mechanism sending either too many or overly personalized messages, which results in the consumer being discouraged from contracting because the distributor's activities are viewed as an invasion of privacy. These instances, however, do not lead to the mispersonalization of the contract and thus are not discussed further.
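A toy sketch of the first class of defect follows. The products, weights, and client attributes are hypothetical and do not describe any real system; the point is only that a mis-weighted scoring function, one that neglects a relevant factor and over-weights an irrelevant one, can select an unsuitable product even when the input data are correct.

```python
# Hypothetical matching engine: products are scored against a client profile by a
# weighted sum. A defective weight vector picks the wrong product although the
# client data themselves are accurate.

CLIENT = {"needs_winter_sports_cover": True}

PRODUCTS = [
    {"name": "Basic Travel", "winter_sports": False, "ad_campaign_boost": 1.0},
    {"name": "Ski Travel",   "winter_sports": True,  "ad_campaign_boost": 0.0},
]

def score(product, weights):
    # Relevant factor: does the cover match the client's stated need?
    match_cover = 1.0 if product["winter_sports"] == CLIENT["needs_winter_sports_cover"] else 0.0
    # Irrelevant factor: how heavily the product is currently being advertised.
    return weights["cover"] * match_cover + weights["ads"] * product["ad_campaign_boost"]

sound_weights = {"cover": 1.0, "ads": 0.0}
defective_weights = {"cover": 0.1, "ads": 1.0}   # neglects the relevant factor

for w in (sound_weights, defective_weights):
    best = max(PRODUCTS, key=lambda p: score(p, w))
    print(w, "->", best["name"])
# Sound weights select "Ski Travel"; defective weights select "Basic Travel".
```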
When a personalizing tool is developed by the entity using it, determining the person liable for possible mispersonalization appears straightforward. However, the technology used in the personalization process is often developed and provided to the insurance distributor by a third-party operator (Joint Committee Discussion Paper 2016, p. 12). In these scenarios, the entity providing the personalizing system may be liable for the defectiveness of that mechanism and the damage that resulted from its improper functioning. The allocation of liability will depend on the provisions governing liability for improper performance of a contract in a given case; the content of a contract concluded between these entities will constitute an important factor in determining the scope of the liability. In this regard, national rules of EU Member States implementing Directive 2019/770 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services are relevant. Member States are free to broaden the scope of the application of these provisions so that this Directive covers non-consumer contracts (Recital 16), dual-purpose contracts (Recital 22), and platform providers who are not direct contractual partners of the consumer (Recital 23) (Carvalho 2019, p. 195).
The requirements for the liability of a third-party operator that provided the personalizing tool will be of importance for the insurance distributor, since the latter carries the risk of the tool's malfunction in relation to their clients. These requirements are, however, of no significance for the legal situation of the client who entered into an insurance contract following the use of a personalizing tool. This is because the consumer will usually have a direct claim against the insurance distributor. Thus, from the consumer's perspective, it does not matter which entity in the insurance distribution chain (i.e., insurer, agent, or broker) uses the personalization mechanism or who supplies the mechanism with which the personalization is carried out (it could be an IT company that provides the toolset to the insurer or an insurer who provides it to the agent or broker). The fact that the mispersonalization was caused by a defect of the personalizing mechanism supplied by a third-party operator to the insurance distributor will be relevant for the legal relationship between these two entities.

3.2.2. Incorrect Configuration of Personalization Mechanisms

A possible second category of cases is that the personalization tool applied by the insurer is not defective as such, but its configuration process was disturbed or improperly performed. As in the previous category, from the consumer’s perspective, these circumstances should be regarded as irrelevant, unless it is the consumer themselves that customizes the product with the use of the program provided by the insurance distributor (“design-your-own insurance model,” see, e.g., https://sidecarhealth.com/personalized (accessed on 27 April 2021); https://www.hcf.com.au/insurance/health/get-a-quote/customise-cover (accessed on 27 April 2021)).
If that was the case, then the question is whether the consumer should bear the negative consequences of the fact that they, acting by themselves, tailored the insurance product in a manner that does not meet their actual needs and situation. In principle, it could be argued that the individual should not be restricted in exercising their own autonomy in this regard and should not be prevented from making suboptimal contractual decisions or taking disproportionate risks as long as their choice in this regard is actually free and autonomous (Południak-Gierz 2020b, p. 79).
However, in the case of insurance products, consumers rarely act with sufficient knowledge of both the risks and the content of a given insurance product, which means that the risks of miscustomizing insurance policies are real. Further, depending on the specific risk that the consumer wants to have covered by an insurance product, the consequences of making an error in the customization process may be detrimental to that consumer.
Given the above considerations, there is a strong case for regulating the customization of certain types of insurance policies, in particular those aimed at covering risks whose materialization could lead to especially adverse effects for policyholders (e.g., liability insurance). This would prevent insurance operators from exploiting particular weaknesses (behavioral biases) of consumers (or classes of consumers) and from tricking them into purchasing certain products while avoiding the provision of advice correlated with the personalization of the content. Yet, the design of such a regulatory framework should not overly impede innovation by indirectly preventing the emergence of new products or new models of product distribution in the insurance sector. As a middle way, product distributors could be obligated to provide personalized warning mechanisms. In order to impede the easy circumvention of that duty, the customization of the product should be allowed only for clients who are already profiled, since only then would the personalization of the warning system and the ex post verification of the adequacy of their choice be possible.
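A minimal sketch of such a personalized warning mechanism follows. The profile fields, warning conditions, and messages are hypothetical; the sketch only illustrates the idea that the distributor compares the client's self-customized cover against the profile it already holds before the purchase is completed.

```python
from typing import Dict, List

def personalized_warnings(profile: Dict[str, object], chosen_cover: Dict[str, bool]) -> List[str]:
    """Compare a self-customized policy with the client's known profile and return
    warnings where cover the profile suggests is needed has been dropped."""
    warnings: List[str] = []
    if profile.get("drives_car") and not chosen_cover.get("third_party_liability", False):
        warnings.append("Your profile indicates you drive; liability cover was removed.")
    if profile.get("planned_activity") == "skiing" and not chosen_cover.get("winter_sports", False):
        warnings.append("Your trip involves skiing, but winter sports are excluded.")
    return warnings

profile = {"drives_car": True, "planned_activity": "skiing"}
chosen = {"third_party_liability": False, "winter_sports": False}
for warning in personalized_warnings(profile, chosen):
    print("WARNING:", warning)
```

As the text above notes, such a check is only meaningful for clients who have already been profiled, since otherwise the system has no benchmark against which to verify the adequacy of the self-made choice.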

3.2.3. Inadequate or Insufficient Data

With respect to data processed by the personalizing tool, several factors may hinder adequate personalization. In order to assure an adequate outcome of the personalization process, data of two types are needed: “(1) the relevant attributes of the products available to the consumer, which must include an adequate representation of the variety of potentially suitable products available in the market to provide meaningful choice; and (2) the relevant attributes of the consumers for whom the algorithm is ranking or matching the products” (Baker and Dellaert 2018, p. 737). The main issues related to these two categories are the accessibility and quality of these data.
As for the first category of data, access to specific and up-to-date information on particular insurance products might be limited either due to technological reasons (e.g., a program incompatible with the database or unable to extract data from it in an adequate form, or the necessary information not being saved in the accessible database) or because of business or legal reasons (the insurer might be reluctant to make certain data available to external robo-advisors) (Baker and Dellaert 2018, pp. 737–38).
As for the second data category, in principle, a customer as an insurance applicant is obligated to inform the insurance distributor about risk-relevant circumstances (Borselli 2020, p. 115). However, this obligation was developed in times when the insurance distributor had, as a rule, little knowledge of the insurance applicant and, to a great extent, depended on the data provided by the latter (Cousy 2012a; Borselli 2020, p. 131). Technological evolution, which has enabled insurers to elicit material information successfully and to investigate risk-relevant factors, coupled with the development of consumer law, has brought about a new approach toward disclosure by the policyholder.
Most recently, the law, as exemplified by new statutes in Germany and the UK, has shifted the emphasis from the insurance applicant’s duty to disclose risk-relevant factors toward the insurer’s obligation to collect contract-relevant information from the insurance applicant (Cousy 2012b; Tereszkiewicz 2013a; Merkin and Gurses 2015; Hertzell 2017).
Furthermore, the business model of insurance distributors has undergone a significant evolution. Rather than requesting specific information from the insurance applicant, insurance distributors seek to obtain permission to gather and process the applicant's personal data. A question immediately arises as to what specific challenges this business model presents. Most importantly, one should ask whether this model should be relevant to the scope of the liability of an insurance distributor for the mispersonalization of an insurance contract. Insurance law provisions do not answer this question directly. It is submitted that once the insurance applicant consents to data processing by the insurance distributor and the profiling process is launched, the insurance applicant's active participation in the provision of data ends. It can be argued that by giving consent, the applicant already fulfills their duty to disclose risk-relevant circumstances to the other party (Borselli 2020, pp. 115, 131; Christofilou and Chatzara 2020, p. 60); in practice, the automatization of data processing and personalization frequently leaves no room for individual disclosures. In addition, insurance distributors have the resources and toolset necessary to extract and analyze the necessary information on the insurance applicant. Big-data-based mechanisms allow a business to deduce features or circumstances of a person of which that person is unaware. Thus, if the conclusions drawn by a smart personalization mechanism contradict the information given directly by the insurance applicant, they will be treated as more credible than the communication provided by the insurance applicant themselves. As a result, once an insurance applicant agrees to data processing for the purpose of the personalization of an insurance contract and the personalization mechanisms are launched, the information individually and actively provided by the insurance applicant becomes largely irrelevant. Consequently, the case for requiring the insurance applicant to additionally inform the insurance distributor about risk-relevant circumstances becomes much weaker, in particular when the insurance distributor does not actively ask any further questions related to risk-relevant factors.
The automatization of the stage of data collection poses numerous challenges. Mispersonalization may be caused by the fact that the data are insufficient for achieving the personalization purpose. In this regard, the main difficulty is setting the minimum threshold at which the data on a person become credible and sufficiently depict individual traits, tendencies, or needs (Południak-Gierz 2017, pp. 30, 32). Further, the data need to be constantly updated so that they do not become obsolete (Sitecore and Vanson Bourne study 2017). This is particularly relevant when data are bought or downloaded ex post, e.g., from applications used for a different purpose. Additionally, the information thus obtained might be inadequate for the purpose of its processing: its content might be altered, intentionally or unintentionally, by the individual—a famous example is the increase in purchases of carbon-monoxide detectors once the public noticed that some insurance companies treated such a purchase as a factor reflecting a debtor's trustworthiness (Duhigg 2009)—or it might be predetermined by the design of the system from which it is obtained. Given our view that, in the case of insurance offers made following the use of personalization tools by an insurance distributor, the insurance applicant's duty to disclose may be fulfilled merely by consenting to the profiling as long as no further questions are addressed to them, all the above risks should in principle be allocated to the insurance distributor using the personalization tool.
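The data-quality concerns just described can be made concrete with a small sketch. The attribute names, the sufficiency threshold, and the freshness limit are hypothetical; the sketch only shows the kind of check a personalizing system might run before relying on a profile, covering both the minimum-threshold and the obsolescence problems.

```python
from datetime import datetime, timedelta
from typing import Dict, NamedTuple

class Attribute(NamedTuple):
    value: object
    observed_at: datetime
    source: str            # e.g. "declared", "inferred", "purchased"

MIN_ATTRIBUTES = 4                    # hypothetical sufficiency threshold
MAX_AGE = timedelta(days=365)         # hypothetical freshness limit

def is_fresh(attr: Attribute, now: datetime) -> bool:
    """An attribute older than the freshness limit is treated as obsolete."""
    return now - attr.observed_at <= MAX_AGE

def adequate_for_personalization(profile: Dict[str, Attribute], now: datetime) -> bool:
    fresh = {name: attr for name, attr in profile.items() if is_fresh(attr, now)}
    return len(fresh) >= MIN_ATTRIBUTES

now = datetime(2021, 4, 27)
profile = {
    "age":            Attribute(34, datetime(2021, 1, 10), "declared"),
    "driving_style":  Attribute("calm", datetime(2019, 2, 1), "inferred"),    # stale
    "home_ownership": Attribute(True, datetime(2021, 3, 5), "purchased"),
    "claims_history": Attribute([], datetime(2021, 4, 1), "declared"),
}
print(adequate_for_personalization(profile, now))   # False: only 3 fresh attributes
```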
In sum, though the inadequacy of the data fed to the personalization system may be caused by different entities, including the data subject themselves (i.e., the client buying insurance), it is the personalizing insurance distributor who will typically be liable for the consequences of data inadequacy.

3.2.4. Aim of the Personalization

Finally, the mispersonalization might result from the fact that the aim of the personalization is set in a manner that diverges from the principle of contractual fairness (loyalty). It should be noted that such disloyalty of an insurance distributor may take different forms (Baker and Dellaert 2019, pp. 16–17). Typically, the price may be set not in accordance with the consumer's situation and needs, but in accordance with their misperceptions as to the latter (the price offered is the maximum that this consumer would accept, regardless of the regular price of such an insurance product). Similarly, the determination of a consumer's willingness to pay may be based on that person's misperceptions (Bar-Gill 2019, p. 246; in the context of insurance: Baker and Dellaert 2018, p. 736). Additionally, the offered product may be ill-suited to the interests of the consumer but profitable from the perspective of the product distributor; this will typically qualify as the misselling of insurance products.
Another example is restricting the consumer's access to some offers by "surveying a strategic subset of options that are most profitable for the firm" (Baker and Dellaert 2019, p. 16). In addition, the behavioral biases of the consumer might be exploited in different manners. Certain features of the offered product (e.g., the manner of its presentation, its context, etc.) might suggest that the product is adequate for the needs of a particular consumer. For example, a hyperlink to a travel insurance offer that excludes winter sports may be presented on the website of a travel agency offering ski trips (Tereszkiewicz 2013b, p. 236), or the insurance personalization tool may be correlated with projections of future scenarios that emphasize certain factors (e.g., a risk that is not particularly probable is presented as if it were). Finally, behavioral strategies, including framing or selective highlighting (on the power of architecture: Baker and Dellaert 2018, pp. 739–40), might be used to steer the consumer toward an option that may indeed be relatively well adjusted to their needs but is, at the same time, more profitable for the personalizing entity than an equivalent alternative (Baker and Dellaert 2019, p. 16).
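The steering problem can be illustrated with a stylized sketch. The products, suitability scores, and margins below are hypothetical; the sketch only shows how a ranking objective that blends suitability with the distributor's margin can push a slightly less suitable but more profitable product to the top of the list presented to the consumer.

```python
# Hypothetical ranking: each product carries a suitability score for this client
# (0..1) and a profit margin for the distributor. Mixing the two into one ranking
# key steers the client away from the most suitable option.

PRODUCTS = [
    {"name": "Plan A", "suitability": 0.92, "margin": 0.05},
    {"name": "Plan B", "suitability": 0.88, "margin": 0.30},
]

def rank(products, profit_weight):
    key = lambda p: (1 - profit_weight) * p["suitability"] + profit_weight * p["margin"]
    return sorted(products, key=key, reverse=True)

print([p["name"] for p in rank(PRODUCTS, profit_weight=0.0)])   # ['Plan A', 'Plan B']
print([p["name"] for p in rank(PRODUCTS, profit_weight=0.3)])   # ['Plan B', 'Plan A']
```

Because both rankings contain products that are "relatively well adjusted" to the client's needs, the steering is difficult to detect from the outside, which is precisely the evidentiary problem discussed in the following paragraphs.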
In principle, the admissibility of such configurations of personalization tools could be reviewed on a case-by-case basis under the legal framework dealing with unfair business-to-consumer commercial practices (Directive 2005/29/EC of the European Parliament and of the Council on unfair commercial practices, referred to as UCPD). However, it is uncertain whether the UCPD can be effective when it comes to controlling personalization as a market practice (Południak-Gierz 2019, pp. 170–73). A major challenge in this regard results from the requirements for applying the EU Unfair Commercial Practices framework to the commercial practice concerned: the law requires an examination of whether the commercial practice at hand "materially distorts or is likely to materially distort the economic behaviour with regard to the product of the average consumer whom it reaches or to whom it is addressed" (Article 5 Sec. 2b UCPD). This means that the UCPD legal framework is grounded in the idea of protecting "an average consumer," thus a typical member of a class, a separate question being how this notion is defined (empirically or normatively) and by whom (regulators ex ante or courts ex post, Golecki and Tereszkiewicz 2019, pp. 97–99; see also the evolution of CJEU case law on the concept of "an average consumer," especially: Case C-120/78 Cassis de Dijon 1979; Case C-382/87 Buet 1989; Case 362/88 GB-INNO-BM 1990; Case C-126/91 Yves Rocher 1993; Case C-373/90 Nissan 1992; Case C-315/92 Clinique 1994; Case C-470/93 Mars 1995; Case C-313/94 Graffione 1996; Case C-210/96 Gut Springenheide 1998). By nature, the notion of an average consumer is formulated without reference to personal characteristics that would distinguish one consumer from another.
The idea of the personalization of consumer products, including insurance policies, does not fit easily with this legal approach. The essence of personalization is that the content is highly persuasive for a selected individual only, rather than for a class of individuals or a typical average individual. A further difficulty lies in the fact that, under the unfair commercial practices framework, it is the isolated commercial practice of a provider that is subject to control. The power and appeal of personalization frequently lie in the coordination between different practices applied by a provider, none of which, when assessed separately, materially distorts the consumer's behavior, although each is designed to correlate with the others. Thus, it is the whole bundle that materially distorts the market behavior of the consumer. Finally, there is a procedural difficulty. Proving that a commercial practice strongly limited the consumer's ability to make an informed choice is difficult, since it requires demonstrating that, had the personalization not been unfair, the consumer would not have concluded the contract in question. In this regard, it would be crucial to draw a line between an effective and sophisticated market practice that is still legitimate and a personalization mechanism whose use amounts to an unfair market practice due to its impact on the consumer's decision-making process (Południak-Gierz 2019, pp. 170–73).
Finally, it should be noted that some of the negative effects related to mispersonalization at the time of contract making may be reduced by the application of the European Union Unfair Contract Terms Directive (Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts, referred to as UCTD). This can be illustrated by the following hypothetical: certain abusive ("unfair") contract terms are added only to those contracts that are concluded with individuals who, due to their characteristics or situation, are highly unlikely to notice such terms or to challenge them, whether in or out of court. However, for the UCTD regime to apply, personalized agreements must be recognized as "not being individually negotiated," that is, imposed by the trader on the consumer (Południak-Gierz 2019, pp. 164–70).

4. Summary of the Results

Mispersonalization may be caused by different factors. However, the specific reasons for inadequate personalization should rarely influence the situation of the consumer. Unless it is the consumer's intentional fraudulent behavior that leads to an inadequate result of the personalization of an insurance product, it should be irrelevant from the consumer's perspective who caused the mistailoring of the insurance product. The insurance distributor who made the personalization mechanism available to the consumer should bear the negative consequences of that decision. This outcome appears justified considering the balance of interests to be protected in the case of the personalization of insurance products.
On the one hand, it is the consumer who needs protection, due to their increased vulnerability: the asymmetry of information and of the ability to process it, as well as the illusion of transparency, which means that the abundance of information needed to make an informed choice may overwhelm even a savvy user (Gal 2018, p. 88). On the other hand, the complexity of the personalization process and the size and market position of the actors that take part in it, compared with the individual's position, necessitate that the increased risks of error (given the diversity of the stages at which a malfunction may distort the effect of the personalization) also be taken into consideration. Applying innovative solutions always poses risks and challenges, and the law should be careful not to introduce burdens and restrictions in a manner that hinders development. Hence, given that it is the insurance distributor who should be held liable for mispersonalization, a further question is how this liability ought to be shaped so as not to overly restrict the use of new technologies in the sector, while still effectively mitigating the detrimental consequences of their application.

Author Contributions

Conceptualization, P.T. and K.P.-G.; methodology, P.T. and K.P.-G.; software, K.P.-G.; validation, P.T. and K.P.-G.; formal analysis, P.T. and K.P.-G.; investigation, P.T. and K.P.-G.; resources, K.P.-G.; data curation, K.P.-G.; writing—original draft preparation, P.T. and K.P.-G.; writing—review and editing, P.T. and K.P.-G.; visualization, K.P.-G.; supervision, P.T.; project administration, P.T.; funding acquisition, P.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Narodowe Centrum Nauki (NCN) [the National Science Centre], Poland, Grant Number 2018/29/B/HS5/01281.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Baker, Tom, and Benedict Dellaert. 2018. Regulating Robo Advice across the Financial Services Industry. Iowa Law Review 103: 713–50. Available online: https://scholarship.law.upenn.edu/cgi/viewcontent.cgi?article=2742&context=faculty_scholarship (accessed on 30 November 2020).
2. Baker, Tom, and Benedict Dellaert. 2019. Behavioral Finance, Decumulation, and the Regulatory Strategy for Robo-Advice. Faculty Scholarship at Penn Law. 1993. Available online: https://scholarship.law.upenn.edu/faculty_scholarship/1993 (accessed on 30 November 2020).
3. Bar-Gill, Oren. 2019. Algorithmic Price Discrimination When Demand Is a Function of Both Preferences and (Mis)perceptions. The University of Chicago Law Review 86: 217–54. Available online: https://chicagounbound.uchicago.edu/uclrev/vol86/iss2/12/ (accessed on 26 November 2020).
4. BEUC. 2017. Fintech: A More Competitive and Innovative European Financial Sector. BEUC Response to Commission Consultation. Available online: https://www.beuc.eu/publication/position-papers (accessed on 27 April 2021).
5. Borselli, Angelo. 2018. Insurance by Algorithm. The European Insurance Law Review 2: 41–44.
6. Borselli, Angelo. 2020. Smart Contracts in Insurance: A Law and Futurology Perspective. In InsurTech: A Legal and Regulatory View. Edited by Pierpaolo Marano and Kyriaki Noussia. Berlin/Heidelberg: Springer, pp. 101–26.
7. Bruce, Daniel, Carole Avis, Matthew Byrne, Visesh Gosrani, Zhixin Lim, Jools Manning, Darko Popovic, Richard Purcell, and Weihe Qin. 2018. Improving the success of InsurTech opportunities. British Actuarial Journal 23: 1–34.
8. Cappiello, Antonella. 2018. Technology and the Insurance Industry: Re-configuring the Competitive Landscape. Cham: Palgrave Macmillan.
9. Carvalho, Jorge Morais. 2019. Sale of Goods and Supply of Digital Content and Digital Services—Overview of Directives 2019/770 and 2019/771. Journal of European Consumer and Market Law 5: 194–202.
10. Christofilou, Alkistis, and Viktoria Chatzara. 2020. The Internet of Things and Insurance. In InsurTech: A Legal and Regulatory View. Edited by Pierpaolo Marano and Kyriaki Noussia. Berlin/Heidelberg: Springer, pp. 49–82.
11. Cousy, Herman. 2012a. Insurance Law. In Elgar Encyclopedia of Comparative Law. Edited by Jan Smits. Cheltenham: Edward Elgar Publishing, pp. 408–20.
12. Cousy, Herman. 2012b. About Sanctions and the Hybrid Nature of Modern Insurance Contract Law. Erasmus Law Review 5: 123–31.
13. Cousy, Herman. 2017. Changing Insurance Contract Law: An Age-Old, Slow and Unfinished Story. In Insurance Regulation in the European Union. Edited by Pierpaolo Marano and Michele Siri. London: Palgrave Macmillan, pp. 31–58.
14. De Maesschalck, Nic. 2017. The Insurance Distribution Directive: What Does It Change for Intermediaries and for Others? In Insurance Regulation in the European Union. Edited by Pierpaolo Marano and Michele Siri. London: Palgrave Macmillan, pp. 59–78.
15. Duhigg, Charles. 2009. What Does Your Credit-Card Company Know About You? The New York Times Magazine. May 17. Available online: http://www.nytimes.com/2009/05/17/magazine/17credit-t.html (accessed on 29 November 2020).
16. EIOPA. 2018. Joint Committee Report on the Results of the Monitoring Exercise on ‘Automation in Financial Advice’. Available online: https://service.betterregulation.com/document/34598 (accessed on 27 April 2021).
17. Elvy, Stacy-Ann. 2017. Paying for Privacy and the Personal Data Economy. Columbia Law Review 117: 1369–459. Available online: https://columbialawreview.org/content/paying-for-privacy-and-the-personal-data-economy/ (accessed on 26 November 2020).
18. Gal, Michal S. 2018. Algorithmic Challenges to Autonomous Choice. Michigan Technology Law Review 25: 59–104. Available online: https://repository.law.umich.edu/cgi/viewcontent.cgi?article=1243&context=mttlr (accessed on 26 November 2020).
19. Gal, Michal S., and Niva Elkin-Koren. 2017. Algorithmic Consumers. Harvard Journal of Law & Technology 30: 309–53.
20. Gartner Research. 2018. State of Personalization Report. Available online: https://www.gartner.com/en/documents/3892113/2018-state-of-personalization-report (accessed on 27 April 2021).
21. Gillis, Talia B., and Jann L. Spiess. 2019. Big Data and Discrimination. The University of Chicago Law Review 86: 459–88. Available online: https://lawreview.uchicago.edu/publication/big-data-and-discrimination (accessed on 26 November 2020).
22. Golecki, Mariusz J., and Piotr Tereszkiewicz. 2019. Taking the Prohibition of Unfair Commercial Practices Seriously. In New Developments in Competition Law and Economics. Edited by Klaus Mathis and Avishalom Tor. Berlin/Heidelberg: Springer, pp. 91–106.
23. Hertzell, David. 2017. The Insurance Act 2015: Background and Philosophy. In The Insurance Act 2015: A New Regime for Commercial and Marine Insurance Law. Edited by Malcolm Clarke and Baris Soyer. London: Informa Law from Routledge, pp. 2–12.
24. Joint Committee Discussion Paper on the Use of Big Data by Financial Institutions, JC 2016 86. 2016. Available online: https://www.esma.europa.eu/press-news/consultations/joint-committee-discussion-paper-use-big-data-financial-institutions (accessed on 27 April 2021).
25. Loacker, Leander D. 2015. Informed Insurance Choice? The Insurer’s Pre-Contractual Information Duties in General Consumer Insurance. Cheltenham: Edward Elgar.
26. Łańcucki, Jerzy. 2019. Wpływ innowacyjnych technologii na funkcjonowanie rynku ubezpieczeniowego [The Impact of Innovative Technologies on the Functioning of the Insurance Market]. Prawo asekuracyjne 99: 6–22.
27. Łańcucki, Jerzy. 2020. Konsument wobec wyzwań sztucznej inteligencji na rynku ubezpieczeniowym [The Consumer Facing the Challenges of Artificial Intelligence in the Insurance Market]. Prawo asekuracyjne 103: 3–21.
28. Marano, Pierpaolo. 2019. Navigating InsurTech: The digital intermediaries of insurance products and customer protection in the EU. Maastricht Journal of European and Comparative Law 26: 294–315.
29. McFall, Liz. 2019. Personalizing solidarity? The role of self-tracking in health insurance pricing. Economy and Society 48: 52–76. Available online: https://0-www-tandfonline-com.brum.beds.ac.uk/doi/full/10.1080/03085147.2019.1570707 (accessed on 26 November 2020).
30. Merkin, Robert, and Ozlem Gurses. 2015. Insurance Act 2015: Rebalancing the interests of insurer and assured. The Modern Law Review 78: 1004–27.
31. Meyers, Gert. 2018. Behaviour-Based Personalisation in Health Insurance: A Sociology of a Not-Yet Market. Ph.D. dissertation, Katholieke Universiteit Leuven, Leuven, Belgium. No. 364.
32. Moloney, Niamh. 2010. How to Protect Investors: Lessons from the EC and the UK. Cambridge: Cambridge University Press.
33. Nicoletti, Bernardo. 2017. The Future of Fintech: Integrating Finance and Technology in Financial Services. Cham: Palgrave Macmillan.
34. OECD. 2017. Technology and Innovation in the Insurance Sector. Available online: https://www.oecd.org/finance/Technology-and-innovation-in-the-insurance-sector.pdf (accessed on 27 April 2021).
35. OECD. 2020. The Impact of Big Data and Artificial Intelligence (AI) in the Insurance Sector. Available online: https://www.oecd.org/finance/The-Impact-Big-Data-AI-Insurance-Sector.pdf (accessed on 27 April 2021).
36. Południak-Gierz, Katarzyna. 2017. Dangers and Benefits of Personalisation in Contract Law: Big Data Approach. Queen Mary Law Journal 2017: 25–36.
37. Południak-Gierz, Katarzyna. 2019. Consequences of the Use of Personalization Algorithms in Shaping an Offer–A Private Law Perspective. Masaryk University Journal of Law and Technology 13: 161–88. Available online: https://journals.muni.cz/mujlt/article/view/11476 (accessed on 26 November 2020).
38. Południak-Gierz, Katarzyna. 2020a. Personalized agreement—A new contractual model. Vestnik of Saint Petersburg University Law 11: 1009–21.
39. Południak-Gierz, Katarzyna. 2020b. Wady oświadczenia woli w umowach zawieranych na internetowym rynku konsumenckim [Defects in Declarations of Will in Contracts Concluded on the Online Consumer Market]. Warszawa: C.H. Beck.
40. Prainsack, Barbara, and Ine Van Hoyweghen. 2020. Shifting Solidarities: Personalisation in Insurance and Medicine. In Shifting Solidarities: Trends and Developments in European Societies. Edited by Ine Van Hoyweghen, Valeria Pulignano and Gert Meyers. London and Cham: Palgrave Macmillan, pp. 127–51.
41. Senousy, Youssef, Nashaat El-Khamisy, and Alaa el-din Mohamed Riad. 2018. Recent Trends in Big Data Analytics Towards More Enhanced Insurance Business Models. International Journal of Computer Science and Information Security 16: 39–45.
42. Sitecore and Vanson Bourne study. 2017. Available online: https://www.vansonbourne.com/work/14121601jd (accessed on 27 April 2021).
43. Sundar, S. Shyam, and Sampada S. Marathe. 2010. Personalization versus Customization: The Importance of Agency, Privacy, and Power Usage. Human Communication Research 36: 298–322.
44. Tereszkiewicz, Piotr. 2013a. Obowiązek informacyjny ubezpieczającego i skutki jego naruszenia z perspektywy prawno-porównawczej: Zmierzch uberrima fidei w epoce ochrony konsumenta? [An Insurance Applicant’s Duty to Disclose and Remedies for its Breach from a Comparative Perspective: The Demise of uberrima fidei in the Times of Consumer Protection?]. In Rozprawy cywilistyczne. Księga pamiątkowa dedykowana Profesorowi Edwardowi Drozdowi [Studies in Civil Law in Honor of Professor Edward Drozd]. Edited by Marlena Pecyna, Jerzy Pisuliński and Małgorzata Podrecka. Warszawa: Lexis Nexis, pp. 473–506.
45. Tereszkiewicz, Piotr. 2013b. The Europeanisation of the Insurance Contract Law: The Insurer’s Duty to Advise and its Regulation in German and European Law. In The Transformation of European Private Law: Harmonisation, Consolidation, Codification or Chaos? Edited by James Devenney and Mel Kenny. Cambridge: Cambridge University Press, pp. 235–55.
46. Tereszkiewicz, Piotr. 2015. Obowiązki informacyjne w umowach o usługi finansowe [Information Duties in Financial Services Contracts]. Warszawa: Wolters Kluwer.
47. Tereszkiewicz, Piotr. 2020. Digitalisation of Insurance Contract Law: Preliminary Thoughts with Special Regard to Insurer’s Duty to Advise. In InsurTech: A Legal and Regulatory View. Edited by Pierpaolo Marano and Kyriaki Noussia. Berlin/Heidelberg: Springer, pp. 127–46.
48. Wagner, Gerhard, and Horst Eidenmüller. 2018. Down by Algorithms? Siphoning Rents, Exploiting Biases, and Shaping Preferences: Regulating the Dark Side of Personalized Transactions. The University of Chicago Law Review 86: 581–609. Available online: https://lawreview.uchicago.edu/publication/down-algorithms-siphoning-rents-exploiting-biases-and-shaping-preferences-regulating (accessed on 26 November 2020).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
