April 30, 2024

Unveiling The Privacy Paradox Of Predictive Analysis Technology

By Ishita Nayak (4th Year Student, Institute of Law, Nirma University, Ahmedabad)

Image Credits: https://incubator.ucf.edu/what-is-artificial-intelligence-ai-and-why-people-should-learn-about-it/

Introduction

Given the rapid rise of technology, and as a result of constant surveillance and monitoring, everything from simple web browsing to complex online transactions feeds data into one of the most commonly used AI techniques: predictive analytics.[1] Predictive analytics typically uses data mining to identify trends or patterns that are not readily visible. For instance, for a user who habitually shops online on a particular platform, search results are sequenced according to the behaviour the user has previously displayed on that platform. This happens because the platform's algorithm is designed to predict user behaviour from past usage: what the user generally purchases, their preferred price range, colours, sizes, and so on.

The growing popularity of predictive analytics has prompted its use in other areas such as healthcare, where it helps detect the early onset of life-threatening diseases and illnesses. Its use is not restricted to these areas; interestingly, even the legal field deploys predictive analytics to understand criminal behaviour and the tendency of offenders to re-offend, based on their past crime records, social circumstances, and similar factors. As ever more capable statistical models come onto the market and are paired with big data, businesses across sectors are increasingly relying on predictive analytics, and this has led to serious privacy dilemmas.

Technology v. Privacy Dilemma

The problem arises when businesses collect vast amounts of customer data without the user's consent in order to profile consumer behaviour and track preferences for personalised marketing. Privacy concerns over predictive analysis technology arise precisely because its outputs can reveal the most sensitive or crucial aspects of a user's life without the user's consent, much less their consultation. The technology infers data from the hidden layers that lie beneath surface data: for example, data concerning preferences in furniture, clothes, and accessories can be used to unveil sensitive personal information such as account details, sexual preferences, or political affiliation. One such instance was observed at Cambridge University, where researchers drew accurate inferences about a person's gender, sexuality, age, and race from the user's Facebook activity, such as the kinds of posts they like or share with others.[2]
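To make the mechanics of such inference concrete, the sketch below is a toy illustration, not the Cambridge researchers' actual model: a simple classifier trained on innocuous "like" signals learns to predict an attribute the user never disclosed. The data, the features, and the choice of logistic regression are all assumptions made purely for illustration.

```python
# Toy illustration of attribute inference from innocuous behavioural data.
# Everything below is synthetic; this is not the Cambridge study's model.
from sklearn.linear_model import LogisticRegression

# Each row records whether a user "liked" three hypothetical pages
# (say, a band, a TV show, and a sports team).
likes = [
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
    [1, 1, 1],
    [0, 0, 0],
]
# A sensitive binary attribute the users never disclosed directly.
attribute = [1, 1, 0, 0, 1, 0]

# Fit a classifier that maps surface behaviour to the hidden attribute.
model = LogisticRegression().fit(likes, attribute)

# For a new user, only their likes are needed to infer the attribute.
new_user = [[1, 0, 0]]
print(model.predict(new_user))        # inferred attribute
print(model.predict_proba(new_user))  # confidence of the inference
```

The specific model is beside the point; the pattern is that once enough surface behaviour is collected, a sensitive attribute becomes statistically recoverable even though it was never volunteered.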
While predictive analytics can provide customised recommendations catering to particular users, it can also have a perilous impact on a user's private space. So even as it is imperative to talk about privacy violations by predictive analysis technology, it is equally important to ask what constitutes privacy, or what privacy actually ascribes to.

Does it mean "control over personal information"? If privacy is defined as control over personal information, then it is important to consider that predictive analytics produces customised responses based on data that is already available or in the public domain.[3] Does this imply that the user has no interest in controlling information released into the public domain? It is here that the idea of presumed consent enters: it is presumed that users must have consented to the use of their information simply because they use a particular platform. This can be better understood by comparing the preference for WhatsApp over paid alternatives. Consumers are usually not enthusiastic about shifting to paid platforms even when those platforms offer better privacy incentives, and they continue using WhatsApp. Does this mean the user has consented to the unauthorised use of their personal information simply by opting to continue WhatsApp's services? And does it mean the user has no interest in protecting such information? These are some of the important questions any legislator would face while formulating a legal framework. At this juncture, therefore, it becomes important for any comprehensive regulatory regime to define the contours of privacy.

How to Solve the Privacy Trade-off in Predictive Analytics

A significant approach could be the noted scholar Jack Balkin's idea of information fiduciaries, under which online service providers and platforms that collect and process user information would be subject to duties of care and loyalty.[4] These duties should bind data fiduciaries even when they process user information for predictive analysis. They are substantive obligations that go beyond merely notifying the user and include a duty to act diligently so as not to prejudice the user. This would impose a broad limitation not just on privacy harms but also on bias, manipulation, and unfairness.

Abandoning such technology would not be prudent, given the benefits it can potentially offer; however, its use also cannot remain unchecked. A second solution would be to frame clear guidelines on how data must be stored, processed, or analysed to ensure restricted use. Thirdly, the parameters on which an AI bases its decisions through predictive analysis can be fixed in advance. For example, a commonly used AI application in the USA is the Public Safety Assessment tool, which makes risk assessments based on nine factors, including prior misdemeanour or felony conviction, prior violent crime conviction, failure to appear at a pre-trial hearing within the past two years, failure to appear at a hearing more than two years ago, and a prior prison sentence. In this tool, the risk-assessment methodology is known beforehand, and the outcome is not the product of a black box.
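What distinguishes such a tool from an opaque model is that its scoring rule can be written down and audited in advance. The sketch below shows what a transparent, rule-based risk score of this kind might look like; the factors mirror those named above, but the weights are invented for illustration and are not the actual weights of the Public Safety Assessment.

```python
# Minimal sketch of a transparent, rule-based risk score in the spirit of
# the Public Safety Assessment. The weights are invented for illustration;
# they are not the PSA's actual weights.
from dataclasses import dataclass

@dataclass
class DefendantRecord:
    prior_misdemeanor_or_felony_conviction: bool
    prior_violent_conviction: bool
    failure_to_appear_within_two_years: bool
    failure_to_appear_over_two_years_ago: bool
    prior_prison_sentence: bool

# The methodology is known beforehand: every factor and weight is published,
# so anyone can audit exactly why a given score was produced.
WEIGHTS = {
    "prior_misdemeanor_or_felony_conviction": 1,
    "prior_violent_conviction": 2,
    "failure_to_appear_within_two_years": 2,
    "failure_to_appear_over_two_years_ago": 1,
    "prior_prison_sentence": 2,
}

def risk_score(record: DefendantRecord) -> int:
    """Sum the weights of every factor present in the record."""
    return sum(w for factor, w in WEIGHTS.items() if getattr(record, factor))

score = risk_score(DefendantRecord(True, False, True, False, False))
print(score)  # 3 -- each point traces back to a named factor
```

Because each point in the score traces back to a named factor and a published weight, the outcome is auditable rather than the product of a black box, which is precisely the property that sets such tools apart.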
Conclusion

In the contemporary era, technology has become an indispensable part of human life, and it is therefore not possible for humans to isolate themselves from the after-effects of its use. So, while predictive analysis technology has potential benefits, its widespread usage has led to a trade-off with privacy. To address these concerns, it is important to take certain ethical considerations into account when deploying algorithms based on predictive analysis, so as to ensure transparency and accountability. The Council of Europe has taken an important initiative in this regard with its Ethical Charter on the Use of Artificial Intelligence in Judicial Systems, which identifies principles such as transparency, impartiality, and fairness, under which data processing is made accessible and understandable to users and external audits are authorised.[5] The Charter establishes guidelines that must be complied with when algorithms process judicial decisions. A similar approach can be adopted for privacy violations by predictive analysis technology; however, one must not lose sight of the fact that not all ethical considerations are programmable into an AI system.

[1] Robert Sprague, Welcome to the Machine: Privacy and Workplace Implications of Predictive Analytics, 21 Rich. J.L. & Tech. 1, 2-3 (2015).

[2] Stratis Ioannidis, Privacy Trade-offs in Predictive Analytics, ACM Digital Library (Jun. 16, 2014, 8:45 PM), https://dl.acm.org/doi/abs/10.1145/2637364.2592011.

[3] Tomislav Bracanovic, Predictive Analytics, Personalized Marketing and Privacy (2019), https://tomislav.bracanovic.ifzg.hr/wp-content/uploads/2020/03/Bracanovic_Predictive-analytics.pdf.

[4] Dennis D. Hirsch, From Individual Control to Social Protection: New Paradigms for Privacy Law in the Age of Predictive Analytics, SSRN (Apr. 15, 2024, 9:45 PM), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3449112.

[5] Council of Europe Portal, https://www.coe.int/en/web/cepej/cepej-european-ethical-charter-on-the-use-of-artificial-intelligence-ai-in-judicial-systems-and-their-environment (last visited Apr. 19, 2024).