Artificial Intelligence plays a strategic role in fighting the ongoing COVID-19 pandemic by detecting new outbreaks, automatically identifying high-risk areas, and tracing recent close contacts of infected people.
However, AI processes vast quantities of personal data. When algorithms operate on categories of personal data such as race, gender, and age, the risks of discrimination against communities are significant. The COVID-19 pandemic has led governments around the world to resort to tracking technology and other data-driven tools to monitor and contain the novel coronavirus. Such large-scale intrusion into privacy and data protection would be unthinkable in times of normalcy.
However, in times of a pandemic, the use of location data provided by telecommunication operators and/or technology firms becomes a viable option. Significantly, legal rules do little to shield people's privacy against governmental and corporate misuse. Established privacy regimes are centered on individual consent, and most human rights treaties permit derogation from privacy and data protection norms during states of emergency. This leaves few safeguards or remedies to ensure individual and collective autonomy.
Yet the challenge of responsible data use during a crisis is not new. The humanitarian sector has over a decade of experience to offer: international organizations and humanitarian actors have developed detailed guidelines on how to use data responsibly under extreme circumstances.
During this session, we will explore the fine line between saving lives and respecting people's rights: assessing the risks associated with large-scale data processing operations, finding legal grounds, and taking the right measures to limit the risks to human rights while efficiently fighting COVID-19.
Sneha is a Senior Cyber Threat Analyst at a Fortune 100 MNC.