Amnesty International has raised concerns that new automated social protection systems may violate human rights. The rights group has singled out the Samagra Vedika algorithmic system, deployed in India’s Telangana state since 2016, and urges governments to ensure such systems work as intended and do not deny welfare benefits to eligible people.
Amnesty International has released a technical explainer outlining the human rights risks the system poses. Samagra Vedika relies on “entity resolution”: machine-learning algorithms that link and compare records to determine welfare eligibility and to flag fraudulent or duplicate beneficiaries. The opacity of this kind of automated decision-making, which reduces individuals to data points, makes it difficult to assess its impact on human rights.
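Amnesty’s explainer does not include Samagra Vedika’s actual code, which remains closed. As a rough, hypothetical sketch of what entity resolution involves, the following Python snippet scores pairs of records from two invented databases by fuzzy string similarity and treats any pair above a threshold as the same person. The field names, weights, records, and threshold are all illustrative assumptions, not details of the real system.

```python
import difflib

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Weighted similarity across fields; weights are illustrative guesses."""
    return (0.6 * similarity(rec_a["name"], rec_b["name"])
            + 0.4 * similarity(rec_a["address"], rec_b["address"]))

# Invented records standing in for two separate government databases.
ration_db = [{"id": "R1", "name": "Lakshmi Devi", "address": "12 Gandhi Rd, Warangal"}]
pension_db = [{"id": "P1", "name": "Laxmi Devi", "address": "12 Gandhi Road, Warangal"}]

THRESHOLD = 0.85  # assumed cut-off: pairs scoring above it are merged as one person

for a in ration_db:
    for b in pension_db:
        score = match_score(a, b)
        verdict = "same person" if score >= THRESHOLD else "different people"
        print(f"{a['id']} vs {b['id']}: score={score:.2f} -> {verdict}")
```

Even in this toy version, the threshold and field weights quietly encode policy: nudging either one changes who is treated as a duplicate. In a deployed system those choices sit inside proprietary software that affected people never see.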
A 2024 investigation by Al Jazeera revealed errors in the Samagra Vedika system that deprived thousands of families of essential benefits tied to food security, income, and housing, raising serious human rights concerns around social security. When Amnesty International attempted to audit the system, it was unable to gain meaningful access and received little transparency from the system’s developers and deployers.
Entity resolution in welfare technology is a complex process: it links records that are believed to refer to the same person across separate databases, and errors in that matching can wrongly flag genuine beneficiaries as fraudulent or duplicate. Amnesty International stresses the need for greater transparency and accountability in how such automated tools are designed and deployed.
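To make that failure mode concrete, here is a hypothetical false positive under the same illustrative scoring used above: two distinct members of one household with near-identical names are flagged as one person, so a real beneficiary is dropped from the rolls. Again, this is an assumption-laden sketch, not the real system’s logic, which Amnesty was unable to audit.

```python
import difflib

def score(rec_a: dict, rec_b: dict) -> float:
    """Same illustrative field weighting as the sketch above."""
    def sim(x: str, y: str) -> float:
        return difflib.SequenceMatcher(None, x.lower(), y.lower()).ratio()
    return (0.6 * sim(rec_a["name"], rec_b["name"])
            + 0.4 * sim(rec_a["address"], rec_b["address"]))

# Invented example: two different people in one household with similar names.
person_a = {"name": "Anitha Kumari", "address": "4 Temple St, Nizamabad"}
person_b = {"name": "Anitha Kumar",  "address": "4 Temple St, Nizamabad"}

s = score(person_a, person_b)
print(f"score={s:.2f}")  # ~0.98, well above the assumed 0.85 threshold
if s >= 0.85:
    print("Flagged as duplicate -> one genuine beneficiary removed from the rolls")
```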
The rights group also highlights the barriers that civil society and journalists face in scrutinizing the technical workings of these systems, since governments procure them from private companies. This opacity allows those responsible to evade accountability, while affected individuals are left with little access to remedy.
The case of Samagra Vedika reflects a wider trend of governments turning to AI and automated decision-making for social protection programs, often with unjust outcomes for marginalized groups. Amnesty International underscores that comprehensive human rights impact assessments, paired with effective mitigation measures, should precede any integration of technology into these programs.
The technical explainer builds on Al Jazeera’s 2024 findings, detailing flaws in how Samagra Vedika was implemented. The rights group further calls for engagement with affected communities and clear communication about any changes to vital support systems.
In light of these concerns, governments must put the human rights of individuals first when implementing automated social protection systems. Samagra Vedika stands as a warning of the risks of relying on advanced technology for social welfare: such systems must meet human rights standards and must not harm the very people they are meant to assist.