PlanetAI Research Lab

FairLens – AI Fairness Auditor
Developed by: Shiladitya Munshi, Kamalesh Karmakar
Learn how to use this tool and check out a sample dataset (.csv file)
Go back to WebTool Home
Dataset Input

Responsible Use Notice
  • Anonymity of Sensitive Data: Users must ensure that uploaded datasets do not contain personally identifiable information such as names, phone numbers, addresses, or unique identifiers.
  • Ethical Use of Protected Attributes: Sensitive attributes such as gender, ethnicity, disability status, age group, or socio-economic indicators should only be used for legitimate fairness auditing and responsible AI research.
  • Intended Usage: FairLens is designed for fairness auditing, research, and transparency in algorithmic decision systems.
Dataset Summary
Upload a dataset to view its summary.
Protected Attribute Distribution
Select a protected attribute and run the analysis.
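The distribution view above amounts to counting how many records fall into each group of the chosen protected attribute. A minimal sketch of that computation with pandas, assuming a hypothetical column name ("gender") in place of whatever attribute your anonymized dataset actually contains:

```python
import pandas as pd

# Toy anonymized data; "gender" is an illustrative protected attribute,
# not a column FairLens requires by that name.
df = pd.DataFrame({
    "gender":     ["F", "M", "F", "F", "M", "M", "M"],
    "prediction": [1,   0,   1,   0,   1,   1,   0],
})

# Absolute counts and proportions of each group.
counts = df["gender"].value_counts()
shares = df["gender"].value_counts(normalize=True)
print(counts)
print(shares.round(3))
```

A strongly skewed distribution here (one group dominating the data) is itself worth noting before interpreting any fairness metrics downstream.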
Fairness Analysis Results
Bias Heatmap (Prediction Rate by Group)
Heatmap Color Guide:
Low prediction rate → Moderate prediction rate → High prediction rate
Color intensity indicates how frequently the model predicts positive outcomes for each protected group.
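The quantity the heatmap colors is the positive prediction rate per protected group. A minimal sketch of that calculation, assuming illustrative column names ("group", "prediction") rather than FairLens's actual internal schema, along with the disparate-impact ratio often used to flag gaps between groups:

```python
import pandas as pd

# Toy data: a hypothetical protected attribute "group" and a binary
# model output "prediction" (1 = positive outcome).
df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Positive prediction rate per group -- the value each heatmap cell encodes.
rates = df.groupby("group")["prediction"].mean()
print(rates)  # A: 0.75, B: 0.25

# Disparate-impact ratio: lowest rate divided by highest rate.
# The conventional "80% rule" flags values below 0.8 as potential bias.
di = rates.min() / rates.max()
print(round(di, 3))  # 0.333
```

In this toy example group B receives positive predictions at a third of group A's rate, which the heatmap would show as a sharp contrast in color intensity.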
PlanetAI Responsible AI Principles
  • Fairness: Promote equitable outcomes in algorithmic decision systems.
  • Transparency: Encourage openness in AI evaluation and auditing.
  • Privacy: Protect individual identity by requiring anonymized datasets.
  • Accountability: Support responsible development and deployment of AI systems.
  • Sustainability: Advance ethical and sustainable AI practices through research and tools.