While different frameworks exist, three widely cited golden rules of AI centre on transparency, fairness, and accountability. Transparency means AI systems should be explainable and understandable, so that users and stakeholders can see how decisions are made rather than treating the system as an impenetrable black box. Fairness means AI should be designed and tested to avoid bias, ensuring it does not discriminate against individuals or groups on the basis of race, gender, age, or other protected characteristics. Accountability means there must always be a human or organisation responsible for the outcomes an AI system produces, especially when those outcomes affect people’s lives. Together, these three principles form the ethical backbone of responsible AI development and are echoed across major frameworks, from the EU AI Act to Google’s AI Principles.