Abstract

AI will change many aspects of the world we live in, including the way corporations are governed. Many efficiencies and improvements are likely, but there are also potential dangers, including the threat of harmful impacts on third parties, discriminatory practices, data and privacy breaches, fraudulent practices, and even 'rogue AI'. To address these dangers, the EU published the Expert Group's 'Ethics Guidelines for Trustworthy AI' (the Guidelines). The Guidelines derive seven key requirements from their four foundational pillars: respect for human autonomy, prevention of harm, fairness, and explicability. If implemented by business, their impact on corporate governance will be substantial. The Guidelines consider fundamental questions at the intersection of ethics and law, but because they address the former with little reference to the latter, their practical application poses challenges for business. Further, while they promote many positive corporate governance principles, including a stakeholder-oriented ('human-centric') corporate purpose as well as diversity, non-discrimination, and fairness, their general nature leaves many questions and concerns unanswered. In this paper we examine the potential significance and impact of the Guidelines on selected corporate law and governance issues. We conclude that greater specificity is needed on how the principles they contain will harmonise with company law rules and governance principles. Nevertheless, despite their imperfections, until harder legislative instruments emerge the Guidelines provide a useful starting point for directing businesses towards establishing trustworthy AI.