Articles

The EU’s AI Act and How Companies Can Achieve Compliance

The article lays out what boards, C-suites, and managers need to do to ensure their companies will be compliant when AI regulations come into force.

Generative AI-nxiety

Covers four cross-industry risks of GenAI: the hallucination problem, the deliberation problem, the sleazy salesperson problem, and the problem of shared responsibility.

How to Avoid the Ethical Nightmares of Emerging Technology

A framework for navigating the worst of what AI, quantum computing, and other new technologies could create.

The Risks of Empowering Citizen Data Scientists

New tools are enabling organizations to invite and leverage non-data scientists to propel their AI efforts. This article addresses the associated risks and opportunities and suggests five ways to create a successful and responsible citizen data scientist strategy.

When — and Why — You Should Explain How Your AI Works

AI can be a black box, which often renders us unable to answer crucial questions about its operations. In this article I explain when explainable AI is important and why.

Why You Need an AI Ethics Committee

(Originally appeared in the July/August 2022 print edition of HBR). The sources of problems in AI are many. You need a committee—comprising ethicists, lawyers, technologists, business strategists, and bias scouts—to review any AI your firm develops or buys to identify the ethical risks it presents and address how to mitigate them. This article describes how to set up such a committee effectively.

A Practical Guide to Building Ethical AI

Companies are quickly learning that AI doesn’t just scale solutions — it also scales risk. In this environment, data and AI ethics are business necessities, not academic curiosities.

Why Blockchain’s Ethical Stakes Are So High

This article looks at four risks — the lack of third-party protections, the threat of privacy violations, the zero-state problem, and bad governance — and offers advice for how blockchain developers and users can mitigate potential harm.

If Your Company Uses AI, It Needs an Institutional Review Board

When it comes to AI, focusing on fairness and bias ignores a huge swath of ethical risks; many of these ethical problems defy technical solutions.

Building Transparency into AI Projects

There’s a growing demand for transparency around why an AI solution was chosen, how it was designed and developed, on what grounds it was deployed, how it’s monitored and updated, and the conditions under which it may be retired.

Ethics and AI: 3 Conversations Companies Need to Have

While concerns about AI and ethical violations have become common in companies, turning these anxieties into actionable conversations can be tough.

Everyone in Your Organization Needs to Understand AI Ethics

When most organizations think about AI ethics, they often overlook some of the sources of greatest risk: procurement officers, senior leaders who lack the expertise to vet ethical risk in AI projects, and data scientists and engineers who don’t understand the ethical risks of AI.

Four steps for drafting an ethical data practices blueprint

Data and analytics leaders are in a unique position within their organizations to spearhead ethical data practices. Here are four key practices that chief data officers/scientists and chief analytics officers (CDAOs) should employ when creating their own ethical data and business practice framework.

How to Monitor Your Employees — While Respecting Their Privacy

As work from home has become the new normal, many employers have started to worry about just how much work their employees are doing.