
Travis Greene

Tenure Track Assistant Professor

Topics

Big data, Artificial intelligence, Machine learning, Data, Ethics, Philosophy

Primary research areas

Data science ethics
Philosophy of Information Systems
Personalization

Bridging the creativity of the humanities with the precision of data science

I am a tenure-track Assistant Professor at Copenhagen Business School’s Department of Digitalization. I have a Ph.D. in business analytics from the Institute of Service Science at National Tsing Hua University in Taiwan. My current research is focused on the ethics and philosophy of new and emerging data science methods and applications, especially in business contexts. I am interested in how we can leverage our best normative theories in ethics and political philosophy to help us align and evaluate AI/ML applications in society.  

My interdisciplinary research has appeared in journals such as AI & Society, Nature Machine Intelligence, Journal of the Association for Information Systems, Journal of the Royal Statistical Society, and Big Data. Broadly, my work aims to contribute new perspectives, frameworks, and ideas from the humanities into data science, and vice versa. 

December 2025

Monetization Could Corrupt Algorithmic Explanations

Travis Greene, Tenure Track Assistant Professor

Sofie Goethals

David Martens

Galit Shmueli


November 2023

Taking the Person Seriously

Ethically Aware IS Research in the Era of Reinforcement Learning-based Personalization


2023

Atomist or Holist?

A Diagnosis and Vision for more Productive Interdisciplinary AI Ethics Dialogue


Recent research projects

Towards Socially Responsible Forecasting: Identifying and Typifying Forecasting Harms

We developed a novel harm taxonomy for forecasting based on a synthesis of philosophical theories of harm and semi-structured interviews with 21 expert industry practitioners and academic researchers.

Beware of "Explanations" of AI

A good explanation is context-dependent. This project creates practical guidelines to help practitioners know when explainable AI methods work well and when they risk backfiring.