
Dr. Tiffany Tsz Kwan Tse, Institute of Social and Economic Research
How people cooperate with each other, and when they rely on algorithms
Dr. Tiffany Tsz Kwan Tse is an experimental economist at the Institute of Social and Economic Research at The University of Osaka. Her research examines how people make decisions in strategic settings, from cooperation in public goods games to collusion in auctions and the growing role of algorithms in financial forecasting. Using laboratory experiments, she studies how incentives, bounded rationality, uncertainty, and the behavior of others shape human decision-making.
At the center of her work is a broader question: what allows cooperation to emerge, and what causes it to break down?
Why cooperation matters
Global challenges such as climate change, public health, and financial stability all depend on people working together over long periods of time. Yet cooperation is often fragile. In her doctoral research, Dr. Tse used an infinitely repeated public goods game to study how cognitive ability affects cooperation over time. Her experiments revealed that individuals with higher cognitive ability were more likely to cooperate at the start of repeated interactions, suggesting that they were better able to recognize the long-term gains from cooperation. However, cooperation was highly sensitive to the behavior of others. Once participants observed defection, they often withdrew cooperation themselves. In many cases, this pattern resembled what economists call a “trigger strategy,” where cooperation collapses after the first instance of defection.
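The trigger-strategy dynamic can be sketched as a toy simulation. The parameters below (three players, an endowment of 20, a multiplier of 1.6) are purely illustrative and are not taken from the experimental design:

```python
# A minimal sketch of a grim "trigger strategy" in a repeated public
# goods game (illustrative parameters, not the experimental design).

def play_round(contributions, endowment=20, multiplier=1.6):
    """Each player keeps what they did not contribute and receives an
    equal share of the multiplied public pot."""
    pot = multiplier * sum(contributions)
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

def grim_trigger(history, endowment=20):
    """Contribute fully until any defection is observed, then
    contribute nothing in every later round."""
    defected = any(c < endowment for rnd in history for c in rnd)
    return 0 if defected else endowment

# Three grim-trigger players; one player defects once in round 3.
history = []
for t in range(6):
    contribs = [grim_trigger(history) for _ in range(3)]
    if t == 2:
        contribs[2] = 0  # the single defection
    history.append(contribs)

print(history)
# Cooperation collapses for good after the round-3 defection:
# [[20, 20, 20], [20, 20, 20], [20, 20, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

With these numbers, full cooperation pays each player 32 per round while universal free-riding pays only 20, so the long-run gains from cooperation are real; the sketch shows how a single observed defection can nonetheless destroy them permanently.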
These findings suggest that sustaining cooperation depends not only on good intentions, but also on the ability to recognize the expected long-term benefits of cooperation. Designing institutions that make the long-term benefits of cooperation more visible may therefore be crucial in addressing collective challenges.
Deterring collusion in markets
Cooperation is not always socially beneficial. In procurement auctions, firms may secretly coordinate bids to keep prices high, increasing public costs. Dr. Tse studies such collusion through laboratory experiments, seeking to provide insights into this important policy question.
One of her research projects compares the effects of fines and detection probabilities in discouraging collusion. The findings suggest that fines alone do little if firms believe they are unlikely to be caught. By contrast, higher detection probabilities significantly reduce collusive behavior, highlighting the importance of credible monitoring and enforcement. In practice, measures such as analyzing bidding patterns or strengthening investigation procedures can make anti-collusion rules more meaningful. In another project, she examined minimum bid policies aimed at preventing unrealistically low bids. Her results show that while such policies can raise bid levels, they do not necessarily improve construction quality unless penalties for poor performance are clearly enforced.
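The intuition behind the fines-versus-detection result can be illustrated with a back-of-the-envelope expected-penalty calculation; the numbers below are invented for illustration and are not results from the experiments:

```python
# Illustrative deterrence arithmetic: collusion is tempting whenever
# the extra profit it brings exceeds the expected penalty, i.e. the
# fine weighted by the probability of being caught.

def collusion_pays(extra_profit, fine, detection_prob):
    """True if the expected penalty fails to deter collusion."""
    return extra_profit > detection_prob * fine

extra_profit = 100

# Doubling a fine that is rarely enforced changes little...
print(collusion_pays(extra_profit, fine=500, detection_prob=0.05))   # True
print(collusion_pays(extra_profit, fine=1000, detection_prob=0.05))  # True

# ...while a credible detection probability flips the calculus.
print(collusion_pays(extra_profit, fine=500, detection_prob=0.30))   # False
```

This is only the textbook expected-value logic; the experiments go further by testing how real participants respond to these incentives in the laboratory.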
When humans rely on algorithms
Dr. Tse also studies how people respond to algorithmic advice in financial decision-making. In stock price forecasting experiments, participants made predictions while receiving recommendations from algorithms with different levels of accuracy. The results revealed that participants often struggled to distinguish between high-performing and low-performing algorithms when only summary statistics were provided. When they could compare the algorithm’s performance with their own, however, their decisions became more calibrated. Even so, some participants continued to follow algorithmic advice even when it performed worse than their own predictions. This suggests that the challenge in an AI-driven world is not only whether algorithms are accurate, but whether users can evaluate them appropriately.
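The kind of calibration this points to can be sketched as a simple comparison of past forecast errors. The forecasts and outcomes below are made up for illustration and are not experimental data:

```python
# Sketch of the comparison a well-calibrated forecaster would make:
# follow the algorithm only if its past errors are smaller than your own.

def mean_abs_error(forecasts, outcomes):
    """Average absolute gap between forecasts and realized outcomes."""
    return sum(abs(f - o) for f, o in zip(forecasts, outcomes)) / len(outcomes)

outcomes  = [102, 98, 105, 101]   # realized prices (invented)
own       = [100, 100, 100, 100]  # the participant's own predictions
algorithm = [103, 97, 104, 102]   # the algorithm's recommendations

own_err = mean_abs_error(own, outcomes)         # 2.5
algo_err = mean_abs_error(algorithm, outcomes)  # 1.0

follow_algorithm = algo_err < own_err
print(follow_algorithm)  # True
```

The experiments suggest that this side-by-side comparison is exactly what summary statistics alone failed to prompt: only when participants could see both error records together did their reliance on the algorithm become better calibrated.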
Dr. Tse explains that people often rely on algorithms because they appear more objective and data-driven than human judgment. Yet this reliance can become problematic if users treat AI systems as “black boxes” without understanding the data behind them or the conditions under which they may fail. In financial markets, where uncertainty and risk are unavoidable, understanding the limitations of AI can be just as important as using it. At the same time, Dr. Tse suggests that relying on AI may make it easier for individuals to shift responsibility. For that reason, she emphasizes that while AI can assist with complex tasks, humans must remain responsible for final decisions and their consequences.
The future of decision-making
Looking ahead, Dr. Tse continues to explore how people value different sources of advice. In one recent project, she examines how much individuals are willing to pay for forecasting advice from algorithms, financial experts, or peers. Interestingly, the findings suggest that people often value algorithmic advice almost as highly as expert advice, even when it performs no better. Such results point to a growing tendency toward “algorithm appreciation.” Dr. Tse is also preparing a new project on AI and gender bias, asking whether AI can narrow gender gaps in task performance and whether existing stereotypes remain even when participants use AI tools. As part of this research, she visited Monash University under the university's Global Knowledge Partner funding.
As AI becomes increasingly embedded in everyday decisions, understanding how people interpret, trust, and respond to algorithmic advice will become ever more important. Through experimental research, Dr. Tse’s work offers valuable insights into how people make decisions in complex strategic environments. As she summarizes it, “Human decision-making is shaped by bounded rationality, incentives, enforcement, and learning.”
For further information: https://researchmap.jp/tse?lang=en