Application of Artificial Intelligence for Generating Smart Behavioral Nudges

Document Type: Research Paper

Authors

1 Associate Prof., Department of Leadership and Human Capital, Faculty of Public Administration and Organizational Sciences, College of Management, University of Tehran, Tehran, Iran.

2 Ph.D. Candidate, Department of Public Policy, Kish International Campus, University of Tehran, Kish, Iran.

10.22059/jipa.2025.395752.3700

Abstract

Objective
The primary aim of this study is to examine the capacities and capabilities of artificial intelligence in designing and implementing intelligent, personalized behavioral nudges. These nudges, viewed as the second generation of behavioral interventions, can enhance the effectiveness of behavioral policymaking by relying on technology-driven and data-based tools. The central research question guiding this study is: How can AI-based behavioral nudges, grounded in choice architecture and nudge theory, contribute to public policymaking and influence both individual and collective behaviors? By integrating big data analytics, machine learning algorithms, and cognitive technologies, the study seeks to demonstrate how behavioral interventions can be elevated from a generalized and impersonal level to one that is precise, individualized, and adaptive.
Methods
This research follows a qualitative approach using thematic analysis. Data were collected through semi-structured interviews with 12 experts in public policy, behavioral economics, and artificial intelligence. Sampling was conducted using the snowball method until theoretical saturation was achieved. The data were analyzed through open and axial coding, and the results were synthesized into a conceptual model. To enrich the model, findings from library research and international literature were incorporated. This combination of empirical and theoretical data provided the foundation for developing a comprehensive conceptual architecture of an intelligent nudge system.
Results
The theoretical exploration identified technological tools such as big data, machine learning, the Internet of Things, intelligent software agents, algorithmic methods, and cognitive technologies as the central components of an intelligent nudge system. Expert interviews led to the recognition of nine complementary tools—predictive analytics, reinforcement learning, neural networks, recommender systems, notifications, fuzzy logic, natural language processing, adaptive learning platforms, and decision-support systems—that strengthen the personalization of nudges. The proposed conceptual model is built around several key components: (1) a user profile containing descriptive data (age, gender, health, location), preferences, past behaviors, and individual capabilities, gathered explicitly (e.g., user responses) or implicitly (e.g., online behavior, wearable devices); (2) a profile learner that functions as the central processor, analyzing user data to detect behavioral patterns and design context-appropriate nudges; (3) a data collection and analysis process that uses big data and algorithms such as predictive analytics and natural language processing to transform information into actionable insights; (4) nudge design, where tailored interventions are created; and (5) evaluation of user response, where feedback and behavioral changes are measured, and new data are reintegrated into the system’s learning cycle to ensure continuous refinement.

The findings highlight that unlike traditional nudges—which are uniform and general—AI-based nudges can be precisely tailored to individuals and delivered at the right time and in the right context. This capacity allows policymakers to move beyond broad, often inefficient interventions toward adaptive, data-driven tools.
However, risks were also identified, including privacy violations, the reproduction of human biases in AI systems, and the possibility of “dark nudges.” Addressing these risks requires regulatory safeguards, algorithmic transparency, and ethical oversight.
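The closed loop at the heart of the model (user profile → profile learner → nudge design → measured response → updated profile) can be sketched in code. The sketch below is purely illustrative and is not the authors' implementation: the class names (`UserProfile`, `ProfileLearner`), the nudge variants, and the simple success-rate scoring are all hypothetical stand-ins for the far richer predictive models (reinforcement learning, neural networks, recommender systems) the study discusses.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Component 1: descriptive data plus observed behavior."""
    user_id: str
    attributes: dict = field(default_factory=dict)   # e.g. age, location
    behaviors: list = field(default_factory=list)    # implicit signals over time

class ProfileLearner:
    """Component 2: the central processor. Here it simply tracks, per user
    and per nudge variant, how often the nudge elicited the target behavior."""
    def __init__(self):
        self.stats = {}  # (user_id, variant) -> (successes, trials)

    def design_nudge(self, profile, variants):
        # Components 3-4: turn collected data into a tailored intervention
        # by choosing the variant with the best observed response rate.
        def score(variant):
            s, n = self.stats.get((profile.user_id, variant), (0, 0))
            return s / n if n else 0.5  # neutral prior for unseen variants
        return max(variants, key=score)

    def record_response(self, profile, variant, succeeded):
        # Component 5: feed the measured response back into the learning
        # cycle so the next nudge is better tailored.
        s, n = self.stats.get((profile.user_id, variant), (0, 0))
        self.stats[(profile.user_id, variant)] = (s + int(succeeded), n + 1)
        profile.behaviors.append((variant, succeeded))

# Illustrative use with hypothetical variant names:
learner = ProfileLearner()
alice = UserProfile("alice", {"age": 34})
variants = ["reminder", "social_comparison", "default_option"]
chosen = learner.design_nudge(alice, variants)  # all unseen: scores tie at 0.5
learner.record_response(alice, chosen, succeeded=True)
```

The point of the sketch is the cycle itself: each recorded response changes what the learner selects next, which is what distinguishes an adaptive, personalized nudge system from a one-shot, uniform intervention.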
Conclusion
The integration of artificial intelligence with behavioral sciences creates new capacities for data-driven policymaking. Intelligent nudge systems not only increase the effectiveness of interventions but also provide opportunities for continuous learning, refinement, and long-term policy improvement. By presenting a conceptual model grounded in a profile learner, this study offers policymakers, developers, and behavioral researchers a roadmap for using AI to design interventions that are more efficient, equitable, and adaptive. Ultimately, AI-based nudges represent not only tools for influencing individual behavior but also a transformative step toward reimagining public policymaking in the era of big data and intelligent decision-making.

References
Abson, D. J., Fischer, J., Leventon, J., Newig, J., Schomerus, T., Vilsmaier, U., … & Lang, D. J. (2017). Leverage points for sustainability transformation. Ambio, 46, 30–39. https://doi.org/10.1007/s13280-016-0800-y
Adomavicius, G. & Yang, M. (2022). Integrating behavioral, economic, and technical insights to understand and address algorithmic bias: A human-centric perspective. ACM Transactions on Management Information Systems, 13(3), 1-27.
Aggarwal, C. C. (2016). Recommender systems. Cham: Springer International Publishing.
Ahuja, A. (2023, March 1). Generative AI is sowing the seeds of doubt in serious science. The Financial Times. https://www.ft.com/content/e34c24f6-1159-4b88-8d92-a4bda685a73c
Ali, S., Abuhmed, T., El-Sappagh, S., Khan, S., Muhammad, K., Alonso-Moral, J. M., Confalonieri, R., Guidotti, R., Del Ser, J., Díaz-Rodríguez, N. & Herrera, F. (2023). Explainable artificial intelligence (XAI): What we know and what is left to attain trustworthy artificial intelligence. Information Fusion, 99, 101805. https://doi.org/10.1016/j.inffus.2023.101805
Balasubramanian, G. (2021). When artificial intelligence meets behavioural economics. NHRD Network Journal, 14(2), 216-277.
Bar-Gill, O., Sunstein, C. R. & Talgam-Cohen, I. (2023). Algorithmic harm in consumer markets. SSRN at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4321763. Accessed 23 June 2023.
Bechtel, W., Abrahamsen, A. & Graham, G. (2017). The life of cognitive science. In The Blackwell Companion to Cognitive Science. https://doi.org/10.1002/9781405164535.part1
Beer, S. (1993). Designing freedom. Anansi: Canada.
Beshears, J. & Kosowsky, H. (2020). Nudging: Progress to date and future directions. Organizational Behavior and Human Decision Processes, 161(Suppl.), 3–19. https://doi.org/10.1016/j.obhdp.2020.09.001
Bommasani, R., Creel, K. A., Kumar, A., Jurafsky, D. & Liang, P. (2022). Picking on the same person: Does algorithmic monoculture lead to outcome homogenization? Advances in Neural Information Processing Systems, 35, 3663-3678. https://arxiv.org/abs/2211.13972
Brooks, R., Nguyen, D., Bhatti, A., Allender, S., Johnstone, M., Lim, C. P. & Backholer, K. (2022). Use of artificial intelligence to enable dark nudges by transnational food and beverage companies: Analysis of company documents. Public Health Nutrition, 25(5), 1–23. https://doi.org/10.1017/S1368980022000490
Bryan, C. J., Tipton, E. & Yeager, D. S. (2021). Behavioural science is unlikely to change the world without a heterogeneity revolution. Nature Human Behaviour, 5(8), 980-989.
Burr, C., Cristianini, N. & Ladyman, J. (2018). An analysis of the interaction between intelligent software agents and human users. Minds and Machines, 28(4), 735–774. https://doi.org/10.1007/s11023-018-9479-0
Butenko, A. & Larouche, P. (2017). Regulation for innovativeness or regulation of innovation? TILEC Discussion Paper No. 2015-007. https://ssrn.com/abstract=2584863
Buyalskaya, A., Ho, H., Milkman, K. L., Li, X., Duckworth, A. L. & Camerer, C. (2023). What can machine learning teach us about habit formation? Evidence from exercise and hygiene. Proceedings of the National Academy of Sciences, 120(17), e2216115120.
Chater, N. & Loewenstein, G. (2022). The i-frame and the s-frame: How focusing on individual-level solutions has led behavioural public policy astray. Behavioral and Brain Sciences, 46, e147. https://doi.org/10.1017/S0140525X22002023
De Marcellis-Warin, N., Marty, F., Thelisson, E. & Warin, T. (2022). Artificial intelligence and consumer manipulations: From consumer’s counter algorithms to firm’s self-regulation tools. AI and Ethics, 2, 239–268. https://doi.org/10.1007/s43681-022-00149-5
DellaVigna, S. & Linos, E. (2022). RCTs to scale: Comprehensive evidence from two nudge units. Econometrica, 90(1), 81–116. https://doi.org/10.3982/ECTA18709
Duckworth, A. L. & Milkman, K. L. (2022). A guide to megastudies. PNAS Nexus, 1(5), 1–5. https://doi.org/10.1093/pnasnexus/pgac214
Forrester, J. W. (1971). Counterintuitive behavior of social systems. Technological Forecasting and Social Change, 3, 109–140. https://doi.org/10.1016/S0040-1625(71)80001-X
Hacker, P. (2021). Manipulation by algorithms: Exploring the triangle of unfair commercial practice, data protection, and privacy law. European Law Journal, 1–34. Advance online publication. https://doi.org/10.1111/eulj.12389
Hagendorff, T. (2022). A virtue-based framework to support putting AI ethics into practice. Philosophy & Technology, 35(3), 55.
Hagman, W., Andersson, D., Västfjäll, D. & Tinghög, G. (2015). Public views on policies involving nudges. Review of Philosophy and Psychology, 6(3), 439-453. https://doi.org/10.1007/s13164-015-0263-2
Hallsworth, M. (2023). A manifesto for applying behavioural science. Nature Human Behaviour, 7, 310–323. https://doi.org/10.1038/s41562-023-01555-3
Halpern, D. (2015). Inside the Nudge Unit. W. H. Allen.
Hansen, P. G. & Jespersen, A. M. (2013). Nudge and the manipulation of choice: A framework for the responsible use of the nudge approach to behaviour change in public policy. European Journal of Risk Regulation, 31(4), 1-28. https://doi.org/10.1017/S1867299X00002762
Jesse, N. (2018). Internet of things and big data: The disruption of the value chain and the rise of new software ecosystems. AI & Society, 33(2), 229-239. https://doi.org/10.1007/s00146-018-0807-y
Johnson, E. J., Shu, S. B., Dellaert, B. G., Fox, C., Goldstein, D. G., … & Weber, E. U. (2012). Beyond nudges: Tools of a choice architecture. Marketing Letters, 23(2), 487-504.
Kahneman, D. & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.
Kahneman, D. (2011). Thinking, fast and slow. Penguin Books: UK.
Karlsen, R. & Andersen, A. (2019). Recommendations with a nudge. Technologies, 7(2), 45. https://doi.org/10.3390/technologies7020045
Kleinberg, J., Ludwig, J., Mullainathan, S. & Obermeyer, Z. (2015). Prediction policy problems. American Economic Review, 105(5), 491–495. https://doi.org/10.1257/aer.p20151023
Komaki, A., Kodaka, A., Nakamura, E., Ohno, Y. & Kohtake, N. (2021). System design canvas for identifying leverage points in complex systems: A case study of the agricultural system models, Cambodia. Proceedings of the Design Society, 1, 2901–2910. https://doi.org/10.1017/pds.2021.551
Leventon, J., Abson, D. J. & Lang, D. J. (2021). Leverage points for sustainability transformations: Nine guiding questions for sustainability science and practice. Sustainability Science, 16, 721–726. https://doi.org/10.1007/s11625-021-00961-8
Ludwig, J. & Mullainathan, S. (2021). Fragile algorithms and fallible decision-makers: Lessons from the justice system. Journal of Economic Perspectives, 35(4), 71–96. https://doi.org/10.1257/jep.35.4.71
Mac Aonghusa, P. & Michie, S. (2020). Artificial intelligence and behavioral science through the looking glass: Challenges for real-world application. Annals of Behavioral Medicine, 54(12), 942–947. https://doi.org/10.1093/abm/kaaa095
Maier, M., Bartoš, F., Stanley, T. D. & Wagenmakers, E. (2022). No evidence for nudging after adjusting for publication bias. Proceedings of the National Academy of Sciences, 119(31), e2200300119. https://doi.org/10.1073/pnas.2200300119
Matz, S., Kosinski, M., Nave, G. & Stillwell, D. (2017). Psychological targeting as an effective approach to digital mass persuasion. Proceedings of the National Academy of Sciences, 114(48), 12714–12719. https://doi.org/10.1073/pnas.1710966114
McCarthy, J., Minsky, M. L., Rochester, N. & Shannon, C. E. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence [Online] [Date accessed: 23/03/2021]: http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
Meadows, D. (1997). Leverage points: Places to intervene in a system. Whole Earth, 91(1), 78–84.
Meadows, D. (2001). Dancing with systems. Whole Earth, 106(3), 58–63.
Mele, C., Spena, T. R., Kaartemo, V. & Marzullo, M. L. (2021). Smart nudging: How cognitive technologies enable choice architectures for value co-creation. Journal of Business Research, 129, 949–960. https://doi.org/10.1016/j.jbusres.2020.09.004
Michalek, G., Meran, G., Schwarze, R. & Yildiz, Ö. (2016). Nudging as a new ‘soft’ tool in environmental policy – An analysis based on insights from cognitive and social psychology. Zeitschrift für Umweltpolitik & Umweltrecht, 39, 169–207.
Michie, S., Thomas, J., Johnston, M., Aonghusa, P. M., Shawe-Taylor, J., … & West, R. (2017). The Human Behaviour-Change Project: Harnessing the power of artificial intelligence and machine learning for evidence synthesis and interpretation. Implementation Science, 12(121). https://doi.org/10.1186/s13012-017-0641-5
Mills, S. & Sætra, H. S. (2022). The autonomous choice architect. AI and Society, 1–13. https://doi.org/10.1007/s00146-022-01352-0
Mills, S. (2022, August 2). Autonomous nudges and AI choice architects – Where does responsibility lie in computer mediated decision making? Impact of Social Sciences Blog.
Mills, S. (2023). AI for behavioural science. London: Taylor & Francis Group.
Mills, S., Costa, S. & Sunstein, C. R. (2023). AI, behavioural science, and consumer welfare. Journal of Consumer Policy, 46(3), 387–400. https://doi.org/10.1007/s10603-023-09526-0
Mont, O., Neuvonen, A. & Lahteenoja, S. (2014). Sustainable lifestyles 2050: Stakeholder visions, emerging practices and future research. Journal of Cleaner Production, 63(2), 24–32. https://doi.org/10.1016/j.jclepro.2013.09.007.
Newland, C., & Argyriades, D. (2019). Reclaiming Public Space: Drawing Lessons from the Past as We Confront the Future: Sustainable Development Goal 16. In Public service excellence in the 21st century (pp. 1-30). Singapore: Springer Singapore.
Ng, C. F. (2016). Behavioral mapping and tracking. In R. Gifford (Ed.), Research methods for environmental psychology (pp. xx–xx). https://doi.org/10.1002/9781119162124.ch3
Okeke, F., Sobolev, M., Dell, N. & Estrin, D. (2018). Good vibrations: Can a digital nudge reduce digital overload? In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '18). Association for Computing Machinery, New York, NY, USA, Article 4, 1–12. https://doi.org/10.1145/3229434.3229463
Park, J. S., O’Brien, J. C., Cai, C. J., Morris, M. R., Liang, P. & Bernstein, M. S. (2023). Generative agents: Interactive simulacra of human behavior. ArXiv at https://arxiv.org/pdf/2304.03442.pdf. Accessed 20 Apr 2023.
Peterson, J. C., Bourgin, D. D., Agrawal, M., Reichman, D. & Griffiths, T. L. (2021). Using large-scale experiments and machine learning to discover theories of human decision-making. Science, 372, 1209–1214. https://doi.org/10.1126/science.abe2629
Rauthmann, J. F. (2020). A (more) behavioural science of personality in the age of multi-modal sensing, big data, machine learning, and artificial intelligence. European Journal of Personality, 34, 593–598.
Robila, M. & Robila, S. (2020). Applications of artificial intelligence methodologies to behavioral and social sciences. Journal of Child and Family Studies, 29, 1–13. https://doi.org/10.1007/s10826-019-01689-x
Ruggeri, K., Benzerga, A., Verra, S. & Folke, T. (2020). A behavioral approach to personalizing public health. Behavioural Public Policy. https://doi.org/10.1017/bpp.2020.31
Sætra, H. S. (2020). Privacy as an aggregate public good. Technology in Society, 63, 101422. https://doi.org/10.1016/j.techsoc.2020.101422
Sætra, H. S. (2022). AI for the sustainable development goals. CRC Press.
Saheb, T. (2022). Ethically contentious aspects of artificial intelligence surveillance: A social science perspective. AI and Ethics. Advance online publication. https://doi.org/10.1007/s43681-022-00196-y
Saura, J. R., Ribeiro-Soriano, D. & Zegarra Saldaña, P. (2022). Exploring the challenges of remote work on Twitter users' sentiments: From digital technology development to a post-pandemic era. Journal of Business Research, 142, 242–254.
Schmauder, C., Karpus, J., Moll, M., Bahrami, B. & Deroy, O. (2023). Algorithmic nudging: The need for an interdisciplinary oversight. Topoi, 42(3), 799-807. https://doi.org/10.1007/s11245-023-09907-4
Schmidt, R. & Stenger, K. (2021). (Dis)embodied rationality and ‘choice posture’: Addressing behavioral science's mind-body problem. Available at SSRN 4185086.
Sharbek, N. (2022, August). How traditional financial institutions have adapted to artificial intelligence, machine learning and FinTech? In Proceedings of the International Conference on Business Excellence, 16(1), 837-848.
Shin, D. & Ahmad, N. (2023). Algorithmic nudge: An approach to designing human-centered generative artificial intelligence. ZU Scholars All Works, 5980. https://zuscholars.zu.ac.ae/works/5980
Simon, H. A. (1981). The sciences of the artificial (2nd ed.). MIT Press.
Smith, J. & de Villiers-Botha, T. (2021). Hey, Google, leave those kids alone: Against hypernudging children in the age of big data. AI and Society, 38(4), 1639-1649.
Stone, P. (2016). Artificial intelligence and life in 2030 (Report No. 52). Stanford University.
Sunstein, C. R. (2015). The ethics of influence. Cambridge University Press: USA.
Szaszi, B., Higney, A., Charlton, A., Gelman, A., Ziano, I., Aczél, B., Goldstein, D. G., Yeager, D. S. & Tipton, E. (2022). No reason to expect large and consistent effects of nudge interventions. Proceedings of the National Academy of Sciences, 119(31), e2200732119. https://doi.org/10.1073/pnas.2200732119
Thaler, R. H. & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Penguin Books.
Vanderelst, D. & Winfield, A. (2018). An architecture for ethical robots inspired by the simulation theory of cognition. Cognitive Systems Research, 48, 56–66. https://doi.org/10.1016/j.cogsys.2017.04.002
Vuong, Q. H., Ho, T. M., Nguyen, H. K. & Vuong, T. T. (2018). Healthcare consumers’ sensitivity to costs: A reflection on behavioural economics from an emerging market. Palgrave Communications, 4(1), 1–10. https://doi.org/10.1057/s41599-018-0127-3.
Weinmann, M., Schneider, C. & vom Brocke, J. (2016). Digital nudging. Business & Information Systems Engineering, 58(6), 433–436. https://doi.org/10.2139/ssrn.2708250.
West, R., Michie, S., Chadwick, P., Atkins, L. & Lorencatto, F. (2020). Achieving behaviour change: A guide for national government. Public Health England. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/933328/UFG_National_Guide_v04.00 1 1_.pdf. Accessed 24 Apr 2023.
Xu, Y., Liu, X., Cao, X., Huang, C., Liu, E., Qian, S., ... & Zhang, J. (2021). Artificial intelligence: A powerful paradigm for scientific research. The Innovation, 2(4), 100179. https://doi.org/10.1016/j.xinn.2021.100179
Yeung, K. (2017). ‘Hypernudge’: Big Data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. https://doi.org/10.1080/1369118X.2016.1186713