Research Statement

The central interest of my research career can be summarized with the following question: How do people make decisions from the moment they have hints of a complicated, unstructured problem or opportunity, to the moment they have designed, engineered, made, and released a complex (partly) automated system intended to address that problem or opportunity at scale?

This is an old interdisciplinary question. Yet taken as a research topic, it is not as rigorously studied as you might expect. Since the industrial revolution, and certainly from the second half of the 20th century to today [Ch1969,CH1994,Ro1982,FS1997], developed societies seem to be on a one-way trajectory towards increasing reliance on automated systems to satisfy various expectations, from the basics of health, food, and shelter, through to the mediation of human relationships and self-identity.

Automation – from its control theory variant [DB2011,Is2013] through to statistical learning in AI [RN1995,JWHT2013] – is advanced in domains that play a substantial role in basic aspects of quality of life, such as identity (think about how your identity data is stored, and how much you do or do not know about how it is used), privacy (think of the overwhelming surveillance, online and off), the military (enabling new forms of industrial and military conflict), education (beyond trends such as MOOCs, note the role software has in children’s “screen time”), and healthcare (electronic health records, automated diagnostics, patient monitoring, and so on).

All these systems are man-made, so they reflect choices made by those who had the possibility and authority to decide on the purpose and scope of these systems, and on how they realize that purpose. This raises the question of how they are designed, engineered, and made, and specifically, how the decision-making which shapes all those processes happens. Needless to say, this is even more interesting today, when such decisions determine what a product or service does, autonomously, on my behalf, and thus take away some of my decision autonomy.

The topic is clearly important for the future design and engineering of such systems, but also for understanding their economics and management (e.g., [Or1992,BH2000,BBH2002,VFS2004]), for shaping the law which regulates their design, production, and use (e.g., [CL2001,LH2004,Le2009,WMF2017]), and for understanding their broader societal effects and side-effects (e.g., [MTG2006,CH2010,SY2016]).

What activities does engineering decision-making involve? How do they unfold? How do many seemingly minor decisions amount to significant effects later? How do they relate over time and across levels of abstraction, and how do they depend on one another? What does accountability mean in such processes? When can we say that these processes have been performed diligently enough? What do procedural quality and outcome quality mean in such processes? And once we start looking at normative considerations, how can we do it all better, and, critically, what does “better” mean in the first place?

This is not a new research topic. You can certainly point to different domains for bits and pieces that may be relevant. The body of prior work goes back to the 19th century, and taken across disciplines, it is considerable. We design systems before we build them (and so we live with their consequences later than we predict them, during design), so every discussion of uncertainty, be it in philosophy or probability theory, has some relevance (e.g., [Ko1950], [DF1975], [Dr1995], [Da2017], [Ha2017]). Systems represent some aspects of the world in order to perform computation on those representations, so as to choose and take action, so discussions in metaphysics (ontology in particular) and epistemology matter (e.g., [Fo1975], [JL1983], [De1989], [BLR1992], [Se1995], [So2000], [HL2008]). Expected utility theory (e.g., [Sc1982], [HO1994]) and its alternatives (e.g., [TK1981], [St2000], [KMM2005]) come with a basic decision-making ontology of goals, criteria, preferences, uncertainty, and alternatives. Game theory asks what happens when decision makers take others’ expected behaviour into account. Behavioural decision theory (e.g., [EH1981], [BM2008]) identifies behavioural patterns and deviations from idealized rational behaviour; its predecessor, bounded rationality, links these to limited cognitive resources. The psychology of choice (e.g., [Lo1987], [HR1987], [HD2010]) offers more elaborate accounts of intentional states, their interplay, and their relationship to choices, or more specifically, to commitments. The theory of organizations (e.g., [MRT1976], [LMPPSM1995]) offers narratives of collective choice in formal organizations. Research on information systems management gives an account of how information systems change, are shaped by, and fit organizations and work processes. Computer science, and software engineering more specifically (e.g., [Bo1981], [MBJK1990], [DLF1993], [vL2001], [CNYM2012]), looks at how to specify designs of the software parts of such systems, how to ensure and evaluate their engineering quality, how to make those processes more efficient, transparent, and accountable, and, perhaps centrally, how to turn specifications into software that works. Artificial intelligence (e.g., [RN1995]), at the intersection of statistics, computer science, and decision theory, provides techniques to transfer specific kinds of human knowledge into procedures that can be turned into software and, in its statistical learning variants, to make software which exhibits simple learning behaviours.
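To make that basic decision-making ontology concrete, the canonical form of expected utility theory can be stated in one line; the notation below is a standard textbook formulation, given here only as an illustration:

\[ a^{*} \in \arg\max_{a \in A} \; \sum_{s \in S} P(s)\, u\big(o(a, s)\big) \]

where A is the set of alternatives, S the set of possible states of the world (the locus of uncertainty), P a probability measure over states, o(a, s) the outcome of choosing a when state s obtains, and u a utility function encoding preferences over outcomes; goals and criteria enter through how o and u are defined.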

From the standpoint of research methodology, how do you approach such a broad topic? Do you generate theory and then move it to the field, or do you go to the field and build theory from it? Do you work at a macro level, trying to provide a theory that spans the entire aggregate phenomenon (the entire process from idea to working system), or do you produce theories for specific phenomena, isolated from the overall one?

During my PhD, from 2005 to 2008, and then from 2008 to 2012, my primary results were re-examinations of the ontology of decision-making in the engineering of systems which rely to a great extent on software: of what such an engineering problem may be, how to represent it formally, and what qualifies as its solution (e.g., [Ju2011], [JMF2008], [JMF2009], [JBEM2010], [NBMJ2012], and all my other publications up to and including 2012).

My research from 2012 to 2017 involved parallel and related work towards several objectives:

  • Improve data quality and strengthen research methodology. My aim has been to move away from purely theoretical work, and from work where I test hypotheses with inexperienced engineers, including students, who work on hypothetical engineering decision problems. Data is needed from expert engineers who have a stake in, and work on, real-world problems. To this end, I have established contacts with high-technology start-ups in Belgium and the USA, which have allowed me to observe and participate in their decision processes. I was specifically interested in start-ups founded by experienced engineers and executives, whose approach is already rigorous to some extent. All my publications since 2012 have benefitted from this.
  • Set up an environment for experimentation. Observation of, and participation in, the engineering problems of others, however, only go so far, since they give no insight into the kind of introspection one does when one has a stake – bearing risks and rewards – in the decisions being made. It is also hard to influence the process and options in a decision, except in rare cases. To deal with this issue, I have taken a stake in a University of Namur spin-off, BStorm, which I co-founded with colleagues, and where I actively participate in software design and engineering decisions. My aim is to set up more spin-offs in such a way that I can both test and apply, over the long term, the insights gained through research. Since I am an active shareholder, I have a stake and operate under similar constraints as founders of business ventures. Moreover, this confronts me with conceptualizations and methods for decision-making produced outside academia. So far, I have not found a way to get closer to the phenomenon I am interested in studying. I do still lack access to larger organizations, something I am working on and expect to resolve in the next two years.
  • Build a decision ontology grounded in observables. A widespread premise in conceptualizations of engineering problems, and of how decision-making proceeds in engineering, is that it all starts from the internal motives of specific individuals, i.e., from intentional states, such as desires to change what one believes to be presently the case. These then become, or otherwise shape, the purpose of the system-to-be, and engineering is then concerned with producing a specification of the system which fits that purpose. This is reflected beyond engineering; classical decision theory, too, is grounded in what is typically referred to as a folk psychology theory of mind, where behaviour is explained as the outcome of an interplay of intentional states (beliefs, desires, intentions, emotions, moods, and so on – depending on how you conceptualize intentional states). The problem with conceptualizations which rely on folk psychology is that intentional states are inaccessible: no instrument exists to detect, measure, or influence them consistently over time. While it appears accepted in social psychology that mind reading is a fact (i.e., my ability to ascribe intentional states to you in some way which is relevant for my decision-making), my field experience is that even if it works for everyday, low-stakes interactions, it is problematic in engineering decision-making. In other words, building systems based on one’s interpretation of others’ intentional states is a source of low-quality information for system design and engineering. Since 2016, I have intensified research and publication on this topic; at the time of writing, my published work highlights the limitations of relying on intentional states, while a proposal for an alternative decision ontology – one whose inputs can be subjected to reproducible validation, i.e., which is less grounded in non-observable intentional states – is being prepared for publication. My most recent publication on this topic is [Ju2017].
  • Study relationships between time, risk, and the structure of decision processes in software systems engineering. Surprisingly, decision ontologies in engineering give little attention to the fact that requirements, which convey the purpose of a system, are decided before the solution which satisfies these requirements is specified, and that specification is completed well before the system is up and running. Since 2015, I have been doing theoretical work on a model which relates the risk of preference and expectation change across these three moments: requirements, specification, and system release. What if preferences and expectations change after requirements are frozen? What if they change after the specification moves to production, and the resulting system proves to be solving the wrong problem? This seems to be an evident problem, but one which has only been dealt with via preference change in economic theory, far away from engineering processes and normative matters. More interestingly, our early results suggest a need for a more sophisticated economic analysis of engineering processes, so as to understand how engineering methodology is, and can be, shaped by economic concerns; again, a rather obvious problem for practitioners, yet few constructive and well thought-through theoretical accounts exist, with empirical results being only a distant goal. I expect, however, interesting progress on this in the next 2-3 years. (A minimal illustration of the preference-change concern is sketched after this list.)
  • Improve conceptualizations of adaptive systems engineering. From 2012 to 2015, I was actively involved in theoretical work on the definition of the so-called requirements problem for adaptive systems. This is the problem that engineers are solving when they aim to build an adaptive system, the latter being a system which can sense (some) changes in its environment and change its behaviour in response. With colleagues, I made several contributions: we proposed a definition of the problem and a formalism (a mathematical logic) for representing problem instances, and we positioned the problem relative to classical decision theory. My main publications on decision-making in adaptive systems engineering are [EBJM2014], [JBEM2015], and [Ju2015a].
  • Advance research on, and the practice of, creating formalisms for the representation and analysis of decision problems in software engineering. In 2015, Springer published my book [Ju2015b] on the design of requirements modelling languages. Requirements modelling languages are mathematical formalisms, types of classical and non-classical logic, used to describe decision problems during early system design, when data and knowledge are sparse, incomplete, or otherwise deficient, and uncertainty about the system’s purpose, functions, and environment is high. The book is a synthesis of my experience designing such formalisms, something I had to do often, for many of my publications, with many collaborators. A practical outcome of the book is funding for a second spin-off, which will focus on commercializing software tools to aid decision-making during engineering; a prototype is available at http://analyticgraph.com [GBJM2016].
  • Apply and validate past research results. PhD students whom I supervised or co-supervised have worked on specializing some of my prior contributions to more specific problems in engineering decision-making, including requirements elicitation (e.g., [BJF2014]), business intelligence (e.g., [BJLF2016]), release planning (e.g., [GFHJS2012]), and the design of recommendation algorithms (e.g., [BJFH2014]).
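To illustrate the preference-change concern raised in the item on time, risk, and structure above, here is a minimal sketch in expected utility terms; the symbols and the specific form are only an illustration for this statement, not the model under development:

\[ V(d) = (1 - q)\, u_{0}(d) + q\, \mathbb{E}\big[u_{1}(d)\big] \]

where d is the design frozen at the moment requirements are decided, u_0 the utility function encoding preferences at that moment, u_1 the (uncertain) utility function in force at release, and q the probability that preferences change between the two moments. Even if d maximizes u_0, its value V(d) can fall below that of an alternative design once q is non-negligible, which is one way of stating why the time lag between requirements, specification, and release carries risk.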

References

  • [BBH2002] Bresnahan, Timothy F., Erik Brynjolfsson, and Lorin M. Hitt. “Information technology, workplace organization, and the demand for skilled labor: Firm-level evidence.” The Quarterly Journal of Economics 117.1 (2002): 339-376.
  • [BH2000] Bharadwaj, Anandhi S. “A resource-based perspective on information technology capability and firm performance: an empirical investigation.” MIS Quarterly (2000): 169-196.
  • [BJF2014] Burnay, Corentin, Ivan J. Jureta, and Stéphane Faulkner. “What stakeholders will or will not say: A theoretical and empirical study of topic importance in Requirements Engineering elicitation interviews.” Information Systems 46 (2014): 61-81.
  • [BJFH2014] Bouraga, Sarah, et al. “Knowledge-based recommendation systems: a survey.” International Journal of Intelligent Information Technologies (IJIIT) 10.2 (2014): 1-19.
  • [BJLF2016] Burnay, Corentin, et al. “A framework for the operationalization of monitoring in business intelligence requirements engineering.” Software & Systems Modeling 15.2 (2016): 531-552.
  • [BLR1992] Brachman, Ronald J., Hector J. Levesque, and Raymond Reiter, eds. Knowledge representation. MIT press, 1992.
  • [BM2008] Bazerman, Max H., and Don A. Moore. “Judgment in managerial decision making.” (2008).
  • [Bo1981] Boehm, Barry W. Software Engineering Economics. Englewood Cliffs, NJ: Prentice-Hall, 1981.
  • [Ch1969] Chandler, Alfred D. Strategy and Structure: Chapters in the History of the American Industrial Enterprise. MIT Press, 1969.
  • [CH1994] Chandler Jr, Alfred D., and Takashi Hikino. Scale and Scope. Harvard University Press, 1994.
  • [CH2010] Chadwick, Andrew, and Philip N. Howard, eds. Routledge handbook of Internet politics. Taylor & Francis, 2010.
  • [CL2001] Cohen, Julie E., and Mark A. Lemley. “Patent scope and innovation in the software industry.” California Law Review (2001): 1-57.
  • [CNYM2012] Chung, Lawrence, et al. Non-functional requirements in software engineering. Vol. 5. Springer Science & Business Media, 2012.
  • [Da2017] Dawid, Philip. “On individual risk.” Synthese 194.9 (2017): 3445-3474.
  • [DB2011] Dorf, Richard C., and Robert H. Bishop. Modern control systems. Pearson, 2011.
  • [De1989] Dennett, Daniel Clement. The intentional stance. MIT press, 1989.
  • [DF1975] De Finetti, Bruno. Theory of probability: A critical introductory treatment. Vol. 6. John Wiley & Sons, 2017.
  • [DLF1993] Dardenne, Anne, Axel Van Lamsweerde, and Stephen Fickas. “Goal-directed requirements acquisition.” Science of computer programming 20.1-2 (1993): 3-50.
  • [Dr1995] Draper, David. “Assessment and propagation of model uncertainty.” Journal of the Royal Statistical Society. Series B (Methodological) (1995): 45-97.
  • [EBJ2011] Ernst, Neil A., Alexander Borgida, and Ivan Jureta. “Finding incremental solutions for evolving requirements.” Requirements Engineering Conference (RE), 2011 19th IEEE International. IEEE, 2011.
  • [EBJM2014] Ernst, Neil, et al. “An overview of requirements evolution.” Evolving Software Systems. Springer Berlin Heidelberg, 2014. 3-32.
  • [EH1981] Einhorn, Hillel J., and Robin M. Hogarth. “Behavioral decision theory: Processes of judgement and choice.” Annual review of psychology 32.1 (1981): 53-88.
  • [Fo1975] Fodor, Jerry A. The language of thought. Vol. 5. Harvard University Press, 1975.
  • [FS1997] Freeman, Chris, and Luc Soete. The Economics of Industrial Innovation. MIT Press, 1997.
  • [GBJM2016] Gillain, Joseph, et al. “AnalyticGraph.com: Toward Next Generation Requirements Modeling and Reasoning Tools.” Requirements Engineering Conference (RE), 2016 IEEE 24th International. IEEE, 2016.
  • [GFHJS2012] Gillain, Joseph, et al. “Product portfolio scope optimization based on features and goals.” Proceedings of the 16th International Software Product Line Conference-Volume 1. ACM, 2012.
  • [Ha2017] Halpern, Joseph Y. Reasoning about uncertainty. MIT press, 2017.
  • [HD2010] Hastie, Reid, and Robyn M. Dawes. Rational choice in an uncertain world: The psychology of judgment and decision making. Sage, 2010.
  • [HL2008] Van Harmelen, Frank, Vladimir Lifschitz, and Bruce Porter, eds. Handbook of knowledge representation. Vol. 1. Elsevier, 2008.
  • [HO1994] Hey, John D., and Chris Orme. “Investigating generalizations of expected utility theory using experimental data.” Econometrica: Journal of the Econometric Society (1994): 1291-1326.
  • [HR1987] Hogarth, Robin M., and Melvin W. Reder. Rational choice: The contrast between economics and psychology. University of Chicago Press, 1987.
  • [Is2013] Isidori, Alberto. Nonlinear control systems. Springer Science & Business Media, 2013.
  • [JBEM2010] Jureta, Ivan J., et al. “Techne: Towards a new generation of requirements modeling languages with goals, preferences, and inconsistency handling.” Requirements Engineering Conference (RE), 2010 18th IEEE International. IEEE, 2010.
  • [JBEM2015] Jureta, Ivan J., et al. “The requirements problem for adaptive systems.” ACM Transactions on Management Information Systems (TMIS) 5.3 (2015): 17.
  • [JL1983] Johnson-Laird, Philip N. Mental models: Towards a cognitive science of language, inference, and consciousness. No. 6. Harvard University Press, 1983.
  • [JMF2008] Jureta, Ivan, John Mylopoulos, and Stephane Faulkner. “Revisiting the core ontology and problem in requirements engineering.” International Requirements Engineering, 2008. RE’08. 16th IEEE. IEEE, 2008.
  • [JMF2009] Jureta, Ivan J., John Mylopoulos, and Stéphane Faulkner. “A core ontology for requirements.” Applied Ontology 4.3-4 (2009): 169-244.
  • [Ju2011] Jureta, Ivan. Analysis and design of advice. Springer Science & Business Media, 2011.
  • [Ju2015a] Jureta, Ivan. “Requirements Problem and Solution Concepts for Adaptive Systems Engineering, and their Relationship to Mathematical Optimisation, Decision Analysis, and Expected Utility Theory.” arXiv preprint arXiv:1507.06260 (2015).
  • [Ju2015b] Jureta, Ivan. The Design of Requirements Modelling Languages: How to Make Formalisms for Problem Solving in Requirements Engineering. Springer, 2015.
  • [Ju2017] Jureta, Ivan J. “What Happens to Intentional Concepts in Requirements Engineering If Intentional States Cannot Be Known?.” International Conference on Conceptual Modeling. Springer, Cham, 2017.
  • [JWHT2013] James, Gareth, et al. An Introduction to Statistical Learning. New York: Springer, 2013.
  • [KMM2005] Klibanoff, Peter, Massimo Marinacci, and Sujoy Mukerji. “A smooth model of decision making under ambiguity.” Econometrica 73.6 (2005): 1849-1892.
  • [Ko1950] Kolmogorov, Andreĭ Nikolaevich. “Foundations of the Theory of Probability.” (1950).
  • [Le2009] Lessig, Lawrence. Code: And other laws of cyberspace. 2009.
  • [LH2004] Lastowka, F. Gregory, and Dan Hunter. “The laws of the virtual worlds.” California Law Review (2004): 1-73.
  • [LMPPSM1995] Langley, Ann, et al. “Opening up decision making: The view from the black stool.” Organization Science 6.3 (1995): 260-279.
  • [Lo1987] Lopes, Lola L. “Between hope and fear: The psychology of risk.” Advances in experimental social psychology 20 (1987): 255-295.
  • [MBJK1990] Mylopoulos, John, et al. “Telos: Representing knowledge about information systems.” ACM Transactions on Information Systems (TOIS) 8.4 (1990): 325-362.
  • [MRT1976] Mintzberg, Henry, Duru Raisinghani, and Andre Theoret. “The structure of ‘unstructured’ decision processes.” Administrative Science Quarterly (1976): 246-275.
  • [MTG2006] Mossberger, Karen, Caroline J. Tolbert, and Michele Gilbert. “Race, place, and information technology.” Urban Affairs Review 41.5 (2006): 583-620.
  • [NBMJ2012] Ernst, Neil, et al. “Agile requirements evolution via paraconsistent reasoning.” Advanced Information Systems Engineering. Springer Berlin/Heidelberg, 2012.
  • [Or1992] Orlikowski, Wanda J. “The duality of technology: Rethinking the concept of technology in organizations.” Organization science 3.3 (1992): 398-427.
  • [RN1995] Russell, Stuart, and Peter Norvig. Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice Hall, 1995.
  • [Ro1982] Rosenberg, Nathan. Inside the black box: technology and economics. Cambridge University Press, 1982.
  • [Sc1982] Schoemaker, Paul JH. “The expected utility model: Its variants, purposes, evidence and limitations.” Journal of economic literature (1982): 529-563.
  • [Se1995] Searle, John R. The construction of social reality. Simon and Schuster, 1995.
  • [So2000] Sowa, John F. Knowledge representation: logical, philosophical, and computational foundations. Vol. 13. Pacific Grove: Brooks/Cole, 2000.
  • [St2000] Starmer, Chris. “Developments in non-expected utility theory: The hunt for a descriptive theory of choice under risk.” Journal of economic literature 38.2 (2000): 332-382.
  • [SY2016] Skeggs, Beverley, and Simon Yuill. “Capital experimentation with person/a formation: how Facebook’s monetization refigures the relationship between property, personhood and protest.” Information, Communication & Society 19.3 (2016): 380-396.
  • [TK1981] Tversky, Amos, and Daniel Kahneman. “The framing of decisions and the psychology of choice.” Science 211.4481 (1981): 453-458.
  • [VFS2004] Varian, Hal R., Joseph Farrell, and Carl Shapiro. The economics of information technology: An introduction. Cambridge University Press, 2004.
  • [vL2001] Van Lamsweerde, Axel. “Goal-oriented requirements engineering: A guided tour.” Requirements Engineering, 2001. Proceedings. Fifth IEEE International Symposium on. IEEE, 2001.
  • [WMF2017] Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi. “Why a right to explanation of automated decision-making does not exist in the general data protection regulation.” International Data Privacy Law 7.2 (2017): 76-99.