Quality Assurance for AI: An Inevitable Tradeoff

How do you ensure #quality of a #service which uses #AI for #personalization? #QA in that case is all about risk management. pic.twitter.com/IdKcY5noBL
— ivanjureta (@ivanjureta) January 31, 2018
IP compliance requirements on generative AI reduce the amount of readily and cheaply available training data, with a few consequences for how product development and product operations are done.
Figures 1 and 2 show cost versus time; Figure 1 shows long iterations, Figure 2 short iterations. We choose to do something at time zero, at the origin of each graph, and when we do so, we do it under assumptions made at that time. Dashed red lines convey…
I wrote in another note (here) that AI cannot decide autonomously because it does not have self-made preferences. I argued that its preferences are always a reflection of those its designers wanted it to exhibit, or of patterns in its training data. The irony with this argument is that if an AI is making…
This short interview on my research on decision making and its use in companies was done in 2018 with fnrs.tv, part of the Belgian Fonds de la Recherche Scientifique – FNRS, in Brussels. Each of my first two academic books led to the founding of a spin-off; see the books here.
The work on Techne (here) can be seen as a case study in developing new business analysis and requirements engineering methods. I gave a general talk on this topic in 2013 at the Sauder School of Business, University of British Columbia, in Vancouver. The presentation used at the talk is below.