What is the impact of Artificial Intelligence (AI)? This is a recurring conversation topic among AI practitioners, specialized journalists, and brave politicians. Although some simple concepts are clearly conveyed to the general audience, there are others that are not so widely known. In this post, I want to dig into one of those less-discussed aspects: the economics of AI.
Since AI impacts our lives through products available in the marketplace, the goal of this post is to analyze what happens to AI systems when they are consumed via the free market. In other words, AI is developed and consumed in a market-driven fashion, and I would like to better understand the consequences of that. Hence, I’ll focus on the economic side of AI to show that, if we want to encourage the main AI actors to behave ethically, we had better act (directly) on the market.

Don’t fear AI, fear evil capitalists
Just as the tobacco industry avoids advertising that tobacco causes cancer, and the oil industry denies climate change, the AI industry is not interested in an open discussion on ethics and AI.
Why? Because AI is just a means to increase their revenue. As long as this technology increases their income, life is good, because the principal mission of a company is to “make money”. With that in mind, why would third parties join the party? And why would these third parties question what these companies are doing, now that the money is starting to flow?
Just using AI to increase revenue?
Is it irresponsible to use AI just to increase a company’s revenue, no matter the consequences? CEOs also have to consider the ethical and social implications of their decisions when using AI. Interestingly, though, most CEOs are not trained to understand the limits of this technology. Advisory boards with experts on AI and ethics could be a solution.
Still, decisions in a company are generally made to increase its earnings, not necessarily to build a better society.
AI ethics certificates to encourage a positive social impact
Premise i) Building “social trust” in AI systems is key to their proper development. Consequently, it seems a good idea to encourage trustworthy AI systems that are built following ethical principles.
Premise ii) As said, AI impacts society via the market. Hence, to facilitate the incorporation of trustworthy AI agents into our society, one needs to act directly on the market.
A nice idea in this direction is to promote certificates on ethics and AI. The goal of these certificates is to add value to products that are developed following ethical principles. Note, then, that under the current economic paradigm, trustworthiness is not a technological issue; it is just part of a (potential) business model. Having an ethical perspective can serve as a positive differentiation in the marketplace.
Examples: A nice parallel for this idea is the way we currently consume eggs. When we go to the supermarket, we can behave ethically and buy cage-free or free-range eggs. Other examples are the mandatory food-quality certificates, or the EU certifications for medical devices and toys.
Ethics (in AI) is not a technological issue, it is a business model.
Changing everyone’s mindset might help solve the problem in practice.
In which part of the world do you live?
The previous analysis assumes a rather neo-liberal economic perspective. But what happens if your country does not specifically follow that model?
In the USA, private companies play a central role in the development and deployment of AI. In 2017, the combined R&D investment of Amazon and Alphabet summed to US$ 30 billion, significantly more than the US government’s investment (US$ 5.3 billion in 2019). In China, since 2014 the government has launched a series of key national economic initiatives relevant to AI, with the goal of creating a EUR 13 billion (≈ US$ 14.7 billion) AI market by 2018, and with the intention of helping China lead AI by 2030. In South Korea, the government announced it would spend 1 trillion won (≈ US$ 840 million) by 2020 to boost the AI industry. Canada announced its AI strategy in the 2017 budget, which allocates CAN$ 125 million (≈ US$ 94.3 million) over five years. India and Japan have started the political discussion but have not (yet) decided how much money their governments will allocate to AI investment.
Private or public investment in AI?
The unrestricted development of AI through the free market (the USA’s current model) carries the risk that large corporations define, de facto, the applicability and the nature of AI.
Conversely, if heavy public investment in AI enables powerful centralized AI systems (as could be the case in China), there is the possibility that governments use AI not to maximize economic profit, but to maximize their chances of reelection or to exercise social control.
In both scenarios, it does not seem a bad idea to enable mechanisms of democratic control over the economy and power. What if we guided our actions by this principle: privacy for the people, transparency for the powerful?
Privacy for the people, transparency for the powerful.
A principle for enabling mechanisms of democratic control over the economy/power.
What’s the role of the European Union?
The European Union’s current intentions are based on three pillars: boosting industrial and technological capacity, preparing for socio-economic changes, and defining an appropriate ethical and legal framework.
Their ambition is not only to influence the European market, but also to shape the global discussion on trustworthy AI.
AI’s social definition is affected by marketing
Corporations’ marketing efforts are also contributing to the social definition of AI. This legitimate push to increase their share prices is promoting a hype whose consequences are still to be unveiled. Although the technology is not there yet, and our society is not ready to critically discuss its adoption, corporations are moving forward and selling the world their vision of what AI is.
Note, therefore, that the optimistic narrative comes from marketing, which is out there with the intention of increasing the sales and earnings of these companies.
“Good AI”: a market-driven definition
People’s judgment of what “good” means is greatly influenced by “whatever the market decides is good”.
However, it is important to notice that our current AI/tech market is full of power asymmetries. Consequently, tech giants have the capacity to (ab)use their market dominance to define, de facto, what “Good AI” means.
Breaking Power Asymmetries
Example 1: Can your grandpa develop (or even imagine) an AI system? No, because he does not even understand the building blocks of this technology.
Example 2: Can you develop a successful AI system? Yes, if you work for a large corporation with sufficient capacity to invest in enough annotated data and hardware. Or yes, if you work at an academic institution and you are smarter than the researchers working for large corporations (and are able to develop a new AI technique that requires little data and few computational resources).
As seen, only a few people can envision and develop AI systems. This asymmetry in effective capacities defines how powerful each actor is. Hence, most citizens cannot critically and independently oversee what’s going on in AI. If we seek greater democratic control over AI systems, we first need to break these power asymmetries.
Some companies use the term “democratize AI”, but their idea only consists of making everyone use their AI-based services (as black boxes) in the cloud. In my opinion, this is far from democratizing AI.
Disclaimer 1: These are not my original ideas; this is just a compilation of interesting ideas I came across while attending the HUMAINT Winter School in Seville (February 2019). People who greatly influenced my thinking include Nuria Oliver, Virginia Dignum, Jonnathan Penn, Christopher Markou, Bertin Martens, Songül Tolan, Ted Chiang, and Hector Geffner.
Disclaimer 2: I’m not an economist; I’m just a deep learning practitioner and a motivated citizen.