In order to lay the foundations for a discussion of the argument that the adoption of artificial intelligence (AI) technologies benefits the powerful few (Chaslot, 2016; Morozov, 2018), focussing on their own existential concerns (Busby, 2018; Sample, 2018a), the paper will narrow down the analysis of the argument to social justice and jurisprudence (i.e. the philosophy of law), considering also the historical context. The paper explores the notion of humanised artificial intelligence (Kaplan & Haenlein, 2019; Legg & Hutter, 2007) in order to discuss potential challenges society might face in the future. The paper does not discuss current forms and applications of artificial intelligence because, so far, there is no AI technology (Bostrom, 2014) that is self-conscious and self-aware and able to deal with emotional and social intelligence. It is a discussion around AI as a speculative, hypothetical entity. One could then ask: if such a speculative self-conscious hardware/software system were created, at what point could one talk of personhood? And what criteria could there be for saying that an AI system was capable of committing AI crimes?

In order to address AI crimes, the paper will start by outlining what might constitute personhood, discussing legal positivism and natural law. Concerning what constitutes AI crimes, the paper uses the criteria given in King et al.’s paper Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions (King, Aggarwal, Taddeo, & Floridi, 2018), in which King et al. coin the term AI crime and map five areas in which AI might, in the foreseeable future, commit crimes, namely:

  • commerce, financial markets, and insolvency

  • harmful or dangerous drugs

  • offences against persons

  • sexual offences

  • theft and fraud, and forgery and personation

Having those potential AI crimes in mind, the paper will discuss the construction of the legal system through the lens of the political involvement of what one may consider to be powerful elites. Before discussing these aspects the paper will clarify the notion of “powerful elites”. In doing so the paper will demonstrate that it is difficult to prove that the adoption of AI technologies is undertaken in a way which mainly serves a powerful class in society. Nevertheless, analysing the culture around AI technologies with regard to the nature of law, with a philosophical and sociological focus, enables one to demonstrate a utilitarian and authoritarian trend in the adoption of AI technologies (Goodman, 2016; Haddadin, 2013; Hallevy, 2013; Pagallo, 2013).

The paper will base the discussion around Crook’s notion of “power elites” (2010), as developed in Comparative Media Law and Ethics (Crook, 2009), and apply it to the discourse around artificial intelligence and ethics. Following Crook, the paper will introduce the discussion of power elites through the notions of legal positivism and natural law, as discussed in the academic fields of philosophy and sociology. The paper will then look, in a more detailed manner, into theories analysing the historical and social systematisation, or one may say disposition, of laws, and the impingement of neo-liberal (Parikh, 2017) tendencies upon the adoption of AI technologies. Pueyo demonstrates those tendencies with a thought experiment around superintelligence in a neoliberal scenario (Pueyo, 2018). In Pueyo’s thought experiment the system becomes techno-social-psychological with the progressive incorporation of decision-making algorithms and the increasing opacity of such algorithms (Danaher, 2016), with human thinking partly shaped by firms themselves (Galbraith, 2015).

The regulatory, self-governing potential of AI algorithms (Poole, 2018; Roio, 2018; Smith, 2018) and the justification by authority of the current adoption of AI technologies within civil society will be analysed next. The paper will propose an alternative, some might say practically unattainable, approach to the current legal system by looking into restorative justice for AI crimes (Cadwalladr, 2018) and into how the ethics of care, through social contracts, could be applied to AI technologies. In conclusion, the paper will discuss affect (Olivier, 2012; Wilson, 2011) and humanised artificial intelligence with regard to the emotion of shame when dealing with AI crimes.

Legal Positivism and Natural Law

In order to discuss AI in relation to personhood this paper follows the descriptive psychology method (Ossorio, 2013) of the paradigm case formulation (Jeffrey, 1990) developed by Ossorio (1995). Similar to how some animal rights activists call (Mountain, 2013) for certain animals to be recognised as non-human persons (Midgley, 2010), this paper speculates on the notion of AI as a non-human person being able to reflect on ethical concerns (Bergner, 2010; Laungani, 2002). Here Schwartz argues that “it is reasonable to include non-humans as persons and to have legitimate grounds for disagreeing where the line is properly drawn. In good faith, competent judges using this formulation can clearly point to where and why they agree or disagree on what is to be included in the category of persons” (2014).

According to Ossorio (2013) a deliberate action is a form of behaviour in which a person a) engages in an intentional action, b) is cognizant of that, and c) has chosen to do that. Ossorio gives four classifications of fundamental motivation: ethical, hedonic, aesthetic, and prudent. Ethical and aesthetic motivations can be distinguished from prudent (and hedonic) motivations in that the agent makes a choice. “Aesthetic and ethical motivations are only relevant when deliberate action is also possible since aesthetic and ethical action require the eligibility to choose or refrain, to potentially deliberate about the desirable course to follow. In the service of being able to choose, and perhaps think through the available options, a person’s aesthetic and ethical motives are often consciously available” (Schwartz, 1984).
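
Ossorio’s criteria lend themselves to a compact, if simplified, formalisation. The following sketch is purely illustrative – the type and function names are this paper’s own, not part of descriptive psychology – but it makes explicit that deliberate action requires all three criteria, and that ethical and aesthetic motivation only become relevant where such deliberate choice is possible:

```python
from dataclasses import dataclass
from enum import Enum

class Motivation(Enum):
    ETHICAL = "ethical"
    HEDONIC = "hedonic"
    AESTHETIC = "aesthetic"
    PRUDENT = "prudent"

@dataclass
class Behaviour:
    intentional: bool      # (a) the person engages in an intentional action
    cognizant: bool        # (b) the person is cognizant of that
    chosen: bool           # (c) the person has chosen to do that
    motivation: Motivation

def is_deliberate(b: Behaviour) -> bool:
    """Deliberate action requires all three of Ossorio's criteria."""
    return b.intentional and b.cognizant and b.chosen

def ethics_in_play(b: Behaviour) -> bool:
    """Ethical/aesthetic motives are only relevant where deliberate
    action -- the eligibility to choose or refrain -- is possible."""
    return is_deliberate(b) and b.motivation in (Motivation.ETHICAL,
                                                 Motivation.AESTHETIC)
```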

In the fields of philosophy and sociology countless theories have been advanced concerning the nature of law, addressing questions such as: Can unethical law be binding? Should there be a legal code for civil society? Can such a legal code be equitable, unbiased, and just, or is the legal code always biased? In the case of AI technologies one can ask whether the current vision for the adoption of AI technologies is a vision that benefits only the powerful elites.

To address the question one needs to discuss the idea of equality. Reference is made to Aristotle’s account of how the legal code should be enacted in an unbiased manner (Aristotle, 1981). Aristotle differentiated between an unbalanced and a balanced application of the legal code, pointing out that the balanced juridical discussion of a case should be courteous. Here, as with the above-mentioned animal rights activists, MacIntyre (2001) argued in Dependent Rational Animals, drawing on Aquinas’ (2006) discussion of misericordia, for the recognition of our kinship with some species, calling for the “virtues of acknowledged dependence” (MacIntyre, 2013). Austin, on the other hand, suggests that the legal code is defined by a higher power, “God”, to establish justice over society. For Austin the legal code is an obligation, a mandate to control society (Austin, 1998).

Hart goes on to discuss the social aspect of the legal code and how society apprehends its enactment (Hart, 1961). Hart argues that the legal code is a strategy, a manipulation of standards accepted by society. Contrary to Hart, Dworkin (1986) proposes that the legal code allow for non-rule standards reflecting the ethical conventions of society. Dworkin discusses legislation as an assimilation of these conventions, where legislators do not define the legal code but analyse the already existing conventions to derive conclusions, which then in turn define the legal code. Nevertheless, Dworkin fails to explain how those conventions come into being. For Kelsen (1967, 2009), by contrast, the legal code is a product of the political, cultural and historical circumstances in which society finds itself. For Kelsen the legal code is a standardising arrangement which defines how society should operate (Kelsen, 1991).

The paradigm case (Ossorio, 2013) allows for the potential of AI as non-human persons (Putman, 1990; Schwartz, 1982). Referring to the paradigm case method allows one to work out where parties agree or disagree concerning what constitutes a person.

Here social contract theories, as defined and discussed below, might serve to explain and analyse how legal codes deal with the emergence of legal issues concerning AI technologies or AI crimes. Following Ossorio (1995), since persons act consciously, they are motivated by ethical, aesthetic, prudent and hedonic motivations; at the same time, social contracts allow persons to act in patterns of significance, giving meaning to their actions.

AI can be interpreted as an automated distribution system, using data drawn from a ‘datasphere’, which could easily be imagined operating continuously without human interference. Thus a more particular definition of ‘datasphere’ would emphasise how a vast amount of data circulates while only becoming meaningful when viewed in the context of a social contract. In other words, the transformation of ‘data’ into ‘meaning’ can always be seen to take place within a social contract. For example, a protocol extracting data always has to be configured, i.e. socially or politically agreed upon. Legal or activist interventions thus always interpellate the datasphere. Dataspheres include all forms of data that exist in the public domain and public spheres. This data becomes meaningful only when actors interpret it, and such instances of interaction are always in some ways social.
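
To make the point concrete, consider a minimal sketch of a data-extraction protocol. Everything here is hypothetical – the field names and parameters are invented for illustration – but it shows that even the simplest protocol only produces ‘meaningful’ data once its configuration, i.e. the socially or politically negotiated choices about what to collect and under which conditions, has been fixed:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ProtocolConfig:
    # Each field is a choice that has to be agreed upon socially/politically.
    fields_collected: List[str]   # which items of personal data are extracted
    retention_days: int           # how long the raw data may be stored
    requires_consent: bool        # whether data subjects must opt in

def extract(record: Dict, config: ProtocolConfig) -> Dict:
    """Turn raw 'datasphere' records into meaningful data -- but only
    through the configured, i.e. agreed-upon, selection."""
    if config.requires_consent and not record.get("consent", False):
        return {}
    return {k: v for k, v in record.items() if k in config.fields_collected}

# Two different 'social contracts' over the same datasphere:
permissive = ProtocolConfig(["name", "location", "browsing"], 3650, False)
restrictive = ProtocolConfig(["name"], 30, True)

record = {"name": "A", "location": "X", "browsing": ["url1"], "consent": False}
print(extract(record, permissive))   # full profile extracted
print(extract(record, restrictive))  # {} -- no consent, no data
```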

In that sense a legal system, a set of social contracts, aiming to control the dataspheres needs to be tailored carefully, because the situation is currently controlled by the most driven producers and consumers. The old distribution model is so impoverished that it chooses the safest route. Applying the notion of ‘social contracts’, the notion of open and distributed sharing can be reinforced as an overall heuristic and social ethos. One can even elaborate upon the idea of slavery, extending it to the idea of social contracts, with reference to Jean-Jacques Rousseau’s Social Contract, which states: “The words ‘slavery’ and ‘right’ are contradictory, they cancel each other out. Whether as between one man and another, or between one man and a whole people, it would always be absurd to say: I hereby make a covenant with you which is wholly at your expense and wholly to my advantage” ([1762] 1968, p. 58).

“Man is born free; and everywhere he is in chains”, begins Rousseau’s work of political philosophy, The Social Contract, published in 1762 (1968). Rousseau (Dart, 2005; Hampsher-Monk, 1992) aimed to understand why “a man would give up his natural freedoms and bind himself to the rule of a prince or a government” (Bragg, 2008). This question of political philosophy was widely discussed in the 17th and 18th centuries, as revolution was in the air all over Europe, particularly in France in 1789. Rousseau thought that there is a conflict between obedience and persons’ freedom, and argued that our natural freedom is our own will. Rousseau defined the social contract as a law ‘written’ by everybody (Roland, 1994). His argument was that if everybody was involved in making the laws, they would only have to obey themselves and as such follow their free will. How could persons then create a common will? For Rousseau this would only have been possible in smaller communities, through the practice of caring for each other and managing conflicts for the common good – ultimately through love. In The Art of Loving Erich Fromm reminds us that “love is not a sentiment which can be easily indulged in by anyone … [S]atisfaction in individual love cannot be attained without the capacity to love one’s neighbour, without true humility, courage, faith and discipline” (1956, p. xix). Rousseau imagined a society the size of his native city of Geneva as an ideal ground for the implementation of social contract theory. Ironically it was the French who, through their revolutionaries, implemented social contract theory. Nevertheless, the French read it differently, as imposing social contracts onto the people; the mass-scale imposition of contracts compromised their non-mandatory status.

In the 20th century, moral and political theory around the social contract had a revival with John Rawls’ A Theory of Justice (2005) and David Gauthier’s Morals by Agreement (1986). Gauthier, following Thomas Hobbes (1651), argues that there can be morality in our society without the state having to impose it with the help of external enforcement mechanisms. For Gauthier rationality is the key to cooperation and to following agreements made between different parties. Celeste Friend states in Social Contract Theory (2004) that feminist philosophers criticise social contract theory for not reflecting moral and political lives correctly and completely, and for the contract itself being “parasitical upon the subjugations of classes of persons” (2004).

Taking a more critical approach to rationalised contracts, Carole Pateman argues in The Sexual Contract that “lying beneath the myth of the idealized contract, as described by Hobbes, Locke, and Rousseau, is a more fundamental contract concerning men’s relationship to women” (Friend, 2004). Similarly, for Pateman, “[t]he story of the sexual contract reveals that there is good reason why ‘the prostitute’ is a female figure” (1988, p. 192). The feminist philosophers Annette Baier (1988, 1995) and Virginia Held (1993, 2006) criticise social contract theory for not demonstrating fully what a moral person should be and how this affects relationships. Baier argues that Gauthier does not reflect on the full spectrum of human motivations and their psychology, and that he fails to see that there is a dependency on certain relationships (like that of mother and child) before one can enter into such contracts, as captured in Baier’s expression “the cost of free milk” (1988). Held, as quoted by Friend, even goes so far as to argue that “contemporary Western society is in the grip of contractual thinking” (2004).

In The Racial Contract, Charles Wade Mills (1997), inspired by The Sexual Contract, argues that non-whites face problems with class society similar to those faced by women, both sets of conflict and suppression deriving from a patriarchal mindset. For Mills there is a ‘racial contract’ which is more important to the industrialized part of the world than the social contract, and which one might want to consider in relation to humanised artificial intelligent systems. “This racial contract determines in the first place who counts as fully moral and political persons, and therefore sets the parameters of who can ‘contract in’ to the freedom and equality that the social contract promises” (Friend, 2004).

The subject of the Debian Social Contract (2004) might very well be the one who writes most of the code for the datasphere and defines AI technologies: the white male (Knight, 2017). Taking the above criticism regarding the sexual and the racial contract on board, one could extend the discussion on social contracts with the notion of Open Contracts. First one needs to look into the current Debian Social Contract and the issue of privacy with regard to intellectual property (Ristroph, 2009). The Debian Project is one of the biggest communities around the Linux (Torvalds, 2002) operating system. The beginning of the Debian Social Contract for the FLOSS community states:

Our priorities are our users and free software. We will be guided by the needs of our users and the free software community. We will place their interests first in our priorities. We will support the needs of our users for operation in many different kinds of computing environments. We will not object to non-free works that are intended to be used on Debian systems, or attempt to charge a fee to people who create or use such works. We will allow others to create distributions containing both the Debian system and other works, without any fee from us. In furtherance of these goals, we will provide an integrated system of high-quality materials with no legal restrictions that would prevent such uses of the system. (2004)

The idea of the Debian Social Contract could be extended to AI technologies in the form of Open Contracts, suggesting principles similar to those applied to free and open source software. One can argue that these would be a pre-condition for ‘ethical’ AI technologies. With open contracts such as the Debian Social Contract in place, various communities can start discussing, experimenting with and practising the production, distribution, and sharing of AI technologies. Although this sounds like a promising scenario, one also has to be critical, as these alternatives can be vulnerable to corruption.
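
As a purely speculative illustration – no such standard exists, and the schema below is invented for this paper – one could imagine the terms of such an Open Contract being expressed in machine-readable form, so that communities could check whether a given AI system honours commitments analogous to those the Debian Social Contract makes to users and free software:

```python
# Hypothetical schema: commitments an AI system's 'Open Contract' could make,
# echoing the Debian Social Contract's priorities of users and free software.
open_contract = {
    "priorities": ["users", "free software"],
    "source_code": "open",              # auditable by the community
    "training_data": "documented",      # provenance disclosed
    "licence": "free-and-open-source",  # e.g. a FLOSS licence
    "fees_for_derived_works": False,    # no charge on derived distributions
}

def satisfies(system_description: dict) -> bool:
    """Check a (hypothetical) AI system description against the contract."""
    return all(system_description.get(key) == value
               for key, value in open_contract.items())
```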

One could support an Open Contract practice and suggest that a feminist notion of ‘restorative justice’ (Christie, 1977a; Crook, 2009) might serve to judge Open Contracts, applying the notions of solidarity and care as principles of judicial practice. However, the concern is how to move from an abstract idea of Open Contracts to concrete legislation which could enable an AI technology production that is not deemed antithetical, or oppositional, to the current judicial system, by formulating a set of ground rules and protocols that will allow AI communities to function and prosper. One could argue that this can be done by defining the independent terms and conditions, namely free and open licenses. Social contracts and laws will eventually be defined for these dataspheres, but until then power elites will try to appropriate every piece of AI technology in accordance with the old, non-efficacious “IP legislation” (Electronic Frontier Foundation, 2009).

Nevertheless, in trying to evaluate the argument that the adoption of AI technologies is a process controlled by powerful elites who wield the law to their benefit, one also needs to discuss the notion of power elites. Chambliss and Seidman argue that powerful interests have shaped the writing of legal codes for a long time (1982). However, Chambliss and Seidman also state that legislation derives from a variety of interests, which are often in conflict with each other. One therefore needs to extend the analysis beyond powerful elites and examine the notion of power itself: the extent to which power shapes legislation, or whether, on the contrary, it is legislation itself that controls power.

In an attempt to identify the source of legislation, Weber argues that legal code is powerfully interlinked with the economy, and goes on to argue that this link is the basis of capitalist society (Weber, 1978). Here one can refer back to Marx’s idea of materialism and the influence of class society on legislation (Marx, 1990). For Marx legislation, the legal code, is an outcome of the capitalist mode of production (Harris, 2018). Marx’s ideas have been widely discussed with regard to the ideology behind the legal code. Nevertheless, Marx’s argument limits the legal code to the notion of class domination.

Sumner extended Marx’s theories regarding legislation and ideology, discussing the legal code as an outcome of political and cultural discussions based on economic class domination (Sumner, 1979). Sumner expands the conception of the legal code as a product not only of the ruling class but also as bearing the imprint of other classes, including blue-collar workers, through culture and politics. Sumner argues that with the emergence of capitalist society “the social relations of legal practice were transformed into commercial relations” (ibid., p. 51). However, Sumner does not discuss why parts of society are sidelined by legislation, nor how capitalist society not only impacts on legislation but also has its roots in the neo-liberal writing of legal code.

To apprehend how ownership, property and intellectual rights became enshrined in legal code and adopted by society, one can turn to Locke’s theories (1993). Locke argued that politicians ought to look after ownership rights and to support circumstances allowing for the growth of wealth (capital). Following Locke one can conclude that contemporary society is one in which politicians influence legislation in the interest of a powerful upper class – a neo-liberal society. Still, should this be the case, one needs to ask whether powerful elites should have authority over the legal code and over how legislation is enacted and maintained.

The Disciplinary Power of Artificial Intelligence

In order to discuss these questions one has to analyse the history of the AI technologies leading to the kind of “humanised” AI system this paper posits. Already in the 1950s Turing, the inventor of the Turing test (Moor, 2003), stated that:

We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again I do not know what the right answer is, but I think both approaches should be tried. We can only see a short distance ahead, but we can see plenty there that needs to be done. (Turing, 1950)

The old-fashioned approach (Hoffman & Pfeifer, 2015), some may say still the contemporary approach, was primarily to research ‘mind-only’ (Nilsson, 2009) AI technologies/systems. Through high-level reasoning, researchers were optimistic that AI technology would quickly become a reality.

Those early AI technologies took a disembodied approach, using high-level logical and abstract symbols. By the end of the 1980s researchers found that the disembodied approach could not achieve even the low-level tasks humans perform with ease (Brooks, 1999). During that period many researchers stopped working on AI technologies and systems, and the period is often referred to as the ‘AI winter’ (Crevier, 1993; Newquist, 1994).

Brooks then came forward with the proposition of ‘Nouvelle AI’ (Brooks, 1986), arguing that the old-fashioned approach did not take into consideration motor skills and neural networks. Only by the end of the 1990s did researchers develop statistical AI (Brooks, 1999) systems without the need for any high-level logical reasoning; instead, AI systems were ‘guessing’ through algorithms and machine learning. This signalled a first step towards humanised artificial intelligence, as it resembles how humans make intuitive decisions (Pfeifer, 2002); here researchers suggest that embodiment improves cognition (Renzenbrink, 2012; Zarkadakis, 2018).
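
The contrast can be illustrated with a deliberately toy example, not any specific historical system: a hand-authored symbolic rule on the one hand, and a statistical learner that ‘guesses’ from labelled examples on the other:

```python
from typing import Dict, List, Tuple

def symbolic_rule(message: str) -> bool:
    # 'Good old-fashioned' AI: an explicit, human-authored rule.
    return "win money" in message.lower()

def train(examples: List[Tuple[str, bool]]) -> Dict[str, int]:
    # Statistical AI: let labelled examples, not a programmer, shape the
    # decision. Each word is scored by how often it appears in 'bad'
    # versus 'good' messages.
    scores: Dict[str, int] = {}
    for text, is_bad in examples:
        for word in text.lower().split():
            scores[word] = scores.get(word, 0) + (1 if is_bad else -1)
    return scores

def statistical_guess(message: str, scores: Dict[str, int]) -> bool:
    # The system 'guesses': does the accumulated evidence lean negative?
    return sum(scores.get(w, 0) for w in message.lower().split()) > 0

scores = train([("win money now", True), ("meeting at noon", False)])
print(statistical_guess("free money", scores))  # True -- learned, not coded
```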

With embodiment theory Brooks argued that AI systems would operate best when computing only the data that was absolutely necessary (Steels & Brooks, 1995). Further, in Developing Embodied Multisensory Dialogue Agents, Paradowski (2011) argues that without considering embodiment, e.g. the physics of the brain, it is not possible to create AI technologies/systems capable of comprehension, and that AI technology “could benefit from strengthened associative connections in the optimization of their processes and their reactivity and sensitivity to environmental stimuli, and in situated human-machine interaction. The concept of multisensory integration should be extended to cover linguistic input and the complementary information combined from temporally coincident sensory impressions” (Paradowski, 2011).
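
A minimal sketch, loosely in the spirit of Brooks’ layered (‘subsumption’) control architecture (Brooks, 1986), may illustrate the point; the layer and sensor names are invented for illustration. Each simple behaviour layer reacts directly to the few sensor readings it actually needs, with higher-priority layers subsuming lower ones and no central high-level reasoner in the loop:

```python
from typing import Callable, Dict, List, Optional

def avoid(sensors: Dict[str, float]) -> Optional[str]:
    # Lowest layer: consults only the one reading it needs right now.
    return "turn_away" if sensors["obstacle_distance"] < 0.3 else None

def wander(sensors: Dict[str, float]) -> Optional[str]:
    # Default behaviour when nothing more urgent fires.
    return "move_forward"

LAYERS: List[Callable[[Dict[str, float]], Optional[str]]] = [avoid, wander]

def control_step(sensors: Dict[str, float]) -> str:
    # Higher-priority layers subsume (pre-empt) lower ones; there is no
    # central world model or high-level reasoner in the loop.
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action
    return "idle"

print(control_step({"obstacle_distance": 0.1}))  # -> turn_away
print(control_step({"obstacle_distance": 2.0}))  # -> move_forward
```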

With this historical analysis in mind one can discuss the paper’s focus on power elites. Raz studied the procedures through which elites attain disciplinary power in society (Raz, 2009). Raz argues that the notion of the disciplinary power of elites in society is exchangeable with the disciplinary power of legislation and legal code. Raz explains that legal code is perceived by society as the custodian of public order. He further explains that by precluding objectionable actions, legislation directs society’s activities in a manner appropriate to jurisprudence. Nevertheless, Raz did not demonstrate how legislation impacts on personal actions. This is where Foucault’s theories on discipline and power come in. According to Foucault the disciplinary power of legislation leads to a self-discipline of individuals (Foucault, 1995). Foucault argues that the institutions of courts and judges motivate such a self-disciplining of individuals (Chen, 2017), and that self-disciplining rules serve “more and more as a norm” (Foucault, 1981, p. 144).

Foucault’s theories are especially helpful in discussing how the “rule of truth” has disciplined civilisation and how power elites, as institutions, push through an adoption of AI technologies which seems to benefit mainly the upper class. Discussions around truth, Foucault states, form legislation into something that “decides, transmits and itself extends upon the effects of power” (Foucault, 1986, p. 230). Foucault’s theories help to explain how legislation, as an institution, is rolled out throughout society with very little resistance, or “proletarian counter-justice” (Foucault, 1980b, p. 34). Foucault explains that this has made the justice system and legislation a for-profit system. With this understanding of legislation and social justice, one needs to reflect further on Foucault’s notion of how disciplinary power seeks to express its distributed nature in the modern state. Namely, one has to analyse the distributed nature of those AI technologies, especially through networks and protocols, so that the link can be made to AI technologies becoming “legally” more profitable in the hands of the upper class.

If power generates new opportunities rather than simply repressing them, then, following Michel Foucault (1980a), more interaction and participation can extend, and not simply challenge, power relations. Foucault’s text The Subject and Power (1982) offers a valuable insight into power relationships relevant also to AI technologies. It is the product of research undertaken by Foucault over a period of more than twenty years. Foucault uses the metaphor of a chemical catalyst for a resistance which can bring to light power relationships, and thus allow an analysis of the methods this power uses: “[r]ather than analysing power from the point of view of its internal rationality, it consists of analysing power relations through the antagonism of strategies” (1982, p. 780).

In Protocol, Galloway describes how these protocols changed the notion of power and how “control exists after decentralization” (2004, p. 81). Galloway argues that protocol has a close connection to both Deleuze’s concept of ‘control’ and Foucault’s concept of biopolitics (Foucault, 2008), claiming that the key to perceiving protocol as power is to acknowledge that “protocol is an affective, aesthetic force that has control over life itself” (2004, p. 81). Galloway suggests (2004, p. 147) that it is important to discuss more than the technologies, and to look into the structures of control within technological systems, which also include underlying codes and protocols, in order to distinguish between methods that can support collective production, e.g. the sharing of AI technologies within society, and those that put AI technologies in the hands of the powerful few. Galloway’s argument in the chapter Hacking (2004, p. 146) is that the existence of protocols “not only installs control into a terrain that on its surface appears actively to resist it”, but goes on to create the highly controlled network environment. For Galloway hacking is “an index of protocological transformations taking place in the broader world of techno-culture” (2004, p. 157).

In order to be able to regulate networks and AI technologies, control and censorship mechanisms are introduced to networks by applying them to devices and nodes. This form of surveillance, or dataveillance, might constitute a development akin to Michel Foucault’s concept of “panopticism” (1977), a “panoptic apparatus” (Zimmer, 2009, p. 5), defined as both the massive collection and storage of vast quantities of personal data and the systematic use of such data in the investigation or monitoring of one or more persons. Laws and agreements like the Anti-Counterfeiting Trade Agreement (European Commission, 2007; Lambert, 2010), the Digital Economy Act and the Digital Millennium Copyright Act require surveillance of the AI technologies that consumers use in their “private spheres” (Fuchs, 2009; Medosch, 2010; Wolf, 2003), and can be used to silence “critical voices” (Movius, 2009). The censorship of truth, and the creation of fear of law through moral panics, stand in opposition to the development of a healthy democratic use of AI technologies. Issues regarding the ethics of AI (Berkman Klein Center, 2018; Clark, 2018; Green, 2017; Lufkin, 2017) arise from this debate.

Fitzpatrick expands on Foucault’s theory, investigating the “symbiotic link between the rule of law and modern administration” (Fitzpatrick, 2002, p. 147). Fitzpatrick states that legal code is not only a consequence of disciplinary power, but that it also legalises dubious scientific experiments. Here again one can make the link to ethically questionable advances in AI technologies. Legislation, or legal code, Fitzpatrick argues, corrects “the disturbance of things in their course and reassert the nature of things” (ibid., p. 160). For Fitzpatrick legislation is not an all-embracing, comprehensive concept as argued by Dworkin (1986) and Hart (1961); rather, legislation is defined by elites. For Fitzpatrick legislation “changes as society changes and it can even disappear when the social conditions that created it disappear or when they change into conditions antithetical to it” (Fitzpatrick, 2002, p. 6). Furthermore, West (1993) suggests that the impact of disciplinary power through legislation on the belief system of individuals does not allow for an analytical, critical engagement by individuals with the issues at stake. Legislation is simply regarded as given. In relation to the disciplinary power of AI technologies, issues with privacy, defamation and intellectual property laws are not being questioned. Nevertheless, West’s assumption that all individuals adhere to equivalent morals is improbable.

Adams and Brownsword (2006) give a more nuanced view of contemporary legislation. They argue that legislation aims to institute public order: it sets up authoritative mechanisms whereby social order can be established and maintained, social change managed, disputes settled, and policies and goals for the community adopted (ibid., p. 11). Adams and Brownsword go on to argue that legal code is skewed in favour of the upper class and those who engage more with politics in society – examples of which could be the corporate sector producing AI technologies and business elites seeking to use AI technologies for profit. According to Adams and Brownsword there seems to be no unbiased, fair legislation or legal code, and the maintenance of public order must simply reproduce an unfair class society. If this is the case, following Adams and Brownsword’s argumentation, one can argue that the adoption of AI technologies does not follow a utilitarian ethical code benefiting society, but rather conforms to the interests of a small group: those owning AI technologies.

A further discussion of disciplinary power within the process of writing legal code is that of Chambliss and Seidman (1982), who argue that legislation is not produced through a process characterised by balanced, fair development, but rather by powerful elites writing legal code by themselves. Translating this again back to the adoption of AI technologies, it becomes evident that the freedom to engage with those technologies is left to those who have the financial means, and with it the legal means, to do so. According to Chambliss and Seidman, in a culture dominated by economics, legislation and technologies are being outlined and modelled by those powerful elites.

The analysis of the theories above has attempted to show that the implementation of AI technologies might be construed as a project deriving from, and serving the interests of, the dominant class; following Foucault’s terminology, this is achieved using the disciplinary power of legislation, through regimes of truths, over individuals.

AI technologies, rather than benefiting society, could very well be implemented against society. The implementation of AI technologies follows legislation set out by elites, raising issues connected with privacy, national security, or intellectual property laws. On this note, Crook states that “there is the risk that their decisions are based on profit and loss rather than truth/justice and freedom of expression” (Crook, 2009, p. 94).

AI technologies and Restorative Justice: The Ethics of Care

Having said this, the prospect could be raised that restorative justice might offer “a solution that could deliver more meaningful justice” (Crook, 2009, p. 310). With respect to AI technologies, and the potential inherent in them for AI crimes, instead of following a retributive legislative approach, an ethical discourse (Courtland, 2018) with a deeper consideration for the sufferers of AI crimes (Fry, 2018) should be adopted. That said, acting ethically is more difficult than ever (Ito, 2017), due to the hyper-expansion of big data and artificial intelligence (Bridle, 2018; Singh, 2018). Research into artificial intelligence has gone from being a public service undertaken mainly at universities to being run (and regarded) as a business, dominated by big corporations such as Alphabet (parent company of Google) and Facebook, created to generate profit (Keeble, 2008). These companies need to attract a large number of paying customers. AI technologies have become workers in the market economy, rarely following any ethical guidelines (Kieran, 1998). One can ask: could restorative justice offer an alternative way of dealing with the occurrence of AI crimes (Etzioni, 2018; Goel, 2017)?

Miller and Vidmar described two psychological perceptions of justice (Vidmar & Miller, 1980). The first is behavioural control, following the legal code as strictly as possible and punishing any wrongdoer (Wenzel & Okimoto, 2010); the second is restorative justice, which focuses on restoration where harm has been done. Thus an alternative approach to the ethical implementation of AI technologies, with respect to legislation, might be to follow restorative justice principles. Restorative justice would allow for AI technologies to learn how to care about ethics (Bostrom & Yudkowsky, 2014; Frankish & Ramsey, 2014). Fionda (2005) describes restorative justice as a conciliation between victim and offender, during which the offence is deliberated upon. Both parties try to come to an agreement on how to restore the situation to that before the crime (here an AI crime) happened. Restorative justice advocates compassion for victim and offender alike, and a consciousness on the part of the offenders as to the repercussions of their crimes. Tocqueville argued that in order to live in liberty, “it is necessary to submit to the inevitable evils which it engenders” (Tocqueville, 2004).

One can argue that these evils are becoming more evident nowadays with the advance of AI technologies. For AI crimes, punishment in the classical sense may seem adequate (Montti, 2018). Duff (2003) argues that using a punitive approach to punish offences educates the public. Wenzel and Okimoto (2010) refer to Durkheim’s studies on the social function of punishment (Durkheim, 1960), which serves to establish a societal awareness of what ought to be right or wrong. Christie (1977b), however, criticises this form of execution of the law. He argues that, through conflict, there is the potential to discuss the rules given by law, allowing for a restorative process rather than a process characterised by punishment and a strict following of rules. Christie states that those suffering most from crimes suffer twice: although it is the offenders who are put on trial, the victims have very little say in courtroom hearings, where mainly lawyers argue with one another and the matter boils down to guilty or not guilty, with no discussion in between. Christie argues that running restorative conferencing sessions helps both sides to come to terms with what happened. The victims of AI crimes would then not only be placed in front of a court, but also be offered engagement in the process of seeking justice and restoration.

Restorative justice might support victims of AI crimes better than the punitive legal system, as it allows the sufferers of AI crimes to be heard in a personalised way, one which could be adapted to the needs of the victims (and offenders). As victims and offenders represent themselves in restorative conferencing sessions, these become much more affordable (Braithwaite, 2003), meaning that the financial barrier to seeking justice would be partly eliminated, allowing poorer parties to contribute to the process of justice. This would benefit wider society, and AI technologies would not be defined only by a powerful elite. Restorative justice could hold the potential not only to discuss the AI crimes themselves, but also to get to the root of the problem and discuss the cause of an AI crime. For Braithwaite (1989) restorative justice makes re-offending harder.

In such a scenario, a future AI system capable of committing AI crimes would need to have knowledge of the ethics around the particular discourse of restorative justice. The implementation of AI technologies will lead to a discourse (Sample, 2018b) around who is responsible for actions taken by AI technologies. Even when considering clearly defined ethical guidelines, these might be difficult to implement (Conn, 2017), due to the competitive pressure AI systems find themselves under.

That said, this speculation is restricted to humanised artificial intelligence systems being able to take part in a restorative justice process through the very human emotion of shame. Without a clear understanding of shame (Rawnsley, 2018) it will be impossible to resolve AI crimes in a restorative manner. Thus one might want to think about a humanised, cyborgian (Haraway, 1985; Thompson, 2010) proposal of a symbiosis between humans and technology, along the lines of Kasparov’s advanced chess (Hipp et al., 2011): an advanced jurisprudence (Baggini, 2018), a legal system in which human and machine work together on restoring justice, for social justice.

Competing Interests

The author has no competing interests to declare.

Author Information

Adnan Hadzi is currently working as a resident academic in the Department of Digital Arts at the Faculty of Media and Knowledge Sciences, University of Malta. Hadzi has been a regular at Deckspace Media Lab for the last decade, a period over which he has developed his research at Goldsmiths, University of London, based on his work with Deptford.TV, a collaborative video editing service hosted in Deckspace’s racks, based on free and open source software and compiled into a unique suite of blog, CVS, film database and compositing tools.

Hadzi is co-editing and producing the after.video video book, exploring video as theory and reflecting upon networked video as it profoundly re-shapes medial patterns (YouTube, citizen journalism, video surveillance, etc.).

Hadzi’s documentary film work tracks artist pranksters The Yes Men and !Mediengruppe Bitnik. Bitnik is a collective of contemporary artists working on and with the Internet. Bitnik’s practice expands from the digital to affect physical spaces, often intentionally applying loss of control to challenge established structures and mechanisms. Bitnik’s works formulate fundamental questions concerning contemporary issues.

References

Adams, J N and Brownsword, R 2006 Understanding Law. New York: Sweet & Maxwell.

Aquinas, T 2006 Summa Theologiae: Volume 33, Hope: 2a2ae. 17–22. Cambridge: Cambridge University Press.

Aristotle and Saunders, T J 1981 The Politics. London: Penguin UK.

Austin, J 1998 The Province of Jurisprudence Determined: And, The Uses of the Study of Jurisprudence. Hackett Publishing.

Baggini, J 2018, July 8 Memo to those seeking to live for ever: eternal life would be deathly dull. The Guardian. Retrieved from: https://web.archive.org/web/20181225111455/https://www.theguardian.com/commentisfree/2018/jul/08/live-for-ever-eternal-life-deathly-dull-immortality.

Baier, A 1988 Pilgrim’s Progress: Review of David Gauthier, Morals by Agreement. Canadian Journal of Philosophy, 18(2): 315–330. DOI:  http://doi.org/10.1080/00455091.1988.10717179

Baier, A 1995 Moral Prejudices. Cambridge, MA: Harvard University Press.

Bergner, R 2010 The Tolstoy Dilemma: A Paradigm Case Formulation and Some Therapeutic Interventions. Advances in Descriptive Psychology, 9. Retrieved from: http://www.sdp.org/sdppubs-publications/advances-in-descriptive-psychology-vol-9/.

Berkman Klein Center 2018 Ethics and Governance of AI. Retrieved 22 September 2018, from: https://cyber.harvard.edu/topics/ethics-and-governance-ai.

Bostrom, N 2014 Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

Bostrom, N and Yudkowsky, E 2014 The ethics of artificial intelligence. The Cambridge Handbook of Artificial Intelligence, 316: 334. DOI:  http://doi.org/10.1017/CBO9781139046855.020

Bragg, M 2008, February 7 The Social Contract. In our Time. London: BBC 4. Retrieved from: http://www.bbc.co.uk/radio4/history/inourtime/inourtime_20080207.shtml.

Braithwaite, J 1989 Crime, Shame and Reintegration. Cambridge: Cambridge University Press. DOI:  http://doi.org/10.1017/CBO9780511804618

Braithwaite, J 2003 Restorative Justice and a Better Future. In: McLaughlin, E and Hughes, G (eds.), Restorative Justice: Critical Issues, 54–67. London: SAGE.

Bridle, J 2018, June 15 Rise of the machines: has technology evolved beyond our control? The Guardian. Retrieved from: https://web.archive.org/web/20190111222310/https://www.theguardian.com/books/2018/jun/15/rise-of-the-machines-has-technology-evolved-beyond-our-control-.

Brooks, R 1986 A robust layered control system for a mobile robot. IEEE Journal on Robotics and Automation, 2(1): 14–23. DOI:  http://doi.org/10.1109/JRA.1986.1087032

Brooks, R 1999 Cambrian Intelligence: The Early History of the New AI (1 edition). Cambridge, Mass: A Bradford Book.

Busby, M 2018, August 21 Use of ‘killer robots’ in wars would breach law, say campaigners. The Guardian. Retrieved from: https://web.archive.org/web/20181203074423/https://www.theguardian.com/science/2018/aug/21/use-of-killer-robots-in-wars-would-breach-law-say-campaigners.

Cadwalladr, C 2018, July 15 Elizabeth Denham: ‘Data crimes are real crimes’. The Guardian. Retrieved from: https://web.archive.org/web/20181121235057/https://www.theguardian.com/uk-news/2018/jul/15/elizabeth-denham-data-protection-information-commissioner-facebook-cambridge-analytica.

Chambliss, W J and Seidman, R B 1982 Law, Order, and Power. London: Addison-Wesley Publishing Company.

Chaslot, G 2016, November 27 YouTube’s A.I. was divisive in the US presidential election. Retrieved 25 February 2018, from: https://medium.com/the-graph/youtubes-ai-is-neutral-towards-clicks-but-is-biased-towards-people-and-ideas-3a2f643dea9a#.tjuusil7d.

Chen, S 2017, September 18 AI Research Is in Desperate Need of an Ethical Watchdog. Wired. Retrieved from: https://www.wired.com/story/ai-research-is-in-desperate-need-of-an-ethical-watchdog/.

Christie, N 1977a Conflicts as Property. The British Journal of Criminology, 17(1): 1–15. DOI:  http://doi.org/10.1093/oxfordjournals.bjc.a046783

Christie, N 1977b Conflicts as Property. The British Journal of Criminology, 17(1): 1–15. DOI:  http://doi.org/10.1093/oxfordjournals.bjc.a046783

Clark, J 2018 AI and Ethics: People, Robots and Society. Retrieved 22 September 2018, from: http://www.washingtonpost.com/video/postlive/ai-and-ethics-people-robots-and-society/2018/03/20/ffdff6c2-2c5a-11e8-8dc9-3b51e028b845_video.html.

Conn, A 2017, March 31 Podcast: Law and Ethics of Artificial Intelligence. Retrieved 22 September 2018, from: https://futureoflife.org/2017/03/31/podcast-law-ethics-artificial-intelligence/.

Courtland, R 2018, June 20 Bias detectives: the researchers striving to make algorithms fair [News]. DOI:  http://doi.org/10.1038/d41586-018-05469-3.

Crevier, D 1993 AI: the tumultuous history of the search for artificial intelligence. New York: Basic Books.

Crook, T 2009 Comparative Media Law and Ethics. London: Routledge. DOI:  http://doi.org/10.4324/9780203865965

Crook, T 2010 Power, Intelligence, Whistle-blowing and the Contingency of History. In: History’s first draft? Journalism, PR and the problems of truth-telling. London: Goldsmiths, University of London. Retrieved from: https://www.gold.ac.uk/media-communications/staff/crook/.

Danaher, J 2016 The Threat of Algocracy: Reality, Resistance and Accommodation. Philosophy & Technology, 29(3): 245–268. DOI:  http://doi.org/10.1007/s13347-015-0211-1

Dart, G 2005 Rousseau, Robespierre and English Romanticism. Cambridge: Cambridge University Press.

Debian 2004 Debian Social Contract. Retrieved 26 February 2009, from: http://www.debian.org/social_contract.

de Tocqueville, A 2004 Democracy in America. Washington, DC: Library of America.

Duff, R A 2003 Punishment, Communication, and Community. Oxford: Oxford University Press.

Durkheim, E 1960 The Rules of Sociological Method. New Delhi: Vani Prakashan.

Dworkin, R 1986 A Matter of Principle. Oxford: Clarendon Press.

Electronic Frontier Foundation 2009 Takedown Hall Of Shame. Retrieved 2 November 2009, from: http://www.eff.org/takedowns.

Etzioni, O 2018, January 20 How to Regulate Artificial Intelligence. The New York Times. Retrieved from: https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html.

European Commission 2007 The Anti-Counterfeiting Trade Agreement (ACTA). Retrieved 30 December 2010, from: http://ec.europa.eu/trade/creating-opportunities/trade-topics/intellectual-property/anti-counterfeiting/.

Fionda, J 2005 Devils and Angels: Youth Policy and Crime. London: Hart.

Fitzpatrick, P 2002 The Mythology of Modern Law. London: Routledge. DOI:  http://doi.org/10.4324/9780203162125

Foucault, M 1977 Discipline and punish: the birth of the prison. New York: Pantheon.

Foucault, M 1980a Power, Gordon, C (ed.). London: Penguin.

Foucault, M 1980b Power/Knowledge: Selected Interviews and Other Writings, 1972–1977. London: Pantheon Books.

Foucault, M 1981 The History of Sexuality, Volume I. Harmondsworth: Penguin, repr.

Foucault, M 1982 The Subject and Power. Critical Inquiry, 8(4): 777–795. DOI:  http://doi.org/10.1086/448181

Foucault, M 1986 Disciplinary Power and Subjection. In: Lukes, S (ed.), Power. New York: NYU Press.

Foucault, M 1995 Discipline and Punish: The Birth of the Prison. London: Vintage Books.

Foucault, M 2008 The Birth of Biopolitics: Lectures at the Collège de France, 1978–1979. London: Pan Macmillan.

Frankish, K and Ramsey, W M 2014 The Cambridge Handbook of Artificial Intelligence. Cambridge: Cambridge University Press. DOI:  http://doi.org/10.1017/CBO9781139046855

Friend, C 2004 Social Contract Theory. Retrieved 19 October 2009, from: http://www.iep.utm.edu/soc-cont/.

Fromm, E 1956 The Art of Loving. New York: Harper & Row.

Fry, H 2018, September 17 We hold people with power to account. Why not algorithms? The Guardian. Retrieved from: https://web.archive.org/web/20190102194739/https://www.theguardian.com/commentisfree/2018/sep/17/power-algorithms-technology-regulate.

Fuchs, C 2009 Social Networking Sites and the Surveillance Society. Vienna, Austria: Verein zur Förderung der Integration der Informationswissenschaften.

Galbraith, J K 2015 The New Industrial State. Oxford: Princeton University Press.

Galloway, A R 2004 Protocol: how control exists after decentralization. Cambridge, MA: MIT Press. DOI:  http://doi.org/10.7551/mitpress/5658.001.0001

Gauthier, D 1986 Morals by agreement. Alderley: Clarendon Press.

Goel, A 2017, December 22 Ethics and Artificial Intelligence. The New York Times. Retrieved from: https://www.nytimes.com/2017/09/14/opinion/artificial-intelligence.html.

Goodman, J 2016 Robots in Law: How Artificial Intelligence is Transforming Legal Services. ARK Group.

Green, P 2017 Artificial Intelligence and Ethics. Retrieved 22 September 2018, from: https://www.scu.edu/ethics/all-about-ethics/artificial-intelligence-and-ethics/.

Haddadin, S 2013 Towards Safe Robots: Approaching Asimov’s 1st Law. London: Springer.

Hallevy, G 2013 When Robots Kill: Artificial Intelligence Under Criminal Law. London: UPNE.

Hampsher-Monk, I 1992 A History of Modern Political Thought. New York: Wiley-Blackwell.

Haraway, D 1985 A Cyborg Manifesto. Socialist Review, 15(2). Retrieved from: http://www.stanford.edu/dept/HPS/Haraway/CyborgManifesto.html.

Harris, M 2018, April 23 Glitch Capitalism: How Cheating AIs Explain Our Stagnant Present. Retrieved 16 May 2018, from: http://nymag.com/selectall/2018/04/malcolm-harris-on-glitch-capitalism-and-ai-logic.html.

Hart, H L A 1961 The concept of law. Oxford: Oxford University Press.

Held, V 1993 Feminist Morality. Chicago, IL: University of Chicago Press.

Held, V 2006 The Ethics of Care. New York: Oxford University Press US.

Hipp, J, Flotte, T, Monaco, J, Cheng, J, Madabhushi, A, Yagi, Y, Balis, U J, et al. 2011 Computer aided diagnostic tools aim to empower rather than replace pathologists: Lessons learned from computational chess. Journal of Pathology Informatics, 2. DOI:  http://doi.org/10.4103/2153-3539.82050

Hobbes, T 1651 Leviathan. London: Andrew & William Crooke, St. Paul’s Churchyard. Retrieved from: http://www.gutenberg.org/files/3207/3207-h/3207-h.htm.

Hoffman, M and Pfeifer, R 2015 The Implications of Embodiment for Behavior and Cognition: Animal and Robotic Case Studies. In: Tschacher, W and Bergomi, C (eds.), The Implications of Embodiment: Cognition and Communication. Exeter: Andrews UK Limited. Retrieved from: https://arxiv.org/abs/1202.0440.

Ito, J 2017 Resisting Reduction: A Manifesto. Journal of Design and Science. DOI:  http://doi.org/10.21428/8f7503e4

Jeffrey, J 1990 Knowledge engineering: Theory and practice. Society for Descriptive Psychology, 5: 105–122.

Kaplan, A and Haenlein, M 2019 Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1): 15–25. DOI:  http://doi.org/10.1016/j.bushor.2018.08.004

Keeble, R 2008 Ethics for Journalists. London: Routledge. DOI:  http://doi.org/10.4324/9780203698822

Kelsen, H 1967 Pure Theory of Law. Los Angeles: University of California Press.

Kelsen, H 1991 General theory of norms. Oxford: Clarendon Press. DOI:  http://doi.org/10.1093/acprof:oso/9780198252177.001.0001

Kelsen, H 2009 General Theory of Law and State. New Jersey: The Lawbook Exchange, Ltd.

Kieran, M 1998 Media Ethics. London: Psychology Press.

King, T, Aggarwal, N, Taddeo, M and Floridi, L 2018 Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions (SSRN Scholarly Paper No. ID 3183238). Rochester, NY: Social Science Research Network. Retrieved from: https://papers.ssrn.com/abstract=3183238.

Knight, W 2017, October 3 Google’s AI chief says forget Elon Musk’s killer robots, and worry about bias in AI systems instead. Retrieved 8 January 2019, from: https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/.

Lambert, J 2010, November 24 Statement on adoption of joint resolution on ACTA. Retrieved 15 December 2010, from: http://www.jeanlambertmep.org.uk/news_detail.php?id=620.

Laungani, P 2002 Mindless psychiatry and dubious ethics. Counselling Psychology Quarterly, 15(1): 23–33. DOI:  http://doi.org/10.1080/09515070110102305

Legg, S and Hutter, M 2007 A Collection of Definitions of Intelligence. Lugano, Switzerland: IDSIA. Retrieved from: http://arxiv.org/abs/0706.3639.

Locke, J 1993 Political Writings. London: Mentor.

Lufkin, B 2017 Why the biggest challenge facing AI is an ethical one. Retrieved 22 September 2018, from: http://www.bbc.com/future/story/20170307-the-ethical-challenge-facing-artificial-intelligence.

MacIntyre, A 2001 Dependent Rational Animals: Why Human Beings Need the Virtues (Revised edition). Chicago: Open Court.

MacIntyre, A 2013 After Virtue. London: A&C Black.

Marx, K 1990 Capital Vol 1. London: Penguin Books Limited.

Medosch, A 2010, January 15 Post-Privacy or the Politics of Labour, Intelligence and Information. Retrieved 19 January 2010, from: http://thenextlayer.org/node/1237.

Midgley, M 2010 Fellow Champions Dolphins as “Non-Human Persons”. Retrieved 8 January 2019, from: https://www.oxfordanimalethics.com/2010/01/fellow-champions-dolphins-as-%E2%80%9Cnon-human-persons%E2%80%9D/.

Mills, C W 1997 The Racial Contract. Ithaca, NY: Cornell University Press.

Montti, R 2018, May 20 Google’s ‘Don’t Be Evil’ No Longer Prefaces Code of Conduct. Retrieved 22 September 2018, from: https://www.searchenginejournal.com/google-dont-be-evil/254019/.

Moor, J 2003 The Turing Test: The Elusive Standard of Artificial Intelligence. New York: Springer Science & Business Media. DOI:  http://doi.org/10.1007/978-94-010-0105-2

Morozov, E 2018 The Geopolitics Of Artificial Intelligence. London: Nesta. Retrieved from: https://www.youtube.com/watch?v=7g0hx9LPBq8.

Mountain, M 2013, December 2 Lawsuit Filed Today on Behalf of Chimpanzee Seeking Legal Personhood. Retrieved 8 January 2019, from: https://www.nonhumanrights.org/blog/lawsuit-filed-today-on-behalf-of-chimpanzee-seeking-legal-personhood/.

Movius, L B 2009 Surveillance, Control, and Privacy on the Internet: Challenges to Democratic Communication. Journal of Global Communication, 2(1): 209–224.

Newquist, H P 1994 The Brain Makers (1st edition). Indianapolis, Ind: Sams.

Nilsson, N J 2009 The Quest for Artificial Intelligence. Cambridge: Cambridge University Press. DOI:  http://doi.org/10.1017/CBO9780511819346

Olivier, B 2012 Cyberspace, simulation, artificial intelligence, affectionate machines and being human. Communicatio, 38(3): 261–278. DOI:  http://doi.org/10.1080/02500167.2012.716763

Ossorio, P G 1995 Persons: The Collected Works of Peter G. Ossorio, Volume I. Ann Arbor Mich.: Descriptive Psychology Press. Retrieved from: http://www.sdp.org/sdppubs-publications/persons-the-collected-works-of-peter-g-ossorio-volume-1/.

Ossorio, P G 2013 The Behavior of Persons. Ann Arbor Mich.: Descriptive Psychology Press. Retrieved from: http://www.sdp.org/sdppubs-publications/the-behavior-of-persons/.

Pagallo, U 2013 The Laws of Robots: Crimes, Contracts, and Torts. Springer Science & Business Media. DOI:  http://doi.org/10.1007/978-94-007-6564-1

Paradowski, M B 2011, November Developing Embodied Multisensory Dialogue Agents. Presented at the AISB/IACAP 2012 Symposium. Birmingham. Retrieved from: http://events.cs.bham.ac.uk/turing12/.

Parikh, P 2017, October 21 On Liberalism and Neoliberalism. Retrieved 4 January 2019, from: https://medium.com/@pparikh1/on-liberalism-and-neoliberalism-5946523aa2ca.

Pateman, C 1988 The sexual contract. Palo Alto, CA: Stanford University Press.

Pfeifer, R 2002, November Embodied Artificial Intelligence. Presented at the International interdisciplinary seminar on new robotics, evolution and embodied cognition. Lisbon. Retrieved from: https://www.informatics.indiana.edu/rocha/publications/embrob/pfeifer.html.

Poole, S 2018, September 20 Arabic, algae and AI: the truth about ‘algorithms’. The Guardian. Retrieved from: https://web.archive.org/web/20181119100303/https://www.theguardian.com/books/2018/sep/20/from-arabic-to-algae-like-ai-the-alarming-rise-of-the-algorithm-.

Pueyo, S 2018 Growth, degrowth, and the challenge of artificial superintelligence. Journal of Cleaner Production, 197: 1731–1736. DOI:  http://doi.org/10.1016/j.jclepro.2016.12.138

Putman, A 1990 Artificial persons. Advances in Descriptive Psychology, 5: 81–104.

Rawls, J 2005 A theory of justice. Cambridge, MA: Harvard University Press.

Rawnsley, A 2018, July 8 Madeleine Albright: ‘The things that are happening are genuinely, seriously bad’. The Guardian. Retrieved from: https://web.archive.org/web/20190106193657/https://www.theguardian.com/books/2018/jul/08/madeleine-albright-fascism-is-not-an-ideology-its-a-method-interview-fascism-a-warning.

Raz, J 2009 The Authority of Law: Essays on Law and Morality. Oxford: OUP Oxford.

Renzenbrink, T 2012, February 9 Embodiment of Artificial Intelligence Improves Cognition. Retrieved 10 January 2019, from: https://www.elektormagazine.com/articles/embodiment-of-artificial-intelligence-improves-cognition.

Ristroph, G 2009 Debian’s Democracy. In: Davies, T and Gangadharan, S P (eds.), Online Deliberation: Design, Research and Practice, 207–212. Chicago, Illinois, USA: Center for the Study of Language. Retrieved from: http://www.press.uchicago.edu/presssite/metadata.epl?mode=synopsis&bookkey=5667101.

Roio, D 2018 Algorithmic Sovereignty (Thesis). University of Plymouth. Retrieved from: https://pearl.plymouth.ac.uk/handle/10026.1/11101.

Roland, J 1994 The Social Contract and Constitutional Republics. Retrieved 19 October 2009, from: http://www.constitution.org/soclcont.htm.

Rousseau, J J 1968 The Social Contract. London: Penguin Classics. Retrieved from: http://www.constitution.org/jjr/socon.htm.

Sample, I 2018a, July 18 Thousands of scientists pledge not to help build killer AI robots. The Guardian. Retrieved from: http://www.theguardian.com/science/2018/jul/18/thousands-of-scientists-pledge-not-to-help-build-killer-ai-robots.

Sample, I 2018b, December 7 Technologist Vivienne Ming: ‘AI is a human right’. The Guardian. Retrieved from: https://web.archive.org/web/20190111125507/https://www.theguardian.com/technology/2018/dec/07/technologist-vivienne-ming-ai-inequality-silicon-valley.

Schwartz, W 1982 The Problem of Other Possible Persons: Dolphins, Primates and Aliens (SSRN Scholarly Paper No. ID 2402230). Rochester, NY: Social Science Research Network. Retrieved from: https://papers.ssrn.com/abstract=2402230.

Schwartz, W 1984 The two concepts of action and responsibility in psychoanalysis. Journal of the American Psychoanalytic Association, 32(3): 557–572. DOI:  http://doi.org/10.1177/000306518403200306

Schwartz, W 2014 What Is a Person and How Can We Be Sure? A Paradigm Case Formulation (SSRN Scholarly Paper No. ID 2511486). Rochester, NY: Social Science Research Network. Retrieved from: https://papers.ssrn.com/abstract=2511486.

Singh, P J 2018, July 27 AI superpower or client nation? The Hindu. Retrieved from: https://www.thehindu.com/opinion/op-ed/ai-superpower-or-client-nation/article24523017.ece.

Smith, A 2018, August 30 Franken-algorithms: the deadly consequences of unpredictable code. The Guardian. Retrieved from: https://web.archive.org/web/20190105054549/https://www.theguardian.com/technology/2018/aug/29/coding-algorithms-frankenalgos-program-danger.

Steels, L and Brooks, R 1995 The Artificial Life Route to Artificial Intelligence: Building Embodied, Situated Agents. London, New York: Taylor & Francis.

Sumner, C 1979 Reading ideologies: an investigation into the Marxist theory of ideology and law. London: Academic Press.

Thompson, C 2010, March 22 The Cyborg Advantage. Wired, 18(4). Retrieved from: https://www.wired.com/2010/03/st-thompson-cyborgs/.

Torvalds, L and Diamond, D 2002 Just for Fun: The Story of an Accidental Revolutionary. New York: HarperCollins.

Turing, A M 1950 Computing Machinery and Intelligence. Mind, 59(236): 433–460. DOI:  http://doi.org/10.1093/mind/LIX.236.433

Vidmar, N and Miller, D T 1980 Social psychological processes underlying attitudes toward legal punishment. Law and Society Review, 565–602. DOI:  http://doi.org/10.2307/3053193

Weber, M 1978 Economy and Society: An Outline of Interpretive Sociology. Los Angeles: University of California Press.

Wenzel, M and Okimoto, T G 2010 How acts of forgiveness restore a sense of justice: Addressing status/power and value concerns raised by transgressions. European Journal of Social Psychology, 40(3): 401–417. DOI:  http://doi.org/10.1002/ejsp.629

West, R 1993 Narrative, Authority, and Law. Michigan, MI: University of Michigan Press.

Wilson, E A 2011 Affect and Artificial Intelligence. Washington: University of Washington Press.

Wolf, C 2003 The Digital Millennium Copyright Act. Washington: Pike & Fischer – A BNA Company.

Zarkadakis, G 2018, May 6 Artificial Intelligence & Embodiment: does Alexa have a body? Retrieved 10 January 2019, from: https://medium.com/@georgezarkadakis/artificial-intelligence-embodiment-does-alexa-have-a-body-d5b97521a201.

Zimmer, M 2009, July 1 The panoptic gaze of web 2.0: How Web 2.0 Platforms act as Infrastructure of Dataveillance. Kulturpolitik, 2. Retrieved from: http://michaelzimmer.org/files/Zimmer%20Aalborg%20talk.pdf.