Blame the Tool, or the Tool User?
A long time ago and far away, a Captain in the British Army retired from service and moved to an estate near Ballinrobe in Ireland. His name was Charles, and he moved into Lough Mask House, a property owned by a wealthy landowner, Lord Erne (the third Earl Erne, as it happens), becoming the agent for Erne’s properties. Charles oversaw the Lord’s properties in County Mayo and collected rents. Lord Erne and other landlords of the time owned something like 99.8% of all the land and property in Ireland. The Irish were not entirely happy with this arrangement.
By all accounts, Charles, our retired captain, was a bit of a control freak and sought to enforce the ‘divine rights’ of the landowners, in particular those of Lord Erne, through increasingly strict rules, financial penalties for minor infractions, and forced evictions. Eventually, the good people of County Mayo went on strike and began a campaign to effectively ostracize Charles from the community in which he lived. James Redpath called it “social excommunication”. The captain’s full name, I should probably mention, was Charles Cunningham Boycott. It’s from him that we get the word “boycott”.
Over the decades and into the 21st century, boycotts have become increasingly complex affairs. As large corporations swallow up smaller companies that have themselves swallowed up others, the trail of who owns whom becomes murkier all the time. You may decide to boycott one particular product, but that product may be owned by a company that is owned by a shell corporation that is itself owned by a numbered corporation in a foreign country.
The idea of the boycott has also entered the world of free software, in the sense that some have been trying to punish companies whose business practices don’t fall in line with a particular social ideology. A few years ago, there was a spate of articles about the creators of particular open source packages fighting against the companies that used them; one case involved a developer whose software was being used by ICE (Immigration and Customs Enforcement) in the United States.
Do we blame the tool, or do we blame the tool user? Perhaps we shouldn’t blame either. After all, how are we to know how a particular tool is going to be used? It’s pretty obvious that a gun’s purpose, for instance, is to kill, but guns have also been used to hunt and feed families, and to defend and protect life. When our free software is being used by a company to sell to an organisation that collects children at the border and locks them in cages, can the developer of that software decide to recall their software or block its use?
Today, we have another tool that's making the headlines, with people lining up to limit or ban its use in various scenarios. You guessed it; I'm talking about artificial intelligence (AI). As AI permeates more of our lives, the question of responsibility becomes more pressing. Do we blame the AI, or do we blame its creators and users? Perhaps, as with open source software, we shouldn't blame either outright. The ethical conundrum is similar: how can we foresee every use of a tool we create?
Consider AI's role in various sectors: healthcare, finance, criminal justice, and more. Its potential for societal good is immense, yet so is its capacity for misuse. AI systems used in predictive policing or in determining loan eligibility could perpetuate biases if not carefully designed and monitored. This is not unlike the dilemma faced by open source software developers: when our creation is used in ways that contradict our values, what is our responsibility?
Consider OpenSSL, the library that implements the secure sockets layer (SSL/TLS) used to encrypt web traffic and digital communications. This interesting piece of technology is basically the backbone of the entire internet, and of e-commerce as we know it. If we were to try, in advance, to decide whether a company was worthy of being allowed to use this ubiquitous piece of software, it would be a daunting task indeed. By what means would you determine whether a company was acting in the public good, for instance? Are the products being sold dangerous to minors? Have they been tested in a way that makes sure they’re completely safe? Does the labour employed to produce the product being sold on our open source powered website exploit under-aged children working in sweatshop conditions in some distant under-developed country?
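To make the point about ubiquity concrete, here is a minimal sketch, assuming a Python interpreter whose standard ssl module is linked against OpenSSL (the usual case) and using www.example.com purely as a placeholder host. It opens an ordinary TLS connection and reports which OpenSSL build did the work.

```python
# A small illustration of how quietly OpenSSL underpins everyday traffic.
# Assumes a Python interpreter whose ssl module is linked against OpenSSL
# (the usual case); www.example.com is just a placeholder host.
import socket
import ssl

# Report which TLS library this interpreter was built against.
print("TLS library in use:", ssl.OPENSSL_VERSION)

# Standard client context: certificate validation and modern protocol versions.
context = ssl.create_default_context()

with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls_sock:
        # The handshake completes without anyone asking who we are
        # or what we intend to send over the encrypted channel.
        print("Negotiated protocol:", tls_sock.version())
        print("Cipher suite:", tls_sock.cipher()[0])
```

The handshake succeeds or fails on purely technical grounds; the library has no notion of whether the traffic it protects serves a good cause or a bad one.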
Worse... Is the technology being used to encrypt communications between terrorist groups? Foreign agents? Powers hostile to our governments?
When we create an open source package and release it into the wild, it typically doesn’t come with any kind of provision as to how that software is going to be used. To be perfectly honest, it would be practically impossible to include one, but let me step back for a moment and sketch out an environment that makes a little bit of sense in terms of how we might set this up.
In the world of Open Source, we have “The Four Freedoms”. They bear consideration and are as follows.
A program is “free software” if the program’s users have the four essential freedoms:
- The freedom to run the program as you wish, for any purpose.
- The freedom to study how the program works, and change it so it does your computing as you wish. Access to the source code is a precondition for this.
- The freedom to redistribute copies so you can help your neighbor.
- The freedom to distribute copies of your modified versions to others. By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
If software is licensed in a way that does not provide these four freedoms, then it is categorized as nonfree or proprietary.
By definition, then, for free software to be ‘free’, there can be no restrictions on who uses it or what they use it for. Can you block people from using your free software?
But in the AI world, should there be an equivalent? A set of AI Ethics Principles, perhaps? One could argue that AI, like open source software, should be developed with certain 'freedoms', but also with an awareness of its potential impact. Perhaps, given the way in which the Open Source community is embracing AI development, those four freedoms are sufficient. Then again, if those freedoms aren't sufficient for Open Source, perhaps they aren't sufficient for AI development either.
The idea of an AI Hippocratic Oath has been floated – a commitment to 'do no harm'. But the complexity lies in the interpretation and enforcement of such an oath.
This has led to the creation of a new license called “The Hippocratic License” which, like the medical oath to which it alludes, begins by saying that the software being developed must, above all, do no harm. By the definition of the Four Freedoms, software released under such a license would then be nonfree. As with my OpenSSL question, it’s just not that simple, since the initial user of the free software may themselves not be causing harm; but how many levels deep in the chain of distribution do you need to go before you discover that harm is being done?
There are no easy answers and plenty of questions. Perhaps the first to ask is, “Is it time to rethink what ‘free’ means?” Feel free to weigh in with your comments.