I have been thinking a fair amount over the past couple of years about digital ethics. At the most microscopic scale, every instruction a program executes costs a tiny amount of power, so the more often a calculation is going to run, the more efficient it should be, purely as an environmental concern. But digital ethics covers so much more than that, and it is something I figured I would like to write about.
In an increasingly digital world, and with the technological revolution underway, it is important that tech is designed and used to improve and enhance people’s quality of life.
Embedding ethical principles (transparency, accountability, explainability and so on) into the conceptualisation and creation of products, tools and services is essential for continued trust and confidence in technology. This matters because digital technology can present real ethical challenges.
Looking at Tesla’s self-driving cars, we see a version of the trolley problem: in a situation where someone will die – let’s say that a child has run into the road on an icy day – the computer has to calculate whether to hit the child or crash the car, killing the driver. Which of those outcomes you would opt for is a question philosophers have discussed at length over the centuries. I may even do a post about that at some other point.
Digital technology, however, can pose much larger, existential threats. We are already in the throes of the Holocene extinction, which we still have the potential to turn around, and yet technologies with massive environmental impact are being embedded everywhere. Cryptocurrencies are the obvious example: Bitcoin uses around 110 terawatt-hours per year – as much energy as Sweden, Malaysia or Finland – with the increased carbon footprint that implies, and that’s before we factor in the rise of NFTs. Most “crypto art” distribution and security technology derives from Ethereum, a platform that uses 48.14 kilowatt-hours of energy per transaction and generates thousands of transactions a day.
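To put those per-transaction numbers in the same units as the Bitcoin figure, here is a rough back-of-the-envelope sketch. The 48.14 kWh and 110 TWh figures are the ones quoted above; the daily transaction volume is an assumed round number purely for illustration, not a measured statistic.

```python
# Back-of-the-envelope comparison of the energy figures quoted above.
BTC_ANNUAL_TWH = 110        # Bitcoin network energy use, TWh/year (quoted figure)
ETH_KWH_PER_TX = 48.14      # Ethereum energy per transaction, kWh (quoted figure)
ETH_TX_PER_DAY = 1_000_000  # ASSUMED daily transaction volume, for illustration only

# Annual Ethereum energy in TWh: kWh -> TWh is a factor of one billion.
eth_annual_twh = ETH_KWH_PER_TX * ETH_TX_PER_DAY * 365 / 1e9

print(f"Ethereum (assumed volume): {eth_annual_twh:.1f} TWh/year")
print(f"Bitcoin (quoted figure):   {BTC_ANNUAL_TWH} TWh/year")
```

Even under this hypothetical volume, the per-transaction cost compounds into a nationally significant amount of electricity, which is the point: the environmental cost is baked into the design of the technology itself.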
So organisations are increasingly expected to consider ethical obligations, social responsibility and so on as guides for what opportunities to pursue and how to pursue them.
62% of Gen Z consumers prefer to purchase sustainably sourced goods and services, and Millennials and Zoomers alike want to work for ethical companies. It used to be OK to have a boring job; now we are more aware when that boring job is in some way contributing to global problems.
Digital adopters want tech that isn’t harmful or abusive, and that hasn’t been made through abusive means (which means companies need ethical hiring practices, more than legal-minimum breaks and benefits for staff, and so on) – there is an opportunity to do well by doing good.
In short, the way to ‘win the digital revolution’ is to build ethics into every level of the business.
We have the world in our pockets – more computing power than first sent people to the moon. Watches are ‘smart’, cars don’t need drivers, and everything is recorded digitally, forever. The tech we craft creates new use cases, new opportunities and new threats. Technology and its application cannot be separated, and so we need to start baking ethics into the very DNA of what we are making, lest we end up with some sort of AI-controlled robot dog/sniper rifle hybrid hellscape.
Digital ethics is a field of study concerned with how tech shapes our political, social and moral existence. In a broad sense it focuses on how IT impacts our society and our environment. It tries to assess the ethical implications of things which may not yet exist, or whose impacts we cannot yet predict.
Transferring consciousness to computers, combining true AI with the IoT, and, once again (different link), robot dogs with guns attached have all been the focus of dystopian fiction from Asimov to Black Mirror, and yet they are all things we have built or are working on today.
We don’t even need to wonder what the world will be like in 50 years’ time; we also need to consider the implications of the tech at our disposal today. We already know that social media can be a huge tool for social change, but that, on a personal level, it is bad for individuals’ mental health.
Out of this can arise different philosophical and political questions, such as…
Is code a form of speech? If so, how should it be legislated – and should it be legislated at all? If a coder in a company programs something defamatory into a product, is it covered under free speech? If not, who is at fault: the individual or the company? And what if it’s not something defamatory, but an insidious function in a cookie on the company website?
At the core of the technological revolution is communication – how we talk to computers and how they talk to each other – should that be legislated?
In 2016, the FBI ordered Apple to create a backdoor. Apple refused, arguing that US law views code as free speech and that complying would constitute a breach of the First Amendment – though this conclusion has only been reached in lower courts, not the Supreme Court.
But what about here in the UK? In May 2021 the government proposed the Draft Online Safety Bill, which would hand the Culture Secretary disproportionate powers in the name of protecting users from ‘harmful’ content, by giving him the right to ‘modify’ the Ofcom code of practice – the blueprint for how tech companies should protect users – to ensure that it ‘reflects government policy’. This would undermine the regulator’s independence and could politicise the regulation of the internet.
So the question of whether we consider code to be speech is a very interesting one and one that we may have to wrestle with for a long time.
For one viewpoint, we could turn to the work of Lawrence Lessig, who argues less that code is speech and more that code is law: the nature of code creates a form of social regulation. In the same way that we have laws to rein in the emergence of monopolies, and to balance against the oppressive social mores of the time, so too do we need to find ways to tame and calm the emergence of code as a social regulator.
We already know that as code evolves, so too does the nature of cyberspace. Where it once protected anonymity, free speech and individual autonomy (real Enlightenment-era philosophy), we see it moving towards a space where anonymity is harder, speech is more restricted and individual control is severely reduced.
The question of ‘What is code?’ (in the metaphysical sense) is actually a foundational question in understanding how we will shape the world in the future, and it’s questions like this that the field of digital ethics tries to answer.
I will try to do a few more pieces on this in the coming weeks as it’s something that I really enjoy delving into, but for now, happy holidays one and all.
-Morgan Grey