Tech is in its pre-seatbelt phase.

Safety, Justice, Compassion – Contributing to the Tech Paradigm Shift

Oct 13, 2021 | An Event Apart 2021 Online

An Event Apart talk by Eva PenzeyMoog

Speaker Intro: Eva PenzeyMoog is a principal designer at 8th Light and the author of Design for Safety from A Book Apart. Before joining the tech field, she worked in the nonprofit space and volunteered as a domestic violence educator and rape crisis counselor. She specializes in user experience design, as well as education and consulting in the realm of digital safety design. Her work brings together her expertise in domestic violence and technology, helping others prevent tech-facilitated interpersonal harm… she’s a first-time AEA speaker!

---

What do we mean by Ethical tech?

We need to understand this before we can understand how to contribute to an ethical tech paradigm shift, but I feel like this is actually kind of an unclear topic.

There are a lot of different things going on… such a wide array of things: we’ve got interpersonal harm, anonymous harm, racism, outing gay people without their consent, misinformation leading to death… These are all under the umbrella of ethical tech issues, but ultimately they are so incredibly different from one another.
This term has a couple of problems.

First of all, it’s vague. There’s no actual problem defined, and no actual problem defined means no actual solutions = no accountability = no actual change.

I feel like this term should be used to describe things like a course that covers a wide array of ethical tech issues or as a catch-all term for the multitude of horrible problems in tech right now. I don’t think it should be used for really anything else…

But then I also want to ask the question, what do we mean when we say paradigm shift? What exactly does that mean? Since there have been a lot of paradigm shifts throughout history, I’m going to talk about my favorite one, and then we’ll come back to ethical tech.

The Seatbelt Paradigm Shift

I’m going to take us back to 1956, when cars looked like this. They were really cool, but they were missing something big that all cars have today: seatbelts.

Old car with cool red seats but without seatbelts

We all know that seatbelts keep us safe, but the actual statistic is that seatbelts reduce serious injuries in car crashes by about half. But when they were first introduced in the 1950s, no one wanted them. And this was despite the fact that automobile injuries and deaths were at an all-time high, and this was a really, really big problem.

Part of the reason that no one wanted them was the cost: seatbelts were an add-on feature, and in today’s equivalent they were hundreds of extra dollars every time you bought a car. Child seats in the 1950s were horrifyingly unsafe (my note). Alternatively, you could simply sort of tie your kid to the seat in a standing position, or let them ride on a little booster seat with you in the front, again, without a seatbelt.

There were basically no rules. Clearly auto safety has come a long way since 1956. Back then, only 2% of customers were purchasing seatbelts for their cars, and we don’t actually know how many of those 2% were regularly using their seatbelts. Probably fewer.

So let’s go ahead to 1965, when along comes a young man by the name of Ralph Nader. You might be thinking that name sounds really familiar, and yes, this is the same Ralph Nader who is known more recently for running for president with the Green Party multiple times. That’s the Ralph Nader I grew up with, and before I started researching this talk, I didn’t really know that he was a really amazing activist who basically set the groundwork for modern consumer protection.

So in the mid-60s Ralph Nader wrote an exposé called Unsafe at Any Speed, about how car manufacturers were prioritizing profits over the safety of their users. This might sound really familiar, because tech is doing the exact same thing right now.

So this book covered not just the absurdity of not including seatbelts in cars, but also things like how tire pressure was actually calibrated to prioritize comfort over safety. Some cars had tires that could not even properly bear the weight of a fully-loaded vehicle, and that was just allowed. It also talked about dashboards that had bright chrome displays that looked really cool but reflected sunlight directly into the driver’s eyes, and the fact that there was no standard gear-shifting pattern. This meant that if you borrowed a friend’s car, you might think you were going into reverse, but because it had a different shifting pattern you were actually in drive. And then when you hit the gas you would shoot forward instead of back.

Ralph Nader also wrote about the fact that car designers ignored existing crash science, which definitely did exist back then, but no one had to actually do anything with it. He also described the fact that cars were already having a negative impact on the environment. If there’s anyone here from Los Angeles, that’s a city he focuses on in this book, because smog from car pollution was already a really big problem in the 1960s. And he criticized automotive company leaders for refusing to prioritize safety out of fear of making cars more expensive and alienating their customers.

He pointed out how the industry had run marketing campaigns to shift responsibility for safety away from themselves and onto drivers, as well as the designers of roads, insisting that they were not responsible for the enormous amount of preventable car-related injuries and deaths happening back then. They were basically saying, “this isn’t actually our problem, it’s a problem of user education.” People need to learn how to use our product better. They just need to be better drivers, and then that will solve the problem… (lol)

The last part of this book involves Ralph Nader calling on the federal government to regulate the automotive industry and force them to do a better job of preventing all this harm. This book became a bestseller! And what followed was one of the most successful public health campaigns of all time. It led Congress to create the National Highway Traffic Safety Administration (NHTSA), a part of our government that we still have and that is responsible for A LOT of automotive safety issues.

Jump forward to 1968: It’s three years since Ralph Nader’s book came out, and 12 years since seatbelts first became available.

The NHTSA created a new rule that all vehicles, with the exception of buses, had to be fitted with seatbelts as a standard feature, and that manufacturers were not allowed to charge extra for them. Now every single car had a seatbelt for every single seat. You might think that this is where the story should end.

However, many people were still against seatbelts. Some said it was because of the supposed inconvenience. Some made arguments that if you, say, drove your car into a lake, the seatbelt would actually trap you. Others thought that the seatbelt would do internal damage to your organs when it prevented you from flying out of your car during a crash (LOL which is worse? people are dumb), and others thought that it was actually safer to be launched away from your car at a very high speed. This is despite the fact that scientists had done a lot of research and proven that none of these things were actual concerns… but even back then, a lot of people were choosing not to follow science and medicine.

And remember, even though car companies were now required to include seatbelts in cars, there was no law saying that people had to actually use them.

I just want to point out that a lot of people responded to seatbelts back then the way some people are responding to masks today: claiming they’re some kind of incredibly awful thing whose mild inconveniences far outweigh their life-saving benefits, and just ignoring all of the science that goes against their beliefs. It’s a little reassuring to know that things were similar back then, but it’s also kind of sad.

By 1983, not much had changed: less than 15% of Americans reported consistently buckling up. In 1984, New York State passed the first law in the country that required people to wear seatbelts.

By 2019, national seatbelt use was just over 90%.

And just to recap, seatbelts were introduced in 1956 and activism around them started in 1965. 1997 is the last year that a state passed a seatbelt law. That’s over three decades of everyday people, as well as activists, academics, and politicians, all working together to create this paradigm shift where seatbelts are the norm.

What does the adoption of seat belts have to do with the tech industry?

In both instances, we have a massively powerful company or industry choosing to prioritize profits over the safety of their users.
With the automotive industry, public outrage and activism eventually led to the government creating regulations and forcing them to comply. With tech, we have company leaders prioritizing profits over the safety of their users; there is growing public outrage and a healthy amount of activism, but the government has yet to pass meaningful regulatory laws.

In both cases, there’s an absolutely massive amount of totally preventable harm going on.

People will say that it’s the user’s responsibility to understand how a product works, and how someone might use it against them. People will say, well, if you’re going to use this app or this piece of tech, you should know what’s going on, you should know how to use it and the different features it has, and if there’s something you don’t like about it, you just don’t have to use it, as if that’s always a realistic option.

But my question is, why is the only solution to the problem shoved onto the user? Shouldn’t we be tackling the problem at the source instead?

And that’s what my work designing for safety is all about: fixing the digital side of these problems at their digital source. Going back to this practice of putting responsibility for safety onto the user, I just want to point out that this is a very specific tactic. This is something company leaders within tech are doing right now to intentionally shift blame and responsibility away from themselves and onto end-users: the idea that it’s on us to understand every little piece of our tech and all of its implications. This feeling that it’s our responsibility, and our fault, if we get hurt with tech or if our data gets breached didn’t come out of nowhere.

This is a tactic that powerful industries always use in an attempt to not take responsibility for the harm they’re causing and to avoid new regulations or different rules that might hurt their bottom lines.

Tech is in its pre-seatbelt phase.

The seatbelts are there, and a lot of us as individuals and teams are working really, really hard to make our products ethical, but most products remain incredibly harmful.

We don’t yet have laws mandating that we make the seatbelts of our tech a regular part of our process and of our products, and those who lead the industry’s companies are continuing to choose to prioritize profits over the tech equivalent of seatbelts. With all this in mind, let’s take one more look at the timeline of the seatbelt paradigm shift, this time with the additional framing of what was going on with the public.

So there are two key insights here:

  1. Paradigm shifts are totally possible, but they require a sustained effort over a long period of time.
  2. It is very important to focus on highly specific goals.

So, let’s get back to this question of what we mean when we say ethical tech, because, like I was just saying, it’s important that we are very specific.
I think there are sort of two areas of focus: usually we’re talking about tech products, but sometimes we’re also talking about the tech industry itself.

And then within each of these, there are the issues of safety, justice, and compassion. I think that whenever we’re talking about ethical tech, we can sort the issue into one of these three categories within one of these two areas of focus.

Issues with Tech Products

I’m about to list every single tech issue that I have been able to identify and where it fits in these different categories.

Safety

The first is the safety of our tech products. This includes things like using tech for stalking and tech-facilitated domestic violence, which are where I spend my efforts. There’s image abuse, which is sometimes called revenge porn, although that’s not quite the best term, because revenge isn’t actually always part of it and porn implies some level of consent.

There’s invasive surveillance, which typically involves surveilling domestic partners, children, elders, and workers; cyberbullying through text or social media or any other type of digital platform; and anonymous harassment, like what happens on Twitter and other platforms. And then there’s threatening people and doxxing their identities and personal information.

These are all examples of safety issues that are facilitated with our tech products.

Justice

Issues of justice within tech products include tech that harms the planet, like Bitcoin mining to name one, although I’ve heard that might be getting better. There are data harms, which include things like what sort of data gets collected and about whom. For example, in Chicago where I live, there’s this incredibly harmful thing called the gang database that is run by the police, and there’s basically zero transparency about it. It includes a lot of names of people who are not actually affiliated with gangs, and yet when they find out that their name is in this database, they have no recourse to get off of it or to explain that they are not actually in a gang.

Next are racist, sexist, and otherwise oppressive algorithms, such as predictive policing algorithms that say a Black man is more likely to commit a crime than a white man with the same history.

Then there are exploitative design practices, such as bringing in end users to help create solutions to problems that are then packaged up and sold back to them without them receiving power or funds from the project. They’re sort of just exploited for their knowledge and ideas, and this is especially harmful when it’s a vulnerable or marginalized community.

And then there are social good projects that do more harm than good, which we see a lot of when design groups go into poor communities or third-world countries and aim to create design solutions for them without fully understanding the problems, without giving the community power, and then leave before the project outcomes are fully understood.

Lastly there’s harmful disruption, such as Airbnb, which has contributed to many cities becoming even more expensive and unlivable for the actual locals.

Compassion

And then lastly, there’s compassion. This includes things like cruelty in advertising or promotion, such as showing a woman who has just had a miscarriage ads for baby diapers. There’s hurtful copy, such as Twitter’s original messaging when a user went over the character count: it would say ‘try to think of something more clever’ to encourage users to make it shorter.

And then there’s failing to design for stress cases. For example, in Eric Meyer and Sara Wachter-Boettcher’s book Design for Real Life, they give the example of Home Depot, which usually thinks about users who are excited and doing a home renovation, when in reality there are users in situations like: my refrigerator is broken, I’m about to lose all the food in there, I don’t have a lot of excess money, and I really need a new fridge quickly and it needs to be affordable. That would be a stress case.

Then there’s retraumatizing users by doing things like showing unwanted content and not giving them control over what they’re seeing. A lot of times this comes in the form of suggested content, like the related articles at the bottom of an article you’re reading, which can involve graphic things that trigger past traumas.

Next is disallowing control over what is seen. An example of this also comes from Design for Real Life, where Eric Meyer talks about how he tragically lost his daughter, and Facebook continued to show things like reminders that it was her birthday, even after she had died, and there was no way he could stop it.

Lastly, there’s secretly experimenting with users’ emotions, which is another thing we can blame Facebook for. They did it in 2014. Some people might remember they were in the news when it came out that they had run experiments where they would take all of the happy things out of someone’s feed and see if it impacted what they posted, and they found that, yes, if you only show people sad content, they will create more sad content and it will impact their mood. (how is this surprising…?)

Issues in the Tech Industry

Let’s move into issues in tech companies.

Safety

First is safety. This includes workplace harassment, assault, and abuse. These things do happen in our tech companies, and HR teams will protect the company over people, especially over the victims of that harassment, assault, and abuse.

And then there are unsafe working conditions, such as what we see in Amazon warehouses, where there’s been regular documentation of all sorts of safety issues, from overheated spaces that don’t have air conditioning to the really tragic story of the woman who was denied lighter duties when pregnant. She just wanted to do less heavy lifting, but they wouldn’t let her, and then she had a late-term miscarriage.

Justice

Next are issues of justice in the tech industry. These include inequitable hiring, retention, and promotion, in which certain powerful groups have an easier time being hired, retained, and promoted.

There are toxic and exploitative work cultures and unjust worker compensation, which, again, I think we can see at a company like Amazon, where some employees see a much bigger share of the enormous profits than others. There’s poor or no health care, especially for certain employees like contractors at Google.

There’s failing to meet accessibility needs, which we’re really seeing right now as companies force their employees back into offices after showing that they totally can accommodate working from home, but choosing not to.

And then last is education that reproduces existing oppression in the industry. With this, I’m thinking about things like science departments in universities where male professors are allowed to continually harass and abuse female students, which pushes women out of STEM industries before they even enter them.

Compassion

And lastly, there are issues of compassion in the tech industry. These include lack of agency, and workers just burning out, which we are seeing in huge numbers right now. There’s also this general theme of not seeing human beings before seeing employees who can help your bottom line (I didn’t understand this last one). This might be more of an issue of capitalism in general, but we are definitely seeing it in the tech industry.

I’m betting that a lot of people at this conference are already very interested in one of these topics, and might even already be doing some kind of meaningful work on at least one, or are reading a book about it, or have learned about it at other conferences. So I just want you to take a few seconds to think about your topic and about where it falls on this general timeline of a paradigm shift.

An analysis of a tech industry paradigm shift

So next I think it might help to look at a specific example from the list of ethical tech issues, quickly look at its paradigm shift, and see where we’re at, especially considering that none of these are complete yet. With seatbelts, that paradigm shift is done; it’s over, it has happened. But I want to look at one that is still in the works. For this, I chose the issue of racist algorithms.

This starts in 1986, which is when we see the first documented biased algorithm that I could find.

It starts with a doctor at a medical school called St. George’s, which is in the UK. He created an algorithm to help with admissions, and his goal was actually to make the process fair and to weed out human bias. But after a few years, other staff members looked around and noticed that there was very little diversity in the successful applicants and new students. So they led an inquiry into the algorithm, and they found some big problems.

For example, the process of giving and taking away points from a candidate would weigh things like the applicant’s name, and having a non-European name would dock a candidate 15 points. (what the fuck?!) The algorithm would also designate candidates as either Caucasian or non-Caucasian based on their name and place of birth, and give Caucasian people more points. The school was found guilty of discrimination, but they didn’t actually face any consequences.
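To make that concrete, here’s a minimal sketch of how a points-based screening function like that can encode bias while looking “objective.” This is purely illustrative: the field names, weights, and the looks_non_european helper are hypothetical stand-ins for the crude name and birthplace heuristics described above, not the actual St. George’s system.

```python
# Hypothetical sketch only: not the real St. George's algorithm.
# It shows how a discriminatory rule can hide inside an ordinary-looking score.

def looks_non_european(name: str, place_of_birth: str) -> bool:
    # Stand-in for the crude name/birthplace heuristics described above.
    return place_of_birth not in {"UK", "Ireland", "France", "Germany"}

def screen_applicant(applicant: dict) -> int:
    """Return an admissions score; higher means more likely to get an interview."""
    score = applicant["exam_results"] + applicant["reference_rating"]

    # The discriminatory part: dock points based on a proxy for race/ethnicity,
    # reproducing exactly the human bias the system was supposed to remove.
    if looks_non_european(applicant["name"], applicant["place_of_birth"]):
        score -= 15

    return score

# Two otherwise-identical applicants end up 15 points apart.
a = {"name": "J. Smith", "place_of_birth": "UK", "exam_results": 70, "reference_rating": 20}
b = {"name": "J. Patel", "place_of_birth": "India", "exam_results": 70, "reference_rating": 20}
print(screen_applicant(a), screen_applicant(b))  # 90 75
```

Nothing in that sketch announces itself as discriminatory; the bias is just one more numeric adjustment buried in the scoring logic, which is why audits like the St. George’s inquiry matter.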

It wasn’t until 2016 (a whole 30 years later) that activism around this issue had a breakthrough, and that came from ProPublica, which demonstrated that criminal prediction algorithms are racist and biased against Black people. There had been articles and other warnings about this for decades, and there were some meaningful studies on the topic in the 2000s, but it’s around this time that I think the average non-tech worker began to be exposed to this problem of racist algorithms.

Two years later, Joy Buolamwini and Timnit Gebru published their paper demonstrating that commercial facial recognition systems have far higher error rates for dark-skinned women than for light-skinned men.

And that brings us to where we are now in 2021, where something exciting is happening. The Algorithmic Justice and Online Platform Transparency Act has been introduced in Congress. It hasn’t passed yet, but it is still very exciting.

What comes next on our paradigm shift timeline is bills being passed, then additional laws, then change starting to happen, then change really happening, and then the paradigm shift becoming complete.

But I want to note that, once again, with the biased algorithm paradigm shift it’s been around 30 years from the issue first being identified to the present day, when real momentum is starting to build around getting the problem fixed. WOMP WOMPPPP.

But also, this is where opposition starts to get a lot fiercer. First off, Timnit Gebru was fired from Google, and I’m sure a lot of people already know about this. We can expect to see opposition from companies as well, because they would have to make substantive changes to the way they’re working.

In May of 2021, the Algorithmic Justice and Online Platform Transparency Act was introduced, and quickly thereafter, in June of 2021, there started to be reporting on a rise in lobbying from industry groups. There is currently an industry group backed by Amazon, Facebook, Google, and Twitter that is trying to work against this act. They say that exposing platforms’ processes could be risky, and that making their algorithms more transparent will provide a roadmap for hackers, Russian trolls, and conspiracy theorists. That’s why, they say, they shouldn’t have to make their algorithms more transparent.

They also said that what we should do instead is expand our existing civil rights and discrimination laws in housing, employment, and credit. Yes, we absolutely should do all of those things, but that doesn’t mean we have to just ignore biased algorithms. It’s a pretty transparent attempt to deflect away from the ways that their own companies are contributing to those very problems. And I think we can definitely expect a lot more pushback from company leaders who don’t want to change their very lucrative current way of working, even a little bit, if it’s going to hurt their bottom line, even if it means that their products would do a lot less harm.

So let’s look at some main takeaways from all of this. Between the deep dive into paradigm shifts and what we can learn from them, and breaking apart what we mean when we talk about ethical tech into issues of safety, justice, and compassion, we can now get into the key ways that I think people can contribute toward pushing forward this paradigm shift for ethical tech.

So first off, choose one issue. There are so many different issues, and trying to tackle more than one is just going to lead to being really overwhelmed and burning out. In fact, that could well happen even if you choose just one thing. So I think it’s really important to choose one issue to focus on. Signal boost and support the others, of course; don’t ignore them, incorporate them. A lot of times they bleed into each other, but definitely choose one thing to focus on.
Next, plug into existing activism. For a lot of these issues, there are already groups doing really amazing work, and they need more people to help carry out their mission.

There’s already a fight happening that you can join

There’s almost always something that is already happening. Don’t start your own new thing unless it’s really necessary. Remember that the goal is to make meaningful progress on the ethical tech paradigm shift, not to center yourself.

Lots of times there are really amazing organizations already doing the work, and I get that sometimes there aren’t. So if you look around and you see that there really isn’t anyone talking about this or working on this, then yes, try to start your own thing. But in almost all cases there are already one if not multiple groups working on an issue. So just be sensitive to the fact that the goal is to make the paradigm shift happen, not to center yourself or make a name for yourself. Also, always follow the voices of the people actually being impacted by the problem.
I really like this quote from Sasha Costanza-Chock in the book Design Justice; they say, “wherever there is oppression, there are people resisting oppression.” They talk in this book about how wherever there are communities facing big problems and oppression, those communities are already working on solutions to end it. So instead of starting your own thing or trying to find your own solutions, it’s very important to work with impacted communities, highlight the things they are already working on, and contribute toward them.

A limit to empathy

I also talked about this in my book, but there is a limit to empathy. Remember that we cannot pretend to stand in for lived experiences; we always need to follow the lead of people who have the actual lived experience of being impacted by the problem we’re trying to help solve.

Next, think about the paradigm shift timeline to inform your work. What is actually needed right now? Where are we in this paradigm shift? Do we need more research? Maybe there needs to be a study; maybe you can work with a university or other groups to make some type of study happen and actually understand the issue better. Maybe education is needed, and there need to be articles and other ways to inform people who work in tech, or people outside of tech. Or maybe laws are needed; maybe people generally already know about the issue, and now there needs to be a meaningful law. But I do want to point out that a lot of times we jump straight to the laws part, and it’s important to remember that laws are never ahead of the curve; they’re just the cap on many, many years of activism. Yes, ideally politicians would be more proactive about this stuff, but the reality is that until public opinion changes, they’re unlikely to act. So it’s important to be realistic about what is actually needed, and to start there, with research or education or getting the word out, and not just jump to laws.

Expect Resistance

And then, it’s important to remember that resistance is going to happen.

When activism ramps up and there are some small wins, that’s when the opposition gets more organized, and we’ve seen this all over the place. We’ve seen it with other paradigm shifts in recent memory, such as marriage equality. There are still efforts against that, and we’ve seen backlash especially against trans people and the other groups that fall outside the very simple-to-digest ‘straight or gay’ binary, which we know is actually a lot more complicated… But we’re still seeing a lot of pushback after that big victory.

So it’s important to expect backlash from very well-funded company leaders in tech, who are going to fight as hard as they can to protect the status quo.

Resisting checkbox frameworks

So now that you’re all fired up, I’m gonna throw just a tiny amount of cold water on you and talk about the need to resist checkbox frameworks and action bias, and to do deep learning on the topic.

So, a checkbox framework is sort of just what it sounds like: a framework made up of checkboxes that we can simply go through as we’re working in tech. We can say: yes, I have considered history; I have considered bias; I have considered interpersonal harm. And when we do that, we run the risk of not actually internalizing the problem and doing deep learning about it. Checkbox frameworks come from action bias, which is a tendency to favor action over inaction. It’s responding with action as the default, even when we don’t have a very solid rationale to support that action. From an evolutionary standpoint, reacting to problems as quickly as possible helped our ancestors stay alive. But when this shows up in big, complex problems and our attempts to solve them, it’s not really a good thing, and it can actually cause us to do more harm than good when we attempt to solve a problem quickly, before we fully understand it.

So checkbox frameworks let us act on that action bias, but they don’t help us do deep learning on the topic. A little more about this: action bias happens when people are incapable of living with discomfort when presented with difficult topics. Everything on the slide I had of all of the different problems in tech is a really difficult topic; they’re hard to sit with, and it makes sense that we want to just jump into action and try to solve them. But that urge comes from not being able to sit with uncomfortable things. Especially if you’re a white person, it can be really hard to sit with the realities of racism, and for men, it’s really hard to sit with the realities of sexism and the patriarchy. But it’s important that we do sit with those difficulties and really learn about them.

Let’s end with a little bit of advice

First, stay hopeful. Remember that change is absolutely possible. There are so many examples of paradigm shifts throughout history; we know that they can happen, despite what our opposition would like us to think, and we know that things can completely transform. But at the same time, be realistic about all this. Remember that the paradigm shifts I discussed in this talk took over 30 years to really gain traction.

But I do think it’s important to remember that change does not just happen overnight. It totally can happen; as the insight from earlier in the talk said, paradigm shifts are possible, but they require sustained effort over a long period of time.

Next is to find your people: an organization or a group working on the issue, or others in your workplace who share your politics and want to make some sort of positive change within your company. You might find a Slack group, or you might just find a Twitter hashtag and follow a bunch of people.

But whatever it is, find those people, and remember that nothing great happens alone. Without a team for solidarity and support when the going gets tough, it’s going to be a lot harder to stay hopeful and continue the work, so finding your people is just so important.

Next, take a nap. Okay, this is actually about setting a sustainable pace and taking breaks. Remember that this work is going to take years, and that we need to be in it for the long haul. And this means being realistic about what you can actually do. Maybe you can rally your team at work to implement a big change. Maybe you can volunteer a few hours a month with an organization. Maybe you legitimately do not have time for any of this stuff, but you do have some extra money and can afford to donate $10 a month to a few different organizations doing this really important work. Whatever it is, find a sustainable pace and take breaks when you get overwhelmed, so that you can come back and continue fighting.

Really though, take a nap. I’m talking about this one twice because it is just that important. We need to remember that the Mark Zuckerbergs and the Jeff Bezoses of the world are counting on us being too tired, too exhausted, too overwhelmed, and too heartbroken about all the setbacks we face to keep going. They’re counting on that. They’re counting on us to give up. So take a nap, and come back ready to work on your issue.

I want to point out that we should not live with the expectation that activists, or people just trying to do good work and make a change, should sacrifice everything. I don’t think you should sacrifice everything. I think you deserve to live your life, you deserve to be happy, and you deserve to be rested, and that you can still do those things while putting valuable time and effort into whatever specific issue you’re working on to transform our tech industry into one that is more ethical.

Finally, I want to encourage people to push back against the rhetoric of ethical tech. Remember that this is often a marketing tactic.

When you see people doing this, don’t call them out. Be kind, but ask them what specific problem they’re actually talking about. A lot of times this shows up at work, so hold company leaders accountable to defining actual problems and then making progress on actual solutions. Don’t let people be vague about this.

Resources:

  1. Design Justice by Sasha Costanza-Chock (YouTube video on the same topic: https://youtu.be/Ji7pYma4meo)
  2. Emergent Strategy by adrienne maree brown
  3. Three books from A Book Apart: my book Design for Safety, plus Design for Real Life and Design for Cognitive Bias. I think these three together are really good.
  4. Finally, a workshop from Creative Reaction Lab about how traditional design thinking protects white supremacy. This is such a good one for everyone in tech, but especially designers. This is where I learned about action bias and checkbox frameworks, and I learned a lot of other things about how white supremacy shows up in my work as a designer and in my life in general; it’s been life-changing. Creative Reaction Lab posts sessions pretty regularly, so check their website and see if you can get in on an upcoming one.

I want to close with this quote from Arthur Ashe, who was a groundbreaking tennis star as well as a really awesome social activist. He said, “Start where you are. Use what you have. Do what you can.”

Maybe your thing right now is just changing one aspect of how your team works. Like I said, maybe you could organize with coworkers, maybe you could plug into a group, maybe you can donate a few dollars a month, but whatever it is, it is going to help.
