Episode 004

Thrive

with Rahaf Harfoush

What happens to the ways we work when we have digital-physical hybrid spaces, hybrid companies governed by automation, and hybrid people – virtual characters interacting as though they’re real? Rahaf Harfoush joins Sydney Allen-Ash and Lane founders Clinton Robinson and Kofi Gyekye for a conversation on mixed realities in the workplace, as we listen to a true crime podcast from the future that asks who’s at fault when an automated organizational system goes rogue.

Guest

Rahaf Harfoush is a Strategist, Digital Anthropologist, and Best-Selling Author who focuses on the intersections between emerging technology, innovation, and digital culture. 

She is the Executive Director of the Red Thread Institute of Digital Culture and teaches “Innovation & Emerging Business Models” at Sciences Po’s School of Management and Innovation in Paris. She is currently working on her fourth book. 

Rahaf is a member of France’s National Digital Council. In 2021 she joined The Oxford Internet Institute as a Visiting Policy Fellow. 

Her third book, “Hustle & Float: Reclaim Your Creativity and Thrive in a World Obsessed with Work,” was released in 2019 and has been translated into Chinese and French. She has been featured by Bloomberg, the CBC, CTV, and Forbes for her work on workplace culture. 


Episode 4
Thrive w/ Rahaf Harfoush

Syd:
Welcome to Work/Place, a podcast about the futures of where and how we work. I'm your host, Sydney Allen-Ash, currently recording in Brooklyn, New York. And I'm joined by the founders of Lane, Clint Robinson in Toronto and Kofi Gyekye, also in New York today. We are also joined by our special guest Rahaf Harfoush, recording in Paris. In this episode, we're going to discuss what happens when digital and physical realities mix – an experiment that's kind of playing out today, has been going on for a long time, and will continue to create new and strange experiences in the future. In particular, we're going to chat about what happens to the ways we work when we have digital-physical hybrid spaces, hybrid companies governed by automation, and hybrid people, like virtual characters interacting as though they may be real people. So, in every episode we hear an audio artifact from the future as a way to spark the conversation. So, before we go any further, uh, we're going to listen to that speculative soundscape from a not-too-distant future. Um, I'm also going to issue a little trigger warning for our listeners: the situation in this fictional scenario gets a little bit violent.

-----
Narrator: It’s 2027 — and Joanna, like millions around the world, is glued to StreamStars — the Netflix of live streaming. She’s watching one of her favourite inspirational streamers. But today’s stream is different. Joanna stares in horror…

Woman:
No, no! Stop! Please! Someone! Just stop! 

[screams, shuffling, sounds of struggle, and then sudden silence]

Narrator: The victim is Rosie, a popular streamer sensation and self-made entrepreneur behind “The Hustle Mask”. Rosie’s just been brutally slain in her own showroom, live, for all her followers to see. Her fans watch in horror.

The perpetrator is her husband, Westbrook, himself a streamer-preneur known for his progressive mental health activism. When he eventually snaps out of it, he looks bewildered, then terrified. Realizing what he’s done, he proceeds to take his own life... leaving behind their four children. The stream is still live.

Joanna:
Oh my god.

Narrator:
Joanna watches in shock. She can’t believe what she’s seeing.

The crime scene is quiet now... but for the panicked breathing of their youngest. The five-year-old child shivers with fear, alone, barely visible, hiding behind a fort of unsold packages — her mom’s product.

Beside them — a familiar yellow button continues to glow. “Buy Now” it urges. The animated purchase trigger would seem vulgar … if those things weren’t so ubiquitous on shopstream channels like this one.

Instinctively, Joanna swipes to buy. It’s all she can think of doing to support Rosie’s poor family.

She speaks into the stream of fan comments, “We need to keep her memory alive, please, buy more of Rosie’s masks…. If you can’t afford one: Tip! Tip to save her children.”

Their heartstrings played like a violin, Rosie’s fans start purchasing.

“Alexa - buy five hustle masks.”
“Siri - one hustle mask please.”
“Confirm purchase”
“Support Rosie’s children.”
“Add to cart.”
“Donation accepted”

Narrator:
By morning — hundreds of thousands have bought the Hustle Mask — and millions more are supporting the cause through tips and donations.

The sympathetic shoppers have no idea they’ve been conned.  

Rosie isn’t really dead. Neither is Westbrook. 

And the traumatic incident the world just witnessed — is just one small piece of a complicated plot to dupe Rosie’s unwitting followers in what will become one of the strangest and most complex wire fraud cases in history.

Host:
I’m Chuck Reynolds, and this is Stream Crimes. On this episode: The Network Behind the Network.

Joanna was in her early twenties when she first started watching Thrive — a content house on the StreamStars Network. Thrive started as a squad of about a dozen people, though it grew to be closer to 40. Their content model blended self-help, coaching, entrepreneurship, and retail.

Joanna:
Thrive was so good – like a real-life soap opera that also had a bunch of great creative retail experiences and a focus on really developing yourself. And the people were so inspiring. At the time, it helped me get out of my shell and push myself... But that was before it started mining us. I mean, I don’t really know when that started.

Host:
Follower mining, the practice of algorithmically attracting fans, is the motivating force at the core of this case. In 2019, Thrive began building its follower mining rig using a tool known as a Decentralized Autonomous Brand, or DAB, for short.

Host:
It’s 2021 — and two of Thrive’s founders have been racking their brains for months to figure out a way to grow their audience. It finally seemed like they’d made a breakthrough.

Founder 1:
OK I think this should be workable.

Founder 2:
What is it?

Founder 1:
It’s our new C.M.O. — I’ve built something here that’s going to put our business on autopilot.

Founder 2:
Are you sure about this? We’re in a fast-paced competitive environment. Bots are not gonna cut it.

Founder 1:
No — this is different. It codes itself. And it’s set up for two goals and two goals only: maximize attention and maximize sales. Look! Our SEO rating is already up 10%!

Host:
Cobbled together from a few machine learning models, Thrive’s DAB was a program that could adapt and change over time — testing out different tactics and figuring out for itself what works best.

The DAB did exactly what it had designed itself to do: Thrive was… well… thriving, becoming one of the biggest content houses on StreamStars.
But for day-one die-hard fans like Joanna, the sudden mainstream success was a little off-putting.
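[A minimal sketch of the optimization loop described above, assuming a simple epsilon-greedy bandit stands in for the DAB’s self-tuning behaviour. The tactic names and engagement scores are invented for illustration; the scenario never specifies a mechanism, and a real system would be far more elaborate.]

```python
import random

# Hypothetical content tactics the bot can deploy on a stream.
TACTICS = ["confessional", "giveaway", "manufactured_feud", "product_demo"]

class ToyDAB:
    """Epsilon-greedy bandit: mostly exploit the best-known tactic,
    occasionally explore a random one, and learn from engagement."""

    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.estimates = {t: 0.0 for t in TACTICS}  # mean engagement per tactic
        self.counts = {t: 0 for t in TACTICS}

    def choose(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(TACTICS)                    # explore
        return max(self.estimates, key=self.estimates.get)   # exploit

    def update(self, tactic: str, engagement: float) -> None:
        self.counts[tactic] += 1
        # Incremental mean: nudge the estimate toward the observed score.
        self.estimates[tactic] += (
            engagement - self.estimates[tactic]
        ) / self.counts[tactic]

# Simulated feedback loop: drama secretly engages best.
bot = ToyDAB()
for _ in range(1000):
    tactic = bot.choose()
    engagement = (0.9 if tactic == "manufactured_feud" else 0.4) + random.gauss(0, 0.1)
    bot.update(tactic, engagement)
print(bot.estimates)  # the bot converges on the dramatic tactic
```

[The point of the toy: with “maximize attention” as the only goal, the system drifts toward drama without anyone programming that in.]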

Joanna:
The community began to change. The forums began to feel more … bland… like Facebook. I mean, it also felt exciting to be part of something growing so quickly, but it’s just, you could feel it becoming less personal.

Host:
That’s around the time things got strange. Livestreamers started getting cryptic messages from fake fans.

Fake Fan:
I love how outlandish you are on your stream — keep pushing the boundaries. Be more mean!

Host:
In hindsight, it’s as obvious as it is remarkable. The DAB had found a way to communicate directly with Thrive’s streamers. It would tell them to change their behaviour to be more engaging.  It would even suggest they break up with their partner to create drama when it sensed audiences losing interest.

Joanna:
I noticed myself spending more time following them. I was getting sadder. The worst part was that I couldn’t stop shopping. I kept clicking those stupid Buy Now buttons. You could feel something was at work behind the scenes, but you couldn’t put your finger on it. You just knew you wanted more.

Host:
Noting that the Thrive network lacked a personality that would be totally and utterly vulnerable with their feelings, the DAB created a synthetic one – a photorealistic 3D model and voice that would broadcast just like a real streamer. Synthesizing top content from across the internet into a compact personality, it knew exactly which buttons to press.

Joanna:
Rosie was unreal. She was just so clever, and open … and funny. And most of all, inspiring. I was rooting so hard for her and Westbrook. It was amazing - all they accomplished with everything they were going through together.  I can’t believe she was … a bot.

Host:
It’s not totally clear why the DAB killed off Rosie and Westbrook – its very own creations, after all. Sure — it was a blatant play to capitalize on the sympathy sales. But why stop the party?

Some theorize it was going for one last cash grab.

The DAB’s automated marketing had caught the attention of the Justice Department — they’d been watching it for months.

Is it possible the algorithm knew about the investigation? It seemed to be behaving that way.
Rosie’s death happened just days before Thrive would be indicted for wire fraud.
Could an algorithm that scoured the web feel the heat of a federal investigation?
Who’s liable when automation goes unattended? There’s no prison for algorithms. So, is it the Thrive founders, who willfully designed a piece of software that could act on its own without human oversight? The developers of the open-source machine-learning models that were used? Or the StreamStars Network, which claims it’s merely a platform but profited as much as anyone off the scandal? 

On the next Stream Crimes – we trace the chain of knowing – and examine the key decisions that point to each party’s culpability. We’ll take you inside the courtroom to hear the prosecutor’s case.

I’m Chuck Reynolds, and this is Stream Crimes. 

-----

Syd: So, that was arguably the most twisted scenario that we've heard thus far. There's a lot going on in it. That was an audio clip produced by the Toronto-based foresight studio From Later. It was, as we heard, a fictional true crime podcast from the future called Stream Crimes. There is just, just so much going on in that, but let's just start with some easy first impressions, first reactions.

Rahaf, since you're our esteemed guest, I'd love for you to go first and we can go to Kofi and Clint. So, what stood out for you out of that scenario? What do you think it reveals about the future of work? What did that bring up for you?

Rahaf: That was an emotional roller coaster. The funny thing is that – and maybe it's because I’ve researched so much in this space – a lot of it seemed like quite natural extensions of stuff that I'm seeing now on a wide variety of levels: the performative vulnerability of using these emotional moments to gain sympathy in order to drive sales.

I mean, we're seeing that now with influencers, right? The rise of CGI influencers, the rise of, um, virtual influencers – that's all kind of happening now. So to me, it's kind of interesting because then it becomes more of a question about almost like the Netflix problem, right? If an algorithm can put together the perfect influencer, or the perfect entertainer, the perfect actor, or the perfect accountant, then what's left for the rest of us?

Syd:
Um, I mean, big, big question that we will definitely dive into. Um, but, but Clint, what was your first impression? What did, what did this bring up for you? I was kind of watching your face while you were listening to it. 

Clint: Yeah. This one is, this one is a little bit darker than all the other ones, but also this is already happening to some degree. Like, we have virtual bots that are influencers already. The logical extension of that is a bot creating other bots – not just the influencers themselves, whose contrived, algorithmic lives are perfectly attuned to what we want to see, but also bot commenters commenting on the bots to create this world that you're stuck in as a human.

And you'd have no idea who was a bot and who wasn't, and that's all starting to happen in front of us. I mean, from one side, I'm really interested in this future where AIs kind of exist alongside humans and how we work and interact with them. One side of me is super interested. Like, I want to enter a VR world and hang out with bots and try to figure out who is one and who isn't.

But then on the, um, on the other side, what is this doing to society? There's this asymmetric power distribution. Like, if you're a company with a lot of data and a lot of computing processing power, you can create these bots, and if you don't, you can't. So there's also this inherent continuing of the wealth divide, because, uh, right now I can start an internet startup easily, with a low barrier to entry.

But I can't go create like a hyper intelligent bot without tons of data, tons of like processing power. So that was, that was kind of like a lot of random thoughts all together thrown at you there.

Syd: Kofi, what was your first reaction? 

Kofi: That was a lot. It was really long this time round; there was a lot to unpack. Um, I do agree with Clint. I think what was interesting to me was the interconnectedness that already exists. So, I did like that there was law enforcement, and they were using computers, and the computers are intelligent enough to outplay each other. It's like being 50 chess moves ahead. And so, part of that strategy is building bots that can just continuously improve, in order to keep the private sector away from having to report or be responsible for the actions, in a sense.

And I think there’s a massive impact on our mental health that comes from this. It's like that really uncanny valley stuff: we know these aren't real – something about them is like, yeah, you're a bot, I get it, you're not fully human – but yeah, I am emotionally reacting. And what does that do to our mental state as human beings moving forward? I kind of get excited for this sort of world, but at the same time, am I a slave to it?

And it happens. I see that, I do that myself, even on Instagram – as somebody who doesn't enjoy the platform and is busy scrolling around all day, doing nothing. And I'm emotionally invested for some reason, even if I'm aware I should not be. I’m very well aware – we build these things, I'm part of the machine that's building it – but at the exact same time, I'm reacting to it. So, it's kind of trippy. 

Syd: We should be immune. 

Kofi: Yeah. But we're not. And to that divide, it goes even further. You see that with, you know, news networks – you look at WhatsApp and just how virally news can travel, terribly, across the planet. What happens when influencers become these entities that start coming up with political stances, right? Where do we end up on the planet? Because people will follow it.

Syd: Yeah, like what if their plot lines expand beyond interpersonal drama to inter-country drama, international drama? You know, if they have the power to shape those narratives.

Kofi: It often starts in media and so forth. It's going to work its way into our everyday lives. 

Syd: What do we like about this world? What do we think is positive? What do we think could go right here? Again, maybe we'll start with Rahaf and then go around the metaphorical table.

Rahaf: I would like to think – you know, if you're going to look at all technologies as a spectrum of both good and bad, and despite the fact that we entered through some of the darker elements – that the more positive elements could be real communities being built, real people connecting. And look, maybe it is going to be a bot, but if there can be a bot created whose personality is optimized to make people feel comfortable – to share information about their mental health, to get help, to get counseling, to seek support – that could be something really interesting. We're already seeing the rise of chatbots now, and the research showing that the next generation, both younger millennials and Gen Z, are more comfortable talking to a chatbot about mental health issues, because they feel like they're not being judged. So if you could create optimal personalities to give people that type of compassion, that type of support, I think that could be a pretty good application. 

Syd: Yeah I mean, to, to that point, like, what is the difference between creating a character that's a bot and creating a character in a movie or in a song that is incredibly moving to people and connects with them in a deeper way?

Clint: Or even a bot who's super creative and is going to help me design my next, uh, project, rather than just, yeah, the weird influencer world that we're in right now. 

Syd: Kofi?

Kofi: I agree with both. I just think – again, I'm trying really hard to put a positive spin on this, but from a capitalist standpoint, the bot’s going to have to make money. We're going to have to maintain it. Somehow, you're going to have to get engineers and UX and UI designers on it, working every single day. How do we do that in the state things are in? That's ultimately what becomes the issue. 

Rahaf: I mean, people used to take care of, and were invested in, Tamagotchis. I could see that there are alternative revenue streams beyond just selling products. It could be subscription-based. It could be that you're paying to get access to something that makes you feel better.

I could see people paying if it provides real value. I think the limitation would be to limit ourselves to thinking that the only possible revenue would be selling, and thereby needing to have affiliations with brands, etc. Maybe there are other ways. And like, what does an open-source version of this look like? How could entire communities or countries or populations or entire segments of people get together and create something – or, you know, someone, quote unquote – that could represent a voice for them? What if you could get somebody to become, you know, a voice for the refugee experience, for example, that could help other people in that area? So, I think the bad taste we all got was just because I think we all feel the same way about the influencer part of it, because it's so hyper-curated and fake and transactional these days that it's lost a lot of the authenticity. But it'd be interesting to explore other parts of that spectrum as well. 

Kofi: And I think we've touched on this a number of times – really, you know, it's the industries where we're not applying this technology that are hurting us the most, right? It should be our education system, our food and sustainability initiatives, our political agendas. And it stays true – we're not applying it in those places. We're making really great models, like Lil Miquela, to sell sneakers. 

Clint: Yeah. Do you feel that way when you watch Netflix now? It's almost like I can just see the Netflix algorithm instead of the content that I'm supposed to be seeing – like eating a McDonald's cheeseburger version of content. It's kind of empty inside. Does anybody else feel that?

Rahaf: As a writer, a content creator, a filmmaker, whatever – your job, and this is my opinion, obviously, is to give the audience what they need and not necessarily what they want. And sometimes, if you think about the most powerful stories we've heard in our culture, we might absolutely hate the ending, but that ending was totally necessary.

Right? And then what I think is interesting – and this was alluded to in the soundscape – is that right off the bat, they said the bot is geared for optimal engagement. So then, with, say, a company like Netflix, they start building not on what we need as viewers to get a really good story, but on what we want. And oftentimes what we want might not be what's good for us. And so, we keep seeing content that hits all the notes, but in many ways you're removing some of the agency of the storyteller to tell the story the way they want to tell it, where you have a bunch of number crunchers that say, ‘hey, if you kill off this character, we'll get a 40% increase in viewership’. You make the story in response to the data. But then what are you losing? You're losing the authentic identity of that story. 

Clint: This is one of the problems with neural networks too: over-training. You need a new data source to be generated to continually train and improve them. But if you train them in isolation, or overtrain them on a data set, they stop producing novel results. You're not leaving yourself open to random experimentation. And you're actually going to get a worse product over time, ultimately. 
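[To make Clint's point concrete: a toy sketch, using a polynomial fit as a stand-in for a neural network, of how training in isolation on one fixed data set drives training error down while error on fresh data gets worse. The data and degrees here are arbitrary illustrations, not anything from the episode.]

```python
import numpy as np

rng = np.random.default_rng(0)

# One small, fixed training set (the "isolated" data source).
x_train = np.linspace(0, 1, 20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 20)

# Fresh data the model never sees during fitting.
x_fresh = np.linspace(0, 1, 200)
y_fresh = np.sin(2 * np.pi * x_fresh) + rng.normal(0, 0.2, 200)

for degree in (1, 3, 9, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    fresh_mse = np.mean((np.polyval(coeffs, x_fresh) - y_fresh) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, fresh MSE {fresh_mse:.3f}")
# Training error keeps falling as capacity grows; fresh-data error bottoms
# out and then climbs: the model has memorized its one data set.
```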

Syd: Interesting. When you take this idea of a decentralized autonomous brand, which is what this Thrive network was in the scenario, and you apply it to a whole company – so every decision about that company is being automatically decided for you by this kind of all-knowing bot neural network – what happens to that organization? What happens to the feelings of organizational culture, or feelings of connectedness, if you know that your CEO is not a CEO, but a decentralized algorithm?

Clint: I mean, we kind of have versions of that, to be honest. Incredibly large organizations kind of already work like that. You've got a whole bunch of humans generating novel ideas, but that's put through the filter of a decision-making framework that removes all the humanness out of it, distills it into a bunch of numbers.

And that's when we get brands or giant companies that we don't like working with and associating with, because they feel empty and they feel inauthentic. So, I think you get a hyper-extreme version of that if you do this. Now, that being said, if somebody creates a smarter algorithm to incorporate all these things, it's also possible that you could be training a neural network or a DAB – a decentralized autonomous organization or brand – with inherent randomness. So, I guess it's possible that you could train it in such a way that it would not cause this kind of homogeneous, boring, sterile, dry environment. I think that's what you'd probably get without having some random human element in there.

Kofi: Yeah. I think what you'd want to do is add human beings into that layer. So I think the good part of this is you'd be getting information and data really quickly. We often don't get it until way too late to actually make a great experience or environment for the people working in it, because it has to go to the manager, it goes to a panel – by the time, you know, HR or whomever gets to it, it's far too late. So I think there is a nice combination there. It could be very helpful in getting us intel and data, but then you'd have these human beings, you know, side by side – a partnership, essentially, between the bot and the human.

Syd: If you actually had an algorithm running your company, what do you think the implications of that would be? 

Rahaf: Depends on how it's packaged. Like, humans love a good hero. We love our myths. Look at how we worship, in certain ways, you know, Steve Jobs or Bill Gates or Elon Musk – who I'm not convinced is not a bot himself already. But we have to really get into – I mean, we don't have to – but it'd be interesting to get into it: instead of just saying it's going to be run by an algorithm, sell me on the pitch. Like, what if I said to you, okay, we have taken the 50 top business minds in the world and we have fused them together into an algorithm that is going to be visionary? And it's going to put ideas together in ways that we've never seen before. And it's going to be sustainable and inclusive. And we're going to put this to work because we know where human CEOs have blind spots, and we've made this algorithm to not have those blind spots. I could see people being like, yeah, I'm into it.

I could see people getting the merch and tuning into the Twitch streams and reading the information because why not? If we're talking about the future, you're eventually going to have a generation of kids that are growing up where these like bots are normal. So, they're never going to know anything else.

So if you're watching a bot on TV and you're being taught by a bot and your mom uses bots at home, is it that much of a stretch then to have a sophisticated one? It'll be interesting. Cause then, much like what I think Clint was saying before, that becomes about money, right? Cause the power of the algorithm – like, what mega-corporation, what big monolithic multinational corporation is going to have the capacity to have the most sophisticated bots? And then it'll just be no different than celebrities. And aren't CEOs just, uh, business, hustle-culture celebrities anyway?

Syd: And you could argue also, with the power of corporate PR and corporate comms, that what we see as the CEO – what we see as Steve Jobs or Elon – is so heavily filtered. Maybe not Elon, cause his Twitter is a little off the rails. But some of these people are so overly processed through media training that, are we even seeing a human reaction anyways?

Rahaf: Yeah, I mean, if you look at a comms department, everything essentially becomes the sum of what they believe the CEO should represent at that moment. Because I think sometimes people feel disconnected if they hear algorithm or bot, but it's like, let's pretend that it's somebody that looks just like a real person.

Clint: I'm not a mega corporation. I don't have access to insane computing power and data, but maybe I can hire a marketing bot company and, like, hire an AI to filter my speeches as a CEO. That's also really interesting – this kind of rise of consultants that are driven by AI tools. 

Rahaf: But could you imagine an oil executive, or some company doing something maybe not so great, and then you hire a spokesperson or whatever that has been specifically designed to appear sympathetic, believable, trustworthy, likable? Just imagine, like, you know, the big eyes – imagine a Disney character. I don't know. But imagine that becomes part of it, because I think Kofi said it: how are our emotions going to be manipulated? And, you know, they were doing experiments in the nineties with very basic robots, like first-level robots that looked really cute, and the experiment was that the human had to go in and turn off the robot. But when they tried to turn off the robot – which was super cute, it looked like a teddy bear – it just said, ‘Please don't turn me off. I'm scared of the dark.’ And people hesitated. They hesitated. A lot of them couldn’t do it. Think about that. Think about how your emotions can deliberately be exploited to sway how you feel about something. 

Kofi: It would be great to have a debate where a political candidate has to go against an AI that’s taken in all of our sentiments and is intelligent enough to articulate itself and really challenge that person, to see if they're going to be a good enough leader for us. That could be a great way of seeing that piece. 

Clint: I think there's going to be a time in the future when we look back and we'll be kind of blown away that any of this was allowed because all this stuff is really just exploiting innate weaknesses within humans.

And anytime you've been allowed to do that, we create legislation to stop you from doing it. Things that are built into your sensory response network at a very, very base level – like the cuteness thing. There's a reason we've evolved to like cute things. And that's essentially being exploited by this robot, but also, kind of, you know, by Pixar movies and branding and marketing.
And I think at some point we're going to have to draw the line and be like, this is an unfair thing to throw at a human – much like injecting them with fentanyl every time they buy your product is inappropriate. It's weaponized emotional manipulation on a massive scale against people. And that sounds really bad.

Syd: Part of the problem with running a country or running a company is getting everyone to agree on a direction. If you create this spokesperson that can get everyone to agree, even if they are like manipulating you emotionally, is that such a bad thing? 

Rahaf: Yes. I mean, I think you said it – going back to want versus need, right?
Do we want somebody that just makes us feel good, or do we want somebody that's actually gonna help us fix problems? 

Syd: Where's the accountability there? Where is the liability? What happens when something goes horribly wrong, like what we saw – or what we heard – in this scenario?

Clint: Yeah, I feel like a lot of people have left this up to the collective good, hoping that we can self-govern, and I actually don't think that seems to be working out that well. So, I think there has to be accountability. We have to put some kind of framework and legislation around this stuff. As much as I don't trust governments to be able to handle this stuff, Ed Burtynsky said it well in the last podcast: sometimes governments are the only ones who can do it.

I think somebody has to be held accountable. You can't just unleash a whole bunch of decentralized bots running around and hope that it all goes well. There's gotta be restriction and legislation. Somebody has to be, like a human has to be held accountable at some point. 

Kofi: Yeah, it’s a social contract we're all joining, right? That is effectively what a constitution is. Like, you look at the States and how they've been able to deal with bringing issues in – the 14th Amendment's been a gateway for so many things that maybe they'd not foreseen in the past: rights, women’s rights, all of that. I feel, in the same vein, we’d probably need some sort of script that we're all aligned to, and try our best to hold ourselves accountable to it as these decentralized organizations move on and we get to new challenges, and really try to get to a nice, happy middle for everybody.

Rahaf: I think it's going to be both. I think there are going to be multiple parties, because on one hand you do want to punish the person that's using the tool in a bad way without necessarily punishing the creator of the tool.

You punish the arsonist, not the match company – that type of vibe, you know. However, there are also cases where, if you do create something that's designed to be so dangerous, maybe there's a liability there too. So, I don't think it's necessarily a binary answer. I think it's gotta be a complex approach, because we are seeing this now with facial recognition, right? Companies like Microsoft are now saying: we made the technology, but we're not going to sell it to law enforcement because of its capacity to be abused. I don't know about the US, but I know that right now Canada, New Zealand, France, and a couple of other countries are discussing the potential liability of sites like Facebook in perpetuating, uh, domestic terrorism and acts of terror.

And I think about the riots that happened at the Capitol, and I think, okay, on one hand, yes, arrest the rioters, obviously. But if the studies of Facebook's own data show that 60% of people were shown extremist groups as a result of Facebook's recommendations, well, that to me says there is a responsibility there. Everybody's got to be held somewhat responsible. 

Syd: Is there anything liberating about this situation? Like, is there anything that could feel freeing or positive about having the responsibility displaced? Maybe not wholly out of our hands, but into partial responsibility between this like algorithmic entity and some decision-making leaders? Is there anything positive there? 

Rahaf: Yeah, I mean, I think it takes a lot of pressure off. There are a lot of people that are responsible for making decisions, you know, and humans aren't perfect. And I think it would be nice to remove some of that pressure from them and maybe, um, have some of those difficult decisions be made elsewhere.

I'm even thinking of something like landing planes or, you know, something that's just really high stress and puts people under tremendous mental strain. I think that would be good. I think of the entire benefit of AI – you know, not calling in sick, not making mistakes, not getting tired. I'm sure there are a lot of high-risk areas where it can act as just a second pair of eyes – uh, medical diagnosis.

Um, we're seeing all of this now. The only thing I thought of was: I'm not even worried about an evil AI. I'm just worried about misaligned AI – AI that takes what we tell it and then interprets it in a way that we might not have predicted. And that was the example that, I think, Elon Musk gave a long time ago, about an AI whose sole purpose is to increase shareholder revenue, and that ultimately decides the best way to increase shareholder revenue is to start a war, or some sort of geopolitical conflict, right?

Clint: It’s like a thought experiment that you do when you take an AI class. If you create an autonomous robot whose job is just to create paperclips, it seems pretty benign. But in this scenario, the robot takes over and turns everything into paperclips. 

Syd: It sounds like, generally speaking, we are more comfortable with a bot not interacting directly with humans – like, a bot is here to achieve more technical or practical goals.

It's not meant to apply emotional thinking. It's not meant to start reasoning. It's not meant to apply strategy. You know, we're happy for it to do these more technical things. But when it comes to having a DAB making strategic business decisions – are we okay with that? That requires some type of empathy and reasoning. It sounds like we want it to just stay over here, like, on the ground. 

Clint: I think it’s cause we’re kind of afraid of them. Facebook already has an AI that outperforms every human on the planet: it can recognize one of 8 billion faces on the planet in under a millisecond. I can't even come close to that. And that's an extreme example, but for me, I'm a little bit afraid of what happens when we unleash that stuff. It's not about whether it will be beneficial, or whether there are good sides and bad sides. It's just ultimately kind of scary, I think. 

Kofi: I’m just worried that we’re not educating ourselves enough through our schooling systems, and I bring this up in almost every podcast. That's really the fear for me: that we don't have enough information around it. And we're in this space, and I personally even feel limited. I think that's where the fear comes in for me. 

Rahaf: I think part of my problem right now with a lot of the development is that, from a best-practices perspective, we're not insisting on transparency. There are sophisticated AIs that are accurately diagnosing people, right? So the answers they're coming to are correct, but the programmers can't quite figure out how they got there. And so one of the best practices around ethical AI development is that every single bot or algorithm should have a transparency functionality, where it is forced to show you its work: how did it reason? How did it get to that conclusion? And if it doesn't have that – this is my problem – we're unleashing AI without some of these 1.0 best practices. And if we're not doing it at 1.0, we're certainly not going to be doing it at 16.0, and 16.0 is going to be too late. 
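[One toy reading of that “show your work” requirement, assuming a simple linear scoring model where per-feature contributions are trivial to report. The feature names and weights below are invented for illustration; doing this faithfully for deep networks is the hard open problem Rahaf is pointing at.]

```python
# Hypothetical linear risk model with a built-in explanation function.
FEATURES = ["age", "blood_pressure", "glucose"]
WEIGHTS = {"age": 0.02, "blood_pressure": 0.50, "glucose": 1.10}  # toy values
BIAS = -1.0

def predict_with_explanation(patient: dict) -> tuple[float, dict]:
    """Return a risk score plus how much each input moved it."""
    contributions = {f: WEIGHTS[f] * patient[f] for f in FEATURES}
    return BIAS + sum(contributions.values()), contributions

score, why = predict_with_explanation(
    {"age": 54, "blood_pressure": 1.3, "glucose": 2.0}
)
print(f"risk score: {score:.2f}")
for feature, amount in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {amount:+.2f}")  # the model's reasoning, itemized
```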

Syd: I’m curious from all of your perspectives, like, how do you think live-streaming will continue to change the way that we work? Right now it’s kind of filtered off in specific areas, but what do we see happening in the future? 

Rahaf: I could see a scenario where everyone's live-streaming all the time, even while you're working. You know, maybe there's the expectation – just like right now we expect everyone to be instantly reachable and instantly connected.

Like, maybe cultural norms will evolve so that everyone's streaming all the time. That's part of your performance at work, right? So that could be a thing, versus people starting, you know, streaming jobs and different types of streamers and things like that. But I wonder, because there are already problems with the algorithms around streaming. You know, once again, the algorithm is optimized for engagement, and that results in people needing to stream 24/7 – to stream for eight hours a day, six days a week. So that model is a direct result of the way the algorithm calculates engagement. Are we all going to become so desensitized to each other’s stories that only more and more extreme stories play out? As bots, as artificial scenarios come into play, that's just going to ramp up the drama even more. So, like, what's going to happen?

Clint: Well, if everybody's surveilled all the time, it's almost like not being surveilled at all, in the sense that everybody's equal, everything's exposed, there's no room for behaviors that we don't want in our society anymore. And it could actually be super liberating to enter and step into that world.

The problem with that is, it can't be asymmetric. What if we just asked police officers to livestream all the time? Like, should we ask the people that we've put in charge of our society to livestream first? Maybe that's not a bad idea. 

Kofi: Um, I'm maybe a bit more concerned about what happens to the livestream after, and whether we’re allowed to evolve and improve as human beings. Or are people just going to be constantly, you know – what happens to that video of Kofi at 24, who has now evolved into a completely different human being at 33, but is being held to that standard? It kind of reminds me of that Black Mirror episode where you keep going back in time. You're stuck in this loop of constantly living in the past versus moving forward. Um, and society as a whole doing that.

Syd: So, I feel like this is probably a good place to end it. Work/Place is brought to you by Lane and the Toronto-based foresight studio From Later. My name is Sydney Allen-Ash, and I was joined today by Lane's co-founders Clinton Robinson and Kofi Gyekye. This episode was produced by Robert Bolton and Macy Siu, with Udit Vira and Valdis Silins. Audio production by Jeremy Glenn. Sound design by Dani Ramez, and voice acting by Aaron Hagey-MacKay and Maxie Solters. 

Credits

Work/Place is brought to you by Lane and Toronto-based foresight studio From Later

Host:
Sydney Allen-Ash
Producers:
Robert Bolton
Macy Siu
Udit Vira
Valdis Silins
Audio Production:
Jeremy Glenn
Sound Design:
Dani Ramez
Voice Actors:
Aaron Hagey-MacKay
Maxie Solters
Published on May 24th, 2021