EP 275 – Gavril Gavrilov – CCO at Humans in the Loop – Understanding and the Illusion of Understanding

May 17, 2023

The Asia Tech Podcast was joined by Gavril (Gary) Gavrilov, the CCO at Humans in the Loop. Humans in the Loop was founded in 2017 with the vision of connecting conflict-affected communities to digital work.

Some of the topics that Gary covered:

  • Artificial Intelligence and its philosophical implications
  • Ethical AI as a core concept of Humans in the Loop
  • Artificial Intelligence and embedded biases
  • The risks involved in doing this work in conflict-affected regions
  • The positive impact that Humans in the Loop has on its beneficiaries

Watch the video snippet here.

Some other titles we considered for this episode:

 
  1. Large Language Models Have Been In Development Since the 1960s
  2. It Brought It to Our Living Rooms
  3. AI Is the New Electricity
  4. It Comes up in the Most Unexpected Places
  5. The Challenge Is the Human Verification

Read the best-effort transcript below (This technology is still not as good as they say it is…):

Michael Waitze 0:05
We’re on. Hi, this is Michael Waitze, and welcome back to the Asia Tech Podcast. Today we are joined by Gavril Gavrilov, did I get that right, the Chief Commercial Officer of Humans in the Loop. We had another founder of Humans in the Loop on, I don’t know, a couple of years ago, I feel like more than 18 months ago, and we wanted to have somebody back on to talk about what’s changed. I can call you Gary, yes?

Gavril Gavrilov 0:28
Absolutely. Please do. And thank you for having me.

Michael Waitze 0:30
Oh, no, thank you very much for coming. How are you, by the way?

Gavril Gavrilov 0:34
I am well. It’s been a crazy start of the year for us, which is good. I just came back from a conference in France last week, focused around developments in the AI space, which was a great event for me to be at. And yeah, I’m very happy to be here, part of this conversation, and thank you for inviting me.

Michael Waitze 0:57
It’s my pleasure. Can you talk to me a little bit about this conference, this was in Cannes, right?

Gavril Gavrilov 1:02
This was in Cannes, correct. This was the World AI Festival, or rather Conference, which is essentially a multinational event, although it’s kind of focused on the French market. We had a collection of some of the global leaders in the AI space, but they also had an amazing startup track. So I was fortunate enough to be able to talk to well over 20 different startups in the AI space, and I was just amazed at some of the developments and new ideas that are entering this industry, which in the public eye right now is focused on, you know, ChatGPT and self-driving cars, for example. But there’s so much beneath the surface. So yeah, it was an eye-opening event for me.

Michael Waitze 1:54
I do believe that. In the technology world, look, we have this whole brand that we call X Undercover, right? E-commerce Undercover, Innovation Undercover. And I think maybe we should do something called AI Undercover, because I do think a lot of the stuff that happens in the tech world is not just what we see on the surface, but what’s happening underneath. And in a way, it’s almost like a positive iceberg, right? What you see is just the tip, the stuff that’s floating on top of the ocean, but what’s happening underneath is what’s way more interesting. Is there a way for you to just maybe give me some of the hottest topics, or whatever was happening there? Not the stuff that surprised you, per se, but just new ideas that maybe people haven’t been speaking about.

Gavril Gavrilov 2:30
Sure. I guess for me the biggest takeaway from that particular event, and it kind of ties into the way we’re looking at the whole AI industry, is, like you said, this positive iceberg, and of course, on the flip side, this negative iceberg. You know, people tend to focus on whatever is trending on, you know, Facebook, Instagram, whatever they manage to catch on TechCrunch or any of the other tech publications online, and that’s sort of the flavor of the day. And there was actually a funny anecdote from this conference, which was one of the presenters joking that, I’m proud to be part of this panel, where it took us all of eight minutes before we mentioned ChatGPT. Whereas all of the other panels and discussions basically had ChatGPT pop up in the conversation within the first 30 seconds.

Michael Waitze 3:22
I’m so anti-hype, like I almost wish that thing didn’t come out, or that it didn’t get published, because I still think we could have a substantive conversation about artificial intelligence and its impact on what’s going on in the world and technology. But that elephant walked into the room and we couldn’t stop it. So I agree with you, eight minutes in, we hadn’t talked about it yet. But yeah, I get it. Sorry. Go ahead.

Gavril Gavrilov 3:43
No, no, that’s perfectly fine. I mean, it’s a sign of the times, I guess. And I’m also anti-hype, as you put it, but you know, we’re all part of the same ecosystem. So as much as we don’t want to, we kind of have to, you know, go with the flow, at least to a certain extent, if you want to be seen as relevant, particularly from a marketing perspective. And people who are deeply within the industry are in the know, or the people that are in the kitchen; they can see through the hype. And you know, as one of the presenters put it, large language models have been in development since the 1960s, right? And now it’s 2023, and people are receiving this predictive chatbot with open arms, which is, you know, a novelty, at least to a large extent. I mean, there are certainly technological innovations there. But

Michael Waitze 4:30
So, LLMs, right, these large language models have been around since the 1960s. But can we say that the biggest change is not in the models themselves, but in the throughput, in the compute, that’s actually allowed us to process them? In other words, the ideas around LLMs, large language models, were sophisticated enough in the ’60s, but the technology wasn’t there to handle them, because back then you needed a whole room filled with, like, IBM, I don’t even know what they were called anymore, System/360s or something, to process this. But today, I have enough processing power on my iPhone 14 to do some of that. I’m exaggerating a little bit, but you know what I mean?

Gavril Gavrilov 5:06
Yeah, absolutely. I think you’re absolutely correct. Let’s say the abundance or the accessibility of powerful, you know, IT infrastructure has sort of democratized this concept that has been in existence for, you know, over 50 years. More importantly than being democratized, it’s been brought to our living rooms, right? It brought it to your cell phone, to your tablet, to your desktop. And in a way, I think that’s what’s driving the hype. And again, people kind of seem to be missing the most important aspect of this, which is that, okay, if you’re using, let’s say, ChatGPT to help you with your work, maybe you’re generating some content for SEO or something similar, right? And you’re so happy to have access to this tool. Well, you have to understand that you’re using it as a sort of free user, so you’re not paying anything for that. And we know the old adage: if you’re not paying for a service, then you’re the service, right? Or the product. But this technology obviously has enterprise applications, and enterprise pricing models, enterprise availability, and that’s a whole different universe that’s kind of closed off, you know, from the public eye. But going back to your first question of what was, you know, surprising or interesting for me to learn at this event in Cannes: people are now openly talking about really the democratization, I guess, of AI. Not in the sense of people really starting to understand either the fundamentals or the applications of AI, but rather having the ability to consume, to use, an end product based on an AI. And when we say companies that use AI and people that use AI, I guess in recent years we’ve taken the thinking that these are, again, companies within the IT ecosystem globally, right, specialized companies, or automotive companies that are focusing on autonomous driving vehicles. But what this event really tried to push forward as an idea is that AI will and should be available to and leveraged by anyone, anywhere. So regardless of which line of business you’re in, there’s an aspect of AI-based or machine learning-based technology that you can utilize. And one of the slides, I guess, that really stands out in my mind basically said, you know, AI is the new electricity. So AI, in and of itself, definitely can be or is a game changer. But I guess the applications and the processes and the innovations that you can spin off based on this technology are absolutely mind-boggling. So I guess to an extent, you know, the general public is approaching AI not completely unlike the way people approached electricity when it first appeared.

Michael Waitze 8:20
So Steve Jobs said this famously, and I hate going back to quote him, but I just can’t get this out of my head. You know, when he was talking to the founders of Dropbox, he was like, that’s a feature, it’s not an application on its own. That’s a feature of everything; you’re gonna have, like, ubiquitous storage everywhere. And you’re right, if we went back to the point when electricity was just getting started, you would have been like, oh my God, this is gonna change the world. And you just walk into your house and flip on a switch; it’s everywhere and runs everything you do. And I was actually having this conversation with somebody a couple of days ago. AI is already in so many things that we use, and most people don’t know it. I mean, if you just look at a recommendation engine on Amazon, and then all the recommendation engines, there are thousands of places where artificial intelligence is being used. I can’t get into the building that I went to look at today to rent a condominium without getting my face scanned. Whether you like it or not, it’s everywhere. But if that’s the case, though, right, I don’t know, do you think about, like, the philosophical aspects? Because there were no philosophical aspects around, like, electrifying everything. But I feel like there are some philosophical challenges around the implementation of artificial intelligence everywhere. Was that a topic as well that people talked about?

Gavril Gavrilov 9:29
I’m sorry to say that it definitely was a topic only on certain panels that kind of focused on these aspects. So I’ll put it this way: to me personally, and to us at Humans in the Loop as an organization, they’re inextricably connected. The technological and the ethical, philosophical aspects of this technology are absolutely connected, and should be top of mind for all of us in the industry, but also all of us as users of this technology, right? But I think if you really want to go philosophical, it boils down to control or the illusion of control, or understanding and the illusion of understanding, right? Because with electricity, when it was first harnessed, it took a while to explain to people what we now know fairly well as a physical process, right? There’s a physical process happening: electricity travels, you know, across conductors, across wires. It terminates at some device that consumes electricity and performs a task; either a light comes on or a motor spins. And so instinctively, we as humans, even though we don’t, you know, obviously touch electricity directly, we feel that we have control over it. If we snip the wire at one end, the process ends, right? Whereas with AI, I think that level of understanding and that level of control is missing. Essentially, you don’t know what’s going on in this machine. And we’ve all heard the explanation: well, this is a pre-trained model, it has been trained on visual data, it has been shown 100,000 images of apples, so now it can recognize an apple when shown a brand new image. But the mechanisms which are used in order for that to happen are completely beyond the understanding of the general public, right? Yeah. So therein, I think, lies the foundation of this philosophical question, and we’ll get into that, I think, later on in the conversation. Let’s do it now. Okay, sure. So, ethical AI, to us, is a core concept, and it’s one of our founding principles. And when you say ethical AI, most people tend to think of the applications of an AI model or an ML system, right?
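To make the “pre-trained model recognizes an apple” idea concrete, here is a minimal sketch of classifying a brand-new image with an off-the-shelf pre-trained network. It assumes torchvision (0.13+) is installed; the file name apple.jpg is a hypothetical stand-in, and this is illustrative, not any particular vendor’s system.

```python
# Minimal sketch: use weights already trained on labelled images
# to classify an image the model has never seen before.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()  # inference only: we rely on the already-trained weights

preprocess = weights.transforms()  # the resizing/normalization the model expects
image = preprocess(Image.open("apple.jpg")).unsqueeze(0)  # add batch dimension

with torch.no_grad():
    probs = model(image).softmax(dim=1)
top = int(probs.argmax(dim=1))
print(weights.meta["categories"][top], float(probs[0, top]))
```

This is exactly the point Gary makes: the call is three lines, but the mechanism inside the weights is opaque to the person running it.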

Michael Waitze 11:57
So can we just, let’s define this, so people can get a better understanding, because we’re gonna back up into where the ethics really originate. A lot of people, I think, are thinking it’s just the output, right? So, I want to make sure that the output is adequately generated. That’s where, if I ask an artificial intelligence thing, I get sort of a generic answer that feels like it’s been ethically created. But I think what you’re saying is that how you train the model is also part of this ethical conundrum that we have, in relation to how that artificial intelligence actually gets built. Is that fair?

Gavril Gavrilov 12:35
That’s a very fair statement. No, no, that’s perfectly fine. So if we take, for example, the DALL-E platform: the immediate ethical question, let’s say, is, okay, well, this model has been trained on a number of different artworks created by artists, right? And now it’s generating output which holds no copyrights, right? It came out of thin air.

Michael Waitze 13:08
But does it or doesn’t it? Because this is really topical; we just had Stable Diffusion getting sued by Getty Images, saying you took all of our images without our permission, you didn’t pay for the license, you stripped off the watermarks, and then you created similar things to this, and then you’re saying it’s yours, and you pay no license fees, nothing. So, I mean, is that part of what you’re talking about, or only part of it?

Gavril Gavrilov 13:32
No, it absolutely is part of what I’m talking about. And I’d be very interested to learn the outcome of this court case, this lawsuit, because it will have watershed, far-reaching implications for the industry. Absolutely. Because now you have this model that can generate any number of, you know, similar images inspired by original artworks, and yet you owe the original artists nothing for that. So this is a concept that people can immediately kind of grasp, right? Because it’s quite similar to the old joke about, you wouldn’t steal a car, right? So you wouldn’t pirate an MP3, a song, or a video, or a video game. But another aspect, which is much more sinister, I think, is when you get into the applications of AI for things like surveillance, defense, and in the medical industry, for sure. So, for example, let’s take an ML-powered surveillance platform that has been trained across different facial images, essentially photos of people, right? And at this point, this system is left to run relatively or completely unsupervised, and through its intricate mechanisms, it begins, maybe not singling out, but being heavily biased and heavily discriminating against a particular subset of people, right, based on any number of traits: racial

Michael Waitze 15:12
Profiling, height. Right.

Gavril Gavrilov 15:14
Exactly, exactly, exactly. And the implications there are, you know, they can make the hair on the back of your neck stand up, right? Yeah. But this is one aspect of ethical AI, the output, and this is what most people focus on. However, there’s another ethical aspect within the AI domain, which is the input, so to speak. So let’s say that in order for you to train this model, you have to assemble 100,000 images of people’s faces, right? Sure. How do you source this data? How do you make sure that within this data set that you’re collecting for training purposes, there is sufficient representation of every single, let’s say, race, skin color, set of facial features, any number of parameters? And what about the bias that you introduce when collecting this data set? Because, let’s be honest, this is not the interesting, not the sexy part of developing an AI; this is the part that you have to get done. So what normally happens is you outsource this to any number of data collection service providers, and you hope for the best. You put together a nice set of instructions, you set the acceptance and rejection standards. But the challenge there is the human verification, right? Because we’re talking about enormous volumes of data, so inevitably the level of, you know, human supervision and verification will go down. And sort of stealthily, under the radar, bias can be introduced at the level of the training data that you’re using to train your machine learning model. So you wind up with the expectation of having a very, you know, non-biased model, but that can in fact turn out not to be the case, right?
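As a rough illustration of the input-side checks Gary describes, here is a hedged sketch of auditing a collected dataset for representation before annotation and training begin. The metadata fields, values, and threshold are hypothetical stand-ins; real demographic auditing is considerably more involved.

```python
# Hedged sketch: flag underrepresented groups in a training set
# before it is used, so targeted collection can fill the gaps.
from collections import Counter

def audit_representation(records, attribute, min_share=0.05):
    """Return groups whose share of the dataset falls below min_share."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()
            if n / total < min_share}

# Each record is one collected image plus metadata (hypothetical fields).
dataset = [
    {"path": "img_001.jpg", "skin_tone": "type_v"},
    {"path": "img_002.jpg", "skin_tone": "type_ii"},
    # ... the remaining ~100,000 records
]
underrepresented = audit_representation(dataset, "skin_tone")
print(underrepresented)  # groups that need targeted collection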

Michael Waitze 17:19
Yeah. But I mean, where does bias come from? With no external stimuli, a normal, whatever that means, unborn child has no embedded biases, I think, per se; they’re introduced in different ways, right? Some good, some bad, some amazing, some horrible, whatever they are, but they get in, in certain ways. So the presumption to me is that a model, an artificial intelligence or machine learning model that’s being programmed or created by humans, cannot be created without the biases that those humans have, for the most part, because most humans don’t even understand, because they’re not self-aware enough to know, what their own biases are. That’s the beginning.

Gavril Gavrilov 18:04
Exactly. Go ahead. Sorry. They’re aware of it, but they don’t consider it a bias, right? And this can come to be based on any number of social, economic, personal features that are defined by where this person was born, how they were brought up, where they received their education, you know, any number of things that really define who we are as humans, right? Yeah. Our recent interactions, both positive and negative. The geopolitical situation in the region, for example. Because a lot of the major data collection and data annotation service providers out there have a vast, distributed workforce; a lot of these people are in countries like India, or Thailand, China, Brazil, all over the world. So when you delegate the task of collecting and annotating the data to a particular workforce located in a particular place, you are more or less likely to kind of stealthily introduce the bias that exists within that place.

Michael Waitze 19:23
Yes, sorry, can I ask this question? I’m struggling to come up with a noncontroversial bias. So here’s what I’ve come up with. If you show a cow, a picture of a cow, to somebody in Wyoming, they’re going to see dinner. Yes, even live. Cattle is for sustenance; there’s milk there, there’s food, there’s meat there, that’s great. But if you show that to somebody that doesn’t eat meat, and I’m going to say in India, because that’s just the bias that I have around the way cows work there, for the most part, they’re not going to think the same thing. It’s still a cow, but the implications are that the cow is not dinner. It’s maybe a pet? I don’t know. But you know what I mean?

Gavril Gavrilov 20:06
Exactly, exactly. And it comes up in the most unexpected places. Take, for example, a company developing farming equipment, right? Yes. So you can have a potato harvester or potato planter, a potato sorting machine, which has an optical sensor with a machine learning model sitting behind it, and it’s trained to recognize, let’s say, potatoes based on size, quality, any number of different parameters. Depending on where your workforce was at the time the data was collected and annotated, your model can have a drastically different idea of what a quality potato is, as opposed to somebody who was living in the Netherlands, for example, or Germany, or even Bulgaria or the US. Right? Because for the majority of your life, because of the geographical region where you’re located and because of the local climate, potatoes have always had a certain size, a certain shape, a certain color, maybe. And to you, what looks like a very good potato, something that you would be happy to put on your table for dinner, might end up being unacceptable for the Western European market. And that’s an extreme case, and obviously this can be treated through, you know, proper guidelines and quality control during the annotation process. So I’m kind of taking this a step too far, but I just want to illustrate the minute ways in which bias can creep into AI.
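One hedged way to surface this kind of regional annotation drift is to compare how annotators from different regions label the same items; a systematic gap suggests the guidelines, not the potatoes, differ. A minimal sketch, with hypothetical field names and region codes:

```python
# Sketch: per-region acceptance rates on the same items can expose
# the regional bias described above. All field names are hypothetical.
from collections import defaultdict

def quality_rate_by_region(annotations):
    """Share of items each region's annotators marked 'good_potato'."""
    good = defaultdict(int)
    total = defaultdict(int)
    for a in annotations:
        total[a["region"]] += 1
        good[a["region"]] += a["label"] == "good_potato"
    return {r: good[r] / total[r] for r in total}

annotations = [
    {"item": 1, "region": "NL", "label": "good_potato"},
    {"item": 1, "region": "BG", "label": "reject"},
    {"item": 2, "region": "NL", "label": "good_potato"},
    {"item": 2, "region": "BG", "label": "good_potato"},
]
print(quality_rate_by_region(annotations))  # {'NL': 1.0, 'BG': 0.5}
```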

Michael Waitze 21:39
So how do we deal with this? In other words, you mentioned the fact that these models are at some level trained by humans. I mean, that’s what Humans in the Loop, I guess, is doing, right? And you can dig deeper into that as well. But if humans globally are going out and, just in this case, we’re talking about annotating images, you’re right, a delicious-looking potato in Idaho is going to look really different than a delicious-looking potato in Thailand, potentially, right, if it’s got a different shape. Because when I think about a potato that was served to me at dinner in the United States when I was a kid, it’s like this perfect oval shape; it cracks open, I can drop some butter in it, put a little salt on it, and I’m good to go. But maybe in other countries and cultures, it doesn’t matter. Like, I didn’t know this when I was a kid, but plenty of the potatoes that are grown in the United States literally got thrown away or got turned into French fries because they didn’t look good, even though they tasted perfectly fine. But again, that’s just a potato example. Imagine if you’re annotating photos of criminals; what does a criminal look like, once you get into this topic? Because potatoes aren’t that controversial. But here’s the thing, when you get into this idea of faces and, what’s the word you used, just identifying people, I mean, there are so many ethical questions here. Like, okay, you can take a camera that’s over there and do face recognition of me, but I didn’t say that was okay. And you’re not even backing up into the person who’s analyzing my face. I walked down the street today, and people with their cameras, I just go like this; I don’t want to get photographed, right? But I can’t stop that camera from photographing me, because otherwise I’d have to wear a mask all day.

Gavril Gavrilov 23:18
Exactly, exactly. And then you get into the really interesting applications of this, which are, for example, workspace monitoring, which on the surface of it is just done to basically ensure the most efficient utilization of resources, which is office space, equipment, etc. Right, yeah. But then you combine this with facial recognition, that’s also combined with sentiment detection. So you’re looking at this face, and the model makes a prediction on whether this person is bored, engaged, irritated, angry, you know, whether he’s starting to drift towards aggressive behavior. And these are really, really interesting and scary topics. Because, again, when we get back to even providing the source data for this, you know, it’s so easy to introduce bias, right? Because take, for example, your localized workforce in a particular part of the world, and it doesn’t matter which it is, but they can have a drastically different idea of what an angry person looks like. You know, not in the extreme cases, because I think that’s fairly obvious to us as human beings, but when you start looking at, like, minute details on a person’s face, something which to me clearly indicates irritation, to another person might be, you know, just the way people look. People in Eastern Europe are, and I don’t want to sound controversial or anything, perhaps not the most frequently smiling people; we don’t walk around smiling all day long. So to us, that could very well mean, well, no, this person is having a normal day.

Michael Waitze 25:02
Yeah, you could be the happiest person in the world, but you don’t have to be smiling about it. Look, I was at a tech conference in Vietnam at the beginning of January. And I mean, my first time in Vietnam was 1990, or 1991, excuse me, but for a lot of people that were there, this was their first time. And they went on a tour of some tech companies and some startup companies, and one of the things they said was, it was really strange: like, around 4:30, they would just turn off the lights in the office, and everybody would go to sleep for, like, 40 minutes. Yeah, you would never do that in Germany, but it doesn’t make them bad workers at all. That’s just what they do. And maybe they wake up after that and work for another six hours, until, like, 11 o’clock at night. So yeah, there are tons of ways that bias can get introduced into this process. But there’s more than that, too, right? So Humans in the Loop is staffing all over the world, right? And the mission here, again, tell me if I’m wrong, is to take these people that may not be involved in this work. But maybe you just want to run through what this is, so people can get a better understanding of it, and then how you’re trying to interdict, to try to take some of this bias out of this process and make it better, I don’t know what the right word is. Go ahead.

Gavril Gavrilov 26:16
Sure, sure. So, Humans in the Loop as an organization is kind of a hybrid enterprise. We have a for-profit company which is operating in the AI space, so to speak, and we provide data annotation, data collection, and live model monitoring services. The way we’re structured is we contract with our annotators, and we source them based on the fact that these people have been affected by war and conflict and natural disasters in their place of residence. So the majority of our annotators have been affected by these horrible things, and they are unable to find, you know, dignified means of employment, something that would allow them to get their lives back on track and to lead a happy life. So our founder’s idea, Iva Gumnishka, when she set up Humans in the Loop back in 2017, was: well, why not bring these people closer to these job opportunities, not only in the IT world, but in one of the most cutting-edge areas within it, which is the development of artificial intelligence and machine learning. And the most, I guess, straightforward way for that to happen was to basically connect these people with these data collection and data annotation projects that we’ve been working on for the past five, close to six years now. And we do have a global workforce, as you rightly put it. At the moment, we’re focused on working through partnerships with partnering organizations who are based within these affected regions, such as Syria, Yemen, Iraq, Afghanistan. We’re launching an amazing pilot program with a partnering organization in Ukraine, and another partnering organization in the Democratic Republic of Congo, which we hope to kind of spin up in the coming weeks and months. And the makeup of our annotation workforce is super diverse. We have, you know, obviously representation from people from these regions, and there’s a very significant percentage of women within our workforce. And that, in and of itself, kind of immediately sets us up to be a, let’s say, more ethical, more fair, more diverse, bias-free service provider.

Michael Waitze 29:05
You’re already taking a vulnerable population, right? So you said Syria and Yemen, and the first thing that comes to my mind is just war, right? And Ukraine, obviously, we don’t need to go into that, but same type of thing. And then inside of those vulnerable populations, most of them are women and families that have been affected by war, correct? What is the implication for them of now being involved in this? Does it make them more of a target in the places where they live? Does it make them more elite in the places where they live? Like, how does that impact their day-to-day lives as well, when people know? You know what I mean? Like, when I was a kid, if you lived in Armonk, New York, and you worked at the IBM Research Center, people thought about you in a particular way. Sure, what are the implications there as well? Right, because you’re trying to create this community of sort of ethical people and ethical work, and trying to make the artificial intelligence world itself more ethical, and then also trying to help these people in war-torn places. Or, what do you call them? Conflict-affected? Which is a good word.

Gavril Gavrilov 30:07
Conflict-affected, yeah.

Michael Waitze 30:10
What is the implication of that as well? I’m curious, if anything.

Gavril Gavrilov 30:15
So yeah, and I think your question is quite astute, because definitely there are risks involved in any kind of activity within particular areas in those broader regions, particularly if people know that you’re steadily employed, that you have access to some money through your job, which is coming from the outside and is not generated within the particular area. So we’re very careful to work very closely with our partnering organizations, and we’re very flexible, because this is part of our mission. This is the reason for the existence of Humans in the Loop; the positive impact we have on the lives of these people is a foremost priority for us. So we are very flexible and work very hard to ensure that all of our beneficiaries, we don’t call them workers, we call them beneficiaries, are able to, first of all, conduct the work safely, wherever possible in locations where the required infrastructure, which would mean computers, screens, connectivity, is steadily available, so that these people don’t have to kind of bring home laptops, which would potentially make them a target for theft or something worse. Again, it all goes through our close relationships with these organizations that work with displaced people and refugees on the ground, and they really have an abundance of knowledge and experience in how to safely tackle these subjects. And the core of our organization is obviously based in Bulgaria, so we also work closely with people that have been displaced and have migrated to Bulgaria, and are now seeking, you know, employment and an improved life in Bulgaria and other countries in the EU as well. So yeah, it’s definitely a topic that is high on our priority list, and I’m happy to say that we have not had any, you know, unfortunate incidents related to how our work is affecting and positioning these people within their society.

Michael Waitze 32:36
Yeah. And look, I feel like at the beginning of any new sort of secular trend, there are always going to be risks, right? And yet the pioneers in these places, whether it’s in Syria, or in Yemen, or in Africa, as you mentioned as well, the first people that do this are really heroic, and very much pioneers. But as it becomes more and more normalized, they run the risk of actually becoming super elite people, right? Because they’re involved in something that’s changing the lives of people not just in their town or in their country, but in the whole world. Here’s the thing, right? If we were having a conversation about artificial intelligence even just a year ago, or two years ago, prior to GPT-3 or DALL-E, like, literally blowing up all over the world, most people wouldn’t even have thought about this, right? Because it had no impact directly on their lives. And even if it did, they wouldn’t have understood how. Right? But now that it has, we have to start thinking about this and having these conversations, no?

Gavril Gavrilov 33:33
Absolutely, absolutely. And these conversations need to be held every day, by people of every walk of life, because that will then channel the, I hate this phrase, general public, but I will resort to it again, the general public’s energy and attention, and keep focus and keep pressure on the industry to not kind of deprioritize those aspects of the development of the technology which are related to, you know, ethics and the true positive impact on our lives. So, for example, if you take Humans in the Loop, we have fairly straightforward guidelines as to which projects we can work on. We don’t work on any content moderation projects, because that has privacy and censorship implications, or connotations. We don’t work on any surveillance or defense-related projects, not least because it would be completely insensitive to the people that constitute our annotation teams. And also, we prioritize projects where the output or the goal or the mission of the underlying machine learning platform or AI model is directly related to creating a positive impact on people and society. So medical projects, crop irrigation projects, geospatial projects related to conservation of natural resources; those are the projects that we heavily prioritize when sourcing new opportunities for us to work on.

Michael Waitze 35:16
Do you work with the big companies, like OpenAI and Google and Microsoft and stuff like that, that are more prominent out there and having these conversations about artificial intelligence and its implications for the rest of the technology world? Or have you not built those relationships yet?

Gavril Gavrilov 35:32
So we definitely do work with them. Unfortunately, as I’m sure you can understand, we’re under pretty strict non-disclosure agreements.

Michael Waitze 35:42
I don’t want to know the answer; I don’t want to know, like, the details about it. I’m just curious if you do work with them, that’s all I want to know.

Gavril Gavrilov 35:46
For sure, for sure, we do work with them. But you have to understand, compared to some of the major players on the market, we are a fairly small organization. And we offset that with our ability to quickly pivot within projects. We are also clearly differentiated by our social mission that’s at the heart of the organization, the makeup of our annotation teams, and our ability to really take on projects that are small in terms of the volume of data that needs to be processed, but rather complex in terms of the understanding that you need in order to do a good annotation job. These are not projects where you would just, you know, put a bounding box on an apple or a strawberry and call it a day, because those are fairly straightforward. These are projects where you need multiple iterations, where you need to really clarify what the designers of the machine learning algorithm consider a good instance of a given object or situation, and to be able to pinpoint those in the raw data.
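One common way to quantify when annotation guidelines need another iteration, not necessarily the one Humans in the Loop uses, is inter-annotator agreement such as Cohen’s kappa: low agreement between two annotators on the same items signals that “what counts as a good instance” is still ambiguous. A minimal sketch:

```python
# Sketch: Cohen's kappa between two annotators' labels on the same items.
# Kappa near 0 means agreement is roughly what chance would produce,
# i.e. the guidelines likely need another clarification pass.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["good", "good", "reject", "good", "reject"]
b = ["good", "reject", "reject", "good", "good"]
print(round(cohens_kappa(a, b), 3))  # 0.167: weak agreement
```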

Michael Waitze 36:51
Do you find you have to change the training of the beneficiaries? Do I have that name right? Absolutely. Depending on where they’re based?

Gavril Gavrilov 37:03
Um, that’s an interesting question. So, obviously, because of the different cultural and language makeup of our group of beneficiaries, we do have to adapt, also because of the availability of resources. So sometimes we can do one-to-one training sessions, you know, in a completely normal, corporate-style online session, with people joining in from different places. Other times, we have to take great efforts to organize and to bring people to a central location, where we can do a one-to-many-style training session. So yeah, we’re happy to adapt. And again, it’s part of what sets us apart from the competition, who don’t really have that capability, just because their annotation forces are that large.

Michael Waitze 37:50
Okay, Gary, look, I want to let you go, but I want to offer you, and I offered this to Iva as well, I mean, I last spoke to her, she introduced me to you, right? So obviously I invited her to be on; she said, talk to Gary instead. But what I want to do is offer to both of you the opportunity, as things arise, as new topics come up, as you go to more conferences and you want to talk about specific topics, I highly encourage you to reach out to me. And if you have any partners that want to come on the show to talk about this, we can continue this conversation about artificial intelligence and the ethics around it, about ethical AI. I’d love to have more people on to have more of these conversations. I’m offering that to you, and I want to do it on the recording, so people know that they have a place to come to listen to it. Because I do think we need to continue to have these conversations, so the general public can be more informed about what’s happening here, not just about the sort of impact it has on their own lives, but the impact it has on society and on technology as a whole. So I do want to let you go; I really want to thank you for doing this. I love these conversations; I learn something every time I talk about this, and I get to kind of reframe the way I think about it. So I really appreciate it. Was there something else you wanted to mention, or…

Gavril Gavrilov 39:01
I would like, on my side, to just thank you for inviting me, and for giving us as an organization the opportunity to talk about these topics and to put our name out there. We’re more than happy to come on the podcast again and talk about more topics, because really, as a society, as a global society, we’re just scratching the surface right now. And we at Humans in the Loop are very focused on, you know, this concept of always keeping the human relevant, keeping a human in the loop. And this is what our development as an organization, and the portfolio of services that we offer, is going to follow in the next weeks and months of this year: offering tools to our customers to have an easy way of having human verification for their already trained and working AI models. And I’d be very happy to come on again, or to invite my colleagues, Iva and our Chief Technical Officer as well, so we can talk about these topics in more detail.
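As a rough sketch of what human verification for an already-deployed model can look like, the pattern Gary alludes to here, consider confidence-based routing: predictions below a threshold go to a human review queue instead of being accepted automatically. The threshold, labels, and queue below are hypothetical stand-ins, not a description of Humans in the Loop’s actual tooling.

```python
# Hedged sketch of human-in-the-loop verification for a live model:
# confident predictions pass through; uncertain ones go to a human.
def route_prediction(label, confidence, threshold=0.9):
    """Accept confident predictions; send the rest to human review."""
    if confidence >= threshold:
        return ("auto_accept", label)
    return ("human_review", label)

review_queue = []
for label, conf in [("cat", 0.97), ("dog", 0.62), ("cat", 0.88)]:
    decision, label = route_prediction(label, conf)
    if decision == "human_review":
        review_queue.append((label, conf))

print(review_queue)  # items a human annotator verifies and feeds back
```

The verified items can then be fed back as fresh training data, which is what keeps the human “in the loop” rather than a one-off labelling step.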

Michael Waitze 40:06
Thank you again, Gavril Gavrilov, Gary, the Chief Commercial Officer of Humans in the Loop. I really appreciate you joining us tonight.

Gavril Gavrilov 40:14
Thank you very much, Michael. Appreciate it as well. I wish you all the best and look forward to speaking again.

 
