Lila Ibrahim of DeepMind: “It’s hard not to go through the imposter syndrome”


Lila Ibrahim is the first ever chief operating officer of DeepMind, one of the best-known artificial intelligence companies in the world. She has no formal training in AI or in research, the company's primary occupation, yet she oversees half of its workforce, a global team of about 500 people, including engineers and scientists.

The company is pursuing a singular, rather amorphous mission: to build artificial general intelligence, a powerful machine version of the human brain that can advance science and humanity. Her task is to turn that vision into a structured operation.

“It’s hard not to go through the imposter syndrome. I’m not an AI expert and here I am, working with some super-smart people. . . it took me a while to understand anything beyond the first six minutes of some of our research meetings,” she says. “But I realized that I wasn’t supposed to be that expert. I was supposed to bring my own 30 years of experience, my human aspect of understanding technology and its impact, and to do it in a fearless way to help us achieve this ambitious goal.”

The Lebanese-American engineer, 51, joined DeepMind in 2018, moving her family to London from Silicon Valley, where she had been head of the online education company Coursera, via 20 years at Intel. Before leaving Intel in 2010, she was chief of staff to chief executive Craig Barrett, in an 85,000-person organization, and had just had twins.

As an Arab-American in the Midwest, and a woman engineer, Ibrahim was “always the outsider.” Even at DeepMind she was an outsider: she came from the corporate world, having worked in Tokyo, Hong Kong and Shanghai. She also runs a non-profit, Team4Tech, which recruits technology industry volunteers to improve education in the developing world.

DeepMind, based in King’s Cross in London, is led by Demis Hassabis and a mainly British leadership team. In her three years there, Ibrahim has overseen a doubling of staff to more than 1,000 people across four countries, and faces some of the thorniest questions in AI: how does the company deliver commercial value? How do you expand the talent pipeline in the most competitive job market in technology? And how do you build AI that is responsible and ethical?

Ibrahim’s first challenge was how to measure the success and value of the organization when it does not sell tangible products. Acquired by Google in 2014 for £400 million, the company lost £477 million in 2019. Its revenues of £266 million that year came from other Alphabet companies, such as Google, which pay DeepMind for the commercial AI applications it develops in-house.

“Having sat on public company boards before, I know the pressure that Alphabet is under. In my experience, when organizations focus on the short term, you can often stumble. Alphabet is thinking long term in terms of value,” says Ibrahim. “Alphabet sees DeepMind as an investment in the future of AI, while it delivers some commercial value to the organization. Take WaveNet, which is DeepMind technology now integrated into Google products [such as Google Assistant] and in Project Euphonia.” The latter is a text-to-speech service through which ALS [motor neuron disease] patients can preserve their voice.

These applications are developed primarily through the DeepMind4Google team, which works exclusively on commercializing its AI for Google’s business.

She argues that DeepMind has as much autonomy from its parent company as it needs, setting, for example, its own performance management goals. “I have to say, when I came in I was curious: will there be any tension? And there wasn’t,” she says.

Another significant challenge has been recruiting researchers in a competitive job market, where companies such as Apple, Amazon and Facebook compete for AI scientists. Anecdotally, senior scientists are said to be paid in the region of £500,000, with a few commanding millions. “DeepMind[’s pay] is competitive, whatever your level and position, but it’s not the only reason people stay,” says Ibrahim. “Here, people care about the mission, and see how the work they do moves that mission [of building artificial general intelligence] forward, not only in itself but as part of a greater effort.”

The third challenge on which Ibrahim is focused is translating ethical principles into DeepMind’s AI research practices. Increasingly, researchers are highlighting the risks posed by AI, such as autonomous killer robots, the replication of human bias, and the invasion of privacy through technologies such as facial recognition.

Ibrahim has always been driven by the social impact of technology. At Intel, she worked on projects such as bringing the internet to isolated populations in the Amazon rainforest. “When I had my interview with Shane [Legg, DeepMind co-founder], I went home and thought: could I work at this company and put my twin daughters to sleep at night knowing what mom worked on?”

DeepMind’s sister company, Google, has been criticized for its handling of ethical issues in AI. Last year, Google reportedly forced out two ethical-AI researchers, Timnit Gebru and Margaret Mitchell, who had suggested that language-generation AI (which Google is also developing) can replicate the biases of human language. (Google described Gebru’s departure as a “resignation.”) The public fallout has led to a crisis of faith in the AI community: are technology companies like Google and DeepMind aware of AI’s potential harms, and do they intend to mitigate them?

To that end, Ibrahim set up an internal social impact team drawn from a variety of disciplines. It meets with the company’s key research teams to discuss the risks and impacts of DeepMind’s work. “You have to constantly revisit the hypotheses. . . and the decisions you make, and update your thinking based on that,” she says.

She adds: “If we don’t have the skills around the table, we bring in experts from outside DeepMind. We bring in people from security, privacy, bioethics, social psychologists. There was a cultural barrier for [scientists] to open up and say, ‘I don’t know how this will be used, and I’m almost afraid to guess, so what happens if I’m wrong?’ We’ve done a lot to structure these meetings to be psychologically safe.”

DeepMind has not always been so cautious: in 2016, it developed a hyper-accurate AI system for lip-reading from videos, with possible applications for the deaf and hard of hearing, but did not acknowledge the security and privacy risks it posed to individuals. However, Ibrahim says DeepMind now gives much more consideration to the ethical implications of its products, such as WaveNet, its text-to-speech system. “We’re thinking about the possibilities for abuse, and where and how we can mitigate and limit applications for that,” she says.

Ibrahim says part of the job is knowing what AI should not be used for. “There are areas where it should not be used. For example, surveillance applications are a concern, [and] lethal autonomous weapons.”

She adds: “I often describe it as a calling. Everything I had done prepared me for this moment: to work on the most advanced technology to date, and [on] understanding. . . how it can be used.”

Three questions for Lila Ibrahim

Who is your leadership hero?

Craig Barrett. I was his chief of staff at Intel when he was CEO. He followed in the footsteps of Bob Noyce, Andy Grove and Gordon Moore. . . they were legends of the semiconductor industry. Together, we did a ton of pioneering work, such as bringing internet connectivity to remote parts of the world that had never had access. He would say, “If anyone gives you shit, come talk to me, because I have your back.”

What was the first leadership lesson you learned?

There were many people in the organization questioning [my work]. I had problems with some of [Barrett’s] direct reports, senior executives. He sat me down and said, “Lila, the pioneers always end up with more arrows in their backs than in their fronts, because everyone is trying to catch up.” He said, “Let me take those arrows so you can run farther, faster.” That’s how I lead: I want people to try and not be afraid to make mistakes. The reason I am able to do this is because at the beginning of my career my leadership hero did this for me.

If you weren’t a COO, what would you be?

The first job I ever wanted was to be president of the United States, but today it would probably be more of a diplomat. Bringing people together, and understanding their differences to move things forward, is something I’ve come to realize I’ve always been passionate about. It is about finding common ground where there are differences.


