DeepMind’s Lila Ibrahim: ‘It’s hard not to go through impostor syndrome’


Lila Ibrahim is the first chief operating officer of DeepMind, one of the world’s most famous artificial intelligence companies. She has no formal background in artificial intelligence or research, the company’s core business, yet she oversees half of its workforce: a global team of about 500 people, including engineers and scientists.

They work on a single, somewhat amorphous mission: to build artificial general intelligence, a powerful machine version of the human brain that can advance science and humanity. Her job is to turn that vision into an organized operation.

“It’s hard not to go through impostor syndrome. I’m not an AI expert, and here I am working with some really smart people… There are times when I struggle to understand anything beyond the first six minutes of some of our research meetings. But I wasn’t hired as an AI expert; I was appointed to bring my 30 years of experience, my human side in understanding technology and its impact, and to do so in a brave way to help us achieve this ambitious goal.”

The 51-year-old Lebanese-American engineer joined DeepMind in 2018, moving her family to London from Silicon Valley, where she was COO of online education company Coursera, after 20 years at Intel. Before leaving Intel in 2010, she was chief of staff to CEO Craig Barrett in an 85,000-person organization, and had just had twins.

As an Arab-American in the Midwest and an engineer, Ibrahim was “always an outsider.” At DeepMind, too, she is an anomaly: she came from the corporate world, and has worked in Tokyo, Hong Kong, and Shanghai. She also runs a non-profit organization, Team4Tech, which recruits volunteers from the technology industry to improve education in the developing world.

DeepMind, based in King’s Cross, London, is run by Demis Hassabis and a predominantly British leadership team. During her three years there, Ibrahim has overseen the doubling of its workforce to more than 1,000 people across four countries, and is tackling some of the toughest questions in AI: how do you achieve breakthroughs with commercial value? How do you expand your talent pipeline in the most competitive tech job market? And how do you build responsible and ethical AI?

Ibrahim’s first challenge was how to measure the success and value of an organization that does not sell tangible products. Google acquired DeepMind in 2014 for £400m, and the company lost £477m in 2019. Its £266m of revenue that year came from other Alphabet companies such as Google, which pays DeepMind for commercially applicable AI developed internally.

“Having sat on the board of a public company before, I know the pressure Alphabet is under. From my experience, when organizations focus on the short term, they often falter. Alphabet has to think about value in both the short term and the long term,” says Ibrahim. Alphabet sees DeepMind as an investment in the future of artificial intelligence that also returns some commercial value. “Take WaveNet, a DeepMind technology now embedded in Google products [such as Google Assistant] and in Project Euphonia.” The latter is a text-to-speech service through which ALS [motor neuron disease] patients can preserve their voices.

These applications are developed primarily by the DeepMind for Google team, which commercializes its AI exclusively for Google’s business.
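For readers curious how WaveNet reaches developers in practice, Google exposes WaveNet voices through its Cloud Text-to-Speech API. The sketch below is illustrative, not DeepMind’s internal integration; it assumes the google-cloud-texttospeech Python client is installed and Google Cloud credentials are configured.

```python
# Minimal sketch: synthesize speech with a WaveNet voice via
# Google Cloud Text-to-Speech. Assumes google-cloud-texttospeech
# is installed and application credentials are set up.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Hello from a WaveNet voice."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        name="en-US-Wavenet-D",  # one of the WaveNet-generated voices
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)

# The API returns raw audio bytes; write them out as an MP3 file.
with open("wavenet_sample.mp3", "wb") as f:
    f.write(response.audio_content)
```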

She asserts that DeepMind has as much autonomy from its parent company as it “needs for now”, for example in structuring its own performance management goals. “I have to tell you, when I joined I was curious: would there be some tension? There was none,” she says.

Another important challenge is hiring researchers in a competitive job market, where companies such as Apple, Amazon, and Facebook all compete for AI scientists. Anecdotally, top researchers in the field are said to be paid around £500,000, with a handful earning millions. “DeepMind’s [pay is] competitive, regardless of your level and position, but it’s not the only reason people stay,” Ibrahim says. “Here, people care about the mission, and they see how the work they do advances the mission [of building artificial general intelligence], not only by itself but as part of a larger effort.”

The third challenge Ibrahim has focused on is translating ethical principles into the practical aspects of AI research at DeepMind. Researchers are increasingly highlighting the risks posed by artificial intelligence, such as autonomous killer robots, and issues such as the replication of human biases and the violation of privacy through technologies such as facial recognition.

Ibrahim has always been driven by the social impact of technologies. She worked at Intel on projects such as bringing the internet to isolated populations in the Amazon rainforest. “When I interviewed with Shane [Legg, DeepMind co-founder], I came home and thought: can I work for this company and put my twin daughters to sleep at night, knowing what their mom is working on?”

DeepMind’s sister company, Google, has faced criticism for how it handles ethical concerns in artificial intelligence. Last year, Google was accused of forcing out two ethical AI researchers, Timnit Gebru and Margaret Mitchell, after they suggested that AI language processing (of a kind also developed by Google) could reflect human language biases. (Google described Gebru’s departure as a “resignation”.) The public fallout led to a crisis of faith in the AI community: do tech companies such as Google and DeepMind understand the potential harms of AI, and do they have any intention of mitigating them?
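The bias those researchers pointed to is easy to demonstrate. Below is a toy probe, assuming the Hugging Face transformers library and the publicly available bert-base-uncased model (an illustrative choice, not a system named in the article): asking a masked language model to fill in a pronoun often surfaces occupational gender associations absorbed from its training text.

```python
# Toy bias probe: a masked language model fills in a pronoun,
# revealing gender associations learned from its training data.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for sentence in [
    "the doctor said [MASK] was running late.",
    "the nurse said [MASK] was running late.",
]:
    top = fill(sentence)[0]  # highest-probability completion
    print(f"{sentence} -> {top['token_str']} (p={top['score']:.2f})")
```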

To this end, Ibrahim established an internal community impact team drawn from a variety of disciplines. It meets with the company’s core research teams to discuss the risks and impacts of DeepMind’s work. “You have to constantly revisit the assumptions . . . the decisions you have made, and update your thinking accordingly,” she says.

She adds: “If we don’t have the expertise around the table, we bring in experts from outside DeepMind. We’ve brought in people from the security and privacy space, bioethicists and social psychologists. It was a cultural hurdle for [scientists] to open up and say, ‘I don’t know how this could be used, and I’m almost afraid to guess, because what if I get it wrong?’ We have done a lot to make these meetings psychologically safe.”

DeepMind hasn’t always been so careful: in 2016, it developed a highly accurate AI lip-reading system trained on videos, with potential applications for deaf and hard-of-hearing people, but it did not acknowledge the surveillance and privacy risks to individuals. Ibrahim says DeepMind now pays more attention to the ethical implications of its products, such as WaveNet, a text-to-speech system. “We have considered the potential opportunities for misuse, where and how we can mitigate them, and restrictions on its applications,” she says.

Part of the work, Ibrahim says, is figuring out what AI should not be applied to. “There are areas where it should not be used. For example, surveillance applications are a concern, [and] lethal autonomous weapons.”

She adds: “I often describe it as a moral calling. Everything I’ve done has prepared me for this moment, to work on the most advanced technology to date, and [on] understanding . . . how it can be used.”

Three questions for Lila Ibrahim

Who is your champion?

Craig Barrett. I was chief of staff at Intel, and he was CEO at the time. He followed in the footsteps of Bob Noyce, Andy Grove and Gordon Moore . . . they were the legends of the semiconductor industry. Together, we did a lot of groundbreaking work, like figuring out how to bring the internet to remote parts of the world that didn’t have access before. He would say, “If someone is giving you trouble, have them come and talk to me, because I have your back.”

What was the first leadership lesson you learned?

There were a lot of people within the organization who were questioning [my work]. I was having trouble with some of [Barrett’s] direct reports and executives. He sat me down and said, “Lila, trailblazers always end up with more arrows in their back than in their front, because everyone is always trying to catch up. Let me take out those arrows so you can run farther and faster.” It’s the way I lead: I want people not to be afraid to make mistakes. The reason I’m able to do this is because, early in my career, a champion boss did it for me.

If you weren’t a CEO/leader, what would you be?

The first job I wanted was President of the United States, but these days it would perhaps be more of a diplomat. Bringing people together and understanding their differences to move things forward is something I’ve always been passionate about. It is about finding common ground where the differences are obvious.


