Mozilla launches a new startup focused on 'trustworthy' AI

On the eve of its 25th anniversary, Mozilla, the not-for-profit behind the Firefox browser, is launching an AI-focused startup.

Called Mozilla.ai, the newly forged company's mission isn't to build just any AI — its mission is to build AI that's open source and "trustworthy," according to Mark Surman, the executive president of Mozilla and the head of Mozilla.ai.

"Working on trustworthy AI for almost five years, I’ve constantly felt a mix of excitement and anxiety," he told TechCrunch in an email interview. "The last month or two of rapid-fire big tech AI announcements has been no different. Really exciting new tech is emerging -- new tools that have immediately sparked artists, founders...all kinds of people to do new things. The anxiety comes when you realize almost no one is looking at the guardrails."

Surman was referring to the rash of AI models in recent months that, while impressive in their capabilities, have worrisome real-world implications. At release, OpenAI's text-generating ChatGPT could be prompted to write malware, identify exploits in open source code and create phishing websites that looked similar to well-trafficked sites. Text-to-image AI like Stable Diffusion, meanwhile, has been co-opted to create pornographic, nonconsensual deepfakes and ultra-graphic depictions of violence.

The creators of these models say that they're taking steps to curb abuse. But Mozilla felt that not enough was being done.

"We’ve been working on trustworthy AI on the public interest research side for about five years, hoping other industry players with more AI expertise would step up to build more trustworthy tech," Surman said. "They haven’t. So we decided mid-last year we needed to do it ourselves -- and to find like-minded partners to do it alongside us. We then set out to find someone with the right mix of academic and industry AI experience to lead it."

Funded by a $30 million seed investment from the Mozilla Foundation, Mozilla's parent organization, Mozilla.ai is a wholly owned subsidiary of the foundation -- much like the Mozilla Corporation (the org responsible for developing Firefox) and Mozilla Ventures (the Mozilla Foundation's VC fund). Its managing director is Moez Draief, who previously was the chief scientist at Huawei's Noah's Ark AI lab and the global chief scientist at the consulting company Capgemini.

Harvard’s Karim Lakhani, Credo’s Navrina Singh and Surman will serve as Mozilla.ai's initial board members. Lakhani is the chair and co-founder of the Digital, Data and Design Institute at Harvard, while Singh is a member of the U.S. Department of Commerce's National AI Advisory Committee, which advises the president on a range of ethical AI issues.

Surman describes Mozilla.ai as part research firm, part community -- a startup dedicated to helping create a trustworthy, independent open source AI stack. Initially, Mozilla.ai's priority will be building a team of around 25 engineers, scientists and product managers to work on "trustworthy" recommendation systems and large language models along the lines of OpenAI's GPT-4. But the company's broader ambition is to establish a network of allied companies and research groups -- including Mozilla Ventures–backed startups and academic institutions -- that share its vision.

"We think there is a commercial market in trustworthy AI -- and that this market needs to grow if we want to shift how the industry builds AI into the apps, products and services we all use every day," Surman said. "Mozilla.ai -- working loosely with many allied companies, researchers and governments -- [has] the opportunity to collectively create a 'trust first' open source AI stack. If we’re successful, the mainstream of industry would pull from this stack as a part of their regular toolkit, just as they have with the Linux and Apache stack over the last two decades."

Mozilla.ai won't be going it alone -- not entirely. Several nonprofits are on a mission to democratize AI tools, including the recently formed EleutherAI Institute, funded by corporate backers including Canva and Hugging Face. There's also the Allen Institute for AI, founded by the late Microsoft co-founder Paul Allen, and the Alan Turing Institute. Smaller promising efforts include AI startup Cohere’s Cohere For AI and Timnit Gebru’s Distributed AI Research, a global decentralized research organization.

Tellingly, Mozilla.ai isn't a nonprofit. While it's bound to certain ethical principles (namely the Mozilla Manifesto), it's open to spinning out -- and indeed aims to spin out -- its more successful explorations into products and companies in addition to open source projects.

Draief sees this as a plus rather than a disadvantage, arguing that it gives Mozilla.ai flexibility that nonprofits lack. To his point, there are cautionary tales like OpenAI, which was founded as a nonprofit in 2015 but later transitioned to a "capped-profit" structure in order to fund its ongoing research.

"The big question is, how many of the newer, smaller trustworthy AI startups will be able to stay independent?" Draief told TechCrunch via email. "It’s clear that the big players -- especially the cloud platforms from Amazon, Google and Microsoft -- are rushing to consolidate the AI space. This is where all the money is getting made. And it will be hard for small companies not to get vacuumed into this consolidation."

Chasing after the AI research trends of the day -- and, not coincidentally, the better-funded areas of research -- Mozilla.ai will spend the next few months developing tools that, for example, let users interrogate the sources behind the answers that AI chatbots give them. The company will also seek to create systems that give users more control over content recommendation AI (i.e., the algorithms that drive YouTube, Twitter and TikTok feeds), like systems that optimize a recommender for individual or community values -- building on Mozilla's existing research.

Draief doesn't pretend that shifting the AI stack in a meaningful way will be a speedy process. While he pledges that Mozilla.ai will ship code "this year," he speaks in terms of multiple years.

But measurable success will require more than time.

If history is any indication, voluntary frameworks and one-off tools won't move the needle much, if at all. Mozilla.ai's challenge will be convincing the industry that its vision of trustworthy AI is the right one -- and persuading it to adopt that vision.

"Trustworthy AI features like these feel like they should be trivial to add -- but we still mostly see them in the lab," Draief said. "Mozilla.ai will work with researchers to turn their work into working code and make it possible to use it in concert with more traditional AI tools."