Stanford AI leader Fei-Fei Li building 'spatial intelligence' startup

Reuters | 05-05 00:20

Prominent computer scientist Fei-Fei Li is building a startup that uses human-like processing of visual data to make artificial intelligence (AI) capable of advanced reasoning, six sources told Reuters, in what would be a leap forward for the technology.

Li, considered a pioneer in the AI field, raised money for the company in a recent seed funding round. Investors included Silicon Valley venture firm Andreessen Horowitz, three of the sources said, and Radical Ventures, a Canadian firm she joined as a scientific partner last year, according to two others.

Spokespeople for Andreessen Horowitz and Radical Ventures declined to comment. Li did not respond to requests for comment.

Li is widely known as the "godmother of AI," a title derived from the "godfathers" moniker often used to refer to a trio of researchers who won the computing world's top prize, the Turing Award, in 2018 for their breakthroughs in AI technology.

In describing the startup, one source pointed to a talk Li gave at the TED conference in Vancouver last month, in which she said the cutting edge of research involved algorithms that could plausibly extrapolate what images and text would look like in three-dimensional environments and act upon those predictions, using a concept called "spatial intelligence."

To illustrate the idea, she showed a picture of a cat with its paw outstretched, pushing a glass toward the edge of a table. In a split second, she said, the human brain could assess "the geometry of this glass, its place in 3D space, its relationship with the table, the cat and everything else," then predict what would happen and take action to prevent it.

"Nature has created this virtuous cycle of seeing and doing, powered by spatial intelligence," she said.

Her own lab at Stanford University was trying to teach computers "how to act in the 3D world," she added, for example by using a large language model to get a robotic arm to perform tasks like opening a door and making a sandwich in response to verbal instructions.

Li made her name in the AI field by developing a large-scale image dataset called ImageNet that helped usher in a generation of computer vision technologies that could identify objects reliably for the first time.

She co-directs Stanford's Human-Centered AI Institute, which focuses on developing AI technology in ways that "improve the human condition." In addition to her academic work, Li led AI at Google Cloud from 2017 to 2018, served on Twitter's board of directors and has done stints advising policymakers, including at the White House.

Li has lamented a funding gap on AI research between a well-resourced private sector on one side and academics and government labs on the other, calling for a "moonshot mentality" from the U.S. government to invest in scientific applications of the technology and research into its risks.

Her Stanford profile says she is on partial leave from the beginning of 2024 to the end of 2025. Among the research interests listed on her profile are "cognitively inspired AI," computer vision and robotic learning.

On LinkedIn, she lists her current job as "newbie" at "something new," starting in January 2024.

By jumping into the startup world, Li is joining a race among the hottest AI companies to teach their algorithms common sense in order to overcome the limitations of current technologies like large language models, which have a tendency to spit out nonsensical falsehoods in the midst of otherwise dazzling human-like responses.

Many say this ability to "reason" must be established before AI models can achieve artificial general intelligence, or AGI, the threshold at which a system can perform most tasks as well as or better than a human.

Some researchers believe they can improve reasoning by building bigger and more sophisticated versions of the current models, while others argue the path forward involves the use of new "world models" that can ingest visual information from the physical environment around them to develop logic, replicating how babies learn.
