Elon Musk finally admits Tesla is building its own custom AI chips

And gives us the news that god-like machines will take over within a decade


Elon Musk has revealed that Tesla, his electric automobile company, is developing its own custom chips for its driverless cars.

Musk revealed the effort at a Tesla party held at the NIPS machine-learning conference. Attendees at the party told The Register that Musk said: "I wanted to make it clear that Tesla is serious about AI, both on the software and hardware fronts. We are developing custom AI hardware chips."

Musk offered no details of his company's plans, but did tell the party: "Jim is developing specialized AI hardware that we think will be the best in the world."

"Jim" is Jim Keller, a well-known chip engineer who was lead architect on a range of silicon at AMD and Apple and joined Tesla in 2016. Keller later joined Musk on a panel discussing AI at the Tesla Party alongside Andrej Karpathy, Tesla’s Director of AI and chaired by Shivon Zilis, a partner and founding member at Bloomberg Beta, a VC firm.

Musk is well known for his optimism about driverless cars and his pessimism about whether AI can be kept safe. At the party he voiced a belief that "about half of new cars built ten years from now will be autonomous," and added that artificial general intelligence (AGI) will arrive in about seven or eight years.

Stephen Merity, an AI researcher at Salesforce who attended the party, described Musk's belief that AGI machines will be replacing human workers before drivers lose their jobs to self-driving vehicles as an "insanely optimistic estimate".

In his remarks, Musk acknowledged that he has "[sounded] the alarm bell again and again about the dangers of AI," and that people have responded with "there he goes again" and "stop being a buzzkill".

"But I think there are a lot of ways that AI can be useful short of being god-like," he concluded.

The Register has sought details from Tesla and will update this story if they are forthcoming. ®
