Watch Waymo's totally driverless self-driving car cruise around, how the US military wants to use AI ethically, etc

Kick off your Monday with machine-learning news


Roundup Hello, here’s a short but sweet roundup of news from the world of machine learning, beyond what we have already covered on El Reg.

Microsoft funded an AI startup that spies on Palestinians: Microsoft invested in AnyVision, a company that supports a secret Israeli military project that surveils Palestinians travelling within the West Bank.

Palestinians living in the contentious region occupied by Israel can travel only via designated checkpoints. The Israeli government also tracks their movements using CCTV cameras as they walk through East Jerusalem.

AnyVision supplies facial-recognition software to the Israeli government, helping it surveil Palestinians. The military project, supposedly codenamed “Google Ayosh”, was used to search for specific people by matching faces spotted on the cameras against a database of known individuals, according to NBC News. Despite the name, Google is not involved with AnyVision at all.
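For the curious, here’s a minimal sketch of how watchlist-style face matching generally works: a trained neural network converts each face crop into an embedding vector, and that vector is compared against a database of enrolled faces using a similarity score. This is a generic Python illustration with stand-in random vectors, made-up names, and an arbitrary threshold, not AnyVision’s actual code.

# A generic, illustrative sketch of watchlist-style face matching -- NOT
# AnyVision's actual system. In practice a trained neural network turns
# each face crop into an embedding vector; here we use stand-in random vectors.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical watchlist: one 128-dimensional embedding per enrolled person.
watchlist = {
    "person_a": rng.normal(size=128),
    "person_b": rng.normal(size=128),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(embedding: np.ndarray, threshold: float = 0.6):
    """Return the best watchlist match above the similarity threshold, if any."""
    best_name, best_score = None, threshold
    for name, enrolled in watchlist.items():
        score = cosine_similarity(embedding, enrolled)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# A face "spotted on camera": simulated as a noisy copy of person_a's embedding.
camera_face = watchlist["person_a"] + rng.normal(scale=0.1, size=128)
print(match_face(camera_face))  # e.g. ('person_a', 0.99)

In a real deployment the embeddings come from a face-recognition model rather than a random generator, and the threshold is tuned to trade false matches against misses.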

Microsoft’s venture capital biz M12 announced earlier this year that it had partnered with Silicon Valley VC firm DFJ to plow $74m into a Series A funding round for the startup. The move is at odds with Microsoft’s self-declared AI principles of using the technology for good. Brad Smith, Microsoft’s president, has even urged governments around the world to start regulating facial recognition.

“Governments and the tech sector both play a vital role in ensuring that facial recognition technology creates broad societal benefits while curbing the risk of abuse,” he said in December last year.

“The use of facial recognition technology by a government can encroach on democratic freedoms and human rights. Democracy has always depended on the ability of people to assemble, to meet and talk with each other and even to discuss their views both in private and in public. This in turn relies on the ability of people to move freely and without constant government surveillance.”

But surprise, surprise: Microsoft has failed to put its money where its mouth is.

Driverless Waymo cars spotted: Here’s some video footage of a Waymo-branded car that appears to be driving on the road with no human inside at all.

[Embedded YouTube video]

An unnamed driver followed the vehicle, which had blacked-out windows, and at about 35 seconds into the video you can see that there is, indeed, no one sitting in it. No driver. No passengers. It’s kind of creepy, actually.

The video is proof of some of Waymo’s self-driving capabilities. The car is probably operating at Level 4 autonomy, where it can drive without human assistance, but only under strict conditions, such as along specific designated routes.

The person recording the video doesn’t disclose exactly where it was taken, though it’s somewhere in Phoenix, Arizona. It appears to be a quiet residential area, and at 35 seconds in you can see a street sign that reads “Flamingo”.

US Department of Defense has drafted its own AI principles: Okay, back to AI ethics. The US military’s advisers have approved a report outlining a series of guidelines drafted by experts from Silicon Valley and the armed forces.

The Defense Innovation Board, set up to advise the DoD on the latest technology, wrote the report recommending how the military should use AI ethically, and voted to adopt its principles this week, according to VentureBeat.

The report centers on five main principles: responsible, equitable, traceable, reliable, and governable.

  • Responsible: Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of DoD AI systems.
  • Equitable: DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons.
  • Traceable: DoD’s AI engineering discipline should be sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes, and operational methods of its AI systems, including transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: DoD AI systems should have an explicit, well-defined domain of use, and the safety, security, and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.
  • Governable: DoD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.

You can read the whole report here [PDF]. ®

