Boffins try to grok dogs using AI, a cyber-brain charter, a bot running for mayor, and more

Would you vote for a machine for public office?


Roundup Here are a few bits and pieces from this week's news in AI. Researchers have collected a dataset to analyse dog behaviour with neural networks, the FDA has approved the first AI-based medical device for diagnosing diabetic retinopathy, and, finally, an AI is running for mayor in Japan.

Who’s a good doggo? A team of researchers have developed a machine learning model that attempts to predict and understand dog behaviour.

They attached sensors and a GoPro camera to a dog, an Alaskan Malamute called Kelp M. Redmon, to collect video data. The clips show Kelp interacting with the environment around her from a dog's-eye view. Image stills from the video feed are then fed into a convolutional neural network, and the resulting features act as an embedding for an LSTM (long short-term memory network).

The LSTM processes the features of each successive frame in the clip, one time step at a time, and is trained to predict what the dog will do next. For example, given images of a human throwing a ball that bounces past Kelp, the neural network guesses that she will scramble off to the right after the ball.
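The paper's actual code isn't reproduced here, but a minimal sketch of that kind of pipeline, with a ResNet-18 backbone standing in for the frame encoder and an invented set of movement classes (layer sizes and names are illustrative, not the researchers' implementation), might look like this:

    # Sketch only: CNN features for each frame feed an LSTM that guesses the
    # dog's next move. The action classes and sizes here are invented.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class DogActionPredictor(nn.Module):
        def __init__(self, num_actions=8, hidden_size=512):
            super().__init__()
            resnet = models.resnet18()  # swap in pretrained weights in practice
            self.encoder = nn.Sequential(*list(resnet.children())[:-1])  # drop fc
            self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size,
                                batch_first=True)             # temporal model
            self.head = nn.Linear(hidden_size, num_actions)   # next-move logits

        def forward(self, frames):                # frames: (batch, time, 3, H, W)
            b, t = frames.shape[:2]
            feats = self.encoder(frames.flatten(0, 1))  # (batch*time, 512, 1, 1)
            feats = feats.flatten(1).view(b, t, -1)     # (batch, time, 512)
            out, _ = self.lstm(feats)                   # hidden state per frame
            return self.head(out[:, -1])                # predict what comes next

    # e.g. two clips of 16 frames at 224x224
    logits = DogActionPredictor()(torch.randn(2, 16, 3, 224, 224))

In the researchers' setup the prediction targets come from the motion sensors strapped to Kelp rather than a hand-labelled list of actions, which is why no manually labelled data is needed.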

In a paper published on arXiv, the researchers from the University of Washington and the Allen Institute for AI said the work was “a first step towards end-to-end modelling of intelligent agents. This approach does not need manually labeled data or detailed semantic information about the task or goals of the agent.”

Dogs obviously rely on a lot more than vision to navigate the world, so the researchers hope to add other sensory data such as smell or touch. The model is also limited to a single dog, and the team is keen to see whether the approach carries over to other dogs and breeds.

“We hope this work paves the way towards better understanding of visual intelligence and of the other intelligent beings that inhabit our world,” the paper concludes.

The paper will be presented at the Conference on Computer Vision and Pattern Recognition (CVPR) in June.

DeepMind gets a new COO - DeepMind has hired Lila Ibrahim as its first chief operating officer, it announced on Wednesday.

Ibrahim began her career in technology at Intel, working as a microprocessor designer, assembler programmer, and business development manager, and rose to become chief of staff to its CEO and chairman, Craig Barrett. She was also president and COO of Coursera, the online education company offering a variety of courses.

She will work alongside DeepMind’s co-founders: Demis Hassabis, CEO; Shane Legg, chief scientist; and Mustafa Suleyman, head of applied AI.

FDA approves AI medical gizmo for diabetic retinopathy The US Food and Drug Administration has given the green light to the first AI-based medical device that uses algorithms to detect diabetic retinopathy in retinal scans.

The company behind the tool, IDx LLC, calls it IDx-DR. The FDA found it could correctly detect more than mild diabetic retinopathy 87.4 per cent of the time, and could correctly identify patients who did not have the disorder 89.5 per cent of the time.
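Those two figures are, in effect, the device's sensitivity and specificity, which fall straight out of a confusion matrix. A back-of-the-envelope check, using made-up counts chosen only to reproduce the quoted percentages rather than IDx's published trial data, looks like this:

    # Sensitivity/specificity from a made-up confusion matrix; the counts are
    # illustrative only and chosen to land on the percentages quoted above.
    true_positives  = 173   # patients with the condition, flagged by the device
    false_negatives = 25    # patients with the condition, missed by the device
    true_negatives  = 556   # patients without it, correctly cleared
    false_positives = 65    # patients without it, incorrectly flagged

    sensitivity = true_positives / (true_positives + false_negatives)
    specificity = true_negatives / (true_negatives + false_positives)

    print(f"sensitivity: {sensitivity:.1%}")  # ~87.4%
    print(f"specificity: {specificity:.1%}")  # ~89.5%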

“IDx-DR is the first device authorized for marketing that provides a screening decision without the need for a clinician to also interpret the image or results, which makes it usable by health care providers who may not normally be involved in eye care,” the FDA said.

It means IDx can now sell its devices to hospitals and clinics. Retinopathy is a well-trodden area for medicine and AI: even Google has taken a stab at the problem, using machine learning to estimate a patient's risk of heart disease, and even whether or not they're a smoker, from retinal scans with decent accuracy.

Fancy a trip to Korea? If you’re pretty good at TensorFlow and deep learning and want to get away, then maybe consider applying to Deep Learning Camp Jeju on Jeju Island, South Korea.

The month-long bootcamp will let you work on a deep learning project with mentors, alongside about 20 to 30 other participants. If you get accepted, you’ll get a $1,000 (£811.50) stipend, $300 (£243.45) towards your flights, and $1,000 worth of Google Cloud credits with access to its TPUs.

No visa is required if you plan to stay for fewer than 30 days. The event is organised by TensorFlow Korea as a push to advance deep learning in the country.

Previous projects have included computer vision research for self-driving cars, recommender systems, and GANs. It all sounds pretty sweet, and you can apply here.

OpenAI’s AGI strategy OpenAI has published a charter to help guide its long-term mission of creating artificial general intelligence (AGI).

AGI is a contentious topic. Some believe the world is deathly close to developing crazed killer robots (looking at you, Elon, and the late Prof Hawking); others believe it’s a useless term; some think it’s an impossible feat.

The charter is pretty interesting, nevertheless. It’s the first time a major AI research lab has declared it will stop its own push to build AGI if another project gets there first, on the condition that the technology won’t be used maliciously.

“We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be ‘a better-than-even chance of success in the next two years’,” it said.

It also warned that, as safety and security issues escalate alongside AI progress, it may have to be more careful about publishing research so openly in the future.

“We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.”

Other points in the charter cover the usual commitments to building safe AI that benefits humanity.

EU member states sign AI deal - Twenty-five European countries have signed a “Declaration of cooperation on Artificial Intelligence”.

Under the deal, signatories promise to work together on the most pressing issues raised by AI, including ethical and legal questions, competitiveness in research, and where and how the technology should be deployed. It should also mean more funding for research, development, and industry.

Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Estonia, Finland, France, Germany, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Poland, Portugal, Slovakia, Slovenia, Spain, Sweden, the UK, and Norway all signed the agreement.

Assess your algorithms The AI Now Institute at New York University has published a framework to help companies and public agencies assess the impact of their algorithms.

The Algorithmic Impact Assessment (AIA) report can be summed up in five points:

  1. Agencies should conduct a self-assessment of existing and proposed automated decision systems, evaluating potential impacts on fairness, justice, bias, or other concerns across affected communities.
  2. Agencies should develop meaningful external researcher review processes to discover, measure, or track impacts over time.
  3. Agencies should provide notice to the public disclosing their definition of “automated decision system,” existing and proposed systems, and any related self-assessments and researcher review processes before the system has been acquired.
  4. Agencies should solicit public comments to clarify concerns and answer outstanding questions.
  5. Governments should provide enhanced due process mechanisms for affected individuals or communities to challenge inadequate assessments or unfair, biased, or otherwise harmful system uses that agencies have failed to mitigate or correct.

Although the AIA is inspired by impact assessments in other areas, such as environmental protection, data protection, privacy, and human rights, it can’t be legally enforced, so it relies on the goodwill of organizations.

Despite this, Jason Schultz, professor of clinical law at NYU and a senior advisor on technology policy in the White House under Obama, told The Register he believes many companies will happily audit their own algorithms.

“The pressure for algorithmic accountability has never been greater, especially for public agencies. We believe that it’s urgent that public agencies begin evaluating algorithmic decision-making with the same level of scrutiny as these other areas [such as] environmental effects, human rights, data protection, privacy, etc."

"And lawmakers are finally beginning to take this issue seriously. So I would anticipate many agencies adopting these frameworks voluntarily or with minimal policy interventions. Otherwise, they run huge risks of inflicting harms on the very people they are meant to serve and ultimately undermining public trust in government to help improve our lives with new technologies."

“Ultimately, this will require a multi-pronged approach. Legislation will likely be part of that. We also believe it’s imperative for companies developing these systems to take responsibility for providing transparency and ensuring that they do not create unintended harm.”

The AI Now Institute also calls for an “independent, government-wide oversight body” to handle third-party auditing and avoid any conflicts of interest.

An AI overlord In other news: an AI is running for mayor of Tama City, Japan.

Michihito Matsuda is a unique mayoral candidate. He? She? It? The candidate really is different from all the other politicians. (Matsuda, dressed all in silver, has quite feminine features, so El Reg will refer to her as such for now.) She isn’t even human, for god’s sake, but her supporters are.

Tetsuzo Matsumoto, a senior advisor to Softbank, and Norio Murakami, an ex-representative for Google Japan, are apparently fans, according to Otaquest.

Remember when Saudi Arabia granted Sophia, a bald, creepy robot, citizenship? This could be the beginning of the end for politicians; wouldn't that be a shame. ®
