
Artificial Intelligence

Facial recognition for dogs


Saying that Artificial Intelligence has no limits already sounds like a cliché. And that's even though we are 'only' in 2019 and the world is still far from discovering all the benefits these technologies offer. There are plenty of problems too, although those relate exclusively to the uses we humans make of them.

In any case, to stick to the topic of AI's benefits: an online pet store in Brazil has designed a system that will give people plenty to talk about. It is a facial recognition tool for dogs that lets these pets 'buy' their favorite products.

How does it work?

With the help of Leonardo Ogata, a renowned Brazilian dog trainer, a database was built on the meanings of the facial expressions of domestic dogs of various breeds. With this information, Pet-Commerce's Artificial Intelligence can detect each dog's interests.




To get there, the pets have to watch a video showing the e-commerce site's entire stock on a device with a camera and an internet connection. Based on the animals' reactions, the AI rates the interest they show in each product on a scale ranging from a red bone (little interest) to a green bone (great interest). When the latter happens, the item is automatically added to the shopping cart.

The video has to be played at high volume, and the dogs must be free to walk away if the images on the screen don't hold their attention. For now the system only works with dogs, although the team behind the technology is already developing facial recognition for cats.
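Pet-Commerce hasn't published how its system is built, but the flow described above comes down to scoring each dog's reaction to a product and automatically adding high-interest items to the cart. A minimal sketch of that loop, with hypothetical names and an assumed 0.8 threshold for a 'green bone' (the expression model itself is left as a placeholder):

```python
# Illustrative only: names, threshold, and structure are assumptions,
# not Pet-Commerce's actual implementation.
from dataclasses import dataclass, field


@dataclass
class Cart:
    items: list[str] = field(default_factory=list)


def score_reaction(frames) -> float:
    """Placeholder for the facial-expression model built with trainer
    Leonardo Ogata's database. Returns an interest score in [0.0, 1.0]."""
    raise NotImplementedError


def bone_rating(interest: float) -> str:
    """Map the score onto the bone scale; 0.8 is an assumed cutoff between
    the red (low interest) and green (high interest) ends of the scale."""
    return "green" if interest >= 0.8 else "red"


def run_session(catalog: list[str], capture_frames, cart: Cart) -> None:
    """Play each product in the stock video, score the dog's reaction,
    and auto-add 'green bone' items to the shopping cart."""
    for product in catalog:
        frames = capture_frames(product)  # camera frames recorded while this product is on screen
        if bone_rating(score_reaction(frames)) == "green":
            cart.items.append(product)
```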

With information from E-News

Artificial Intelligence

Facebook blames COVID-19 for reduced action on suicide, self-injury, and child exploitation content


Facebook says that COVID-19 has hindered its ability to remove posts about suicide, self-injury, and child nudity and sexual exploitation.

The social media giant said the decision to send content reviewers home in March had forced it to rely more heavily on tech to remove violating content.

As a result, the firm says it took action on 911,000 pieces of content related to suicide and self-injury in the second quarter of this year — just over half the number of the previous quarter.

On Instagram, the number dropped even further, from 1.3 million pieces of content in Q1 to 275,000 in Q2. Meanwhile, action on Instagram content that sexually exploits or endangers children decreased from 1 million to 479,400.

“With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram,” said Guy Rosen, Facebook's VP of Integrity, in a blog post today.

[Read: Social media firms will use more AI to combat coronavirus misinformation, even if it makes more mistakes]

Facebook said that stretched human resources had also reduced the number of appeals it could offer. In addition, the firm claimed that its focus on removing harmful content meant it couldn't calculate the prevalence of violent and graphic content in its latest community standards report.

More human moderation needed

Facebook did report some improvements in its AI moderation efforts. The company said the proactive detection rate for hate speech on Facebook had increased from 89% to 95%. This led it to take action on 22.5 million pieces of violating content, up from 9.6 million in the previous quarter.

Instagram's hate speech detection rate climbed even further, from 45% to 84%, while actioned content rose from 808,900 to 3.3 million.
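For context, the proactive detection rate in Facebook's enforcement reports is the share of actioned content that its systems flagged before any user reported it. A quick back-of-the-envelope illustration of what a 95% rate means against the 22.5 million figure above (the split between the two buckets is invented for the example):

```python
# Proactive rate = content found by automated systems before any user report,
# divided by all actioned content. The numbers below are illustrative only.
def proactive_rate(found_by_systems: int, reported_by_users: int) -> float:
    return found_by_systems / (found_by_systems + reported_by_users)


# e.g. roughly 21.4M of 22.5M hate-speech actions flagged before a user report:
print(f"{proactive_rate(21_400_000, 1_100_000):.0%}")  # -> 95%
```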

Rosen said the results show the importance of human moderators:

Today’s report shows the impact of COVID-19 on our content moderation and demonstrates that, while our technology for identifying and removing violating content is improving, there will continue to be areas where we rely on people to both review content and train our technology.

In other Facebook news, the company today announced new measures to stop publishers backed by political organizations from running ads disguised as news. Under the new policy, news Pages with these affiliations will be banned from Facebook News. They’ll also lose access to news messaging on the Messenger Business Platform or the WhatsApp business API.

With the US election season approaching, it's going to be a busy few months for Facebook's content moderation team.

Published August 11, 2020 — 18:21 UTC

Thomas Macaulay



Artificial Intelligence

Pinterest improves and expands its skin tone search feature


Pinterest is upgrading its skin tone search feature, which uses machine vision to sort pins in the site’s beauty category by skin tone. The feature launched in the US in 2018 and is now available in the UK, Canada, Ireland, Australia, and New Zealand as well.

The feature is designed to make it easier for users to find content relevant to them, says Pinterest. It's a common problem in the search world that certain queries default to showing white faces. By giving users the option to refine their searches based on skin tone, Pinterest says it helps users find the content they want to see.

The feature is now more prominent when users are searching for content and delivers more accurate results, says Pinterest. The company offers searches like “grey hair on dark skin women,” “blonde hair color ideas for fair skin blue eyes,” and “soft natural makeup for Black women” as examples of the sort of fine-grained results the feature can deliver.

Pinterest's Try On feature lets users try on lipstick shades in AR. Image: Pinterest

Search by skin tone is also now integrated into the company’s augmented reality Try On feature, which lets users search for lipstick shades and try them on in AR. This feature is currently only available in the US but is launching in the UK “in the coming months.”
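Pinterest hasn't shared implementation details, but conceptually the refinement is a filter laid over ordinary search results, using a skin tone range that a vision model has predicted for each pin. A minimal sketch under that assumption (the field names and four-bucket palette are hypothetical):

```python
# Illustrative only: this is not Pinterest's schema or API.
from dataclasses import dataclass

SKIN_TONE_RANGES = ("light", "medium-light", "medium-dark", "dark")


@dataclass
class Pin:
    title: str
    skin_tone: str | None  # predicted by a vision model; None if no skin tone detected


def refine_by_skin_tone(results: list[Pin], selected_range: str) -> list[Pin]:
    """Keep only pins whose predicted skin tone matches the user's selection."""
    if selected_range not in SKIN_TONE_RANGES:
        raise ValueError(f"unknown skin tone range: {selected_range}")
    return [pin for pin in results if pin.skin_tone == selected_range]
```

Since the refinement is also integrated into Try On, the same kind of filter would sit in front of lipstick-shade results as well.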


Artificial Intelligence

UK court rules police use of facial recognition was ‘unlawful’


British police used facial recognition unlawfully, the Court of Appeal ruled today, in a landmark decision that could have a big impact on the technology’s use in the UK.

The judgment stems from a complaint by Cardiff resident Ed Bridges, who said police had scanned his face while he was Christmas shopping, and again when he was at a protest.

Bridges argued that South Wales Police (SWP) had breached his right to privacy, as well as equality and data protection laws. But last September, the UK's High Court ruled against him, finding that police had followed the relevant rules and met the requirements of the Human Rights Act.

Bridges appealed the decision, arguing that SWP’s actions were akin to taking fingerprints or DNA without consent. Bridges was supported by human rights group Liberty, which says the case is the world’s first legal challenge to police use of automated facial recognition (AFR).

[Read: Clearview AI can be fun — if you’re dirty, stinking rich]

Today, the Court of Appeal agreed that police had violated his right to privacy, as well as data protection and equality laws.

The judges said that “too much discretion is currently left to individual police officers,” and that SWP had “never sought to satisfy themselves, either directly or by independent verification, that the software program does not have an unacceptable bias on grounds of race or sex.”

Bridges said he was “delighted” with the decision:

This technology is an intrusive and discriminatory mass surveillance tool. For three years now South Wales Police has been using it against hundreds of thousands of us, without our consent and often without our knowledge. We should all be able to use our public spaces without being subjected to oppressive surveillance.

Future implications for facial recognition

The judges called for changes to the framework that regulates AFR. These could involve amendments to local policy documents, such as those operated by South Wales Police, or to the national Surveillance Camera Code of Practice.

However, they didn't rule that primary legislation (the main laws passed in the UK) was required to regulate AFR in the same way as DNA or fingerprints.

“Instead, the Court has identified the relatively modest changes to the policy framework that are needed in order that live AFR can continue to be used,” said Anne Studd, a senior lawyer at 5 Essex Court who specializes in police law.

“It is noteworthy that this case arose in the course of a pilot of the system by South Wales Police – as part of that trial, through a co-operative and consensual process by which the issues were brought before the Court, the police service has been able to obtain a very helpful decision that maps the way ahead.”

South Wales Police and London’s Metropolitan Police were reportedly the only forces in the UK using AFR. Liberty is now calling for them to stop using the tech entirely.

Published August 11, 2020 — 11:10 UTC

