
Artificial Intelligence

Are Human Resources managers ready for the new era?


It's no secret: the world is moving inevitably toward digital transformation. Practically no sector will be immune to these changes, and those that adapt fastest to the new landscape will gain the greatest advantage.

Human resources management in small, medium, and large companies will be no exception. In fact, a large majority of those in leadership positions in the field say they are aware of this shift. Yet according to a study conducted by KPMG and reported by Factor Capital Humano, as many as 60% of HR managers do not have a digital transformation plan.

Strikingly, only 37% of respondents described themselves as “very confident” in their own ability to handle the changes. In Latin America, the same research found that 74% of personnel chiefs acknowledge the need to change how the workforce is managed, yet only 38% trust their own capacity to do so.

The role of Artificial Intelligence

For 42% of talent managers, Artificial Intelligence already plays a decisive role in recruiting and training staff, a factor they expect to grow exponentially over the next five years.

At the same time, 60% of respondents view the adoption of these technologies with pessimism, arguing that in the end AI will eliminate more jobs than it creates.


YOU MAY ALSO BE INTERESTED IN:

Coming soon: the first whiskey created by Artificial Intelligence

Artificial Intelligence

Facebook blames COVID-19 for reduced action on suicide, self-injury, and child exploitation content


Facebook says that COVID-19 has hindered its ability to remove posts about suicide, self-injury, and child nudity and sexual exploitation.

The social media giant said the decision to send content reviewers home in March had forced it to rely more heavily on tech to remove violating content.

As a result, the firm says it took action on 911,000 pieces of content related to suicide and self-injury in the second quarter of this year — just over half the number of the previous quarter.

On Instagram, the number dropped even further, from 1.3 million pieces of content in Q1 to 275,000 in Q2. Meanwhile, action on Instagram content that sexually exploits or endangers children decreased from 1 million to 479,400.

“With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram,” said Guy Rosen, Facebook's VP of Integrity, in a blog post today.

[Read: Social media firms will use more AI to combat coronavirus misinformation, even if it makes more mistakes]

Facebook said that stretched human resources had also reduced the number of appeals it could offer. In addition, the firm claimed that its focus on removing harmful content meant it couldn’t calculate the prevalence of violent and graphic content in its latest community standards report.

More human moderation needed

Facebook did report some improvements in its AI moderation efforts. The company said the proactive detection rate for hate speech on Facebook had increased from 89% to 95%. This led it to take action on 22.5 million pieces of violating content, up from 9.6 million in the previous quarter.

Instagram's hate speech detection rate climbed even further, from 45% to 84%, while actioned content rose from 808,900 to 3.3 million.
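For readers unfamiliar with the metric, the proactive detection rate is the share of actioned content that Facebook's automated systems flagged before any user reported it. A minimal sketch of the arithmetic (the function name and figures below are illustrative, not taken from the report):

def proactive_rate(flagged_by_systems: int, reported_by_users: int) -> float:
    """Share of actioned content caught by automated systems before a user report."""
    total_actioned = flagged_by_systems + reported_by_users
    return flagged_by_systems / total_actioned if total_actioned else 0.0

# Illustrative only: a 95% proactive rate means 95 of every 100 actioned posts
# were flagged by automated systems before anyone reported them.
print(f"{proactive_rate(95, 5):.0%}")  # -> 95%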

Rosen said the results show the importance of human moderators:

Today’s report shows the impact of COVID-19 on our content moderation and demonstrates that, while our technology for identifying and removing violating content is improving, there will continue to be areas where we rely on people to both review content and train our technology.

In other Facebook news, the company today announced new measures to stop publishers backed by political organizations from running ads disguised as news. Under the new policy, news Pages with these affiliations will be banned from Facebook News. They’ll also lose access to news messaging on the Messenger Business Platform or the WhatsApp business API.

With the US election season approaching, it's gonna be a busy few months for Facebook's content moderation team.

Published August 11, 2020 — 18:21 UTC

Thomas Macaulay



Artificial Intelligence

Pinterest improves and expands its skin tone search feature


Pinterest is upgrading its skin tone search feature, which uses machine vision to sort pins in the site’s beauty category by skin tone. The feature launched in the US in 2018 and is now available in the UK, Canada, Ireland, Australia, and New Zealand as well.
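Pinterest hasn't published the details of its pipeline, but the general idea (detect a face, then estimate its dominant tone so a pin can be grouped with visually similar ones) can be sketched in a few lines. The snippet below is a hypothetical illustration using OpenCV's stock face detector, not Pinterest's actual method:

import cv2

def dominant_face_tone(image_path: str):
    """Return the mean BGR colour of the first detected face region, or None."""
    img = cv2.imread(image_path)
    if img is None:
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = img[y:y + h, x:x + w]
    return tuple(face.reshape(-1, 3).mean(axis=0))

# Pins could then be bucketed by comparing this value against a small set of
# reference tones; that reference palette would be a product decision and is
# not something described in Pinterest's announcement.

A production system would more likely rely on a trained segmentation or classification model with calibration across lighting conditions; the Haar cascade here is only a stand-in for the face-detection step.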

The feature is designed to make it easier for users to find content relevant to them, says Pinterest. It’s a common problem in the search world that certain queries default to show white faces. By giving users the option to refine their searches based on skin tones, Pinterest says it helps users find the content they want to see.

The feature is now more prominent when users are searching for content and delivers more accurate results, says Pinterest. The company offers searches like “grey hair on dark skin women,” “blonde hair color ideas for fair skin blue eyes,” and “soft natural makeup for Black women” as examples of the sort of fine-grained results the feature can deliver.

Pinterest’s Try On feature lets users try on lipstick shades in AR.
Image: Pinterest

Search by skin tone is also now integrated into the company’s augmented reality Try On feature, which lets users search for lipstick shades and try them on in AR. This feature is currently only available in the US but is launching in the UK “in the coming months.”


Artificial Intelligence

UK court rules police use of facial recognition was ‘unlawful’


British police used facial recognition unlawfully, the Court of Appeal ruled today, in a landmark decision that could have a big impact on the technology’s use in the UK.

The judgment stems from a complaint by Cardiff resident Ed Bridges, who said police had scanned his face while he was Christmas shopping, and again when he was at a protest.

Bridges argued that South Wales Police (SWP) had breached his right to privacy, as well as equality and data protection laws. But last September, the UK's High Court ruled against him, finding that cops had followed the relevant rules and met the requirements of the Human Rights Act.

Bridges appealed the decision, arguing that SWP’s actions were akin to taking fingerprints or DNA without consent. Bridges was supported by human rights group Liberty, which says the case is the world’s first legal challenge to police use of automated facial recognition (AFR).

[Read: Clearview AI can be fun — if you’re dirty, stinking rich]

Today, the Court of Appeal agreed that police had violated his right to privacy, as well as data protection and equality laws.

The judges said that “too much discretion is currently left to individual police officers,” and that SWP had “never sought to satisfy themselves, either directly or by independent verification, that the software program does not have an unacceptable bias on grounds of race or sex.”

Bridges said he was “delighted” with the decision:

This technology is an intrusive and discriminatory mass surveillance tool. For three years now South Wales Police has been using it against hundreds of thousands of us, without our consent and often without our knowledge. We should all be able to use our public spaces without being subjected to oppressive surveillance.

Future implications for facial recognition

The judges called for changes to the framework that regulates AFR. These could involve amendments to local policy documents, such as those operated by South Wales Police, or to the national Surveillance Camera Code of Practice.

However, they didn’t rule that primary legislation — the main laws passed in the UK — was required to regulate AFR in the same way as DNA or fingerprints.

“Instead, the Court has identified the relatively modest changes to the policy framework that are needed in order that live AFR can continue to be used,” said Anne Studd, a senior lawyer at 5 Essex Court who specializes in police law.

“It is noteworthy that this case arose in the course of a pilot of the system by South Wales Police – as part of that trial, through a co-operative and consensual process by which the issues were brought before the Court, the police service has been able to obtain a very helpful decision that maps the way ahead.”

South Wales Police and London’s Metropolitan Police were reportedly the only forces in the UK using AFR. Liberty is now calling for them to stop using the tech entirely.

Published August 11, 2020 — 11:10 UTC

