
Artificial Intelligence

Microsoft is also listening to Xbox One users


Image by Olya Adamovich on Pixabay

The saga of big US corporations listening in on their users is starting to feel like an episodic TV series.

 

In today’s episode: Microsoft is also listening to Xbox One users.

 

Bill Gates’ protégés admitted last week that, with the aim of improving user experiences and better training their artificial intelligence systems, they had humans listening to conversations saved by the Cortana virtual assistant, as well as those recorded within the Skype instant messaging app.

 

Now new elements are being added to the plot. According to a write-up in El Nacional, echoing a report published by Vice, Microsoft also has contract staff reviewing the audio collected by Xbox One consoles. The material includes the voice commands gamers dictate to the console, as well as some conversations that were ‘accidentally’ recorded by the devices.




So far, no one at the company has issued an official statement on the matter. (Be careful what you do or say near any latest-generation device or gadget, and likewise if you use virtual assistant services. You never know who might be listening… or watching.)

Artificial Intelligence

Why ‘human-like’ is a low bar for most AI projects


Show me a human-like machine and I’ll show you a faulty piece of tech. The AI market is expected to eclipse $300 billion by 2025. And the vast majority of the companies trying to cash in on that bonanza are marketing some form of “human-like” AI. Maybe it’s time to reconsider that approach.

The big idea is that human-like AI is an upgrade. Computers compute, but AI can learn. Unfortunately, humans aren’t very good at the kinds of tasks a computer makes sense for, and AI isn’t very good at the kinds of tasks that humans are. That’s why researchers are moving away from development paradigms that focus on imitating human cognition.

A pair of NYU researchers recently took a deep dive into how humans and AI process words and word meaning. Through the study of “psychological semantics,” the duo hoped to explain the shortcomings of machine learning systems in the natural language processing (NLP) domain. According to a study they published on arXiv:

Many AI researchers do not dwell on whether their models are human-like. If someone could develop a highly accurate machine translation system, few would complain that it doesn’t do things the way human translators do.

In the field of translation, humans have various techniques for keeping multiple languages in their heads and fluidly interfacing between them. Machines, on the other hand, don’t need to understand what a word means in order to assign the appropriate translation to it.

This gets tricky as you approach human-level accuracy. Translating one, two, and three into Spanish is relatively simple. The machine learns that they are exactly equivalent to uno, dos, and tres, and is likely to get those right 100 percent of the time. But add abstract concepts, words with more than one meaning, and slang or colloquial speech, and things get complicated quickly.
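To make that concrete, here is a minimal, purely illustrative Python sketch of the word-for-word lookup described above (the dictionary and its entries are hypothetical, not any real system’s): exact equivalents come out right every time, while a single ambiguous word breaks the approach.

```python
# Toy word-for-word translator: perfect on exact equivalents,
# helpless on ambiguity. Purely illustrative, not a real MT system.
EN_TO_ES = {
    "one": "uno",
    "two": "dos",
    "three": "tres",
    # "bank" has at least two senses; a flat lookup must commit to one.
    "bank": "banco",  # right for the financial sense, wrong for a river bank ("orilla")
}

def translate(word: str) -> str:
    """Return the stored equivalent; no notion of meaning or context."""
    return EN_TO_ES.get(word.lower(), "<unknown>")

print(translate("three"))  # -> "tres": correct every time
print(translate("bank"))   # -> "banco": correct only if we meant money
```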

We start getting into AI’s uncanny valley when developers try to create translation algorithms that can handle anything and everything. Much like taking a few Spanish classes won’t teach a human all the slang they might encounter in Mexico City, AI struggles to keep up with an ever-changing human lexicon.

NLP simply isn’t capable of human-like cognition yet and making it exhibit human-like behavior would be ludicrous – imagine if Google Translate balked at a request because it found the word “moist” distasteful, for example.

This line of thinking isn’t just reserved for NLP. Making AI appear more human-like is merely a design decision for most machine learning projects. As the NYU researchers put it in their study:

One way to think about such progress is merely in terms of engineering: There is a job to be done, and if the system does it well enough, it is successful. Engineering is important, and it can result in better and faster performance and relieve humans of dull labor such as keying in answers or making airline itineraries or buying socks.

From a pure engineering point of view, most human jobs can be broken down into individual tasks that would be better suited for automation than AI, and in cases where neural networks would be necessary – directing traffic in a shipping port, for example – it’s hard to imagine a use-case where a general AI would outperform several narrow, task-specific systems.

Consider self-driving cars. It makes more sense to build a vehicle made up of several systems that work together instead of designing a humanoid robot that can walk up to, unlock, enter, start, and drive a traditional automobile.

Most of the time, when developers claim they’ve created a “human-like” AI, what they mean is that they’ve automated a task that humans are often employed for. Facial recognition software, for example, can replace a human gate guard but it cannot tell you how good the pizza is at the local restaurant down the road.

That means the bar is pretty low for AI when it comes to being “human-like.” Alexa and Siri do a fairly good human imitation. They have names and voices and have been programmed to seem helpful, funny, friendly, and polite.

But there’s no function a smart speaker performs that couldn’t be better handled by a button. If you had infinite space and an infinite attention span, you could use buttons for anything and everything a smart speaker could do. One might say “Play Mariah Carey,” while another says “Tell me a joke.” The point is, Alexa’s about as human-like as a giant remote control.
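A brief sketch of that point, with hypothetical command strings and handler names: strip away the voice interface and a smart speaker reduces to a fixed table of commands bound to actions; functionally, a set of buttons.

```python
# A smart speaker reduced to its essence: fixed "buttons" bound to actions.
# Commands and handler names are hypothetical, for illustration only.
def play_artist(name: str) -> None:
    print(f"Playing {name}...")

def tell_joke() -> None:
    print("I'd tell you a UDP joke, but you might not get it.")

BUTTONS = {
    "play mariah carey": lambda: play_artist("Mariah Carey"),
    "tell me a joke": tell_joke,
}

def press(utterance: str) -> None:
    # Voice recognition aside, this lookup is all the "intelligence" there is.
    action = BUTTONS.get(utterance.lower().strip())
    action() if action else print("No button for that.")

press("Play Mariah Carey")  # -> Playing Mariah Carey...
press("Tell me a joke")     # -> the canned joke
```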

AI isn’t like humans. We may be decades or more away from a general AI that can intuit and function at a human level in any domain. Robot butlers are a long way off. For now, the best AI developers can do is imitate human effort, and that’s seldom as useful as simplifying a process to something easily automated.

Published August 6, 2020 — 22:35 UTC


Artificial Intelligence

Study: Instagram’s algorithm favored Trump over Biden


Conservatives have long accused social media platforms of being politically biased. A new report suggests they might be right — but not in the way they claim.

The Tech Transparency Project (TTP) found that Instagram pushed searches about Joe Biden toward negative hashtags about the former vice president, but blocked related hashtags for President Trump.

The action was apparently triggered by Instagram’s related hashtags, which direct users to content related to their previous hashtag searches. TTP compared searches for 20 popular hashtags about the presidential candidates. It found that all the related hashtags were disabled for searches about Trump, but appeared for searches about Biden.

At times, Biden searches led to hashtags that insulted the former VP, such as #creepyjoebiden — an echo of the nickname beloved by allies of Trump. People who clicked on this hashtag were shown images mocking Biden’s reputation for touching women inappropriately.


Other searches led to more baseless and inflammatory accusations. Searches for #nomalarkey, a Biden campaign slogan, pointed users to the related hashtag #joebidenpedophile.

Meanwhile, searches for hashtags associated with Trump, such as #maga and #draintheswamp, didn’t trigger any related hashtags. According to a follow-up investigation by BuzzFeed News, this pattern continued for at least two months.

Instagram blamed the problem on a “bug.” The company claimed it also prevented thousands of non-political hashtags, such as #menshair, from producing related hashtags.

“A technical error caused a number of hashtags to not show related hashtags,” said Instagram spokesperson Raki Wane. “We’ve disabled this feature while we investigate.”

That won’t provide much comfort to Biden supporters. But at least they’ve got more evidence that social media’s anti-conservative bias is a myth.

Published August 6, 2020 — 16:21 UTC

Thomas Macaulay



