On July 11, the region's most important artificial intelligence and machine learning companies and startups will gather at Kore AI.
The goal is to bring together the most important companies in Latin America for an introduction to the advantages the era of artificial intelligence has to offer.
The event is organized by Latam Digital Marketing, together with Tony Rallo, cofounder of Kio Networks, the first Mexican unicorn.
Participating brands include Adext, the first platform in the world to use AI to get the most out of digital advertising on Google, Facebook, and Instagram. Decidata, Hero Guest, and Yalo, among others, are also taking part.
Why ‘human-like’ is a low bar for most AI projects
Show me a human-like machine and I’ll show you a faulty piece of tech. The AI market is expected to eclipse $300 billion by 2025. And the vast majority of the companies trying to cash in on that bonanza are marketing some form of “human-like” AI. Maybe it’s time to reconsider that approach.
The big idea is that human-like AI is an upgrade. Computers compute, but AI can learn. Unfortunately, humans aren't very good at the kinds of tasks a computer makes sense for, and AI isn't very good at the kinds of tasks that humans are. That's why researchers are moving away from development paradigms that focus on imitating human cognition.
A pair of NYU researchers recently took a deep dive into how humans and AI process words and word meaning. Through the study of "psychological semantics," the duo hoped to explain the shortcomings of machine learning systems in the natural language processing (NLP) domain. According to a study they published on arXiv:
Many AI researchers do not dwell on whether their models are human-like. If someone could develop a highly accurate machine translation system, few would complain that it doesn’t do things the way human translators do.
In the field of translation, humans have various techniques for keeping multiple languages in their heads and fluidly interfacing between them. Machines, on the other hand, don't need to understand what a word means in order to assign the appropriate translation to it.
This gets tricky when you get closer to human-level accuracy. Translating one, two, and three into Spanish is relatively simple. The machine learns that they are exactly equivalent to uno, dos, and tres, and is likely to get those right 100 percent of the time. But when you add abstract concepts, words with more than one meaning, and slang or colloquial speech, things get complicated.
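The gap can be sketched with a toy word-for-word translator. This is an illustrative assumption, not how any real machine translation system works: the lexicon and sentences below are invented to show why exact one-to-one mappings succeed while polysemous words fail.

```python
# Toy word-for-word translator: exact equivalents always work,
# but a single mapping per word breaks down on polysemy.
LEXICON = {
    "one": "uno",
    "two": "dos",
    "three": "tres",
    "bank": "banco",  # correct for the financial sense only
}

def translate(sentence: str) -> str:
    """Map each word through the lexicon; flag unknown words."""
    return " ".join(LEXICON.get(w, f"<{w}?>") for w in sentence.lower().split())

print(translate("one two three"))  # "uno dos tres" -- right every time
print(translate("river bank"))     # "<river?> banco" -- "banco" is wrong here:
                                   # a river bank is "orilla" in Spanish
```

Numbers have one meaning, so the mapping is always right; "bank" has several, so any fixed entry is wrong some of the time, and no amount of lexicon growth fixes that without context.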
We start getting into AI's uncanny valley when developers try to create translation algorithms that can handle anything and everything. Much like taking a few Spanish classes won't teach a human all the slang they might encounter in Mexico City, AI struggles to keep up with an ever-changing human lexicon.
NLP simply isn’t capable of human-like cognition yet and making it exhibit human-like behavior would be ludicrous – imagine if Google Translate balked at a request because it found the word “moist” distasteful, for example.
This line of thinking isn’t just reserved for NLP. Making AI appear more human-like is merely a design decision for most machine learning projects. As the NYU researchers put it in their study:
One way to think about such progress is merely in terms of engineering: There is a job to be done, and if the system does it well enough, it is successful. Engineering is important, and it can result in better and faster performance and relieve humans of dull labor such as keying in answers or making airline itineraries or buying socks.
From a pure engineering point of view, most human jobs can be broken down into individual tasks that would be better suited for automation than AI, and in cases where neural networks would be necessary – directing traffic in a shipping port, for example – it’s hard to imagine a use-case where a general AI would outperform several narrow, task-specific systems.
Consider self-driving cars. It makes more sense to build a vehicle made up of several systems that work together instead of designing a humanoid robot that can walk up to, unlock, enter, start, and drive a traditional automobile.
Most of the time, when developers claim they’ve created a “human-like” AI, what they mean is that they’ve automated a task that humans are often employed for. Facial recognition software, for example, can replace a human gate guard but it cannot tell you how good the pizza is at the local restaurant down the road.
That means the bar is pretty low for AI when it comes to being “human-like.” Alexa and Siri do a fairly good human imitation. They have names and voices and have been programmed to seem helpful, funny, friendly, and polite.
But there’s no function a smart speaker performs that couldn’t be better handled by a button. If you had infinite space and an infinite attention span, you could use buttons for anything and everything a smart speaker could do. One might say “Play Mariah Carey,” while another says “Tell me a joke.” The point is, Alexa’s about as human-like as a giant remote control.
AI isn’t like humans. We may be decades or more away from a general AI that can intuit and function at human-level in any domain. Robot butlers are a long way off. For now, the best AI developers can do is imitate human effort, and that’s seldom as useful as simplifying a process to something easily automated.
Published August 6, 2020 — 22:35 UTC
Study: Instagram’s algorithm favored Trump over Biden
Conservatives have long accused social media platforms of being politically biased. A new report suggests they might be right — but not in the way they claim.
The Tech Transparency Project (TTP) found that Instagram pushed searches about Joe Biden towards negative hashtags about the former vice president, but blocked them for President Trump.
The action was apparently triggered by Instagram's related hashtags, which direct users to content related to their previous hashtag searches. TTP compared searches for 20 popular hashtags about the presidential candidates. It found that all the related hashtags were disabled for searches about Trump, but appeared for searches about Biden.
At times, Biden searches led to hashtags that insulted the former VP, such as #creepyjoebiden — an echo of the nickname beloved by allies of Trump. People who clicked on this hashtag were shown images mocking Biden's reputation for touching women inappropriately.
Other searches led to more baseless and inflammatory accusations. Searches for #nomalarkey, a Biden campaign slogan, pointed users to the related hashtag #joebidenpedophile.
Meanwhile, searches for hashtags associated with Trump, such as #maga and #draintheswamp, didn’t trigger any related hashtags. According to a follow-up investigation by BuzzFeed News, this pattern continued for at least two months.
Instagram blamed the problem on a “bug.” The company claimed it also prevented thousands of non-political hashtags, such as #menshair, from producing related hashtags.
“A technical error caused a number of hashtags to not show related hashtags,” said Instagram spokesperson Raki Wane. “We’ve disabled this feature while we investigate.”
That won’t provide much comfort to Biden supporters. But at least they’ve got more evidence that social media’s anti-conservative bias is a myth.
Published August 6, 2020 — 16:21 UTC
Depictions of AI are overwhelmingly white — and that’s a serious problem
The “overwhelming whiteness” of AI is erasing people of color from visions of our future, researchers have warned.
The Cambridge University team studied depictions of AI systems in stock images, movies, TV, search results, and robots. They found that the vast majority were portrayed as white. The researchers fear these depictions are creating a homogeneous tech workforce that bakes racial bias into its algorithms.
“People trust AI to make decisions. Cultural depictions foster the idea that AI is less fallible than humans. In cases where these systems are racialized as white that could have dangerous consequences for humans that are not,” said study co-author Dr Kanta Dihal.
“Given that society has, for centuries, promoted the association of intelligence with white Europeans, it is to be expected that when this culture is asked to imagine an intelligent machine it imagines a white machine.”
Prior research shows these racialized depictions affect how we interact with AI. A recent study from the universities of Drexel and Maryland found that “perceived interpersonal closeness” with a virtual agent is higher when it has the same racial identity as the person using it.
The researchers also fear that designers are discouraged from creating non-white depictions. They point to this anecdote from author Ruha Benjamin as evidence:
A former Apple employee who noted that he was ‘not Black or Hispanic’ described his experience on a team that was developing speech recognition for Siri, the virtual assistant program. As they worked on different English dialects — Australian, Singaporean and Indian English — he asked his boss: ‘What about African American English?’ To this his boss responded: ‘Well, Apple products are for the premium market.’
The team also investigated the depictions of machines in Google Images search results. They found that all the non-abstract results for AI had either Caucasian features — or were literally the color white.
The researchers note that a few recent TV shows, such as Westworld, have featured AI characters with a mix of skin tones. But these remain rare exceptions. Until that changes, depictions of AI will continue to exacerbate racial inequality.