
Artificial Intelligence

Machines with common sense?

Image by Seanbatty on Pixabay

One of the challenges of the recent wave of artificial intelligence is integrating 'common sense' into machines through deep learning and other technologies. Deep learning consists of creating and implementing in devices an artificial neural network that emulates the functions of the human brain.
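As a rough illustration of that idea, a neural network is just layers of weighted sums passed through nonlinearities. This toy forward pass is a sketch, not a trained model; the sizes and weights are arbitrary assumptions:

```python
# Toy forward pass through a two-layer artificial neural network.
# Illustrative only: weights are random, nothing is trained.
import numpy as np

def relu(x):
    # Nonlinearity: each artificial "neuron" fires only above zero
    return np.maximum(0, x)

rng = np.random.default_rng(0)
x = rng.normal(size=4)          # 4 input features
W1 = rng.normal(size=(8, 4))    # weights of a hidden layer of 8 neurons
W2 = rng.normal(size=(2, 8))    # weights of a 2-neuron output layer

hidden = relu(W1 @ x)           # weighted sums, then nonlinearity
output = W2 @ hidden            # final scores
```

Stacking many such layers, and adjusting the weights from data, is what makes the learning 'deep'.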

 

However, many researchers in the field disagree, arguing that there are better approaches to machine learning, such as automating tasks based on forecasts and logic programs.

 

Progress in this area is currently rapid, and machine learning methods grow ever more sophisticated. Even so, common sense, understood as the ability to weigh abstract elements and act on and interact with the environment accordingly, is not an easy capability to build into an artificial mechanism. What has been achieved so far is not fully spontaneous; rather, robots have managed to simulate human reactions and responses.

 

Research and advances

 

Major global companies such as Facebook, Apple, Microsoft, and Google have invested large sums in developing technologies for facial identification, detection of user patterns and preferences, and recognition of voices and expressions.




However, technical failures have been numerous: within artificial intelligence, the smallest programming error can produce confusing and unexpected results. A machine does not understand concepts such as invasion of privacy, justice, or freedom.

 

Today many initiatives have the creation of 'consciousness' in devices as their main mission. In Seattle, Washington, the Allen Institute is working on Project Alexandria, with an investment of more than 120 million dollars, to develop common sense in machines.

 

Meanwhile, the San Francisco-based company Vicarious, backed by tech magnates such as Elon Musk and Mark Zuckerberg, is developing robots that imitate the human ability to master multiple functions and switch from one activity to another the way people do.


Image by Jonny Lindner on Pixabay

The Pentagon itself is supporting research at several universities to develop technologies that artificially imitate human reasoning. Within the Defense Advanced Research Projects Agency (DARPA), the Machine Common Sense (MCS) program was created, which is likewise researching how to give intelligent machines common sense. The work draws on natural language processing, cognitive understanding, and deep learning.

 

The future and machines with common sense

 

For decades, science fiction has presented technological advances that are now part of our everyday lives; video calls are a clear example. As for creating artificial mechanisms with human characteristics and reactions, the path is already being traveled.

 

The well-known humanoid robot Sophia has surprised the entire planet with her ability to answer almost spontaneously in numerous exhibitions and interviews. Nevertheless, she remains a prototype with much room for improvement and many questions she cannot answer.

Future projections of this reality are split between supporters and detractors, and both visions are extreme: the catastrophic fantasies drawn from film, in which self-aware robots seize control of humanity at people's expense, and the utopias that depict a comfortable, pleasant life in which robots handle the tedious, heavy chores of day-to-day living.

 

What is certain is that advances in creating machines with common sense are impressive, and in many cases also unsettling. This reality could be just around the corner.

 

What is your view of artificial intelligence and self-aware robots? Share your opinion with us.


New AI project captures Jane Austen’s thoughts on social media


Have you ever wanted to pick the brains of Sir Isaac Newton, Mary Shelley, or Benjamin Franklin? Well now you can (kinda), thanks to a new experiment by magician and novelist Andrew Mayne.

The project — called AI|Writer — uses OpenAI’s new text generator API to create simulated conversations with virtual historical figures. The system first works out the purpose of the message and the intended recipient by searching for patterns in the text. It then uses the API’s internal knowledge of that person to guess how they would respond in their written voice.
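The persona framing described above can be sketched in a few lines. Everything here is an assumption for illustration: the function name and prompt format are not Mayne's actual code, and the completion call itself is only indicated in a comment.

```python
# Hypothetical sketch of persona prompting against a text-completion API.
# The prompt format and function name are illustrative assumptions.

def build_persona_prompt(recipient: str, message: str) -> str:
    """Frame the user's message as a letter for the model to answer in voice."""
    return (
        f"The following is a letter to {recipient}, followed by a reply "
        f"written in {recipient}'s own voice.\n\n"
        f"Letter:\n{message}\n\n"
        f"Reply from {recipient}:\n"
    )

prompt = build_persona_prompt(
    "Jane Austen", "How would Emma use social media today?"
)
# The assembled prompt would then be sent to a completion endpoint,
# which continues the text in the named figure's voice (call not shown).
```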

The digitized characters can answer questions about their work, explain scientific theories, or offer their opinions. For example, Marie Curie gave a lesson on radiation, H.G. Wells revealed his inspiration for The Time Machine, while Alfred Hitchcock compared Christopher Nolan’s Interstellar to Stanley Kubrick’s 2001.


Mayne also used the system for creative inspiration. When Edgar Allan Poe was asked to complete a poem that started with “The grey cat stared at the wall and the sounds beyond…” he made the following suggestion:

The grey cat stared at the wall

and the sounds beyond the wall,

And stood, his paws folded,

in silent grace,

until the walls fell away,

as cats can do.

AI|Writer’s strengths and weaknesses

The characters could also compare their own eras with the present day. When Jane Austen was asked how her characters would use social media in the 21st century, the author replied:

I’d have Emma promote her self-published book on Facebook. I’d have Emma update her status with a lament about the deplorable state of the publishing industry in a desperate attempt to get her Facebook friends to buy her book.

Mayne says the characters did well with historical facts, but could be “quite erratic with matters of opinion” and “rarely reply to the same question in the same way.”

He demonstrated these variations by asking both Newton and Gottfried Leibniz who invented calculus.

“Newton almost always insists that he invented Calculus alone and is pretty brusque about it,” Mayne wrote on his website. “Leibniz sometimes says he did. Other times he’ll be vague.” At one point, Leibniz even threatened to kill Mayne if he tried to take the credit for the discovery.

As well as historical figures, the system can respond in the voice of fictional characters. In fact, Mayne says the most “touching” message he’s received was this reply from the Incredible Hulk.

Using my AI writing project based on @OpenAI’s API, I asked this question and got a very sincere reply:

Dear Hulk,

Why Hulk smash?

Best,

Banner

Dear Bruce,

Hulk likes to smash. Why? Hulk not know why.

Please help.

Your friend,

Hulk

— Andrew Mayne (@AndrewMayne) July 4, 2020

Mayne is keen to stress that AI|Writer “should not be considered a reliable source nor an accurate reflection of their actual views and opinions” and is merely an experiment to examine interactions with AI. He also plans to open up access to the tool to more people. If you wanna join the waiting list, you can sign up here.

Published July 6, 2020 — 12:38 UTC


Study: Only 18% of data science students are learning about AI ethics


Amid a growing backlash over AI’s racial and gender biases, numerous tech giants are launching their own ethics initiatives — of dubious intent.

The schemes are billed as altruistic efforts to make tech serve humanity. But critics argue their main concern is evading regulation and scrutiny through “ethics washing.”

At least we can rely on universities to teach the next generation of computer scientists to make AI ethical. Right? Apparently not, according to a new survey of 2,360 data science students, academics, and professionals by software firm Anaconda.

Only 15% of instructors and professors said they’re teaching AI ethics, and just 18% of students indicated they’re learning about the subject.


Notably, the worryingly low figures aren’t due to a lack of interest. Nearly half of respondents said the social impacts of bias or privacy were the “biggest problem to tackle in the AI/ML arena today.” But those concerns clearly aren’t reflected in their curricula.

The AI ethics pipeline

Anaconda’s survey of data scientists from more than 100 countries found the ethics gap extends from academia to industry. While organizations can mitigate the problem through fairness tools and explainability solutions, neither appears to be gaining mass adoption.

Only 15% of respondents said their organization has implemented a fairness system, and just 19% reported they have an explainability tool in place.

The study authors warned that this could have far-reaching consequences:

Above and beyond the ethical concerns at play, a failure to proactively address these areas poses strategic risk to enterprises and institutions across competitive, financial, and even legal dimensions.

The survey also revealed concerns around the security of open-source tools and business training, and data drudgery. But it’s the disregard of ethics that most troubled the study authors:

Of all the trends identified in our study, we find the slow progress to address bias and fairness, and to make machine learning explainable the most concerning. While these two issues are distinct, they are interrelated, and both pose important questions for society, industry, and academia.

While businesses and academics are increasingly talking about AI ethics, their words mean little if they don’t turn into actions.

Published July 3, 2020 — 17:16 UTC


Trump contracts Peter Thiel-backed startup to build his (virtual) border wall


Trump’s xenophobic dream of building a “big, beautiful wall” along the Mexico–US border has moved a step closer to (virtual) reality. The White House just struck a deal with Palmer Luckey’s Anduril Industries to erect an AI-powered partition along the frontier.

Anduril will install hundreds of surveillance towers across the rugged terrain, The Washington Post reports. The pillars will use cameras and thermal imaging to detect anyone trying to enter “the land of the free” and send their location to the cellphones of US Border Patrol agents.

US Customs and Border Protection confirmed that 200 of the towers would be installed by 2022, although it didn’t mention Anduril by name, nor the cost of the contract. Anduril executives told The Post that the deal is worth several hundred million dollars.

“These towers give agents in the field a significant leg up against the criminal networks that facilitate illegal cross-border activity,” said Border Patrol Chief Rodney Scott in a statement. “The more our agents know about what they encounter in the field, the more safely and effectively they can respond.”


In a description of the system that reads like a vacation brochure, the agency said the towers were “perfectly suited for remote and rural locations,” operate with “100 percent renewable energy,” and “provide autonomous surveillance operations 24 hours per day, 365 days per year.”

Luckey, Thiel, and Trump

Notably, the towers don’t use facial recognition. Instead, they detect movement via radar, and then scan the image with AI to check that it’s a human. Anduril claims it can distinguish between animals and people with 97% accuracy.
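That two-stage design, a radar trigger followed by an image classifier, can be sketched as follows. The class names, threshold, and stand-in classifier are illustrative assumptions; Anduril's actual system is proprietary.

```python
# Illustrative two-stage pipeline: a radar hit triggers an image
# classification, and agents are alerted only if a person is confirmed.
# Class names, threshold, and the stand-in classifier are assumptions.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Alert:
    location: Tuple[float, float]   # (lat, lon) of the radar hit
    confidence: float               # classifier score for "person"

def classify_person(image: dict) -> float:
    """Stand-in for the real vision model; returns P(person)."""
    return image.get("person_score", 0.0)

def process_radar_hit(image: dict, location: Tuple[float, float],
                      threshold: float = 0.5) -> Optional[Alert]:
    score = classify_person(image)
    if score >= threshold:
        return Alert(location=location, confidence=score)
    return None  # likely an animal; no alert sent to agents

alert = process_radar_hit({"person_score": 0.97}, (32.54, -116.97))
```

The point of the gating is that the expensive vision step runs only on radar-confirmed movement, and facial recognition never enters the loop.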

The company is also confident that its system has a long-term future on the border — regardless of who wins November’s presidential election. Candidate Joe Biden recently called Trump’s wall dream “expensive, ineffective, and wasteful,” but Democrats have also expressed support for a cheaper, virtual barrier.

“No matter where we go as a country, we’re going to need to have situational awareness on the border,” Matthew Steckman, Anduril’s chief revenue officer, told The Post. “No matter if talking to a Democrat or a Republican, they agree that this type of system is needed.”

That’s more good news for Anduril, which this week saw its valuation leap to $1.9 billion after raising a $200 million funding round.

The company was founded in 2017 by Oculus inventor Palmer Luckey. After he sold the VR firm to Facebook for $3 billion, Luckey was reportedly ousted from the social network for donating $10,000 to a pro-Trump group so it could spread memes about Hillary Clinton.

Anduril is also backed by another of Trump’s big buddies in big tech: billionaire investor and former PayPal founder Peter Thiel — who promises he’s not a vampire.

However, even Thiel is considering ditching his increasingly deranged and racist President. Perhaps the nice big cheque for Anduril will get him back onboard the Trump Train.

Published July 3, 2020 — 12:06 UTC
