
Artificial Intelligence

Artificial Intelligence and Smell



Image by BrownMantis on Pixabay

Artificial Intelligence surprises us more every day with the things it can do: one AI plays chess, another creates works of art (as the CloudPainter machine does), and yet another detects emotions, as EmoNet has demonstrated, to name just a few.


But so far, no Artificial Intelligence capable of detecting smells has been created.


Why hasn't an AI capable of detecting smells been created?


The human sense of smell is one of the most underrated and, at the same time, least exploited of our senses. Bringing it to Artificial Intelligence would be a great help, especially in the health sector. To do so, we should learn from dogs, which are able to detect diseases through scent.


After special training, dogs can detect small changes in the human body through the volatile organic compounds released by our hormones. Among the conditions they can detect are cancer, narcolepsy, migraines, low blood sugar, seizures, fear, and stress.


Has anyone taken the initiative to create this Artificial Intelligence?


In 2014, NASA developed the electronic nose, or ENose, a machine with the capacity to “recognize any compound or combination of compounds.” This AI could even tell whether a drink was Coca-Cola or Pepsi.


But the ENose goes beyond identifying beverages and assisting NASA missions. It also has applications on Earth: it can detect gas buildups on oil drilling rigs and thereby help protect workers.


How are things progressing today?


Andreas Mershin, a physicist and director of the Label Free Research Group at MIT, and his colleague and mentor Shuguang Zhang have begun developing an artificial intelligence system called Nano-Nose.



This Artificial Intelligence draws on a database built from dogs that can detect diseases by smell. That information will guide the selection of the receptors to be placed in the device.


The goal is for it to become a diagnostic tool and, in the not-too-distant future, to come included in every smartphone in order to gather information from users.


There is still a long way to go before this becomes a reality, but the foundations have been laid. It will be a great technological advance for humanity!



New AI project captures Jane Austen’s thoughts on social media




Have you ever wanted to pick the brains of Sir Isaac Newton, Mary Shelley, or Benjamin Franklin? Well now you can (kinda), thanks to a new experiment by magician and novelist Andrew Mayne.

The project — called AI|Writer — uses OpenAI’s new text generator API to create simulated conversations with virtual historical figures. The system first works out the purpose of the message and the intended recipient by searching for patterns in the text. It then uses the API’s internal knowledge of that person to guess how they would respond in their written voice.
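Mayne hasn’t published AI|Writer’s code, but the two-step routing it describes — detect the intended recipient from patterns in the letter, then frame a prompt so the model replies in that person’s voice — could be sketched roughly like this (all function names and the prompt wording here are hypothetical, not from the project):

```python
import re

def parse_letter(message: str):
    """Step 1: guess the intended recipient by scanning for a
    salutation pattern such as 'Dear Marie Curie,'."""
    match = re.search(r"^Dear\s+(.+?),", message, re.MULTILINE)
    return match.group(1) if match else None

def build_prompt(message: str, recipient: str) -> str:
    """Step 2: frame the letter as a completion prompt so a language
    model continues it as the recipient's written reply."""
    return (
        f"The following is a letter to {recipient}, "
        f"followed by {recipient}'s reply in their own style.\n\n"
        f"{message}\n\nReply from {recipient}:\n"
    )

letter = "Dear Hulk,\n\nWhy Hulk smash?\n"
who = parse_letter(letter)          # → 'Hulk'
prompt = build_prompt(letter, who)  # ready to send to the text API
```

The resulting prompt would then be sent to the text generation API, whose completion becomes the character’s reply.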

The digitized characters can answer questions about their work, explain scientific theories, or offer their opinions. For example, Marie Curie gave a lesson on radiation, H.G. Wells revealed his inspiration for The Time Machine, while Alfred Hitchcock compared Christopher Nolan’s Interstellar to Stanley Kubrick’s 2001.

[Read: Confetti, koalas, and candles of love: Backstage at Eurovision’s AI song contest]

Mayne also used the system for creative inspiration. When Edgar Allan Poe was asked to complete a poem that started with “The grey cat stared at the wall and the sounds beyond…” he made the following suggestion:

The grey cat stared at the wall

and the sounds beyond the wall,

And stood, his paws folded,

in silent grace,

until the walls fell away,

as cats can do.

AI|Writer’s strengths and weaknesses

The characters could also compare their own eras with the present day. When Jane Austen was asked how her characters would use social media in the 21st century, the author replied:

I’d have Emma promote her self-published book on Facebook. I’d have Emma update her status with a lament about the deplorable state of the publishing industry in a desperate attempt to get her Facebook friends to buy her book.

Mayne says the characters did well with historical facts, but could be “quite erratic with matters of opinion” and “rarely reply to the same question in the same way.”

He demonstrated these variations by asking both Newton and Gottfried Leibniz who invented calculus.

“Newton almost always insists that he invented Calculus alone and is pretty brusque about it,” Mayne wrote on his website. “Leibniz sometimes says he did. Other times he’ll be vague.” At one point, Leibniz even threatened to kill Mayne if he tried to take the credit for the discovery.

As well as historical figures, the system can respond in the voice of fictional characters. In fact, Mayne says the most “touching” message he’s received was this reply from the Incredible Hulk.

Using my AI writing project based on @OpenAI’s API, I asked this question and got a very sincere reply:

Dear Hulk,

Why Hulk smash?



Dear Bruce,

Hulk likes to smash. Why? Hulk not know why.

Please help.

Your friend,


— Andrew Mayne (@AndrewMayne) July 4, 2020

Mayne is keen to stress that AI|Writer “should not be considered a reliable source nor an accurate reflection of their actual views and opinions” and is merely an experiment to examine interactions with AI. He also plans to open up access to the tool to more people. If you wanna join the waiting list, you can sign up here.

Published July 6, 2020 — 12:38 UTC



Study: Only 18% of data science students are learning about AI ethics




Amid a growing backlash over AI’s racial and gender biases, numerous tech giants are launching their own ethics initiatives — of dubious intent.

The schemes are billed as altruistic efforts to make tech serve humanity. But critics argue their main concern is evading regulation and scrutiny through “ethics washing.”

At least we can rely on universities to teach the next generation of computer scientists to build AI ethically. Right? Apparently not, according to a new survey of 2,360 data science students, academics, and professionals by software firm Anaconda.

Only 15% of instructors and professors said they’re teaching AI ethics, and just 18% of students indicated they’re learning about the subject.

[Read: Scientists claim they can teach AI to judge ‘right’ from ‘wrong’]

Notably, the worryingly low figures aren’t due to a lack of interest. Nearly half of respondents said the social impacts of bias or privacy were the “biggest problem to tackle in the AI/ML arena today.” But those concerns clearly aren’t reflected in their curricula.

The AI ethics pipeline

Anaconda’s survey of data scientists from more than 100 countries found the ethics gap extends from academia to industry. While organizations can mitigate the problem through fairness tools and explainability solutions, neither appears to be gaining mass adoption.

Only 15% of respondents said their organization has implemented a fairness system, and just 19% reported they have an explainability tool in place.

The study authors warned that this could have far-reaching consequences:

Above and beyond the ethical concerns at play, a failure to proactively address these areas poses strategic risk to enterprises and institutions across competitive, financial, and even legal dimensions.

The survey also revealed concerns around the security of open-source tools and business training, and data drudgery. But it’s the disregard of ethics that most troubled the study authors:

Of all the trends identified in our study, we find the slow progress to address bias and fairness, and to make machine learning explainable the most concerning. While these two issues are distinct, they are interrelated, and both pose important questions for society, industry, and academia.

While businesses and academics are increasingly talking about AI ethics, their words mean little if they don’t turn into actions.

Published July 3, 2020 — 17:16 UTC



Trump contracts Peter Thiel-backed startup to build his (virtual) border wall




Trump’s xenophobic dream of building a “big, beautiful wall” along the Mexico–US border has moved a step closer to (virtual) reality. The White House just struck a deal with Palmer Luckey’s Anduril Industries to erect an AI-powered partition along the frontier.

Anduril will install hundreds of surveillance towers across the rugged terrain, The Washington Post reports. The pillars will use cameras and thermal imaging to detect anyone trying to enter “the land of the free” and send their location to the cellphones of US Border Patrol agents.

US Customs and Border Protection confirmed that 200 of the towers would be installed by 2022, although it didn’t mention Anduril by name, nor the cost of the contract. Anduril executives told The Post that the deal is worth several hundred million dollars.

“These towers give agents in the field a significant leg up against the criminal networks that facilitate illegal cross-border activity,” said Border Patrol Chief Rodney Scott in a statement. “The more our agents know about what they encounter in the field, the more safely and effectively they can respond.”

[Read: Trump’s latest immigration ban is bad news for US AI ambitions]

In a description of the system that reads like a vacation brochure, the agency said the towers are “perfectly suited for remote and rural locations,” operate with “100 percent renewable energy,” and “provide autonomous surveillance operations 24 hours per day, 365 days per year.”

Luckey, Thiel, and Trump

Notably, the towers don’t use facial recognition. Instead, they detect movement via radar, and then scan the image with AI to check that it’s a human. Anduril claims it can distinguish between animals and people with 97% accuracy.
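The two-stage design described above — a cheap radar trigger that gates a more expensive AI image check — can be illustrated with a minimal sketch. Everything here (function names, the threshold, the stubbed classifier) is a hypothetical illustration of the pattern, not Anduril’s actual system:

```python
def radar_detects_motion(radar_reading: float, threshold: float = 0.5) -> bool:
    """Stage 1: trigger on any radar return above a noise threshold."""
    return radar_reading > threshold

def classify_frame(frame: dict) -> str:
    """Stage 2 (stub): in the real system an image classifier would
    label the moving object; here a lookup keeps the sketch runnable."""
    return frame.get("label", "unknown")

def handle_event(radar_reading: float, frame: dict):
    """Run the classifier only when the radar fires, and alert agents
    only when the object is identified as a person."""
    if not radar_detects_motion(radar_reading):
        return None  # nothing moving: the AI model never runs
    if classify_frame(frame) == "person":
        return "alert: location sent to agents"
    return None  # animals and other movers are filtered out
```

The point of the gating is efficiency: the radar watches continuously at low cost, and the AI classifier — the component Anduril claims is 97% accurate at telling people from animals — only runs on frames where something is actually moving.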

The company is also confident that its system has a long-term future on the border — regardless of who wins November’s presidential election. Candidate Joe Biden recently called Trump’s wall dream “expensive, ineffective, and wasteful,” but Democrats have also expressed support for a cheaper, virtual barrier.

“No matter where we go as a country, we’re going to need to have situational awareness on the border,” Matthew Steckman, Anduril’s chief revenue officer, told The Post. “No matter if talking to a Democrat or a Republican, they agree that this type of system is needed.”

That’s more good news for Anduril, which this week saw its valuation leap to $1.9 billion after raising a $200 million funding round.

The company was founded in 2017 by Oculus inventor Palmer Luckey. After he sold the VR firm to Facebook for $3 billion, Luckey was reportedly ousted from the social network for donating $10,000 to a pro-Trump group so it could spread memes about Hillary Clinton.

Anduril is also backed by another of Trump’s big buddies in big tech: billionaire investor and PayPal co-founder Peter Thiel — who promises he’s not a vampire.

However, even Thiel is considering ditching his increasingly deranged and racist President. Perhaps the nice big cheque for Anduril will get him back onboard the Trump Train.

Published July 3, 2020 — 12:06 UTC


