
Artificial Intelligence

Google withdraws a program that offered its users' data to third parties



Image by Simon Steinberger on Pixabay

It seems there is no escape. What was once an open secret has become an irrefutable truth the size of a cathedral. A nightmare for some, and the announcement of the coming apocalypse for others. The big corporations are spying on us. Worse still, they do what they want with our data without our being able to do much to stop it. And in most cases, we declare that we agree to it, even without knowing it.


In recent weeks, statements from companies such as Facebook, Microsoft, Amazon, and Apple have been a frequent topic. They admit that yes, they have been listening to us. Always under the premise of making sure everything runs smoothly, better training their Artificial Intelligence systems, and optimizing the user experience. (In other words, they spy on us for our own benefit.)


In any case, some of these conglomerates have decided to put their listening programs on ice while they make sure the extracted data will not end up in unscrupulous hands.


Google gets ahead of the problem


Well before the 'listening' affair reached the media, Google had already decided to cancel a program that shared its users' data with third parties: Mobile Network Insights, a kind of signal-strength map designed to measure the quality of the services offered by mobile carriers. The service fed on real-time data provided by customers' smartphones.



The Alphabet subsidiary does not want to find itself embroiled in another data-leak controversy or anything of the sort. It judged that, rather than ending up in court again, it was better to abandon the idea of this service.

Mobile Network Insights was designed to give phone carriers access to information about the quality of their own services, as well as those of their competitors. For now, a similar service offered by Facebook, called Actionable Insights, remains operational.

With information from Hipertextual


New AI project captures Jane Austen’s thoughts on social media




Have you ever wanted to pick the brains of Sir Isaac Newton, Mary Shelley, or Benjamin Franklin? Well now you can (kinda), thanks to a new experiment by magician and novelist Andrew Mayne.

The project — called AI|Writer — uses OpenAI’s new text generator API to create simulated conversations with virtual historical figures. The system first works out the purpose of the message and the intended recipient by searching for patterns in the text. It then uses the API’s internal knowledge of that person to guess how they would respond in their written voice.
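Mayne hasn't published AI|Writer's internals, but the first step he describes — spotting the intended recipient by pattern-matching the letter's text — can be sketched in a few lines. The function names, the salutation regex, and the prompt wording below are all illustrative assumptions, not the actual implementation:

```python
import re

def guess_recipient(message):
    """Guess the intended figure from a letter-style salutation.

    Looks for a 'Dear <Name>,' pattern, mirroring how a message
    to AI|Writer addresses its virtual recipient.
    """
    match = re.search(r"Dear\s+([A-Z][\w.\s-]*?),", message)
    return match.group(1).strip() if match else None

def build_prompt(message):
    """Frame the letter so a text-completion model answers in the
    recipient's written voice (hypothetical prompt wording)."""
    recipient = guess_recipient(message) or "the recipient"
    return (
        f"The following is a letter addressed to {recipient}, "
        f"followed by {recipient}'s reply in their own voice.\n\n"
        f"{message}\n\nReply:\n"
    )

letter = "Dear Hulk,\n\nWhy Hulk smash?\n\nPlease help."
print(guess_recipient(letter))  # → Hulk
```

The extracted prompt would then be sent to the completion API, which does the heavy lifting of writing the reply.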

The digitized characters can answer questions about their work, explain scientific theories, or offer their opinions. For example, Marie Curie gave a lesson on radiation, H.G. Wells revealed his inspiration for The Time Machine, and Alfred Hitchcock compared Christopher Nolan’s Interstellar to Stanley Kubrick’s 2001.

[Read: Confetti, koalas, and candles of love: Backstage at Eurovision’s AI song contest]

Mayne also used the system for creative inspiration. When Edgar Allan Poe was asked to complete a poem that started with “The grey cat stared at the wall and the sounds beyond…” he made the following suggestion:

The grey cat stared at the wall

and the sounds beyond the wall,

And stood, his paws folded,

in silent grace,

until the walls fell away,

as cats can do.

AI|Writer’s strengths and weaknesses

The characters could also compare their own eras with the present day. When Jane Austen was asked how her characters would use social media in the 21st century, the author replied:

I’d have Emma promote her self-published book on Facebook. I’d have Emma update her status with a lament about the deplorable state of the publishing industry in a desperate attempt to get her Facebook friends to buy her book.

Mayne says the characters did well with historical facts, but could be “quite erratic with matters of opinion” and “rarely reply to the same question in the same way.”

He demonstrated these variations by asking both Newton and Gottfried Leibniz who invented calculus.

“Newton almost always insists that he invented Calculus alone and is pretty brusque about it,” Mayne wrote on his website. “Leibniz sometimes says he did. Other times he’ll be vague.” At one point, Leibniz even threatened to kill Mayne if he tried to take the credit for the discovery.

As well as historical figures, the system can respond in the voice of fictional characters. In fact, Mayne says the most “touching” message he’s received was this reply from the Incredible Hulk.

Using my AI writing project based on @OpenAI’s API, I asked this question and got a very sincere reply:

Dear Hulk,

Why Hulk smash?



Dear Bruce,

Hulk likes to smash. Why? Hulk not know why.

Please help.

Your friend,


— Andrew Mayne (@AndrewMayne) July 4, 2020

Mayne is keen to stress that AI|Writer “should not be considered a reliable source nor an accurate reflection of their actual views and opinions” and is merely an experiment to examine interactions with AI. He also plans to open up access to the tool to more people. If you wanna join the waiting list, you can sign up here.

Published July 6, 2020 — 12:38 UTC



Study: Only 18% of data science students are learning about AI ethics




Amid a growing backlash over AI’s racial and gender biases, numerous tech giants are launching their own ethics initiatives — of dubious intent.

The schemes are billed as altruistic efforts to make tech serve humanity. But critics argue their main concern is evading regulation and scrutiny through “ethics washing.”

At least we can rely on universities to teach the next generation of computer scientists to build AI ethically. Right? Apparently not, according to a new survey of 2,360 data science students, academics, and professionals by software firm Anaconda.

Only 15% of instructors and professors said they’re teaching AI ethics, and just 18% of students indicated they’re learning about the subject.

[Read: Scientists claim they can teach AI to judge ‘right’ from ‘wrong’]

Notably, the worryingly low figures aren’t due to a lack of interest. Nearly half of respondents said the social impacts of bias or privacy were the “biggest problem to tackle in the AI/ML arena today.” But those concerns clearly aren’t reflected in their curricula.

The AI ethics pipeline

Anaconda’s survey of data scientists from more than 100 countries found the ethics gap extends from academia to industry. While organizations can mitigate the problem through fairness tools and explainability solutions, neither appears to be gaining mass adoption.

Only 15% of respondents said their organization has implemented a fairness system, and just 19% reported they have an explainability tool in place.

The study authors warned that this could have far-reaching consequences:

Above and beyond the ethical concerns at play, a failure to proactively address these areas poses strategic risk to enterprises and institutions across competitive, financial, and even legal dimensions.

The survey also revealed concerns around the security of open-source tools and business training, and data drudgery. But it’s the disregard of ethics that most troubled the study authors:

Of all the trends identified in our study, we find the slow progress to address bias and fairness, and to make machine learning explainable the most concerning. While these two issues are distinct, they are interrelated, and both pose important questions for society, industry, and academia.

While businesses and academics are increasingly talking about AI ethics, their words mean little if they don’t turn into actions.

Published July 3, 2020 — 17:16 UTC



Trump contracts Peter Thiel-backed startup to build his (virtual) border wall




Trump’s xenophobic dream of building a “big, beautiful wall” along the Mexico–US border has moved a step closer to (virtual) reality. The White House just struck a deal with Palmer Luckey’s Anduril Industries to erect an AI-powered partition along the frontier.

Anduril will install hundreds of surveillance towers across the rugged terrain, The Washington Post reports. The pillars will use cameras and thermal imaging to detect anyone trying to enter “the land of the free” and send their location to the cellphones of US Border Patrol agents.

US Customs and Border Protection confirmed that 200 of the towers would be installed by 2022, although it didn’t mention Anduril by name, nor the cost of the contract. Anduril executives told The Post that the deal is worth several hundred million dollars.

“These towers give agents in the field a significant leg up against the criminal networks that facilitate illegal cross-border activity,” said Border Patrol Chief Rodney Scott in a statement. “The more our agents know about what they encounter in the field, the more safely and effectively they can respond.”

[Read: Trump’s latest immigration ban is bad news for US AI ambitions]

In a description of the system that reads like a vacation brochure, the agency said the towers were “perfectly suited for remote and rural locations,” operate with “100 percent renewable energy,” and “provide autonomous surveillance operations 24 hours per day, 365 days per year.”

Luckey, Thiel, and Trump

Notably, the towers don’t use facial recognition. Instead, they detect movement via radar, and then scan the image with AI to check that it’s a human. Anduril claims it can distinguish between animals and people with 97% accuracy.
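The two-stage design described above — cheap radar motion detection first, with the costlier image classifier run only on moving candidates — can be sketched as follows. Everything here is illustrative: the type names, the speed threshold, and the confidence cutoff (loosely inspired by the 97% figure) are assumptions, not details of Anduril's actual system:

```python
from dataclasses import dataclass

@dataclass
class RadarTrack:
    x: float          # position (m) relative to the tower
    y: float
    speed_mps: float  # ground speed in meters per second

def two_stage_detect(tracks, classify_human, min_speed=0.3, threshold=0.97):
    """Stage 1: radar discards stationary clutter; stage 2: an image
    classifier (assumed to return a human-probability in [0, 1])
    runs only on the tracks that survive stage 1."""
    alerts = []
    for track in tracks:
        if track.speed_mps < min_speed:      # stationary return: skip
            continue
        if classify_human(track) >= threshold:  # likely a person: alert
            alerts.append(track)
    return alerts

# Illustrative stand-in for the camera/AI stage: treats slow movers
# as people and fast movers as vehicles or animals.
fake_classifier = lambda t: 0.99 if t.speed_mps < 3.0 else 0.10

tracks = [RadarTrack(10, 5, 1.2), RadarTrack(40, 2, 0.0), RadarTrack(30, 8, 9.0)]
print(len(two_stage_detect(tracks, fake_classifier)))  # → 1
```

The appeal of this layering is economic as much as technical: radar runs continuously at low power, while the expensive vision model wakes up only for the handful of tracks worth inspecting.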

The company is also confident that its system has a long-term future on the border — regardless of who wins November’s presidential election. Candidate Joe Biden recently called Trump’s wall dream “expensive, ineffective, and wasteful,” but Democrats have also expressed support for a cheaper, virtual barrier.

“No matter where we go as a country, we’re going to need to have situational awareness on the border,” Matthew Steckman, Anduril’s chief revenue officer, told The Post. “No matter if talking to a Democrat or a Republican, they agree that this type of system is needed.”

That’s more good news for Anduril, which this week saw its valuation leap to $1.9 billion after raising a $200 million funding round.

The company was founded in 2017 by Oculus inventor Palmer Luckey. After he sold the VR firm to Facebook for $3 billion, Luckey was reportedly ousted from the social network for donating $10,000 to a pro-Trump group so it could spread memes about Hillary Clinton.

Anduril is also backed by another of Trump’s big buddies in big tech: billionaire investor and PayPal co-founder Peter Thiel — who promises he’s not a vampire.

However, even Thiel is considering ditching his increasingly deranged and racist President. Perhaps the nice big cheque for Anduril will get him back onboard the Trump Train.

Published July 3, 2020 — 12:06 UTC


