
Artificial Intelligence

Facial recognition for dogs


Saying that Artificial Intelligence has no limits already sounds like a cliché. And yet it is ‘only’ 2019, and the world is still far from discovering all the benefits these technologies have to offer. They also come with plenty of problems, although those relate exclusively to the uses we humans make of them.

In any case, to stay on the subject of AI’s benefits: an online pet store in Brazil has designed a system that will give people plenty to talk about. It is a facial recognition tool for dogs that lets these pets ‘buy’ their favorite products.

 

How does it work?

With the help of Leonardo Ogata, a renowned Brazilian dog trainer, a database was built of the meanings of the facial expressions of domestic dogs of various breeds. With this information, Pet-Commerce’s Artificial Intelligence can detect each dog’s interests.




To get there, the pets have to watch a video showing the e-commerce store’s entire stock on a device with a camera and an internet connection. Based on the animals’ reactions, the AI rates the interest they show in each product on a scale running from a red bone (little interest) to a green bone (great interest). When the latter occurs, the item is automatically added to the shopping cart.

The video has to be played at high volume, and the dogs must be free to walk away if the images on the screen don’t catch their attention. For now the system only works with dogs, although the team behind the technology is already developing facial recognition for cats.

With information from E-News


Everything you need to know about artificial neural networks


Welcome to Neural Basics, a collection of guides and explainers to help demystify the world of artificial intelligence.

One of the most influential technologies of the past decade is the artificial neural network, the fundamental building block of deep learning algorithms, the bleeding edge of artificial intelligence.

You can thank neural networks for many of the applications you use every day, such as Google’s translation service, Apple’s Face ID iPhone lock and Amazon’s Alexa AI-powered assistant. Neural networks are also behind some of the important artificial intelligence breakthroughs in other fields, such as diagnosing skin and breast cancer, and giving eyes to self-driving cars.

The concept and science behind artificial neural networks have existed for many decades. But it has only been in the past few years that the promises of neural networks have turned to reality and helped the AI industry emerge from an extended winter.

While neural networks have helped AI take great leaps, they are also often misunderstood. Here’s everything you need to know about neural networks.

Similarities between artificial and biological neural networks

The original vision of the pioneers of artificial intelligence was to replicate the functions of the human brain, nature’s smartest and most complex known creation. That’s why the field has derived much of its nomenclature (including the term “artificial intelligence”) from the structure and functions of the human mind.

Artificial neural networks are inspired by their biological counterparts. Many of the functions of the brain remain a mystery, but what we know is that biological neural networks enable the brain to process huge amounts of information in complicated ways.

The brain’s biological neural network consists of approximately 100 billion neurons, the basic processing unit of the brain. Neurons perform their functions through their massive connections to each other, called synapses. The human brain has approximately 100 trillion synapses, about 1,000 per neuron.

Every function of the brain involves electrical currents and chemical reactions running across a vast number of these neurons.

How artificial neural networks function

The core component of ANNs is the artificial neuron. Each neuron receives inputs from several other neurons, multiplies them by assigned weights, adds them up and passes the sum on to one or more other neurons. Some artificial neurons apply an activation function to the output before passing it on to the next layer.

The structure of an artificial neuron, the basic component of artificial neural networks (source: Wikipedia)
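The weighted-sum-and-activation step described above can be sketched in a few lines of Python. This is a minimal illustration with hand-picked weights, not production code:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of its inputs plus a
    bias, squashed by a sigmoid activation function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid maps the sum into (0, 1)

# Two inputs with hand-picked illustrative weights.
print(round(neuron([0.5, 0.8], weights=[0.4, -0.2], bias=0.1), 3))
```

On its own, one neuron does very little; the power comes from connecting many of them, as described next.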

At its core, this might sound like a very trivial math operation. But when you place hundreds, thousands or even millions of neurons in multiple layers stacked on top of each other, you obtain an artificial neural network that can perform very complicated tasks, such as classifying images or recognizing speech.

Artificial neural networks are composed of an input layer, which receives data from outside sources (data files, images, hardware sensors, microphone…), one or more hidden layers that process the data, and an output layer that provides one or more data points based on the function of the network. For instance, a neural network that detects persons, cars and animals will have an output layer with three nodes. A network that classifies bank transactions between safe and fraudulent will have a single output.

Neural networks are composed of multiple layers (source: www.deeplearningbook.org)
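As a rough illustration of this layered structure, here is a toy network in plain Python with an input layer of four features, one hidden layer of five neurons, and a three-node output layer. The weights are random, i.e. the network is untrained:

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One fully connected layer: each neuron takes a weighted sum of all
    inputs, adds its bias, and applies a sigmoid activation."""
    return [
        1.0 / (1.0 + math.exp(-(sum(x * w for x, w in zip(inputs, ws)) + b)))
        for ws, b in zip(weights, biases)
    ]

def random_layer(n_in, n_out):
    """Random (i.e. untrained) weights for a layer of n_out neurons."""
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [random.uniform(-1, 1) for _ in range(n_out)]
    return weights, biases

# 4 input features -> 5 hidden neurons -> 3 output nodes
# (e.g. person / car / animal, as in the example above).
hidden_w, hidden_b = random_layer(4, 5)
out_w, out_b = random_layer(5, 3)

x = [0.2, 0.7, 0.1, 0.9]               # stand-in for input data (e.g. pixels)
hidden = layer(x, hidden_w, hidden_b)  # hidden layer processes the input
scores = layer(hidden, out_w, out_b)   # one score per output class
print(len(scores))
```

With random weights the three output scores are meaningless; making them meaningful is exactly what training, covered below, is for.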

Training artificial neural networks

Artificial neural networks start by assigning random values to the weights of the connections between neurons. The key for the ANN to perform its task correctly and accurately is to adjust these weights to the right numbers. But finding the right weights is not very easy, especially when you’re dealing with multiple layers and thousands of neurons.

This calibration is done by “training” the network with annotated examples. For instance, if you want to train the image classifier mentioned above, you provide it with multiple photos, each labeled with its corresponding class (person, car or animal). As you provide it with more and more training examples, the neural network gradually adjusts its weights to map each input to the correct output.
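A minimal sketch of this training process, using a single artificial neuron and a hypothetical two-feature dataset (plain gradient descent, not any particular library's training routine):

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A tiny labeled dataset: the label is 1 when the first feature is large.
examples = [([0.9, 0.1], 1), ([0.8, 0.3], 1), ([0.1, 0.9], 0), ([0.2, 0.7], 0)]

# Training starts from random weights, as described above.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.5  # learning rate: how big each weight adjustment is

for _ in range(1000):  # show the labeled examples to the neuron many times
    for x, label in examples:
        pred = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = pred - label  # how far the prediction is from the label
        # Nudge each weight against the error (a gradient descent step).
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

# After training, a new input that fits the "positive" pattern scores near 1.
pred = sigmoid(w[0] * 0.95 + w[1] * 0.05 + b)
print(round(pred, 2))
```

Real networks repeat the same idea across millions of weights, with the error propagated backwards through every layer.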

Basically, what happens during training is that the network adjusts itself to glean specific patterns from the data. Again, in the case of an image classifier network, when you train the AI model with quality examples, each layer detects a specific class of features. For instance, the first layer might detect horizontal and vertical edges, the next layers might detect corners and round shapes. Further down the network, deeper layers will start to pick out more advanced features such as faces and objects.

Each layer of the neural network extracts specific features from the input image (source: arxiv.org)

When you run a new image through a well-trained neural network, the adjusted weights of the neurons will be able to extract the right features and accurately determine which output class the image belongs to.

One of the challenges of training neural networks is to find the right amount and quality of training examples. Also, training large AI models requires vast amounts of computing resources. To overcome this challenge, many engineers use “transfer learning,” a training technique where you take a pre-trained model and fine-tune it with new, domain-specific examples. Transfer learning is especially efficient when there’s already an AI model that is close to your use case.
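The idea behind transfer learning can be sketched in plain Python. In this toy sketch the hidden-layer weights stand in for a pre-trained feature extractor and are kept frozen, while only the new output weights are fitted to a couple of hypothetical domain-specific examples:

```python
import math
import random

random.seed(2)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, weights, biases):
    """One dense layer with sigmoid activations."""
    return [sigmoid(sum(xi * wi for xi, wi in zip(x, ws)) + b)
            for ws, b in zip(weights, biases)]

# Stand-in for a pre-trained feature extractor: in real transfer learning
# these weights come from a model trained on a large dataset.
frozen_w = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
frozen_b = [random.uniform(-1, 1) for _ in range(4)]

# Only the new output weights get trained on the domain-specific examples.
out_w = [random.uniform(-1, 1) for _ in range(4)]
out_b = 0.0
lr = 0.5

def predict(x):
    features = forward(x, frozen_w, frozen_b)  # frozen: never updated
    return sigmoid(sum(f * w for f, w in zip(features, out_w)) + out_b)

# Two hypothetical domain-specific training examples.
examples = [([1.0, 0.0, 0.0], 1), ([0.0, 1.0, 1.0], 0)]

for _ in range(2000):
    for x, label in examples:
        features = forward(x, frozen_w, frozen_b)
        err = predict(x) - label
        # Gradient step on the output weights only; frozen_w is untouched.
        out_w = [w - lr * err * f for w, f in zip(out_w, features)]
        out_b -= lr * err

print(round(predict([1.0, 0.0, 0.0]), 2), round(predict([0.0, 1.0, 1.0]), 2))
```

Because only the small output layer is trained, far fewer examples and far less compute are needed than when training the whole network from scratch.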

Neural networks vs classical AI

Traditional, rule-based AI programs were based on principles of classic software. Computer programs are designed to run operations on data stored in memory locations, and save the results on a different memory location. The logic of the program is sequential, deterministic and based on clearly-defined rules. Operations are run by one or more central processors.

Neural networks, however, are neither sequential nor deterministic. Also, regardless of the underlying hardware, there’s no central processor controlling the logic. Instead, the logic is dispersed across thousands of small artificial neurons. ANNs don’t run instructions; instead they perform mathematical operations on their inputs. It’s their collective operation that produces the behavior of the model.

Instead of representing knowledge through manually coded logic, neural networks encode their knowledge in the overall state of their weights and activations. Tesla AI chief Andrej Karpathy eloquently describes the software logic of neural networks in an excellent Medium post titled “Software 2.0”:

The “classical stack” of Software 1.0 is what we’re all familiar with — it is written in languages such as Python, C++, etc. It consists of explicit instructions to the computer written by a programmer. By writing each line of code, the programmer identifies a specific point in program space with some desirable behavior.

In contrast, Software 2.0 can be written in much more abstract, human unfriendly language, such as the weights of a neural network. No human is involved in writing this code because there are a lot of weights (typical networks might have millions), and coding directly in weights is kind of hard (I tried).

Neural networks vs other machine learning techniques

Artificial neural networks are just one of several algorithms for performing machine learning, the branch of artificial intelligence that develops behavior based on experience. There are many other machine learning techniques that can find patterns in data and perform tasks such as classification and prediction. Some of these techniques include regression models, support vector machines (SVM), k-nearest neighbors and decision trees.

When it comes to dealing with messy and unstructured data such as images, audio and text, however, neural networks outperform other machine learning techniques.

For example, if you wanted to perform image classification tasks with classic machine learning algorithms, you would have to do plenty of complex “feature engineering,” a complicated and arduous process that would require the efforts of several engineers and domain experts. Neural networks and deep learning algorithms don’t require feature engineering and automatically extract features from images if trained well.

This doesn’t mean, however, that neural networks are a replacement for other machine learning techniques. Other types of algorithms require fewer computing resources and are less complicated, which makes them preferable when you’re trying to solve a problem that doesn’t require neural networks.

Other machine learning techniques are also interpretable (more on this below), which means it’s easier to investigate and correct decisions they make. This might make them preferable in use cases where interpretability is more important than accuracy.

The limits of neural networks

In spite of their name, artificial neural networks are very different from their human equivalent. And although neural networks and deep learning are the state of the art of AI today, they’re still a far cry from human intelligence. Therefore, neural networks will fail at many things that you would expect from a human mind:

  • Neural networks need lots of data: Unlike the human brain, which can learn to do things with very few examples, neural networks need thousands or millions of examples.
  • Neural networks are bad at generalizing: A neural network will perform accurately at a task it has been trained for, but very poorly at anything else, even if it’s similar to the original problem. For instance, a cat classifier trained on thousands of cat pictures will not be able to detect dogs; for that, it will need thousands of new images. Unlike humans, neural networks don’t develop knowledge in terms of symbols (ears, eyes, whiskers, tail); they process pixel values. That’s why they can’t learn about new objects in terms of high-level features and must be retrained from scratch.
  • Neural networks are opaque: Since neural networks express their behavior in terms of neuron weights and activations, it is very hard to determine the logic behind their decisions. That’s why they’re often described as black boxes. This makes it hard to find out if they’re making decisions based on the wrong factors.

Cognitive scientist and AI expert Gary Marcus explained the limits of deep learning and neural networks in an in-depth paper last year.

Also, neural networks aren’t a replacement for good old-fashioned rule-based AI in problems where the logic and reasoning are clear and can be codified into distinct rules. For instance, when it comes to solving math equations, neural networks perform very poorly.

There are several efforts to overcome the limits of neural networks, such as a DARPA-funded initiative to create explainable AI models. Other interesting developments include hybrid models that combine neural networks and rule-based AI to create AI systems that are interpretable and require less training data.

Although we still have a long way to go before we reach the goal of human-level AI (if we’ll ever reach it at all), neural networks have brought us much closer. It’ll be interesting to see what the next AI innovation will be.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here



Sony patents AI soundtrack generator that adapts game music to your emotions


Sony has filed a patent for a soundtrack generator that matches music to a gamer’s emotions, sparking speculation that the tech could find its way into the PlayStation 5.

The patent, which was first spotted by Gaming Intel, describes an AI-powered system that adapts background music to game scenarios and a player’s activity.

The feature maps emotions such as tension, joy, and fear to musical components including rhythm, melodic structure, and harmonic density, based on published reviews and social media opinions. This data will train algorithms to play sounds that trigger the desired feelings.

According to the patent, composers would first create musical motifs for characters, activities, areas, and even player personalities:

Once the composer has created the motifs, they can be assigned to the elements/characters. This could be done by the composer or sound designers and can be changed as the game is developed or even dynamically inside the game after it has been released.

The algorithms would then adjust the motifs to the gamer’s style of play, the time of day, state of the game, and the hours that they’ve been playing.

[Read: Nvidia teaches AI to create new version of Pac-Man just by watching gameplay]

It’s not entirely clear how this will work in practice. But the filing suggests that the music’s tempo could be increased if the player’s moving quickly, and then reduced when they’re less active.

Patents offer hints for future gaming

While the filing makes no specific mention of the PS5, its timing suggests the feature could be used in the console.

It joins a growing list of gaming-related patents recently filed by Sony. Others include a controller sensor that measures sweat and heart rates to understand how a player’s feeling and a companion robot that reacts to their emotions.

The new patent suggests that some of these could be combined with the music generator:

Players who have opted in can be tracked on social media, analysis about their personalities can be made based on their behavior and as more and more users take advantage of biometric devices which track them (electrodermal activity, pulse and respiration, body temperature, blood pressure, brain wave activity, genetic predispositions, etc.), environmental customization can be applied to musical environments.

The wackier ideas are less likely to get the green light, but the filings show that Sony sees a big future for real-time personalization in gaming.

Published May 26, 2020 — 18:39 UTC


Chinese city’s health-tracking surveillance tech set to outlast the pandemic


A Chinese city plans to turn its contact-tracing app into a permanent health tracker, deepening fears that surveillance tech introduced to fight COVID-19 will outlast the pandemic.

Authorities in the eastern city of Hangzhou have proposed combining medical records, physical exam results, and data on lifestyle choices to create a healthcare score for citizens.

Officials said the system would be a “firewall to enhance people’s health and immunity,” the Guardian reports. They aim to launch the app by the end of next month.

Each of the city’s 10 million residents would be given a colored health badge based on a collation of this data, and a score from 1-100 that will be used to create health rankings.

[Read: Snowden warns that the surveillance states we’re creating now will outlast the coronavirus]

“At the same time, we can use big data to rate group health in apartment buildings, residential communities and businesses,” said Hangzhou health commission chief Sun Yongrong.

The surveillance backlash grows

Hangzhou’s system was originally used to identify a citizen’s risk of infection by tracking their travel history and health. Their virus status was added to a QR code that showed whether they should be quarantined or allowed to move around the city.

The health codes were run by mobile payments company Alipay, which told CNBC that it had “not been contacted by any party with respect to this project.”

The Hangzhou health commission’s website suggests the city is moving forward with the plans regardless, further stoking tensions about digital surveillance in China.

Critics worry that the system will not only read their personal health records, but could also be used to screen job applicants and create tiered insurance pricing plans.

For other nations using surveillance tech to combat the coronavirus, the plans show that temporary security measures can quickly become permanent.

Published May 26, 2020 — 13:44 UTC
