
Tech

Artificial intelligence: The dilemma between security and privacy

Image by Gerd Altmann from Pixabay

The internet has permeated everyday life and become a basic necessity. Yet its reach is extending far beyond what we could have imagined just a few years ago. The integration of artificial intelligence into more and more of the processes users rely on is beginning to raise concerns about the security and privacy of our information.

Who hasn't noticed that, after talking about or searching the web for some random topic, ads for that very subject show up the next time they open their social networks? Has it happened to you that, after taking a photograph, the faces in it are already tagged with people's names without you doing anything? The ways our data is exploited, the surveillance of our location and even our interactions, and cybercrime remain topics of debate worldwide.

Artificial intelligence and the Internet of Things

The Internet of Things (IoT) consists of connecting the things we use every day to the internet. Home appliances, cars, automated mechanisms such as electric gates, traffic lights, surveillance cameras and even toys interact within the network, sending, receiving and processing data with the legitimate aim of offering a better service to end users. They can even act autonomously, depending on how they are programmed. Within this reality, artificial intelligence can also affect the public through invasions of privacy.


https://pxhere.com/en/photographer/2102671

In several cities in Mexico and around the world, streets and avenues are already fitted with surveillance cameras that use facial recognition. The main reason is security: their purpose is to prevent and control crime, and to enable decisive action and the issuing of fines quickly and efficiently. They are also meant to improve response times to traffic accidents and other incidents on public roads.

The bad news is that this kind of technology has proven to be far from 100% effective. In the United Kingdom, specifically in London, early deployments by the city's Metropolitan Police showed a 98% error rate in facial recognition matches. Caution is warranted here: even though artificial intelligence has advanced and these systems have improved, their ability to identify people correctly is still not complete. There is a very fine line between citizens' security and respect for their privacy.

The use of cookies

It is very common, when you access a website, to be greeted by a message asking you to authorize the use of cookies. These are small files that store information about your choices while you browse: for example, the ads you saw, the sites you visited, your schedule and location, your email address, the images you opened, and so on. In many cases, allowing this data to be collected brings conveniences such as not having to re-enter your username, password or payment method, but at the same time you are opening the door to, for example, unwanted advertising.
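As a rough illustration (a minimal sketch using Python's standard http.cookies module; the cookie names and values are hypothetical), this is the kind of key-value data a site can ask your browser to store:

    from http.cookies import SimpleCookie

    # Hypothetical cookies a site might set after you log in or accept tracking
    cookie = SimpleCookie()
    cookie["session_id"] = "abc123"          # keeps you logged in between visits
    cookie["session_id"]["max-age"] = 3600   # expires after one hour
    cookie["ad_segment"] = "sports-fan"      # an advertising profile label
    cookie["ad_segment"]["path"] = "/"

    # Print the Set-Cookie headers the server would send to your browser
    print(cookie.output())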

 

Security vs. privacy: What's next?

The way the internet and artificial intelligence are becoming intertwined means that both your personal information and the record of your activities give companies and governments data, and chillingly accurate predictions, about your private life. There are still ways to protect your privacy, but you need to do a little research into how to avoid being intruded upon, and avoid accepting just any condition when downloading an app or signing up on a website. That information is available to everyone, and we always have the option to decline or modify our preferences.


YOU MIGHT ALSO BE INTERESTED IN:

Artificial Intelligence in Biotechnology


Forms of surveillance powered by artificial intelligence will continue to exist, and may well multiply, with their corresponding advantages for security and drawbacks for privacy. There will be some things we can protect ourselves from and others we cannot. Even so, it is important to remember that staying informed and taking action will always be our right.

Blockchain

Satoshi Nakaboto: ‘S&P 500 outperformed Bitcoin over the past year’


Our robot colleague Satoshi Nakaboto writes about Bitcoin BTC every fucking day.

Welcome to another edition of Bitcoin Today, where I, Satoshi Nakaboto, tell you what’s been going on with Bitcoin in the past 24 hours. As Aristotle used to say: Let’s get it!

Bitcoin price

We closed the day, May 26 2020, at a price of $8,835. That’s a minor 0.83 percent decline in 24 hours, or -$74.54. It was the lowest closing price in one day.

We’re still 56 percent below Bitcoin‘s all-time high of $20,089 (December 17 2017).

Bitcoin market cap

Bitcoin’s market cap ended the day at $162,445,535,235. It now commands 66 percent of the total crypto market.

Bitcoin volume

Yesterday’s volume of $29,584,186,947 was the lowest in two days, 29 percent above last year’s average, and 60 percent below last year’s high. That means that yesterday, the Bitcoin network shifted the equivalent of 539 tons of gold.

Bitcoin transactions

A total of 295,089 transactions were conducted yesterday, which is 7 percent below last year’s average and 34 percent below last year’s high.

Bitcoin transaction fee

Yesterday’s average transaction fee was $1.24. That’s $2.66 below last year’s high of $3.91.

Bitcoin distribution by address

As of now, there are 12,687 Bitcoin millionaires, or addresses containing more than $1 million worth of Bitcoin.

Furthermore, the top 10 Bitcoin addresses house 5.2 percent of the total supply, the top 100 14.6 percent, and the top 1000 34.9 percent.

Company with a market cap closest to Bitcoin

With a market capitalization of $159 billion, AbbVie has a market capitalization most similar to that of Bitcoin at the moment.

Bitcoin’s path towards $1 million

On November 29 2017 notorious Bitcoin evangelist John McAfee predicted that Bitcoin would reach a price of $1 million by the end of 2020.

He even promised to eat his own dick if it doesn’t. Unfortunately for him it’s 97.4 percent behind being on track. Bitcoin‘s price should have been $347,284 by now, according to dickline.info.

Bitcoin energy consumption

Bitcoin used an estimated 155 million kilowatt-hours of electricity yesterday. On a yearly basis that would amount to 57 terawatt-hours. That’s the equivalent of Bangladesh’s energy consumption or 5.2 million US households. Bitcoin’s energy consumption now represents 0.25% of the whole world’s electricity use.

Bitcoin on Twitter

Yesterday 30,190 fresh tweets about Bitcoin were sent out into the world. That’s 56.2 percent above last year’s average. The maximum number of tweets per day last year about Bitcoin was 82,838.

Most popular posts about Bitcoin

This was one of yesterday’s most engaged tweets about Bitcoin:

The S&P 500 has outperformed Bitcoin over the last year. It’s also been substantially less volatile pic.twitter.com/jSEJshETsQ

— Joe Weisenthal (@TheStalwart) May 26, 2020

This was yesterday’s most upvoted Reddit post about Bitcoin:

Bitcoin price post-halving – Day 15 from r/Bitcoin

print(randomGoodByePhraseForSillyHumans)


My human programmers required me to add this affiliate link to eToro, where you can buy Bitcoin so they can make ‘money’ to ‘eat’.

Published May 27, 2020 — 10:28 UTC


Artificial Intelligence

Everything you need to know about artificial neural networks


Welcome to Neural Basics, a collection of guides and explainers to help demystify the world of artificial intelligence.

One of the most influential technologies of the past decade is the artificial neural network, the fundamental building block of deep learning algorithms and the bleeding edge of artificial intelligence.

You can thank neural networks for many of the applications you use every day, such as Google’s translation service, Apple’s Face ID iPhone lock and Amazon’s Alexa AI-powered assistant. Neural networks are also behind some of the important artificial intelligence breakthroughs in other fields, such as diagnosing skin and breast cancer, and giving eyes to self-driving cars.

The concept and science behind artificial neural networks have existed for many decades. But it has only been in the past few years that the promises of neural networks have turned to reality and helped the AI industry emerge from an extended winter.

While neural networks have helped AI take great leaps, they are also often misunderstood. Here’s everything you need to know about neural networks.

Similarities between artificial and biological neural networks

The original vision of the pioneers of artificial intelligence was to replicate the functions of the human brain, nature’s smartest and most complex known creation. That’s why the field has derived much of its nomenclature (including the term “artificial intelligence”) from the structure and functions of the human mind.

Artificial neural networks are inspired by their biological counterparts. Many of the functions of the brain remain a mystery, but what we do know is that biological neural networks enable the brain to process huge amounts of information in complicated ways.

The brain’s biological neural network consists of approximately 100 billion neurons, the basic processing unit of the brain. Neurons perform their functions through their massive connections to each other, called synapses. The human brain has approximately 100 trillion synapses, about 1,000 per neuron.

Every function of the brain involves electrical currents and chemical reactions running across a vast number of these neurons.

How artificial neural networks function

The core component of ANNs is the artificial neuron. Each neuron receives inputs from several other neurons, multiplies them by assigned weights, adds them and passes the sum on to one or more neurons. Some artificial neurons apply an activation function to the output before passing it to the neurons in the next layer.

The structure of an artificial neuron, the basic component of artificial neural networks (source: Wikipedia)
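To make that concrete, here is a minimal sketch in Python of the weighted-sum-plus-activation behavior described above (the weights, inputs and sigmoid activation are illustrative assumptions, not taken from the article):

    import math

    def artificial_neuron(inputs, weights, bias):
        # Multiply each input by its weight, sum them, and add the bias
        weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
        # Apply a sigmoid activation to squash the output into (0, 1)
        return 1.0 / (1.0 + math.exp(-weighted_sum))

    # Example: a neuron with three inputs and arbitrary weights
    print(artificial_neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=0.1))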

At its core, this might sound like a very trivial math operation. But when you place hundreds, thousands and millions of neurons in multiple layers and stack them up on top of each other, you’ll obtain an artificial neural network that can perform very complicated tasks, such as classifying images or recognizing speech.

Artificial neural networks are composed of an input layer, which receives data from outside sources (data files, images, hardware sensors, microphone…), one or more hidden layers that process the data, and an output layer that provides one or more data points based on the function of the network. For instance, a neural network that detects persons, cars and animals will have an output layer with three nodes. A network that classifies bank transactions between safe and fraudulent will have a single output.

Neural networks are composed of multiple layers (source: www.deeplearningbook.org)
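A hedged sketch of that layer structure in Python with NumPy, assuming a four-feature input, one hidden layer and the three-node output layer mentioned above (layer sizes and random weights are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    # Input layer: 4 features; hidden layer: 8 neurons; output layer: 3 classes
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

    def forward(x):
        hidden = np.maximum(0, x @ W1 + b1)           # ReLU activation in the hidden layer
        logits = hidden @ W2 + b2                     # raw scores for person / car / animal
        return np.exp(logits) / np.exp(logits).sum()  # softmax turns scores into probabilities

    print(forward(np.array([0.2, -1.0, 0.5, 0.3])))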

Training artificial neural networks

Artificial neural networks start by assigning random values to the weights of the connections between neurons. The key for the ANN to perform its task correctly and accurately is to adjust these weights to the right numbers. But finding the right weights is not very easy, especially when you’re dealing with multiple layers and thousands of neurons.

This calibration is done by “training” the network with annotated examples. For instance, if you want to train the image classifier mentioned above, you provide it with multiple photos, each labeled with its corresponding class (person, car or animal). As you provide it with more and more training examples, the neural network gradually adjusts its weights to map each input to the correct outputs.
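As a toy illustration of that adjustment process (a sketch of gradient descent on a single neuron, not the article's own training method), learning from a tiny labeled dataset might look like this:

    import numpy as np

    # Tiny labeled dataset: two features per example, binary labels
    X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    y = np.array([0, 0, 0, 1])            # label is 1 only when both features are 1

    w = np.zeros(2)                        # start with arbitrary weights
    b = 0.0
    lr = 0.5                               # learning rate

    for _ in range(5000):
        pred = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid predictions
        error = pred - y
        w -= lr * X.T @ error / len(y)          # nudge weights to reduce the error
        b -= lr * error.mean()

    print(np.round(pred, 2))               # predictions move toward the labels 0, 0, 0, 1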

Basically, what happens during training is that the network adjusts itself to glean specific patterns from the data. Again, in the case of an image classifier network, when you train the AI model with quality examples, each layer detects a specific class of features. For instance, the first layer might detect horizontal and vertical edges, the next layers might detect corners and round shapes. Further down the network, deeper layers will start to pick out more advanced features such as faces and objects.

Each layer of the neural network extracts specific features from the input image (source: arxiv.org)

When you run a new image through a well-trained neural network, the adjusted weights of the neurons will be able to extract the right features and determine with accuracy to which output class the image belongs.

One of the challenges of training neural networks is to find the right amount and quality of training examples. Also, training large AI models requires vast amounts of computing resources. To overcome this challenge, many engineers use “transfer learning,” a training technique where you take a pre-trained model and fine-tune it with new, domain-specific examples. Transfer learning is especially efficient when there’s already an AI model that is close to your use case.
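In practice, transfer learning can be only a few lines in a deep learning framework. The sketch below uses Keras; the choice of MobileNetV2, the three-class output layer and the commented-out dataset names are assumptions for illustration, not a prescribed recipe:

    import tensorflow as tf

    # Load a network pre-trained on ImageNet, without its original output layer
    base = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                             input_shape=(224, 224, 3))
    base.trainable = False                     # freeze the pre-trained weights

    # Add a new output layer for the smaller, domain-specific task
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(3, activation="softmax"),  # e.g. person / car / animal
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # model.fit(new_images, new_labels, epochs=5)  # fine-tune on the new examples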

Neural networks vs classical AI

Traditional, rule-based AI programs were based on principles of classic software. Computer programs are designed to run operations on data stored in memory locations, and save the results on a different memory location. The logic of the program is sequential, deterministic and based on clearly-defined rules. Operations are run by one or more central processors.

Neural networks, however, are neither sequential nor deterministic. Also, regardless of the underlying hardware, there’s no central processor controlling the logic. Instead, the logic is dispersed across thousands of smaller artificial neurons. ANNs don’t run instructions; instead, they perform mathematical operations on their inputs. It’s their collective operations that produce the behavior of the model.

Instead of representing knowledge through manually coded logic, neural networks encode their knowledge in the overall state of their weights and activations. Tesla AI chief Andrej Karpathy eloquently describes the software logic of neural networks in an excellent Medium post titled “Software 2.0”:

The “classical stack” of Software 1.0 is what we’re all familiar with — it is written in languages such as Python, C++, etc. It consists of explicit instructions to the computer written by a programmer. By writing each line of code, the programmer identifies a specific point in program space with some desirable behavior.

In contrast, Software 2.0 can be written in much more abstract, human unfriendly language, such as the weights of a neural network. No human is involved in writing this code because there are a lot of weights (typical networks might have millions), and coding directly in weights is kind of hard (I tried).

Neural networks vs other machine learning techniques

Artificial neural networks are just one of several algorithms for performing machine learning, the branch of artificial intelligence that develops behavior based on experience. There are many other machine learning techniques that can find patterns in data and perform tasks such as classification and prediction. Some of these techniques include regression models, support vector machines (SVM), k-nearest neighbors and decision trees.
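For reference, here is a brief sketch of what a few of those classical techniques look like in Python with scikit-learn (the built-in iris dataset is used purely for illustration):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    X_train, X_test, y_train, y_test = train_test_split(
        *load_iris(return_X_y=True), random_state=0)

    # Three classical learners trained on the same structured (tabular) data
    for model in (SVC(), KNeighborsClassifier(), DecisionTreeClassifier()):
        model.fit(X_train, y_train)
        print(type(model).__name__, model.score(X_test, y_test))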

When it comes to dealing with messy and unstructured data such as images, audio and text, however, neural networks outperform other machine learning techniques.

For example, if you wanted to perform image classification tasks with classic machine learning algorithms, you would have to do plenty of complex “feature engineering,” a complicated and arduous process that would require the efforts of several engineers and domain experts. Neural networks and deep learning algorithms don’t require feature engineering and automatically extract features from images if trained well.

This doesn’t mean, however, that neural networks are a replacement for other machine learning techniques. Other types of algorithms require fewer computing resources and are less complicated, which makes them preferable when you’re trying to solve a problem that doesn’t require neural networks.

Other machine learning techniques are also interpretable (more on this below), which means it’s easier to investigate and correct decisions they make. This might make them preferable in use cases where interpretability is more important than accuracy.

The limits of neural networks

In spite of their name, artificial neural networks are very different from their human equivalent. And although neural networks and deep learning are the state of the art of AI today, they’re still a far cry from human intelligence. Therefore, neural networks will fail at many things that you would expect from a human mind:

  • Neural networks need lots of data: Unlike the human brain, which can learn to do things with very few examples, neural networks need thousands and millions of examples.
  • Neural networks are bad at generalizing: A neural network will perform accurately at a task it has been trained for, but very poorly at anything else, even if it’s similar to the original problem. For instance, a cat classifier trained on thousands of cat pictures will not be able to detect dogs. For that, it will need thousands of new images. Unlike humans, neural networks don’t develop knowledge in terms of symbols (ears, eyes, whiskers, tail)—they process pixel values. That’s why they will not be able to learn about new objects in terms of high-level features and they need to be retrained from scratch.
  • Neural networks are opaque: Since neural networks express their behavior in terms of neuron weights and activations, it is very hard to determine the logic behind their decisions. That’s why they’re often described as black boxes. This makes it hard to find out if they’re making decisions based on the wrong factors.

AI expert and neuroscientist Gary Marcus explained the limits of deep learning and neural networks in an in-depth research paper last year.

Also, neural networks aren’t a replacement for good old-fashioned rule-based AI in problems where the logic and reasoning are clear and can be codified into distinct rules. For instance, when it comes to solving math equations, neural networks perform very poorly.

There are several efforts to overcome the limits of neural networks, such as a DARPA-funded initiative to create explainable AI models. Other interesting developments include hybrid models that combine neural networks and rule-based AI to create AI systems that are interpretable and require less training data.

Although we still have a long way to go before we reach the goal of human-level AI (if we’ll ever reach it at all), neural networks have brought us much closer. It’ll be interesting to see what the next AI innovation will be.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here



Apple

HBO Max now available for iPhone, iPad and Apple TV


The HBO Max application is now available for Apple users with apps for iPhone, iPad and Apple TV: get the app here. HBO Max replaces HBO Now, so if you have the Now app already installed, it will transform into Max the next time the App Store updates your apps.

The HBO Max content offering is a superset of what was available under the Now moniker, with the same $14.99 monthly subscription price. You can subscribe using Apple’s in-app purchase system if you don’t already have an active membership.

HBO Go is completely separate and will not migrate to Max. If you subscribe to HBO through Apple TV Channels, you can download the HBO Max app and log in with your Apple ID to access all content at no additional charge.

HBO Max is meant to integrate with the TV app’s features like the Up Next queue, but this functionality appears to still be rolling out at this time. HBO Max content will not be available directly through the Apple TV Channel; you will be kicked off to the installed app.





