Some thoughts on: “Artificial Intelligence (AI) could spell the end of the human race”
Last week I published an article called “Stephen Hawking: ‘Artificial Intelligence (AI) could spell the end of the human race’”. Below is an overview of three articles on AI, with some of them building upon Stephen Hawking’s statement on AI: “Artificial Intelligence (AI) could spell the end of the human race”.
Wearing Your Intelligence: How to Apply Artificial Intelligence in Wearables and IoT – appeared on Wired.com
“Wearables and the Internet of Things (IoT) may give the impression that it’s all about the sensors, hardware, communication middleware, network and data but the real value (and company valuation) is in insights. In this article, we explore artificial intelligence (AI) and machine learning that are becoming indispensable tools for insights, views on AI, and a practical playbook on how to make AI part of your organization’s core, defensible strategy.
Before we proceed, let’s first define the terms. Otherwise, we risk commingling marketing terms like “Big Data” and not addressing the actual fields.
Artificial Intelligence: The field of artificial intelligence is the study and design of intelligent agents able to perform tasks that require human intelligence, such as visual perception, speech recognition, and decision-making. In order to pass the Turing test, intelligence must be able to reason, represent knowledge, plan, learn, communicate in natural language and integrate all these skills towards a common goal.
Machine Learning: The subfield of machine learning grew out of the effort of building artificial intelligence. Under the “learning” trait of AI, machine learning is the subfield that learns and adapts automatically through experience. It focuses on prediction, based on known properties learned from the training data. The origin of machine learning can be traced back to the development of the neural network model and later to the decision tree method. Supervised and unsupervised learning algorithms are used to predict the outcome based on the data.”
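To make the supervised-learning idea above concrete, here is a minimal sketch of a classifier that predicts an outcome from known properties in labelled training data. It uses a simple 1-nearest-neighbour rule; the data set and function names are invented for illustration, not taken from any of the quoted articles.

```python
# A toy supervised-learning example: predict a label for a new point
# by finding the closest labelled point in the training data.

def nearest_neighbour(train, query):
    """Return the label of the training point nearest to `query`.

    `train` is a list of (features, label) pairs; features are
    numeric tuples. Distance is squared Euclidean distance.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    features, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Hypothetical training data: (height_cm, weight_kg) -> size category.
train = [((150, 50), "small"), ((160, 60), "small"),
         ((180, 90), "large"), ((190, 100), "large")]

print(nearest_neighbour(train, (185, 95)))  # prints "large"
```

The point of the sketch is only the workflow the excerpt describes: the algorithm “learns” known properties from training data and uses them to predict the outcome for unseen input. Real systems use richer models (neural networks, decision trees) and libraries rather than hand-rolled distance loops.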
Sure, Artificial Intelligence May End Our World, But That Is Not the Main Problem – appeared on Wired.com
“The robots will rise, we’re told. The machines will assume control. For decades we have heard these warnings and fears about artificial intelligence taking over and ending humankind.
Such scenarios are not only currency in Hollywood but increasingly find supporters in science and philosophy. For example, Ray Kurzweil wrote that the exponential growth of AI will lead to a technological singularity, a point when machine intelligence will overpower human intelligence. Some think this is the end of the world; others see more positive possibilities. For example, Nick Bostrom thinks that a superintelligence could help us solve issues such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.
On Tuesday, leading scientist Stephen Hawking joined the ranks of the singularity prophets, especially the darker ones, as he told the BBC that “the development of full artificial intelligence could spell the end of the human race.” He argues that humans could not compete with an AI which would re-design itself and reach an intelligence that would surpass that of humans.
The problem with such scenarios is not that they are necessarily false—who can predict the future?—or that it does not make sense to reflect on science fiction scenarios. The latter is even mandatory, I think, if we are to better understand and evaluate current technologies. It is important to flesh out the philosophical issues at stake in such scenarios and explore our fears in order to find out what we value most.”
Artificial intelligence is here—and it’s nothing to fear (yet) – appeared on qz.com
“Just this week, the world’s most famous living physicist, Stephen Hawking, laid out his worries about artificial intelligence: “The development of full artificial intelligence could spell the end of the human race,” he told the BBC. In October, Elon Musk delivered much the same message, warning that “we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.”

Yet efforts to develop artificial intelligence (AI) continue apace, with all the major (and even many minor) computer science research and development facilities devoting time, energy, and money to making computers behave like humans. Some of them are succeeding: Machines can now understand humans, speak with them, learn from them, and write like them. They will make some jobs obsolete and others easier. But they aren’t—yet—out to get us.”

What do you think?