

Is technology sentient? Since the creation of the first artificial intelligence (AI) programs in 1951, researchers and technical experts have been working tirelessly to develop highly sophisticated AI programs. One of the early pioneers of this kind of technology was British mathematician and computer scientist Alan Turing. Turing understood that humans combine available information with reason to make decisions, and he theorized that because humans can use these methods to reach logical conclusions, machines could do the same.

Around the same time, popular culture seized on the advent of AI and robots to create a new class of villain: robots with human intelligence that can feel, express emotions, connect and conquer the world just as humans do. The result is a fear of advanced technology that has persisted in movies, pop culture and books for the past 70 years.

Outside popular culture, scientists and engineers were actively working to develop smarter and more advanced AI programs. Early on, Turing believed that AI could be programmed to make decisions, which paved the way for scientists to ask philosophical but very important questions. Whether this is beneficial or dangerous is left to individual interpretation, but despite the recent headlines, sentient technology has not yet appeared, nor will it in our lifetime.

That’s because AI and machine learning (ML) are still in their infancy, with significant progress still to be made in optimization and innovation. We’ve mastered many of the building blocks needed to create advanced AI systems, but we’re still unable to build a fully sentient being.


Society has different ways of defining the term “sentient”

To really understand what it takes for technology to become sentient, it is important to examine the philosophy behind what Western civilization defines as “sentience.” We also need to distinguish between trained technical devices and legally autonomous decision-making machines. Colloquially, we define sentient beings as those that are conscious of themselves and have agency and autonomy over their decision-making.

In the mid-1900s, Turing studied this idea and what it means for something to be conscious. As a result of his research, he developed a test to determine whether machines have human-level consciousness. Under this test, an AI is judged to have human-level consciousness when people cannot tell whether they are communicating with a machine or with another human.

Simple, isn’t it? It’s a little more complicated than that.

For example, when you ask questions of or interact with a customer service representative (say, a bank teller) by phone or through an online live chat service, you can assume they are self-aware and conscious. That is because they listen to us and can react and respond in ways that offer meaningful solutions to our problems. They may also express emotions such as joy or fear. If machines could hear, respond and detect emotion in equally meaningful ways, how would that affect our definition of consciousness?

Sentient Technologies: Mirroring Human Interactions

Because humans program software to handle functions that humans normally do, AI mirrors human interactions. In the process, developers embed some of their own biases into the AI they create, but that is a whole other story. Consider a chatbot, for example. This type of technology reduces the need for human employees to staff call centers by answering customer inquiries and routing them to the right person. The technology is built to respond and interact in a conversational fashion, helping the caller get an answer or complete the task they need.

As AI becomes more sophisticated, it will undoubtedly become more complex. That said, just because something can handle complex tasks doesn’t mean it is sentient. Today, AI systems are trained to perform a wide variety of tasks: they can converse with us, translate in real time, or power self-driving cars.

This is possible not because the AI is making decisions of its own, but because the machine or software is following a set of rules and codified information installed by humans. It’s also worth pointing out that in most of these situations there is still a human-in-the-loop component, and the AI is not working independently.
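To make that concrete, here is a minimal, hypothetical sketch in Python of the kind of rule-following described above: a toy customer-service bot whose every response comes from keyword rules a human wrote in advance, and which hands anything unmatched to a human agent. The keywords, responses and function names are invented for illustration and are not drawn from any real product.

```python
# A minimal, hypothetical sketch of a rule-based customer-service bot.
# Every "decision" below is a rule a human wrote in advance; nothing is
# learned or decided autonomously, and unmatched requests escalate to a
# human agent (the human-in-the-loop).

RULES = {
    "balance": "Your current balance is shown under Accounts > Overview.",
    "hours": "Our branches are open 9am-5pm, Monday through Friday.",
    "card": "To report a lost card, call the number on your statement.",
}

def route_inquiry(message: str) -> str:
    """Match the inquiry against human-authored keywords and respond,
    falling back to a human agent otherwise."""
    text = message.lower()
    for keyword, canned_response in RULES.items():
        if keyword in text:
            return canned_response
    return "Transferring you to a human agent who can help with that."

if __name__ == "__main__":
    print(route_inquiry("What are your hours on Friday?"))
    print(route_inquiry("I think my account was hacked"))  # no rule matches
```

However elaborate such rules become, every response still traces back to a choice a person encoded; the software follows instructions, it does not originate them.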

Emergent phenomena

AI needs a set of rules to follow, and humans decide those rules. One example of how these rules play out is the idea of “emergence” in technology, which can be defined as the appearance of something unpredictably new in the course of organic evolution.

This means that even if a machine is not specifically programmed to do something, it can perform certain tasks and operations relatively unprompted, due to the training it has undergone and the broader context of the AI development process in which it operates.

However, this does not mean the machine is sentient. Rather, it reflects current technological advances in improving systems, which help IT teams minimize the time they spend training machines to perform tedious tasks. It all comes down to the degrees of freedom, or limitations, that humans build into their systems. The idea that AI could teach itself to do things, like the world-conquering machines we imagine, tends to stem from sensational, Hollywood-inspired horror.

Will AI become conscious?

It wouldn’t be fair to plant our feet and say we’ll never have sentient AI, but it’s more realistic to think that this kind of technology is hundreds of years away. The possibilities of AI are only beginning to unfold, and while the idea of sentient AI is interesting, we need to learn to walk before we can run.

If that happens, it will raise massive philosophical questions for the wider community: if a machine is conscious, as Google’s LaMDA was claimed to be, do we extend human rights to that machine, or give it access to a lawyer? At this point, in terms of perfecting AI in general, we still have a long way to go before we need to think about developing sentient AI.

As AI and ML continue to develop and improve, we can expect them to enhance customer and employee experiences and to minimize the time developers spend perfecting individual pieces of technology as the bigger picture starts to come together.

The idea of sci-fi AI taking over the world might make a great plot for a movie or podcast series, but technology is definitely your friend, not your enemy. Widespread adoption of AI into everyday life will normalize it, build trust from a human perspective, and strip away the long-standing layers of fear and sensationalist sentiment toward ML.

We might think Siri or Alexa is mad at us, and she is definitely listening, but we can rest assured that she is not a sentient being.

Adam Sypniewski is Deepgram’s CTO.

