From the readings, what is artificial intelligence and how is it similar or different from what you consider to be human intelligence?
One reading suggests that artificial intelligence is a sub-field of computer science whose goal is to enable the development of computers that can do things normally done by people, especially things associated with people acting 'intelligently'. The extent to which AI mimics human intelligence depends on whether we're discussing 'strong' or 'weak' AI: a 'strong' AI system attempts to closely mimic human thought, while 'weak' AI systems place little weight on how the machine does its work, focusing instead on the results. AI can be further classified as 'general' or 'narrow': general AI is designed to reason in general terms, while narrow AI is designed for a more specific set of tasks.

Are AlphaGo, Deep Blue, and Watson proof of the viability of artificial intelligence or are they just interesting tricks or gimmicks? I think they represent a specific subset of the AI groupings above. As one reading notes, even Watson isn't able to express a thoughtful view on something like ISIS, because that type of opinion isn't something its system can form on its own (unless specifically programmed to do so, which would fail to meet the definition or requirements of an AI system).

Is the Turing Test a valid measure of intelligence or is the Chinese Room a good counter argument? I think the Chinese Room is a good counter argument; its logic, at least at a surface glance, appears solid to me. I think there's more to the characteristics of true artificial intelligence than simply simulating genuine human intelligence.

Are the growing concerns over the power of artificial intelligence and its role in our lives warranted? Are you worried about the potential dangers posed by artificial intelligence? Explain why or why not. Warranted? Sure. I think it's reasonable to question the viability of this type of technology because of its breadth, power, and uniqueness.
For the same reasons, I think it's reasonable to ask whether this is a type of advancement we shouldn't engage in or pursue. With great power comes great responsibility... and opportunity for possible disaster. That being said, I'm not too worried. As Eric Schmidt and Sebastian Thrun write in Fortune, this type of worry and skepticism has preceded a number of history's technological innovations. I do believe that the scope and consequence of this type of development is greater than many of those, but I also believe there's a long way to go between where we are now and the point where we have to worry about machines with that level of independent power and ability. With the world's most talented minds thinking about this issue, I imagine we'll collectively do everything possible to ensure that the future of AI works in our favor rather than against us. And if, with all of our greatest minds involved, it still doesn't work out for us, then GGWP no-re; we would've gotten elim'd by the Covies anyway when they made their way to Earth in 2552.

Finally, could a computing system ever be considered a mind? Are humans just biological computers? What are the ethical implications of either idea?
Author: Nikolas Dean Brooks is a current Senior at Notre Dame. This blog is for the "Ethics and Professional Issues" course under Dr. Peter Bui.