Friday, February 26, 2016
Techvana, the Ntec New Zealand Computer Museum, is holding its first public open day this Saturday, February 27, from 12:00 PM to 5:00 PM. If you are in Auckland, come and check out their amazing collection of vintage computers, game consoles, phones, robotics and other technology from the past.
Monday, February 22, 2016
Measuring (machine) intelligence by MCQ?
I'm idly wondering: would Ms Google pass most multiple-choice question (MCQ) tests constructed by academics? If so, should we believe that she is intelligent? Cade Metz, in an article in Wired, gives a partial answer:
"... Clinicians were helping IBM train Watson for use in medical research. But as metaphors go, it wasn't a very good one. Three years later, our artificially intelligent machines can't even pass an eighth-grade science test, much less go to medical school. So says Oren Etzioni, a professor of computer science at the University of Washington and the executive director of the Allen Institute for Artificial Intelligence, the AI think-tank funded by Microsoft co-founder Paul Allen. Etzioni and the non-for-profit Allen Institute recently ran a contest, inviting nearly 800 teams of researchers to build AI systems that could take an eighth grade science test, and today, the Institute released the results: The top performers successfully answered about 60 percent of the questions. In other words, they flunked..."
Apparently, somewhere in the world, folks have formed the belief that 60% of possible marks is a fail, no matter how the testing instrument is constructed. However, if this test of machine intelligence were run here, we'd be required - by University policy - to award a C+ pass!
Metz quotes Doug Lenat: "... If you're talking about passing multiple choice science tests, I always felt that was not actually the test AI should be aiming to pass," he says. "The focus on natural language understanding - science tests, and so on - is something that should follow from a program being actually intelligent. Otherwise, you end up hitting the target but producing the veneer of understanding." What a pleasant surprise: I agree with Doug about something!
It's an intriguing question to ask of any University: is it certifying only "the veneer of understanding" in its graduates, or do they have some "deep understanding"? More importantly, how might we reliably measure the depth of understanding in a MOOC, or in any semi-automated teaching environment that employs only MCQs, keyword matches and machine-intelligent testing procedures?
[This post was adapted from an email by my colleague Clark Thomborson.]
"... Clinicians were helping IBM train Watson for use in medical research. But as metaphors go, it wasn't a very good one. Three years later, our artificially intelligent machines can't even pass an eighth-grade science test, much less go to medical school. So says Oren Etzioni, a professor of computer science at the University of Washington and the executive director of the Allen Institute for Artificial Intelligence, the AI think-tank funded by Microsoft co-founder Paul Allen. Etzioni and the non-for-profit Allen Institute recently ran a contest, inviting nearly 800 teams of researchers to build AI systems that could take an eighth grade science test, and today, the Institute released the results: The top performers successfully answered about 60 percent of the questions. In other words, they flunked..."
Apparently: somewhere in the world, folks have formed the belief that 60% of possible marks is a fail, no matter how the testing instrument is constructed; however if this test of machine intelligence were run here, we'd be required - by University policy - to award a C+ pass!
Metz quotes Doug Lenat: "... If you're talking about passing multiple choice science tests, I always felt that was not actually the test AI should be aiming to pass," he says. "The focus on natural language understanding-science tests, and so on-is something that should follow from a program being actually intelligent. Otherwise, you end up hitting the target but producing the veneer of understanding." What a pleasant surprise: I agree with Doug about something!
It's an intriguing question to ask of any University: is it certifying only "the veneer of understanding" on its graduates, or do they have some "deep understanding"? More importantly, how might we reliably measure the depth of understanding in a MOOC, or in any semi-automated teaching environment employing only MCQs and keyword-matches and machine-intelligent testing procedures?
[This post was adapted from an email by my colleague Clark Thomborson.]
Tuesday, February 16, 2016
Researcher illegally shares millions of science papers free online to spread knowledge
There has been an ongoing controversy over intellectual property, copyright and ownership of scientific papers for a while now. Basically, academics, usually funded by their governments and hence the public through taxation, do research and publish their results as scientific papers in journals and conference proceedings. These publications are not usually in the public domain, as you might expect them to be, but are owned by private companies, increasingly dominated by a few massive multinational publishers like Elsevier. The publishers then charge the academics and their universities to access the journals and conference proceedings, even though the academics gave the publishers the rights to their work for free in the first place. Clearly this is crazy, and there has been a growing movement to boycott Elsevier for several years now.
This may come to a head soon, since a researcher in Russia has made over 48 million journal articles - almost every single peer-reviewed paper published - freely available online. She's refusing to shut the site down, despite an injunction and lawsuit from Elsevier. Science Alert reports that the Russian researcher is standing her ground "claiming that it's Elsevier that have the illegal business model... referring to article 27 of the UN Declaration of Human Rights, which states that everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits".
Now I'm definitely not recommending that you access scientific papers illegally, but if you're interested, the Russian website is Sci-Hub.
Wednesday, February 10, 2016
Crowdsourced Seizure Prediction
Epilepsy, which is often compared to an electrical storm in the brain, affects nearly one percent of people worldwide. The most common treatment is medication, which can leave people feeling tired or dizzy. Other options include surgery and a new type of implanted device that uses electrical pulses to prevent seizures. A better prediction algorithm has the potential to make implanted devices more effective. Ideally, the devices would work a bit like a heart defibrillator - only delivering electrical current when it's needed. However, to date prediction algorithms have been little better than random chance. That was the case until the results of a competition, announced at the American Epilepsy Society's annual meeting in Seattle, showed the value of sharing a complex problem in neuroscience with experts from unrelated fields. The winning team included a mathematician and an engineer, but no doctor. The contest, with a first prize of $15,000, was sponsored by the National Institute of Neurological Disorders and Stroke (NINDS), the American Epilepsy Society and the Epilepsy Foundation. Over 500 teams entered via Kaggle.com, a website that allows researchers and companies to post data in the cloud for competitors to analyze, an approach known as crowdsourcing.
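To give a feel for what the contest asked of entrants, here is a minimal sketch of the kind of classifier involved: label short EEG clips as preictal (a seizure is imminent) or interictal (baseline). The band-power features, the synthetic stand-in data and the scikit-learn model are illustrative assumptions only, not the winning team's method.

```python
# A rough sketch of a seizure-prediction classifier: band-power features per EEG
# channel, fed to a logistic regression. Synthetic data stands in for real clips.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def band_power_features(clip, fs=400):
    """Crude per-channel power in a few frequency bands, via the FFT."""
    spectrum = np.abs(np.fft.rfft(clip, axis=1)) ** 2
    freqs = np.fft.rfftfreq(clip.shape[1], d=1.0 / fs)
    bands = [(1, 4), (4, 8), (8, 12), (12, 30), (30, 70)]  # delta..gamma, roughly
    return np.array([spectrum[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                     for lo, hi in bands]).ravel()

# Synthetic stand-in for labelled 10-second, 16-channel EEG clips sampled at 400 Hz.
n_clips, n_channels, n_samples = 200, 16, 4000
clips = rng.normal(size=(n_clips, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_clips)          # 1 = preictal, 0 = interictal
X = np.array([band_power_features(c) for c in clips])

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, labels, cv=5, scoring="roc_auc")
print("cross-validated AUC:", scores.mean())
```

On this random stand-in data the AUC hovers around 0.5, which is exactly the "little better than random chance" baseline the contest set out to beat; on the real data, the interesting work lies in the features and in validating predictions across patients.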
Thanks to my colleague, Mark Wilson, for finding this story in Discovermagazine.com.
Friday, February 5, 2016
Donald Murray - Data Communications Pioneer
[Photo: Donald Murray]
Donald Murray, MA, MIEE, completed a B.A. in Science at Auckland University College in 1890! He had an interesting and varied career: initially a trainee farmer, he then became a newspaper reporter, first with the New Zealand Herald and then with the Sydney Morning Herald. Seeing the widespread use of telegraphy by newspapers, he became interested in extending telegraphy to use standard typewriter keyboards. Moving to London, he worked on solving this problem, founding his own company to manufacture and sell his systems. He retired to Monte Carlo in 1925 and spent his remaining years writing on philosophy, work still unfinished at the time of his death in Switzerland in 1945.
A summary of Donald Murray’s career is now on our department’s computer history pages. And there is a lot more detail about Murray and his family for those who are interested.
[This post was written by Bob Doran]
Wednesday, February 3, 2016
Internet of Things security is so bad, there’s a search engine for sleeping kids
I've made a couple of posts about home automation and the growing Internet of Things (IoT); automating my house is a sort of hobby. As uptake grows, users (and manufacturers) need to take security much more seriously, since it's possible for an insecure device to compromise an entire home network. For example, a popular first IoT device for new parents is a WiFi and cloud-connected baby monitor with webcam. Parents understandably love the idea of being able to check up on their baby (and the babysitter) using their smartphone from anywhere. However, many of these webcams are insecure, and there is now a search engine for the Internet of Things, called Shodan, that lets users easily browse vulnerable webcams. This is disturbing, creepy and potentially dangerous. Ars Technica recently published a piece that makes a good argument that users need to take more responsibility, but that manufacturers must also play their part by designing security into IoT devices. If necessary, governments may have to legislate.
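If you want a rough sense of how exposed a gadget on your own network is, the sketch below simply tries to connect to a few ports that are commonly left open and insecure on IoT devices (telnet, unauthenticated web admin pages, RTSP video streams). The address and port list are assumptions for illustration, it is no substitute for a proper security review, and you should only probe devices you own.

```python
# Check whether a device on your own LAN accepts TCP connections on ports that
# are commonly left open (and insecure) on IoT gear. Illustrative sketch only.
import socket

COMMON_IOT_PORTS = {
    23: "telnet (often shipped with default credentials)",
    80: "http admin interface",
    554: "rtsp video stream",
    8080: "alternate http admin interface",
}

def check_open_ports(host, ports, timeout=1.0):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            try:
                if sock.connect_ex((host, port)) == 0:  # 0 means the handshake succeeded
                    open_ports.append(port)
            except OSError:
                pass  # treat resolution errors or timeouts as "not reachable"
    return open_ports

if __name__ == "__main__":
    camera = "192.168.1.50"  # hypothetical address of a webcam on your own LAN
    for port in check_open_ports(camera, COMMON_IOT_PORTS):
        print(f"{camera}:{port} is open - {COMMON_IOT_PORTS[port]}")
```

An open port isn't automatically a vulnerability, but anything reachable without authentication from outside your home network is exactly what services like Shodan index.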