We have been told that technology can’t be stopped and that we are heading for a jobless economy. We have been told that anyone who disagrees is an old fogey standing in the way of progress.
Peter Greene says “nonsense.”
Do you remember the predictions, about 15 years ago, that MOOCs would drive most institutions of higher education out of existence? Didn't happen. Except for job-oriented or highly motivated learners, online instruction is boring except in small doses monitored by a teacher.
Greene writes that venture capitalists have lost patience with self-driving trucks and cars.
As millions of parents have become involuntary home-schoolers, they see the limits of online instruction. Boredom sets in.
There are literally dozens of reasons online instruction will not "sweep the nation." Right now it simply places distractions (Facebook, Snapchat, etc.) closer at hand. But at its core, education is a social process, a process in which we learn, learn how to learn, and learn how to work with others. It requires others (teachers, fellow students, and so on).
While some things can be enhanced by online instruction, replaced? Uh-uh. Karate and dancing are not things you can learn by yourself; neither are most things.
I've been thinking about driverless trucks since my son-in-law started a dump truck business a month ago. In yesterday's post, Greene questions the term "artificial intelligence," which is more a marketing label than a statement of fact. There is no intelligence in AI. There is a series of algorithmic options, but no real thinking. Computers are no more sentient than a toaster. I hope we never get to the point of driverless dump trucks. Fully loaded, they have the power and inertia to take out a block of homes before they come to a halt. One line of faulty code could make this happen.
Hopefully, this crisis will reveal the limitations of computer instruction. Yesterday, my grandson's teacher held a group chat with students online. Only four students in the class participated. Technology has its limitations, and so do people in a society with so much income inequality.
http://curmudgucation.blogspot.com/2020/03/business-and-humanity-when-people-tell.html
Correction: not inertia, mass.
I think the term you are actually looking for is "kinetic energy" (energy of motion), since that is what does all the damage to the houses when the truck plows through them and is thereby brought to a halt.
All the energy of motion of the truck has to be somehow dissipated by the crushing, breaking and movement of the houses (off their foundations, for example) and also damage to the truck itself.
Kinetic energy depends on both the mass and the velocity (of the dump truck relative to the houses, in this case).
Its actual value is KE = (1/2)mv^2.
The kinetic energy is therefore directly proportional to the mass, but proportional to the velocity squared, which means that if the dump truck is going twice as fast relative to the houses, it will have four times the energy.
Inertia is really synonymous with "inertial mass," or just "mass" (since gravitational mass and inertial mass are the same, by Einstein's equivalence principle).
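To put rough numbers on it, here is a minimal back-of-the-envelope sketch in Python. The 30,000 kg loaded weight and the two speeds are assumptions for illustration, not figures from the post:

```python
# Back-of-the-envelope kinetic energy of a loaded dump truck.
# Assumed figures: ~30,000 kg gross weight; speeds of 30 and 60 mph.

def kinetic_energy_joules(mass_kg: float, speed_mph: float) -> float:
    """KE = (1/2) * m * v^2, with speed converted from mph to m/s."""
    v = speed_mph * 0.44704  # 1 mph = 0.44704 m/s
    return 0.5 * mass_kg * v ** 2

mass = 30_000.0  # kg (assumed)
for mph in (30, 60):
    print(f"{mph} mph: {kinetic_energy_joules(mass, mph) / 1e6:.1f} MJ")

# 30 mph -> ~2.7 MJ; 60 mph -> ~10.8 MJ.
# Doubling the speed quadruples the energy, exactly as (1/2)mv^2 predicts.
```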
But you are right about AI really being a misnomer.
Most of what is referred to as AI these days is actually “weak AI” and really has nothing to do with “intelligence” in the usual sense. Some of it is algorithmic but some of it is not (the output from a neural net, for example).
The latter may actually be the most problematic when it comes to automated truck operation, because in many cases the behavior of the truck is not predictable the way an algorithm's output would be. And neural networks sometimes exhibit bizarre, highly unexpected behavior when confronted with situations that were not part of their training data.
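A minimal sketch of that failure mode, using scikit-learn purely for illustration (the network, data, and test points are all invented): a small net fit on inputs between 0 and 1 answers sensibly there, but asked about an input far outside its training range, it returns an essentially arbitrary number.

```python
# A small neural net behaves sensibly on inputs like its training data
# and unpredictably outside that range (all numbers here are illustrative).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(200, 1))  # training inputs in [0, 1]
y_train = np.sin(2 * np.pi * X_train).ravel()   # target: one sine cycle

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(X_train, y_train)

print("in-distribution,  x=0.25:", net.predict([[0.25]]))  # roughly sin(pi/2) = 1
print("out-of-distribution, x=5:", net.predict([[5.0]]))   # anything goes
```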
Real AI is now referred to as "strong AI" and still does not exist, except in the heads of computer "scientists" (sic).
I should have said inertia "depends on" inertial mass rather than "is synonymous with."
our future: “one line of faulty code…”
Peter has outdone himself with this one. Superb. I was particularly taken by this piece of careful thinking from his essay:
This is the flip side of our old issue, standardization. To make a measurement algorithm work, you have to set up a system that excludes all the edge events; one simple way to do that is multiple choice test questions. This is why AI still hasn’t a hope in hell of actually assessing writing in any meaningful way– because a written essay will often include edge elements. In fact, the better essays are better precisely because the writer has included an edge element, a piece of something that falls outside the boundaries of basic expectations.
I've long dreamed of what Rumi or Blake or Emerson or Joyce would write in response to one of those standardized test writing prompts. LOL!!!! Imagine the e-grading bots, or even the humans at the Pearson, Not Persons Grading Center, trying to deal with one of those!
We have proven that nonsense papers that conform to the level of standardization required by the algorithm can be highly rated despite saying absolutely nothing. Computers cannot think.
This is one of Greene’s most insightful posts.
Google "Babel Generator" to get an instant essay that will win high marks from a robograder.
AI is incapable of "understanding," which means it is incapable of distinguishing between a meaningful essay and complete gibberish.
That’s why automated essay graders can be (and have been) easily gamed by people who know how they work.
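Here is a hypothetical sketch of how the gaming works. This scorer is invented for illustration and is not any vendor's actual model, but it rewards the same surface features the real graders are reported to reward: length, long words, and "academic" connectives, with no check on meaning.

```python
# A deliberately naive essay scorer of the kind that can be gamed
# (hypothetical, for illustration -- not any vendor's real model).

CONNECTIVES = {"moreover", "therefore", "consequently", "furthermore"}

def surface_score(essay: str) -> float:
    words = essay.lower().split()
    if not words:
        return 0.0
    avg_word_len = sum(len(w) for w in words) / len(words)
    connective_bonus = sum(w.strip(".,;") in CONNECTIVES for w in words)
    return 0.1 * len(words) + avg_word_len + 2 * connective_bonus

gibberish = ("Moreover, the ineluctable paradigm perambulates; therefore, "
             "consequently, epistemology transmogrifies furthermore into "
             "incontrovertible obfuscation. ") * 5
honest = "The dog ran home. It was happy."

print("gibberish:", surface_score(gibberish))  # high score, zero meaning
print("honest:   ", surface_score(honest))     # low score, actual meaning
```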
Despite all the hype coming from CS experts over the decades, producing a computer with understanding remains a hard nut to crack.
Autocorrect turned "nut" into "but" there the first time I typed it. Seems like the autocorrect AI wants to talk about but cracks.
Same is true, btw, of readability formulas. They look at a) word frequency, based on a count from some language corpus; b) sentence length. For most of them, that’s it. As if this were easy:
All time exists at all time.
And this were hard:
A moo cow came down along the road, and a very nice cow it was, and the cow said “Moo,” and I said, “You, too,” and it tickled me silly and pink that she talked to me, and I thanked her and wished her a lovely morning, saying, “Good day to you, Mrs. Moo!”
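Flesch Reading Ease, one of the oldest such formulas, uses sentence length plus syllables per word rather than corpus frequency, but the point is the same: only surface features go in. A crude implementation (the syllable counter is a rough heuristic) rates the cryptic six-word line as easier than the perfectly plain moo-cow sentence:

```python
# Crude Flesch Reading Ease: sentence length and syllables per word
# are the only inputs. Meaning never enters into it.
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) \
                   - 84.6 * (syllables / len(words))

cryptic = "All time exists at all time."
moo_cow = ('A moo cow came down along the road, and a very nice cow it was, '
           'and the cow said "Moo," and I said, "You, too," and it tickled me '
           'silly and pink that she talked to me, and I thanked her and wished '
           'her a lovely morning, saying, "Good day to you, Mrs. Moo!"')

print(flesch_reading_ease(cryptic))  # ~102: off the top of the "very easy" scale
print(flesch_reading_ease(moo_cow))  # ~80: rated notably harder
```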
I’d love to see what a computer grader would do with Ladle Rat Rotten Hut
https://annex.exploratorium.edu/exhibits/ladle/
Autocorrect already changed "Hut" to "but" the first time I wrote it. Seems to be obsessed with buts.
"Dun stopper laundry wrote! An yonder nor sorghum-stenches, dun stopper torque wet strainers!"
“Hoe-cake, murder”
Ha ha ha.
Cracks me up every time I read it.
Wake me up when there is a computer that can recognize that without being programmed to do so.
I might then have reason to pay attention to the CS hype. Might.
The interesting thing about Ladle Rat Rotten Hut is that it is actually meaning masquerading as nonsense.
So real understanding is required to "get it." And there is far more involved than simple translation of words, as with relatively "simple" natural-language translation (which isn't really simple at all).
A computer can easily have the story of Little Red Riding Hood stored in a vast database and be programmed so that, whenever it encounters the various keywords together, it outputs "That's Little Red Riding Hood," giving the appearance of recognition.
But the “recognition” is actually not real and actually involves no cognition at all.
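A minimal sketch of that pseudo-recognition (keywords and test sentences invented for illustration): it "recognizes" the plain-English story and is helpless against the sound-alike version, because it matches strings, not meaning.

```python
# "Recognition" by keyword lookup: string matching, no cognition.

KEYWORDS = {"little", "red", "riding", "hood", "wolf", "grandmother"}

def recognize(text: str) -> str:
    words = set(text.lower().split())
    if len(KEYWORDS & words) >= 3:
        return "That's Little Red Riding Hood"
    return "No idea"

plain = "Little Red Riding Hood met a wolf on the way to her grandmother's house."
anguish = ("Wants pawn term dare worsted ladle gull hoe lift "
           "wetter murder inner ladle cordage.")  # sound-alike version

print(recognize(plain))    # "That's Little Red Riding Hood"
print(recognize(anguish))  # "No idea" -- the meaning survives, the keywords don't
```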
There is a famous test (the Turing Test) that is supposed to determine whether computers have acquired human-level intelligence. The test is whether the computer can fool the average person in a "conversation" (through a user interface) into thinking the computer is actually another person.
It turns out that it's very easy to fool humans (surprise!), and as a result the test is little more than a gimmick. But lots of computer "scientists" (sic) have taken it quite seriously over the years, despite its being utterly useless as a test of "artificial intelligence" or of machine "thinking."
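ELIZA demonstrated this back in the 1960s: a handful of pattern-and-reflection templates was enough to convince some users they were chatting with a person. A minimal sketch of the trick (the rules here are invented, in the ELIZA style):

```python
# ELIZA-style "conversation": reflect the user's words back through
# canned templates. No understanding anywhere, yet people were fooled.
import re

RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r".*\b(mother|father|family)\b.*", "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # the all-purpose dodge

print(respond("I am worried about driverless trucks."))
# -> Why do you say you are worried about driverless trucks?
print(respond("My mother hated computers."))
# -> Tell me more about your mother.
```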
Ask not what technology (AI, or otherwise) can do (for profits) but what technology can do for humanity.
I am not a techie, but for a conference two years ago, I wanted to address the creep and creepiness of tech in education.
In that presentation, I offered information about the relatively new “Partnership on Artificial Intelligence to Benefit People and Society” (PAI) formed by leaders of seven corporations, all of whom want the rest of us to think that artificial intelligence—AI—can solve major problems.
The “Partnership on Artificial Intelligence to Benefit People and Society,” began in 2016 as a PR campaign designed to put a positive spin on AI in the face of increasing public and press disapproval of data mining and the use of algorithms to drive online behavior (e.g. Facebook’s profit-seeking from Russian operatives who interfered with our elections).
That PR effort is still important because… the founding companies of PAI are all major profit-seekers who thrive on acquiring and hoarding massive amounts of personal data on individuals. Here are some PR briefs about or from these founders.
–Amazon. Gathers data from customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa.
–Apple. Sells the iPhone, iPad, Mac, Apple Watch, and Apple TV; has software platforms such as iOS, macOS, watchOS, and tvOS; and offers services via the App Store, Apple Music, Apple Pay, and iCloud.
–DeepMind. Based in London and acquired by Google (part of the Alphabet group) in 2014, it works on AI programs that can "learn to solve any complex problem without needing to be taught how."
–Google. A subsidiary of Alphabet Inc. Google products/platforms include Search, Maps, Gmail, Android, Google Play, Chrome, and YouTube.
–Facebook. Runs Facebook AI Research (FAIR) and an Applied Machine Learning group.
–IBM. Watson is "…the most advanced AI computing platform available today, deployed in more than 45 countries and across 20 different industries."
–Microsoft. Says, “More than any other technology that has preceded it, AI has the potential to extend human capabilities, empowering us all to achieve more.”
PAI aspires to address problems in education, climate change, food, health, transportation, and inequality. Such grandiose ambitions may well be the biggest problem with AI promoters.
I don’t think corporate systems of artificial intelligence should be trusted to solve humanity’s most pressing challenges.
The original public relations purpose of PAI is still being forwarded by the “AI and Media Integrity Steering Committee.” Members work to “confront the emergent threat of AI-generated mis/disinformation, synthetic media, and AI’s effects on public discourse.”
This committee’s March 12, 2020, report titled “The Deepfake Detection Challenge: Insights and Recommendations for AI and Media Integrity” is more about getting tech companies to work together (process of work problems) than remedies for the proliferation of AI-generated synthetic audio/visual content.
By March 25, 2020, PAI was able to report on a preliminary MediaReview typology for describing, labeling, and testing audio-visual manipulations with audiences. This work builds on the Washington Post's fact-checking scheme described here: https://firstdraftnews.org/latest/wapo-guide-to-manipulated-video/
PAI now has 100 groups from 13 countries as "partners." Over half of these are non-profits. The meaning of partner is not clear, nor is the cost of a partnership. The 990 form suggests that PAI survives on a relatively small budget of about $7 million, with most of its funds from memberships. The website lists all partners: https://www.partnershiponai.org/partners/
A different effort is under development by The Institute of Electrical and Electronics Engineers (IEEE). This organization has enlisted knowledgeable people from six continents to identify socially responsible uses of intelligent and autonomous systems and technologies. They hope to agree on ethical principles that give priority to human well-being in a given cultural context. Among the issues they hope to address are:
1. profiling individuals, as is common on Facebook, Amazon, and other websites;
2. protecting personally identifiable information;
3. the "right to be forgotten";
4. autonomous machines (cars, robots, weapons of war);
5. consequences from biased or error-ridden data; and
6. legal and moral hazards in data mining and the design of algorithms.
See more at https://standards.ieee.org/industry-connections/ec/autonomous-systems.html
Both of these organizations, and others, are trying to put the best face forward on AI, along with some bragging about their moral leadership, as if the intelligence they have created, totally artificial and nothing but computer code, could make moral judgments about truth, beauty, and the rest. BS.
"Seitz-Axmacher points at 'edge cases,' the rare-but-significant events that can happen. In driving, these events are rare but significant, like a deer or small child darting into the road."
When we teach statistics to high school students, the most important lesson we teach is that the improbable is what most often happens. This paradox is the subject of many a wonderful narrative; think of Charles Dickens and his use of coincidence. We learn that the probability that at least one of several mutually exclusive things happens is the sum of the individual probabilities, so with enough rare possibilities in play, something rare becomes very likely. Thus it is very likely that you either run into a relative at Wal-Mart in Dothan, AL, or find a 1955 penny, or drop your keys into the only opening to a new cave, or … You get the picture. Makes life exciting.
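A quick sketch of the arithmetic (the probabilities are made up): even if each individual oddity is a one-in-ten-thousand shot, give life ten thousand independent chances and the odds that at least one of them happens are nearly two in three.

```python
# Why the improbable happens all the time: many rare, (roughly)
# independent chances add up. The numbers below are illustrative.
p_each = 1e-4        # each individual oddity: 1 in 10,000
n_chances = 10_000   # distinct rare things that could happen

p_none = (1 - p_each) ** n_chances
print(f"P(at least one rare event) = {1 - p_none:.2f}")  # ~0.63
```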
Unfortunately, we do not want life in a semi or a dump truck to be exciting. It is like the difference between war and target practice; the former is too exciting. It is those edge events that make AI-driven trucks a thing of the distant future at best. Sort of like the prediction, back in the 90s, that keyboarding would go the way of the covered wagon. Did not occur.
AI & the latest in touchscreen voting machines (remember, they're resistant to hacking… not).
Don’t think Disney had this in mind with “Tomorrowland.”
More like Dystopialand…& what economy are we referring to here? Isn’t the economy supposed to benefit human beings?
Oh, that's right: it benefits the top 1% of the top 10%, as Bernie would say.
Economic Theory:
Growth that’s exponential
Really needn’t worry
Earth is not essential
The obsession with big data has to end sometime. My guess is it will end when Bill Gates and Jeff Bezos stop paying people off to make it profitable. The only thing it does is reduce people down to numbers. It will end. The tech-lash after we all spend months in cyberspace is going to be powerful, if I may venture a prediction. I just wish that, in the meantime, tech companies would stop misusing words like ‘intelligence’ and ‘smart’ to describe products that are unintelligent in substance and unwise to use because of their surveillance capabilities.
AI will drive trucks and will replace teachers. Not in the Ivies though.
P.S. Have you heard of the Duolingo app for 3-year-olds that teaches them reading and writing using phonics? https://www.google.com/amp/s/www.theverge.com/platform/amp/2020/3/26/21194763/duolingo-abc-ios-app-teach-kids-read-english-free-download