Some go up and some go down?

A response to the question posed in the previous post:

We had a brief moment when we were moving toward social literacy in computing. At St. John’s, it occurred thirty years ago, and I remember it clearly because it was the reason for my being hired. The number of programming classes exploded beyond the comp sci teacher’s ability to teach them. A one-semester elective in “BASIC Programming” (as in the BASIC computer language) was taken by about 3/4 of the sophomore class every year. Most of the rest picked it up as seniors.

Then the history requirement (a year in 9th) was increased, first by adding a semester in 10th, which put a huge damper on the momentum because it competed with the other 10th-grade electives, and later by expanding to a full year in 10th, which brought programming down to the level you see now.

However, a trend running counter to the dumbing-down of the interface was, at least initially, a bigger impediment to the movement, I think: the increasing sophistication of the user interface as windows and other kinds of graphical interfaces came on the scene, which made programming much harder to start. When all we had was a text-based interface, it was much easier to begin programming. Objects made sophisticated programming easier but beginning programming much harder.
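
To make the contrast concrete, here is the kind of complete, working program a beginner could type into one of those text-based BASIC systems in a few minutes. (This is a generic sketch of the era’s style, not an actual exercise from our course.)

    10 REM ASK FOR A NAME AND GREET THE USER
    20 PRINT "WHAT IS YOUR NAME?"
    30 INPUT N$
    40 PRINT "HELLO, "; N$
    50 REM PRINT A SMALL TABLE OF SQUARES
    60 FOR I = 1 TO 5
    70 PRINT I; " SQUARED IS "; I * I
    80 NEXT I
    90 END

In a windowed environment, even a greeting like that requires setting up a window and an event loop before the student’s own idea appears anywhere, and that overhead is exactly the new barrier to entry I mean.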

As far as the scribe analogy to Gutenberg in the original post goes, I agree, but that’s been true of most major technological inventions. When I was 16, I could take apart most of my car under the hood, fix what was wrong, try new stuff, and so on. Now, almost no one does that because of the computerification of cars. You need expensive equipment just to diagnose what’s going on. Again, the subtle, sophisticated tasks are made easier by a computer, and gas mileage goes up. The downside is that there is almost no entry-level tinkering.

Today, almost no one does anything with a car but drive it and buy accessories for it: they *use* it, but few understand how it works, and even fewer can work on it. I don’t like that trend either, but I wager that it’s irreversible short of Armageddon.

One reason I think using Mathematica in my math classes is better than letting students use only calculators is that it exposes them to at least a bit of programming-like thinking.
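
Even something as small as naming a function and then asking questions about it is a first step into that kind of thinking. A minimal sketch in Mathematica (the function f here is a made-up illustration, not a particular class assignment):

    (* Define a function once, by name, so it can be reused. *)
    f[x_] := x^2 - 3 x + 2

    (* Then interrogate it: where are its roots, and what does it look like? *)
    Solve[f[x] == 0, x]
    Plot[f[x], {x, -1, 4}]

A calculator hands back one number at a time; here the student defines an idea and then manipulates it, which is the germ of programming.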

It’s simply true, I would assert, that as a technology matures, it becomes a tool, and most people want to use tools automatically; they do not want to understand them. The understanding is left to the people who will become the “scribes”: the engineers, programmers, and others who enjoy the tool itself and will work to make it better.

The most important tool we have is our minds, which I gather is your point. And it’s why teachers need to resist the urge to be too “helpful” by telling kids what to do: that detracts from students’ ability and willingness to think for themselves, even though it produces short-term gains. And some teachers seem genuinely to believe that our students can’t think unless the teacher lays out everything step by step for them and never gives a problem or assignment the kids haven’t already seen. That’s a huge disservice and a large disincentive for kids to think and to grow intellectually.

It’s also easier for the teacher, though. When I’m very tired or sick, I sometimes find myself just answering kids’ questions about how to solve a problem because I don’t have the energy to search out the leading questions that would push them to think successfully for themselves.

I used to get annoyed, but now I laugh, when I hear a student (or worse, a parent) say that kids “teach themselves” in my classes. While that’s true in a sense, the parent’s implication is that I don’t do anything to help. I invite them to try getting kids to think in productive ways. It’s a hell of a lot of work, much more than the alternatives of simply not helping or of simply answering questions.

One last point, to address the simplification of powerful software referenced in the previous post: nothing “had” to be simplified; the manufacturers *chose* to make the creativity tools simpler and easier to use in order to sell more of them.

My provocative question (i.e., “a question to provoke/annoy”) is whether the total amount of creativity in the world was actually diminished by such choices. In terms of transportation, for instance, more people drive more, and use their cars for more things, than when we had to do more of our own repair and maintenance. With the iPad apps, is there now a higher base level of creativity than when almost no one but professionals and gifted amateurs “created content,” even if the peaks are lower?

Expand the base enough, and the weighting effect kicks in. A million times 0.01 is, after all, more than a thousand times 1.0: 10,000 versus 1,000.

That’s not meant to be a rhetorical question, by the way, but one for serious consideration. If “the masses” are more creative than they were before, even if nowhere near the level of the talented amateurs of yesteryear, is that such a bad thing? After all, maybe the iPad apps don’t allow fine control of the creative process, but I promise you that some software, somewhere, has evolved to fill the gap. It may not be Final Cut Pro anymore, but I don’t see that video special effects and editing are disappearing from movies.
