That one teenager you know does not represent all teenagers

Some tech pundits fall into a bad habit of spotting trends among the dwindling number of young people they encounter on a regular basis.

Here’s an example from Adam Rifkin’s post, “Tumblr Is Not What You Think”:

A teenage friend of mine told me recently that he tries to post something to his Tumblog on an hourly basis …

So, there’s a teenage friend, and that friend has a hyperactive Tumblr habit.

This isn’t a point. It’s an aside. And it certainly doesn’t qualify as a Trend to Watch.

It may very well be that Tumblr is where young people gather. I have no problem with that conclusion. But we’ve got to stop with this one-young-person-speaks-for-them-all nonsense.

Think back to when you were a teenager. Were your actions representative of everyone in your age group? Were they even representative of your close friends? Sometimes, but not always.

I don’t mean to pick on Rifkin. Much of his post focuses on survey results and his own insights. But that one line about that one teenager really stuck with me. I’m not sure why it’s in there.

Notable things: How do you give ethics to a robot? Let’s keep political pundits honest with batting averages, and ad-banner honesty from The Onion

Self-driving cars. Drones. Robot armies. All of these things are stepping from science fiction into our daily lives, yet we haven’t addressed a fundamental question:

How do we teach our machines to be ethical?

Gary Marcus explores the repercussions of machine ethics in this fascinating essay. Of particular note is the following excerpt, which contrasts machine ethics with humanity’s still-under-construction ethical methods:

The thought that haunts me the most is that human ethics themselves are only a work-in-progress. We still confront situations for which we don’t have well-developed codes (e.g., in the case of assisted suicide) and need not look far into the past to find cases where our own codes were dubious, or worse (e.g., laws that permitted slavery and segregation). What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first century idea of morality.

In many ways what we’re searching for is a way to make machine ethics better than our own. How do you even begin to do that?


Proposed: A batting average for political pundits and pollsters.

+1
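
For what it’s worth, the arithmetic here is the same as in baseball: correct calls divided by total calls. Here is a minimal sketch of how that tally might work, with entirely made-up pundits and outcomes for illustration:

from collections import defaultdict

# Hypothetical prediction records: (pundit, was the call correct?)
# All names and outcomes below are invented for illustration.
calls = [
    ("Pundit A", True),
    ("Pundit A", False),
    ("Pundit A", True),
    ("Pundit B", False),
    ("Pundit B", True),
]

# A pundit's "batting average": correct calls / total calls, like hits / at-bats.
totals = defaultdict(lambda: [0, 0])  # pundit -> [correct, total]
for pundit, correct in calls:
    totals[pundit][0] += int(correct)
    totals[pundit][1] += 1

for pundit, (correct, total) in sorted(totals.items()):
    print(f"{pundit}: {correct / total:.3f}")

The hard part, of course, isn’t the division. It’s agreeing on what counts as a call and who keeps score.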


Every ad-driven website should be required, by law, to include this on its terms of service page:

… we can go through a whole dog-and-pony show here where I pretend that this column exists as a forum for ideas, and that I act as an independent voice who isn’t beholden to advertisers, and the power of the First Amendment, and blah blah, etc. etc. But let’s get real for a second here, okay? This column — nay, this entire website, this entire industry we call journalism — exists for one purpose and one purpose only: to sell ads. Lots of ads. Big, stupid ads. Ads with loud videos that play when you run your mouse cursor over them. Ads with pictures of supermodels and bacon cheeseburgers and beer bottles dripping with condensation. Ads with huge fricking graphics of SUVs that “drive” across your screen as though you were living in some sort of damned nightmare world. In short, ads that will make poor, honest working saps like you — yes, you, reader — click on them so that The Onion can continue stocking the coffers and I can continue to send my kid through four years of Cornell’s hotel management school.

Via The Onion