March 28, 2024

Why Do Innovation Clusters Form?

Recently I attended a very interesting conference about high-tech innovation and public policy, with experts in various fields. (Such a conference will be either boring or fascinating, depending on who exactly is invited. This one was great.)

One topic of discussion was how innovation clusters form. “Innovation cluster” is the rather awkward term for a place where high-tech companies are concentrated. Silicon Valley is the biggest and best-known example.

It’s easy to understand why innovative people and companies tend to cluster. Companies spin out of other companies. People who move to an area to work for one company can easily jump to another one that forms nearby. Support services develop, such as law firms that specialize in supporting start-up companies or certain industries. Nerds like to live near other nerds. So once a cluster gets going, it tends to grow.

But why do clusters form in certain places and not others? We can study existing clusters to see what makes them different. For example, we know that clusters have more patent lawyers and fewer bowling alleys, per capita, than other places. But that doesn’t answer the question. Thinking that patent lawyers cause innovation is like thinking that ants cause picnics. What we want to know is not how existing clusters look, but how the birth of a cluster looks.

So what causes clusters to be born? Various arguments have been advanced. Technical universities can be catalysts, like Stanford in Silicon Valley. Weather and quality of life matter. Cheap land helps. Some argue that government-funded technology companies can be a nucleus – and perhaps funding cuts force previously government-funded engineers to improvise. Cultural factors, such as a general tolerance for experimentation and failure, can make a difference.

Simple luck plays a role, too. Even if all else is equal, a cluster will start forming somewhere first. The feedback cycle will start there, pulling resources away from other places. And that one place will pull ahead, for no particular reason except that it happened to reach critical mass first.
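
This is easy to see in a toy simulation. In the sketch below (Python; all the parameters are invented), ten identical places compete for companies, and each new company picks a place with probability proportional to the companies already there. No place is better than any other, yet one place reliably grabs about three times its "fair share":

    import random

    def leader_share(num_places=10, num_firms=1000):
        # Every place starts with one seed firm, so all start exactly equal.
        counts = [1] * num_places
        for _ in range(num_firms):
            # Each new firm favors places that already have firms:
            # the feedback cycle, and nothing else.
            place = random.choices(range(num_places), weights=counts)[0]
            counts[place] += 1
        return max(counts) / sum(counts)

    runs = [leader_share() for _ in range(200)]
    print(sum(runs) / len(runs))  # about 0.3, vs. the 0.1 an even spread would give

The winner differs from run to run, which is the point: the mechanism guarantees that some place wins big, but says nothing about which one.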

We like to have explanations for everything that happens. So naturally we’ll find it easy to discount the role of luck, and give credit instead to other factors. But I suspect that luck is more important than most people think.

Return to Monkey High

Newsweek has released its annual list of America’s top high schools, using the same flawed formula as last year. Here’s what I wrote then:

Here is Newsweek’s formula:
“Public schools are ranked according to a ratio devised by Jay Mathews: the number of Advanced Placement and/or International Baccalaureate tests taken by all students at a school in 2004 divided by the number of graduating seniors.”

Both parts of this ratio are suspect. In the numerator, they count the number of students who show up for AP/IB tests, not the number who get an acceptable score. Schools that require their students to take AP/IB tests will do well on this factor, regardless of how poorly they educate their students. In the denominator is the number of students who graduate. That’s right — every student who graduates lowers the school’s rating.

To see the problems with Newsweek’s formula, let’s consider a hypothetical school, Monkey High, where all of the students are monkeys. As principal of Monkey High, I require my students to take at least one AP test. (Attendance is enforced by zookeepers.) The monkeys do terribly on the test, but Newsweek gives them credit for showing up anyway. My monkey students don’t learn enough to earn a high school diploma — not to mention their behavioral problems — so I flunk them all out. Monkey High gets an infinite score on the Newsweek formula: many AP tests taken, divided by zero graduates. It’s the best high school in the universe!

[Note to math geeks annoyed by the division-by-zero: I can let one monkey graduate if that would make you happier.]
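
For the record, here is the whole ranking method reduced to a few lines of hypothetical Python (the function and parameter names are mine, not Newsweek's):

    def newsweek_index(ap_ib_tests_taken, graduating_seniors):
        # Numerator counts tests *taken*, not tests passed; the
        # denominator punishes every student who graduates.
        if graduating_seniors == 0:
            return float("inf")   # Monkey High: best school in the universe
        return ap_ib_tests_taken / graduating_seniors

    print(newsweek_index(500, 250))  # an ordinary school: 2.0
    print(newsweek_index(500, 1))    # let one monkey graduate: 500.0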

Though it didn’t change the formula this year, Newsweek did change which schools are eligible to appear on the list. In the past, schools with selective admission policies were not included, on the theory that they could boost their ratings by cherry-picking the best students. This year, selective schools are eligible, provided that their average SAT score is below 1300 (or their average ACT score is below 27).

This allows me to correct an error in last year’s post. Monkey High, with its selective monkeys-only admission policy, would have been barred from Newsweek’s list last year. But this year it qualifies, thanks to the monkeys’ low SAT scores.
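
In code form (again hypothetical, with the thresholds from Newsweek's new rule):

    def newsweek_eligible(selective, avg_sat=None, avg_act=None):
        # Non-selective schools are always eligible; selective schools
        # now qualify only if their students score badly enough.
        if not selective:
            return True
        return (avg_sat is not None and avg_sat < 1300) or \
               (avg_act is not None and avg_act < 27)

    print(newsweek_eligible(selective=True, avg_sat=600))   # Monkey High: True
    print(newsweek_eligible(selective=True, avg_sat=1400))  # strong test-takers: False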

Newsweek helpfully includes a list of selective schools that would have made the list but were barred due to SAT scores. This excluded-schools list is topped by a mind-bending caption:

Newsweek excluded these high performers from the list of Best High Schools because so many of their students score well above average on the SAT and ACT.

(If that doesn’t sound wrong to you, go back and read it again.) The excluded schools include, among others, the famous Thomas Jefferson H.S. for Science and Technology, in northern Virginia. Don’t lose heart, Jefferson teachers – with enough effort you can lower your students’ SAT scores and become one of America’s best high schools.

Do University Honor Codes Work?

Rick Garnett over at PrawfsBlawg asked his readers about student honor codes and whether they work. His readers, who seem to be mostly lawyers and law students, chimed in with quite a few comments, most of them negative.

I have dealt with honor codes at two institutions. My undergraduate institution, Caltech, has a simply stated and all-encompassing honor code that is enforced entirely by the students. My sense was that it worked very well when I was there. (I assume it still does.) Caltech has a small (800 students) and relatively homogeneous student body, with a student culture that features less student versus student competitiveness than you might expect. Competition there tends to be student versus crushing workload. The honor code was part of the social contract among students, and everybody appreciated the benefits it provided. For example, you could take your final exams at the time and place of your choosing, even if they were closed-book and had a time limit; you were trusted to follow the rules.

Contrasting my experience with the reports of Garnett’s readers, I can’t help wondering whether honor codes are especially problematic in law schools. Competition among law students is reportedly more cutthroat, which could be more conducive to ethical corner-cutting. Competitiveness is an engine of our adversarial legal system, so it’s not surprising that law students are eager to win every point, though it is disappointing if they do so by cheating.

I’ve also seen Princeton’s disciplinary system as a faculty member. Princeton has a student-run honor code system, but it applies only to in-class exams. I don’t have any first-hand experience with this system, but I haven’t heard many complaints. I like the system, since it saves me from the unpleasant and trust-destroying task of policing in-class exams. Instead, I just hand out the exams, then leave the room and wait nearby to answer questions.

Several years ago, I did a three-year term on Princeton’s Student-Faculty Committee on Discipline, which deals with all serious disciplinary infractions, whether academic or non-academic, except those relating to in-class exams. This was hard work. We didn’t hear a huge number of cases, but it took surprisingly long to adjudicate even seemingly simple cases. I thought this committee did its job very well.

One interesting aspect of this committee was that faculty and students worked side by side. I was curious to see whether student and faculty attitudes toward the disciplinary process would differ, but it turned out there were surprisingly few differences. If anything, the students were on average slightly more inclined to impose stronger penalties than the faculty, though the differences were small and opinions shifted from case to case. I don’t think this reflected selection bias, either; discussions with other students over the years have convinced me that students support serious and uniform punishment for violators. So I don’t think there would be much difference between the outcomes of a student-run and a faculty-run disciplinary process.

One lesson from Garnett’s comments is that an honor code will die if students decide that enforcement is weak or biased. Here the secrecy of disciplinary processes, which is of course necessary to protect the accused, can be harmful. Rumors do circulate. Sometimes they’re inaccurate but can’t be corrected without breaching secrecy. For example, when I was on Princeton’s discipline committee, some students believed that star athletes or students with famous relatives would be let off easier. This was untrue, but the evidence to contradict it was all secret.

Academic discipline seems to have a major feedback loop. If students believe that the secret disciplinary processes are generally fair and stringent, they will be happy with the process and will tend to follow the rules. This leaves the formal disciplinary process to deal with the exceptions, which a good process will be able to handle. Students will buy in to the premise of the system, and most people will be happy.

If, on the other hand, students lose their trust in the fairness of the system, either because of false rumors or because the system is actually unfair, then they’ll lose their aversion to rule-breaking and the system, whether honor-based or not, will break down. Several of Garnett’s readers tell a story like this.

One has to wonder whether it makes much difference in practice whether a system is formally honor-based or not. Either way, students have an ethical duty to follow the rules. Either way, violations will be punished if they come to light. Either way, at least a few students will cheat without getting caught. The real difference is whether the institution conspicuously trusts the students to comply with the rules, or whether it instead conspicuously polices compliance. Conspicuous trust is more pleasant for everybody, if it works.

[Feel free to talk about your own experiences in the comments. I’m especially eager to hear from current or past Princeton students.]

Minimum Age for Pro Basketball?

Yesterday was the NBA draft. In the first round, eight high school seniors were taken, and only five college seniors. (The rest were overseas players and college underclassmen.) The very first pick was a high school senior, chosen over a very accomplished college player.

You have to be 16 to drive. You have to be 21 to drink alcohol (at least where I live). Should there be a minimum age for playing professional basketball? NBA commissioner David Stern favors a minimum age of 20 for NBA players. The NFL’s rule, banning players less than three years out of high school, withstood a court challenge from Maurice Clarett, who wanted to go pro after two years of college.

Nobody can argue, after seeing Kobe Bryant, Kevin Garnett, and LeBron James, that college is a prerequisite for NBA stardom. Sure, some high-school draftees wash out, but they may well have failed just as badly had they spent four years playing college ball.

Stern, and other proponents of the minimum age rule, argue that going to college is good for these kids. That’s probably true, if they become real students. But it’s hard to see the point in making them pretend to be students, which is what many of them would do were it not for the straight-to-the-pros path. It’s especially hard to see the point of making them mark time as pseudo-students until they pass some arbitrary age threshold, at which point they can drop their pseudo-education like a red-hot brick and jump to the pros.

Another, considerably more cynical, argument for an age limit is that forcing kids to play college sports is a clever way to subsidize university education. If college basketball is just minor-league pro ball with unpaid players, then it can serve as a profit center for universities, generating revenue to support other students who are actually being educated.

But all of this ignores the biggest losers in the trend towards professionalization of college sports: the true student-athletes. These are the players who don’t spend all day in the weight room, who study things other than game films. It’s very hard for them to compete against full-time athletes, and so they face intense pressure to slack on their studies.

It seems to me that professional football and basketball could learn a thing or two from baseball. The normal path in baseball has been for players to turn pro immediately after high school, with only a few players choosing to play college ball instead. Baseball’s minor leagues give young players a place to develop as professionals, so nobody has to pretend to be a student, and college baseball can be left to the true student-athletes.

Computers As Graders

One of my least favorite tasks as a professor is grading papers. So there’s good news – of a sort – in J. Greg Phelan’s New York Times article from last week, about the use of computer programs to grade essays.

The computers are surprisingly good at grading – essentially as accurate as human graders, where an “accurate” grade is defined as one that correlates with the grade given by another human. To put it another way, the computer disagrees with a human grader no more than two human graders disagree with each other.
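
To make that concrete: "as accurate as a human" just means the computer's scores line up with one human's scores about as well as a second human's scores do. A small sketch of the comparison, with invented scores:

    def pearson(xs, ys):
        # Plain Pearson correlation, no libraries needed.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    human_a  = [4, 3, 5, 2, 4, 3, 5, 1]   # invented scores for eight essays
    human_b  = [4, 2, 5, 3, 4, 3, 4, 2]
    computer = [4, 3, 4, 2, 4, 2, 5, 2]

    print(pearson(human_a, human_b))    # human vs. human: about 0.85
    print(pearson(human_a, computer))   # human vs. computer: comparable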

Eric Rescorla offers typically interesting commentary on this. He points out, first, that the lesson here might not be that computers are good at grading, but that human graders are surprisingly bad. I know how hard it is to give the thirtieth essay in the stack the careful reading it deserves. If the grader’s brain is on autopilot, you’ll get the kind of formulaic grading that a computer might be able to handle.

Another possibility, which Eric also discusses, is that there is something simple – I’ll call it the X-factor – about an essay’s language or structure that happens to correlate very well with good writing. If this is true, then a computer program that looks only for the X-factor will give “accurate” grades that correlate well with the grades assigned by a human reader who actually understands the essays. The computer’s grade will be “accurate” even though the computer doesn’t really understand what the student is trying to say.

The article even gives hints about the nature of the X-factor:

For example, a high score almost always contains topically relevant vocabulary, a variety of sentence structures, and the use of cue terms like “in summary,” for example, and “because” to organize an argument. By analyzing 50 of these features in a sampling of essays on a particular topic that were scored by human beings, the system can accurately predict how the same human readers would grade additional essays on the same topic.
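
If that is right, a grader never needs to understand an essay at all; it just needs to measure surface features that happen to correlate with good writing, and learn weights for them from human-scored samples. Here is a minimal sketch of that architecture (Python with numpy; the features and training data are invented stand-ins for the article's 50 features, not the real system):

    import re
    import numpy as np

    CUE_TERMS = ["in summary", "because", "therefore", "however"]

    def features(essay):
        # Crude stand-ins for the real features: vocabulary richness,
        # sentence-length variety, and cue-term counts.
        words = re.findall(r"[a-z']+", essay.lower())
        sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        mean = sum(lengths) / len(lengths)
        variety = (sum((n - mean) ** 2 for n in lengths) / len(lengths)) ** 0.5
        cues = sum(essay.lower().count(t) for t in CUE_TERMS)
        return [len(set(words)) / len(words), variety, cues]

    # Invented training set: essays already scored by human graders.
    training = [
        ("Because the evidence is limited, the claim is weak. However, the "
         "method is sound. In summary, more data is needed.", 5),
        ("I like dogs. Dogs are nice. Dogs dogs dogs.", 2),
        ("The policy failed because costs rose. Therefore it should change. "
         "In summary, incentives matter.", 5),
        ("School is good. School is fun.", 1),
    ]

    X = np.array([features(e) + [1.0] for e, _ in training])  # intercept column
    y = np.array([score for _, score in training])
    w = np.linalg.lstsq(X, y, rcond=None)[0]                  # least-squares fit

    def grade(essay):
        return float(np.array(features(essay) + [1.0]) @ w)

    print(grade("However, the results are tentative because samples were "
                "small. In summary, caution is warranted."))

Four toy essays are enough to fit the weights here; the real systems presumably train on far more, but the shape of the machine is the same: features in, human-calibrated weights, grade out. Note that nothing in it reads the essay for meaning.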

This is all very interesting, but the game will be up as soon as students and their counselors figure out what the X-factor is and how to maximize it. Then the SAT-prep companies will teach students how to crank out X-factor-maximizing essays, in some horrendous stilted writing style that only a computerized grader could love. The correlation between good writing and the X-factor will be lost, and we’ll have to switch back to human graders – or move on to the next generation of computerized graders, looking for a new improved X-factor.