"Computers Used at School to Help Students Weighed Down With Heavy Books in Backpacks" by Martha Irvine
Source: The Boston Globe - 12 October 2002
Something's missing at the new Sun Valley Charter High School in Ramona, Calif. There are no textbooks, only computers.
That means students there don't have to lug heavy backpacks, a familiar ritual for many young Americans who carry books from class to class and home at day's end.
Growing back pain complaints prompted a new California law limiting textbook weight. But some say assignments drawn from the Internet, "e-books" or CD-ROMs will be the real solution.
"It's not the wave of the future; it's the wave of the present," says David Tarr, executive director (not principal) at Sun Valley High, a public school near San Diego.
Officials there used money normally spent on textbooks for computers. The new school's first students, about 60 incoming freshmen, get assignments from such services as Questia.com, an online library, and Interactive Mathematics, curriculum on computer CD...
"Clearly, electronic delivery will make this problem [back problems from heavy backpacks] go away. But I think we're a number of years away from that," says Stephen Driesler, executive director of the Association of American Publishers' school division, a trade organization for textbook publishers.
Still others believe that, with wider use, high-tech devices will be cheaper than costly-to-print textbooks.
That's why, last spring, Richard Bellaver asked his graduate students at Ball State University to test e-books, hand-held devices that present electronic text and pictures. He says the average scores of students who studied only with e-books and those who used traditional textbooks were virtually the same.
Now he wants to try other high-tech options to see what works best for students "and hopefully, save them some money," says Bellaver, associate director of the university's Center for Information and Communication Sciences.
Whatever technology becomes dominant, Mark Gross, CEO and founder of Data Conversion Laboratory Inc., says schools will eventually save money.
"This will be a boon for poor educational districts," says Gross, whose New York-based company has converted everything from bulky law books to the Defense Department's weapons systems guides into electronic text.
Until then, New Jersey is considering imposing a maximum textbook weight. California Gov. Gray Davis signed a similar measure last week.
Textbook publishers, meanwhile, suggest restoring lockers that have been removed at many schools and giving students time between classes to get to them.
Driesler also says more students should wear backpacks properly, on both shoulders instead of one, even if that method has, as he puts it, a "dweeb factor."
On the Net: American Chiropractic Association backpack guidelines: http://www.acatoday.com/media/tips/backpacks.shtml
Source: The New York Times -- 1 August 2002
Early in the film A Beautiful Mind, the mathematician John Nash is seen sitting in a Princeton courtyard, hunched over a playing board covered with small black and white pieces that look like pebbles. He was playing Go, an ancient Asian game. Frustration at losing that game inspired the real Mr. Nash to pursue the mathematics of game theory, research for which he eventually won a Nobel Prize.
In recent years, computer experts, particularly those specializing in artificial intelligence, have felt the same fascination--and frustration.
Programming other board games has been a relative snap. Even chess has succumbed to the power of the processor. Five years ago, a chess-playing computer called Deep Blue not only beat but thoroughly humbled Garry Kasparov, the world champion at the time. That is because chess, while highly complex, can be reduced to a matter of brute force computation.
Go is different. Deceptively easy to learn, either for a computer or a human, it is a game of such depth and complexity that it can take years for a person to become a strong player. To date, no computer has been able to achieve a skill level beyond that of the casual player.
The game is played on a board divided into a grid of 19 horizontal and 19 vertical lines. Black and white pieces called stones are placed one at a time on the grid's intersections. The object is to acquire and defend territory by surrounding it with stones.
Programmers working on Go see it as more accurate than chess in reflecting the ineffable ways in which the human mind works. The challenge of programming a computer to mimic that process goes to the core of artificial intelligence, which involves the study of learning and decision-making, strategic thinking, knowledge representation, pattern recognition and, perhaps most intriguingly, intuition.
"A good Go player could make a move and other players say, `Yes, that's a good move,' but they can't explain to you why it's a good move, or how they even know it's a good move," said Dr. John McCarthy, a professor emeritus at Stanford University and a pioneer in artificial intelligence.
Dr. Danny Hillis, a computer designer and chairman of the technology company Applied Minds, said that the depth of Go made it ripe for the kind of scientific progress that comes from studying one example in great detail. "We want the equivalent of a fruit fly to study," Dr. Hillis said. "Chess was the fruit fly for studying logic. Go may be the fruit fly for studying intuition."
Along with intuition, pattern recognition is a large part of the game. While computers are good at crunching numbers, people are naturally good at matching patterns. Humans can recognize an acquaintance at a glance, even from the back. "Every Go book is filled with advice on patterns of different kinds," Dr. McCarthy said.
Dr. Daniel Bump, a mathematics professor at Stanford, works on a program called GNU Go in his spare time. "You can very quickly look at a chess game and see if there's some major issue," he said. But to make a decision in Go, he said, players must learn to combine their pattern-matching abilities with the logic and knowledge they have accrued in years of playing...
One measure of the challenge the game poses is the performance of Go computer programs. The last five years have yielded incremental improvements but no breakthroughs, said David Fotland, a programmer and chip designer in San Jose, Calif., who created and sells The Many Faces of Go, one of the few commercial Go programs.
Mr. Fotland's program was the winner of a tournament last weekend in Edmonton, Alberta, that pitted 14 Go-playing programs--including several from Japan--against one another. But even The Many Faces of Go is weak enough that most strong players could beat it handily.
Part of the challenge has to do with processing speed. The typical chess program can evaluate about 300,000 positions per second, and Deep Blue was able to evaluate some 200 million positions per second. By midgame, most Go programs can evaluate only a couple of dozen positions each second, said Anders Kierulf, who wrote a program called SmartGo.
In the course of a chess game, a player has an average of 25 to 35 moves available. In Go, on the other hand, a player can choose from an average of 240 moves. A Go-playing computer would take about 30,000 years to look as far ahead as Deep Blue can with chess in three seconds, said Michael Reiss, a computer scientist in London.
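The "30,000 years" figure can be sanity-checked with a back-of-envelope calculation from the numbers above. The sketch below assumes a chess branching factor of about 35 (the top of the article's 25-to-35 range) and the stated evaluation rates; it estimates the depth Deep Blue reaches in three seconds, then asks how long a 25-position-per-second Go program would need to search that deep with 240 moves per turn.

```python
import math

# Figures taken from the article; branching factors are approximate.
chess_rate = 200e6          # positions/second evaluated by Deep Blue
go_rate = 25.0              # positions/second for a typical Go program
chess_branch = 35           # legal moves per chess position (assumed)
go_branch = 240             # legal moves per Go position

positions = chess_rate * 3                            # Deep Blue's 3-second budget
depth = math.log(positions) / math.log(chess_branch)  # plies reachable in chess
go_positions = go_branch ** depth                     # same depth, Go branching
years = go_positions / go_rate / (3600 * 24 * 365)
```

The estimate comes out in the tens of thousands of years, the same order of magnitude as Dr. Reiss's figure; the exact value shifts with the assumed branching factors.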
If processing power were all there was to it, the solution would be simply a matter of time, since computers are growing ever faster. But the obstacles go much deeper. Not only do Go programs have trouble evaluating positions quickly, they have trouble evaluating them correctly...
Mr. Fotland, 44, the creator of The Many Faces of Go, has been working on computer Go for 20 years and is chief technology officer at Ubicom, a small semiconductor company in Silicon Valley...
It takes a strong Go player to write even a weak Go program. Mr. Fotland, for instance, said he had written programs for checkers, Othello and chess. The algorithms are all very similar, and it is not difficult to write a reasonably strong program, he said. Each of the games took him a year or two to finish. "But when I started on Go," he said, "there was no end to it"...
"There's a certain stream of consciousness when you're looking at positions," Dr. Bump said. "You might look at 10 variations, but you don't really know what's going on in the back of your mind. Even a strong player doesn't know how his mind works when he looks at a position"...
Dr. Reiss, who is the author of Go4++, a previous champion that placed second in last weekend's playoff, agrees with Dr. Bump. Dr. Reiss, who is an expert in neural networks, compares a human being's ability to recognize a strong or weak position in Go with the ability to distinguish between an image of a chair and one of a bicycle. Both tasks, he said, are hugely difficult for a computer.
For that reason, Mr. Fotland said, "writing a strong Go program will teach us more about making computers think like people than writing a strong chess program"...
"I think in the long run the only way to write a strong Go program is to have it learn from its own mistakes, which is classic A.I., and no one knows how to do that yet," Mr. Fotland said. A few programs have some learning capabilities built into them...
It seems unlikely that a computer will be programmed to drub a strong human player any time soon, Dr. Reiss said. "But it's possible to make an interesting amount of progress, and the problem stays interesting," he said. "I imagine it will be a juicy problem that people talk about for many decades to come."
Source: Discover - August 2002
A third of the way into A Beautiful Mind, the recent film about mathematician John Nash, a sexy blonde and four brunettes walk into a bar, batting their eyelashes. After a bit of ogling, Nash and his number-minded friends decide to compete for the blonde. Then Nash has second thoughts. If everyone goes for the same woman, he says, we'll just end up blocking each other out and offending the rest of the women. The only way for everyone to succeed is to ignore the blonde and go for the brunettes instead.
The scene is an attempt to illustrate Nash's most important contribution to game theory--the Nash equilibrium. Nash showed that in any competitive situation--war, chess, even picking up a date at a bar--if the participants are rational, and they know that their opponents are rational, there can be only one optimal strategy. That theory won Nash a Nobel Prize in economics. The math behind it is flawless and has transformed the way people think about evolution, arms races, stock markets, and ticktacktoe. There's only one problem: People aren't rational.
"They just aren't that smart," says Thomas Palfrey, an economist at the California Institute of Technology, "and if they are that smart, they don't think everybody else is that smart"... The best strategy in any game against Palfrey is not to play at all: In the past few years, he has hit upon a variation on Nash's theory that describes true competitive behavior for the first time. More than any economist before him, Palfrey is likely to guess your every move.
The imbalance at the crux of the Nash equilibrium first struck him and his colleague Richard McKelvey in 1988, at a conference in Haifa, Israel. Game theorist Robert Aumann was giving a talk on a hypothetical game called centipede, in which two players take turns being offered the larger portion of a pot of money. If either player accepts, the game is over, and the loser gets the smaller amount. But each time the pot is declined, the amount of money inside grows. If it increases tenfold, a nervy player's winnings can grow from $10 to $10 million in only six turns. But the Nash equilibrium dictates a less daring strategy--take the larger pot the first chance you get. "This is ridiculous," Palfrey remembers thinking. "Game theorists keep talking about the centipede game as if they know what's going to happen, but in fact they don't know. So let's run an experiment."
Back at Caltech, he and McKelvey sat some students in front of a network of cubicled computers and let them play one another anonymously... Contrary to the predictions of the Nash equilibrium, only 37 of 662 centipede games ended in the first round. "The problem we had then was how to explain the data. How would you actually model players not being fully rational?"
The breakthrough came when the economists factored in altruism and skepticism. The players knew that their decisions didn't always make sense--and that their opponents weren't always rational, either--and that made their optimal strategy distinctly different from what Nash predicted. "You can never know what somebody else is thinking," Palfrey says. "All you can do is try to guess what they're thinking. And they're trying to guess what you're thinking, too." To model this behavior, Palfrey mixed a little statistics in with the theory behind the Nash equilibrium. Strategic mistakes and errors are part of the new formula, but the reasoning is the same as Nash's. There's still a single best strategy: Palfrey calls it the quantal response equilibrium.
In 1998, not long after Palfrey and McKelvey's ideas coalesced, Palfrey started running model auctions to test the theory. For years in economic circles, a controversy had raged over the subject of overbidding. Why is it that people in auctions routinely pay more than they should? Some economists, like Glenn Harrison of the University of South Carolina, thought that bidders simply value the joy of winning more than the satisfaction of getting an object for less than it's worth--especially when it's not worth much to begin with. Others, like Daniel Friedman at the University of California at Santa Cruz, thought overbidding might be the result of an aversion to risk: Most bidders would rather bid too high than run the risk of losing the item altogether.
To see who was right, Palfrey had a group of students join a series of computer auctions using real money. Each item up for auction was assigned a value. Then pairs of students bid against each other, with each student allowed a single bid. When a student won an item for less than its assigned value, he kept the difference. For example, if Palfrey told a student an object was worth $7, and the student won it with a $3 bid, he would win $4. Each student participated in 15 separate auctions. It was, Palfrey says, "a very simplified version of the New York Stock Exchange."
Every auction has a Nash equilibrium--a bid that perfectly balances the risk of losing to a higher bidder (and making no profit) against the possibility of greater profits (the lower you bid, the more money you'll make). In Palfrey's experiment, that optimal bid was half the object's given value, yet the students regularly bid much higher. According to the Nash equilibrium, they should have walked out of the experiment with an average of $14.20. Instead, they averaged only $10.70--exactly as Palfrey's quantal response equilibrium had predicted.
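Why overbidding costs money can be seen in a small simulation. This is an illustrative model, not Palfrey's actual setup: two bidders with private values drawn uniformly from [0, 1], one sealed bid each, opponent playing the Nash strategy of bidding half value; the function name and parameters are my own.

```python
import random

def avg_profit(bid_frac, trials=200_000, seed=1):
    """Average profit of a bidder who bids bid_frac * value in a
    two-player first-price auction, against an opponent who plays
    the Nash bid of half value.  Values are uniform on [0, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        v, v_opp = rng.random(), rng.random()
        if bid_frac * v > 0.5 * v_opp:   # highest bid wins the item
            total += v - bid_frac * v    # profit = value minus price paid
    return total / trials
```

Bidding the Nash fraction of 0.5 earns about 1/6 per auction on average in this model, while overbidding at 0.75 of value wins more often but earns noticeably less, mirroring the gap between the students' $10.70 and the predicted $14.20.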
The experiment banged the gavel on the overbidding debate. Risk aversion and random mistakes pushed the bids higher--even in auctions for inexpensive objects. When overbidding was riskier than underbidding, the students were less likely to overbid. To Palfrey, bidders are like people in a parking lot faced with feeding the meter or potentially paying a parking ticket: They'll feed the meter even if the attendant only comes by once a year. "The more risk averse you are in an auction, the higher you bid," Palfrey says. And the more bidders overbid, the more their high bids begin to snowball...
Source: The New York Times - 8 August 2002
Three Indian computer scientists have solved a longstanding mathematics problem by devising a way for a computer to tell quickly and definitively whether a number is prime--that is, whether it is evenly divisible only by itself and 1.
Prime numbers play a crucial role in cryptography, so devising fast ways to identify them is important. Current computer recipes, or algorithms, are fast, but have a small chance of giving either a wrong answer or no answer at all.
The new algorithm-- by Manindra Agrawal, Neeraj Kayal and Nitin Saxena of the Indian Institute of Technology in Kanpur--guarantees a correct and timely answer. Though their paper has not been published yet, they have distributed it to leading mathematicians, who expressed excitement at the finding.
"This was one of the big unsolved problems in theoretical computer science and computational number theory," said Shafi Goldwasser, a professor of computer science at the Massachusetts Institute of Technology and the Weizmann Institute of Science in Israel. "It's the best result I've heard in over 10 years."
The new algorithm has no immediate applications, since existing ones are faster and their error probability can be made so small that it is practically zero. Still, for mathematicians and computer scientists, the new algorithm represents a great achievement because, they said, it simply and elegantly solves a problem that has challenged many of the best minds in the field for decades.
Asked why he had the courage to work on a problem that had stymied so many, Dr. Agrawal replied in an e-mail message: "Ours was a completely new and unexplored approach. Consequently, it gave us hope that we might succeed."
The paper is now posted on the computer science department Web page at the Indian Institute of Technology (www.cse.iitk.ac.in).
Methods of determining whether a number is prime have captivated mathematicians since ancient times because understanding prime numbers is the key to solving many important mathematical problems. More recently, attention has focused on tests that run efficiently on a computer, because such tests are part of the underlying mathematics of several widely used systems for encrypting data on computers.
So-called primality testing plays a crucial role in the widely used RSA algorithm, whose security relies on the difficulty of finding a number's prime factors. RSA is used to secure transactions over the Internet.
On Sunday, the researchers e-mailed a draft of the paper on the result to dozens of expert mathematicians and computer scientists. Dr. Carl Pomerance, a mathematician at Bell Labs, said he received the paper on Monday morning and determined it was correct.
After discussing the draft with colleagues over lunch, Dr. Pomerance arranged an impromptu seminar on the result that afternoon.
That he could prepare and give a seminar on the paper so quickly was "a measure of how wonderfully elegant this algorithm is," Dr. Pomerance said. "This algorithm is beautiful."
Source: ESPN.com - 26 September 2002
Wednesday night at the Boston chapter meeting of the American Statistical Association, Harvard statistics professor Carl Morris will not be discussing new graphical methods in health-care modeling or legal technometrics... The subject will be dear to Morris' heart--"Professional Baseball and Markov Modeling," and the proof of his new theorem that, when used to evaluate player performance, rates Barry Bonds as now finishing up by far the best offensive season of all time.
Morris' approach will sound familiar to many fans of baseball statistics: He has built a way to determine how many runs per game a team of players of various calibers will score...
What Morris has done, he will assert, is devise the first approach that is not an estimate, nor a computer simulation, but a relatively straightforward algebraic formula that comes to an answer that is probabilistically true--and can be backed up by rigorous mathematical proof. When he's done, the team of nine Bondses winds up with an average of 22.4 runs per game--besting seasons by Babe Ruth, Ted Williams and Bonds' own 2001 total as the best in major-league history.
"This is an exact calculation, not an estimate," says Morris, the former chairman of Harvard's statistics department. "It is correct"...
His conviction for the significance of this new method--which he calls Runs Per Game (RPG)--derives from how it in effect merges the Jamesian approach of simple inputs (singles, walks, at-bats, etc.) with a device to reconstruct innings and how runners advance before three outs are made...
Before we continue, and for those appropriately skeptical, here's the formula with some explanation of its derivation: RPG = 9 * [E(br) - E(lob)]
Not particularly helpful? Well, this is more intuitive than it might first appear. E(br) is the expected number of baserunners per inning, while E(lob) is the number of batters left on base per inning. The difference between those two represents those who have scored; multiply that by 9 and you get runs per nine innings, or game.
Now, how do we get E(br) and E(lob)? This is where it gets considerably more complicated but not ridiculously so. First, the easier half, E(br):
E(br) = 3 * [obp / (1 - obp)]
"Obp" is just what you think it is--on-base percentage, otherwise known as the probability of not making an out. (1 - obp), then, is the probability of making an out. The ratio of these two is the expected number of runners who reach per out; multiplied by three, it becomes the expected number of runners per three outs, or inning. This is easier to see by plugging in a few representative obp's: a .500 OBP results in an average of three runners per inning, while a .333 OBP gets you an average of 1.5.
So, what is E(lob), the expected number of men left on base? This is the esoteric part of the calculation--a considerable part of which most readers will be thankful to be spared. Here is the easiest representation of it:
E(lob) = [L1 * (1 - p0)] + [L2 * (1 - p0 - p1)] + [L3 * (1 - p0 - p1 - p2)]
Skip this current paragraph if you want, but again, the above formula isn't quite as awful as it looks. It's just the sum of three terms, using two concepts represented by the p's and the L's. The p's are the probabilities that 0, 1 and 2 baserunners reach in an inning -- for instance, p0, the probability of three outs in a row, is (1 - obp) cubed; the others are determined through elementary algebra. The L's quantify the number of men who will be left on base in certain situations -- for example, L2 is the probability that two men reach and do not score. The manner in which the L's and p's combine to create men left on base is too complicated to explain here; however, it does recreate, given certain assumptions, all the combinations of events in which runners reach, advance but ultimately do not score. (The L's also are the principal spot where a player's power comes into play, in that slugging clears runners from the bases rather than leaving them on.)
OK. Enough explanation. Given that the method does determine the expected number of men who reach and men who are left there logically and accurately--and it does--at this point let's do what it was designed for, and have some fun by plugging numbers in from this season. The top 10 RPG totals for the 2002 season, entering this final week:
1. Barry Bonds 22.42
2. Manny Ramirez 10.82
3. Jim Thome 10.66
4. Brian Giles 10.10
5. Jason Giambi 9.36
6. Larry Walker 9.34
7. Todd Helton 9.28
8. Chipper Jones 9.05
9. Vladimir Guerrero 8.91
10. Sammy Sosa 8.86
No, Barry Bonds' total is no typo. Bonds is so good at all three major batting skills--hitting for average, hitting for power and drawing walks (i.e., not making outs)--that a team of nine Bondses would score more than twice as many runs as any other major-league player this season. The average player nets out at 4.44, a tick lower than what the actual average team scores...
Moreover, as Morris will announce Wednesday night, Bonds also is putting up -- by a wide margin -- the best season of all time. Here are the top five RPG performances in major-league history:
Player -- Year -- RPG
1. Barry Bonds -- 2002 -- 22.4 (through Sunday)
2. Babe Ruth -- 1923 -- 19.6
3. Ted Williams -- 1941 -- 18.9
4. Babe Ruth -- 1920 -- 18.2
5. Barry Bonds -- 2001 -- 17.1 ...
COMET is sponsored in part by a grant from the California Mathematics Project.