
Not only that but you can probably parse a Jeopardy "answer" into a series of roughly independent clauses, and then try to predict classes that rank high in these clauses. For example, in the "answer" "This action flick starring Roy Scheider in a high-tech police helicopter was also briefly a TV series" you can get it right just by looking for things that correlate highly with "action flick", "Roy Scheider", "police helicopter", and "TV series".
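A minimal sketch of that clause-correlation idea: extract key phrases from the clue, then rank candidate answers by how strongly each one co-occurs with those phrases in some corpus. The co-occurrence counts below are toy numbers invented for illustration, not real corpus statistics.

```python
from math import log

# Toy co-occurrence counts: corpus_counts[answer][phrase] = times the
# answer and phrase were seen together in a (hypothetical) corpus.
corpus_counts = {
    "Blue Thunder": {"action flick": 40, "Roy Scheider": 55,
                     "police helicopter": 60, "TV series": 12},
    "Jaws":         {"action flick": 20, "Roy Scheider": 70,
                     "police helicopter": 0, "TV series": 2},
    "Airwolf":      {"action flick": 15, "Roy Scheider": 0,
                     "police helicopter": 30, "TV series": 45},
}

def score(answer, phrases):
    # Sum of log-counts: a crude stand-in for sum of log P(phrase | answer),
    # with add-one smoothing so an unseen pair doesn't zero out the score.
    counts = corpus_counts[answer]
    return sum(log(counts.get(p, 0) + 1) for p in phrases)

phrases = ["action flick", "Roy Scheider", "police helicopter", "TV series"]
best = max(corpus_counts, key=lambda a: score(a, phrases))
print(best)  # with these toy counts: Blue Thunder
```

Each clause contributes evidence independently, which is exactly the independence assumption the "roughly independent clauses" framing relies on.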


But they must surely be doing something fancier than this naive-bayes-style model, otherwise they'd have no use for a roomful of supercomputers.


Well, naïve Bayesian inference is supercomputer-level when you use it on a huge universe of data.

As Peter Norvig often points out, these kinds of tasks are highly data dependent. The supercomputers are probably used more for data access than for raw computation. I can totally imagine Peter writing a forty-line Python app running on Google's infrastructure that does about as well.
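The kind of short naive-Bayes ranker that comment imagines could be sketched as below; the training examples are made up for illustration, and the real work would be feeding it a huge corpus rather than the classifier itself.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Tiny multinomial naive Bayes text classifier."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # class -> word frequencies
        self.class_counts = Counter()            # class -> document count
        self.vocab = set()

    def train(self, label, text):
        words = text.lower().split()
        self.word_counts[label].update(words)
        self.class_counts[label] += 1
        self.vocab.update(words)

    def predict(self, text):
        words = text.lower().split()
        total_docs = sum(self.class_counts.values())

        def log_posterior(label):
            counts = self.word_counts[label]
            n = sum(counts.values())
            prior = math.log(self.class_counts[label] / total_docs)
            # Laplace smoothing keeps unseen words from zeroing the score.
            return prior + sum(
                math.log((counts[w] + 1) / (n + len(self.vocab)))
                for w in words)

        return max(self.class_counts, key=log_posterior)

nb = NaiveBayes()
nb.train("film", "helicopter action flick starring scheider")
nb.train("tv", "series ran briefly on television")
print(nb.predict("action helicopter flick"))  # film, with this toy data
```

At Google scale the interesting part isn't these forty-odd lines; it's that the counts would come from a web-sized corpus, which is why data access dominates raw computation.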



