
From reading this fascinating piece, I (as one who is not technically proficient in any of the relevant areas) would conclude that AI-based approaches to computing can work well in the following areas:

(1) you can program something in accordance with a strict set of limited rules and achieve amazing results, such as in computer chess, where today the advanced machines can basically outdo humans in playing that particular game, e.g., Deep Blue;

(2) you can program something to use layered algorithms to find high degrees of probability that a particular objectively knowable fact is correct in response to someone inquiring about that fact, e.g., IBM Watson;

(3) you can custom-program something to enable someone to determine other forms of objectively verifiable relationships (such as mathematical comparisons or outcomes), e.g., Wolfram Alpha; and

(4) you can (at least theoretically) hard-wire a database to yield objective facts drawn from a vast body of custom knowledge manually programmed into a machine, albeit at an enormous cost in time and effort, thus gaining the benefits of mass storage, instant retrieval, and rapid replication, but without drawing on the iterative power of computing devices and potential algorithmic solutions to the problem.

To me, all such approaches appear to throw in the towel on the question, "can AI-based computing ever achieve the equivalent of exercising human judgment?"

For example, in law, all sorts of questions can arise to which there is no "mechanical" answer: What is the best strategy to employ in a particular legal fight or piece of litigation? What, among an array of complex alternatives involving tax, business risk, liability risk, and human factors, is the optimal way to merge two companies? Or, to take a simpler but still judgment-based decision, is it best for me to set up my company as an LLC, a corporation, or some other form, and what domicile should I choose? The same goes for personal decisions, such as what techniques to use in raising and training my kids in terms of education, morals, setting life goals, etc.

Are there AI proponents out there who believe that AI-based computing can ever handle such issues? To me, it seems evident that no algorithmic approach will ever be able to rise to the level of addressing such problems, but this may just reflect my ignorance and lack of imagination.

Going back to law, for example, the article suggests that IBM may one day capitalize upon using a Watson-type machine to help people answer bureaucratic questions. I wonder about this if the approach is based purely on probabilities because, no matter what the data set, no one could ever know for sure that the answer is the correct one. At most, the AI-device could say, "this likely is a good starting point" and, beyond that, you are on your own to confirm whether it is accurate or not (which, of course, could make for a tremendously helpful resource in itself, as long as it is used properly).

I would think any such method would become hopelessly confused, though, in dealing with some knotty tax question, such as, "determine my unrealized built-in gain in my C corp so that I know how much tax I have to pay on converting to S-corp status" (see here for the methodology on this, which is mind-numbing for all but CPAs steeped in tax minutiae: http://www.taxalmanac.org/index.php/Treasury_Regulations,_Su...). I could imagine a custom-programmed approach dealing with such questions, but only one that is very specific to the problem at hand.

Don't mean to go on - I truly found this piece quite intriguing since, before reading it, I would (out of ignorance) have laughed at anyone who would have claimed that an AI-based machine could play "Jeopardy" in any meaningful way.

So my question to those who are knowledgeable in the HN community is this: are there other conceptual approaches that, given sufficient time and resources, will potentially be capable of rising to the level where they can address the higher-level (judgment-based) sorts of issues I identify above, or is this basically it?

For anyone interested, the Hollywood view of this sort of thing appeared in the 1957 movie "Desk Set," where a "Miss Watson" administered a machine called EMERAC, which could give instant answers to questions expressed as normal people would ask them. This clip highlights the view of AI-computing as depicted in that movie: http://www.youtube.com/watch?v=Rdl9ynODxbk&feature=relat.... The broad theme is that of a group of researchers who resented the idea that their jobs were about to be eliminated by the impersonal beast, which Miss Watson, however, endearingly referred to as "Emmy."



I think at some point the AI goalpost will move beyond "capable of human judgement" as well :) (see Asimov's stories of R. Daneel & R. Giskard, for example).

I'm one of those who believes we'll crack AI (all aspects of it) at some point. I believe that the algorithm our brains (and those of most animals) use is a very simple one that just needs tremendous parallel processing. I don't think anyone, including IBM, is yet thinking at that level (they work at too high a level for my taste), and they don't draw enough on biology in their work (the approach is too computer-science-y, which hampers problem solving from the start).

I think it's just a matter of time before the neuroscientists have most of the details of brain and neuron functioning and we're able to decode the algo and replicate it on computers. And then it will be up to hardware progress to match first animal and then human intelligence.


If it's a simple algo, why haven't we come up with anything promising?


I'm hoping it's something like cryptographic hash functions, where computing the hash of a message is easy but recovering the original message from the hash is not.

In our case, the "simple" algo would be something trivial to implement but tough to actually figure out what it is in the first place.
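The asymmetry in that analogy is easy to see in code. A minimal sketch (using Python's standard hashlib; the message string is invented for illustration): the forward direction is one cheap function call, while the reverse direction has no known shortcut beyond brute-force guessing.

```python
import hashlib

message = b"the brain's algorithm"

# Forward direction: one fast pass over the input.
digest = hashlib.sha256(message).hexdigest()

# Reverse direction: nothing in hashlib (or anywhere else) maps
# `digest` back to `message`; an attacker can only guess inputs
# and re-hash until something matches.
print(digest)
```

The analogy to the brain would be that running the algorithm is cheap, but reverse-engineering what the algorithm is from its outputs is the hard part.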


Leibniz's folly was essentially that, the "Calculus Ratiocinator". He supposed a machine that could settle questions of law. The problem is that no one agrees on the meanings of the inputs.

It's a common fallacy to believe that computers can produce "correct" answers to questions which a sample of 1,000 (or even 2) humans don't give the same answers to. If you have nothing unambiguous or even statistically reasonable to compare your answers against, the experiment is unfalsifiable.

Human scientists (if they are such, and not just bottle-washers and button-sorters) pose questions and then experiment against reality, which is and always will be the final court of appeal. A computer "scientist" cannot avoid that either -- they still have to simulate and experiment. Several approaches do exactly this, actually. The best seem to be based on "genetic algorithms", i.e., simulation and experiment, just faster.
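To make the "simulation and experiment, just faster" point concrete, here is a minimal genetic-algorithm sketch (a standard textbook toy, not anything from the article): each candidate is "experimented against reality" via a fitness function, the fittest survive, and mutated copies of survivors form the next generation. The toy fitness goal (maximize the number of 1-bits) and all parameters are invented for illustration.

```python
import random

random.seed(0)
GENOME_LEN = 20

def fitness(genome):
    # The "experiment": score a candidate against the objective.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

# Random initial population of 30 bitstrings.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # selection
    children = [mutate(random.choice(survivors)) for _ in range(20)]
    population = survivors + children                # elitist replacement

best = max(population, key=fitness)
```

Because the survivors are carried over unchanged, the best fitness never decreases; the population climbs toward the all-ones genome purely by trial and scoring, with no explicit reasoning about the problem.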

So there are many things computers can do and will do. But expecting a computer to tell you whether abortion is right or wrong, or to run for president, or be a perfect oracle, is missing the point. We, as humans, are very probably incapable of building something more intelligent than ourselves, because, by definition, we wouldn't recognize it if we did. Eliezer will probably jump in at this point, and I'm curious what he'd say.


The place where this system would be useful would be loading up a company’s email archive and then asking what problems were known with a new drug X.

The software would then look at all the emails and work through the chains, where one says "Let's call Sample 37b, Drug X" and another says "Sample 37b is causing issue Z in 7% of patients."

The software would then say: "It was known that: Drug X causes issue Z in 7% of patients. See: Emails A and B."

Think of it like grep for logic.
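The core move being described is a two-step join: resolve the alias from one document, then carry it through the finding in another, citing both. A minimal sketch (the toy emails echo the ones above; the regex patterns and variable names are invented, and a real system would need NLP rather than fixed patterns):

```python
import re

# Toy stand-in for a company's email archive.
emails = {
    "Email A": "Let's call Sample 37b, Drug X.",
    "Email B": "Sample 37b is causing issue Z in 7% of patients.",
}

aliases = {}   # internal name -> (public name, source email)
findings = []  # (internal name, claim, source email)

for source, body in emails.items():
    alias = re.search(r"call (.+?), (.+?)\.", body)
    if alias:
        aliases[alias.group(1)] = (alias.group(2), source)
    finding = re.search(r"(.+?) is causing (.+)\.", body)
    if finding:
        findings.append((finding.group(1), finding.group(2), source))

# "grep for logic": join each finding to its alias, citing both emails.
reports = []
for name, claim, found_in in findings:
    if name in aliases:
        public_name, named_in = aliases[name]
        reports.append(f"It was known that: {public_name} is causing "
                       f"{claim}. See: {named_in} and {found_in}")

for line in reports:
    print(line)
```

Even this toy version shows why it is more than grep: neither email alone mentions both "Drug X" and "issue Z", so a plain text search for the drug name would miss the finding entirely.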





