
How do you check if you don't have any other view into the data but SQL and you don't know SQL?



SQL takes at most an afternoon to learn well enough to navigate a database with.
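For a sense of scale, the handful of statements below cover most of that day-to-day navigation (the table and column names are made up for illustration):

    -- hypothetical table and column names, just to show the basics
    SELECT * FROM orders LIMIT 10;                       -- peek at a table
    SELECT COUNT(*) FROM orders WHERE status = 'open';   -- filter and count
    SELECT customer_id, SUM(amount) AS total             -- aggregate
      FROM orders
     GROUP BY customer_id
     ORDER BY total DESC
     LIMIT 20;
    SELECT o.id, c.name                                   -- join two tables
      FROM orders o
      JOIN customers c ON c.id = o.customer_id;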

Same way you do today; you trust whoever wrote the query.

I do not sell a wrapper on top of some LLM; you can absolutely write your SQL directly. There is an engine, there are Iceberg tables. You can just live your best life writing your own SQL by hand.

Now, if you couldn't do it before but have a sensible understanding, you can likely do a bit more with the CLI tooling. And if you know a lot more, you can still do that. The queries are not hidden or abstracted; if you need them, they are saved transparently as SQL.

So I don't know the answer to the question "how do people do things they don't know how to do?"


> So I don't know the answer to the question "how do people do things they don't know how to do?"

The status quo has been to learn SQL, or to ask a human you trust to check their own work, whose queries you can hopefully reuse.

Now it's asking AIs that are intentionally a bit random, and less likely to check their work (or incapable of it). Perhaps without ever seeing the SQL at all, which means trusting it on every interaction. And in a culture that moves so fast that there is no checking by any(one|thing).


If you think a language model can't check its work, then you are using the tools wrong. Plain and simple.

Modern models are quite capable of surfacing and validating their assumptions and checking the correctness of solutions.

Oversight helps you build confidence in the solutions. Is it perfect? No, but it's way better than most engineers I ask to check things.


No, they don't. Being able to "check one's work" implies being able to be held accountable and to tell right from wrong, when in reality they're merely text predictors.

If you think an LLM can check its work, then you are doing a terrible job at writing software. Plain and simple.

They even go as far as "cheating" when tests fail, writing incorrect tests, or outright leaking code (lol) like the latest Claude Code blunder. Is this the tool the original comment says I am "using wrong, plain and simple"? Or do you have access to some other model that works in a wildly different way than generating text predictions?


You can have it write test cases, though.

In this case: make a local copy of the db, fill it with a set of records for which you know the expected output of the query, then check to see if the query produces what you want.

You could then have it write queries that check the various assumptions that went into that artificial set of data. If it can find records that break those assumptions, add records like that to the test set.

Same old agentic programming techniques as ever: use your engineering skill to set up feedback loops. Stuff that was painful to do as an engineer when checking your own work is now straightforward.
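A minimal sketch of that first loop in plain SQL, assuming a made-up orders table and a query under test that totals amounts per customer; the two EXCEPTs return rows only where the query's output and the hand-computed expectation disagree:

    -- hypothetical fixture: a tiny local copy of the table the query reads
    CREATE TABLE orders_test (id INT, customer_id INT, amount NUMERIC);
    INSERT INTO orders_test VALUES
      (1, 10, 25.00),
      (2, 10, 75.00),
      (3, 11, 10.00);

    -- expected output of the query under test, computed by hand
    CREATE TABLE expected_totals (customer_id INT, total NUMERIC);
    INSERT INTO expected_totals VALUES (10, 100.00), (11, 10.00);

    -- symmetric difference: any row returned here means the query is wrong
    WITH got AS (
      SELECT customer_id, SUM(amount) AS total
        FROM orders_test
       GROUP BY customer_id
    )
    (SELECT * FROM got EXCEPT SELECT * FROM expected_totals)
    UNION ALL
    (SELECT * FROM expected_totals EXCEPT SELECT * FROM got);

If both EXCEPTs come back empty, the query matches the expectation on that fixture, and the agent can go on to mutate the fixture to probe the assumptions mentioned above.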


The point is that you have to verify it yourself. Like you wrote: "check to see if the query produces what you want".

Otherwise the LLM can just write tests against whatever it wrote and not what is expected. This happens often with the top models too.

Someone needs to check that the tests work, review that they cover edge cases, etc.


Feedback loops require a deterministic metric for success. You are doing the equivalent of using a slot machine to decide whether something is right or wrong.


