Hacker News: tibbar's comments

$8M sounds like a lot, but (a) the cost of making a material financial mistake can easily dwarf this, and (b) the cost of the engineers maintaining the system was likely about this expensive anyway. And infra is expensive when you're Uber. It all seems rather overblown to me.

Doesn't context caching mostly eliminate this problem? (I suppose with enough context, even the 90%-discounted rate is eventually a lot anyway.)

Context caching is really just storing the KV-cache for reuse. It saves running prefill for that part of the context, but tokens read from that KV-cache still aren't free; they're billed at a discounted rate.
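To make that concrete, here's a back-of-envelope sketch of cached vs. uncached prompt cost. The price and the 90% cache discount below are illustrative placeholders, not any vendor's actual rates:

```python
# Back-of-envelope cost of a prompt that reuses a large cached context.
# PRICE_PER_MTOK and CACHED_DISCOUNT are hypothetical, not real vendor rates.
PRICE_PER_MTOK = 3.00    # assumed $ per 1M uncached input tokens
CACHED_DISCOUNT = 0.10   # assumed: cached tokens billed at 10% of full price

def prompt_cost(cached_tokens: int, fresh_tokens: int) -> float:
    """Cost of one request: discounted cached reads plus full-price fresh tokens."""
    cached = cached_tokens * PRICE_PER_MTOK * CACHED_DISCOUNT / 1_000_000
    fresh = fresh_tokens * PRICE_PER_MTOK / 1_000_000
    return cached + fresh

# 200k tokens of cached context plus a 1k-token question:
print(f"${prompt_cost(200_000, 1_000):.4f}")  # cached reads are 10x cheaper, but not free
```

Even at a 90% discount, re-reading a 200k-token context on every request still dominates the bill once request volume grows.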

Yes. For example you'll typically have a "budget" of 1-10k writes/sec. And a single heavy join can essentially take you offline. Even relatively modest enterprises typically need to shift some query patterns to OLAP/nosql/redis/etc. before very long.


I can share the work setup we've been tinkering with at a mid-size org: an Iceberg data lake plus Snowflake for our warehouse. The Iceberg tables live in S3 and are now shareable to Postgres via the pg_lake extension, which automagically context-switches, using DuckDB under the hood, to run OLAP queries across the vast Iceberg data. We keep the Postgres DB as an application DB, so apps can retrieve the broader data they want to surface from the Iceberg tables but still have spicy native Postgres tables for their high-volume writes.

Very cool shit. It's certainly blurred the whole OLAP-vs-OLTP thing a smidge, but not quite erased it; more or less, it makes OLAP and OLTP available through the same DB connection. Writing back to Iceberg is possible, and we have a couple of apps doing it, though one should probably batch/queue those writes, as Iceberg definitely doesn't have a fast-writes story. It's just nice that the data-warehouse analytics nerds have access to the apps' data and can do their thing in the environment they work with, back on the Snowflake side.

This is definitely an "I only get to play with these techs because the company pays for it" thing. No one wants to front the cost of Iceberg-data-lake-sized mountains of data on some S3 storage somewhere, and it doesn't solve for any sort of native-Postgres'ing. It just solves for companies doing ridic stuff under enormous SLA contracts, paying for all manner of cloud services that Joe Developer the home guy isn't going to be tinkering with anytime soon.

But it's definitely an interesting time to work near data. So much "SQL" has been commercialized over the years, and it's really great to see Postgres being the people's champ, helping us break away from the dumb attempts to lock us in under SQL Servers and Informix DBs, etc. We still haven't reached one database for everything yet, but Postgres is by and large the one carrying the torch, in my head canon. If any of them will get there someday, it's Postgres.


Are you running self hosted Postgres to run pg_lake?

Also, I am working on https://github.com/viggy28/streambed to perform analytics queries on S3 using DuckDB


Sure, but this isn't really an AMA thread [despite the offer to "answer any questions"]. This is about Sid's journey with (extremely advanced) cancer. Airing grievances about GitLab is just out of place here; you gotta read the room.




Don't hate the player, hate the game. Also, this is off topic and distasteful.


Paying different amounts for different regions is not being an asshole. Virtually every company on the planet does regional CoL adjustments.

And get a grip - you are free to bring value to the world in your way if you're not happy to be an employee. Attacking others that have done nothing to harm you is entirely uncalled for, especially on a discussion about their own cancer. Please act like an adult.


But they don't do CoL adjustment for GitLab.


The point is that ideally the models keep improving until they can solve problems people care about. Which is already partly true, but there are lots of problems that are still out of reach.


I think OpenAI is more of an aesthetic. Very... Apple-like: polished, with an eye toward making really cool stuff. And aesthetics are a type of philosophy.

This is less noble than how Anthropic presents itself, but still much more attractive to many than xAI.


The feeling on the street is that Anthropic IS the Apple of the AIs.


Come now, surely Anthropic is a premium Linux distribution.


And Apple a premium Unix derivative?


To a researcher, the aesthetic is more like Bell Labs, with many research teams working with some autonomy, which is why the public naming of model releases appears chaotic. Very different to the top-down approach of Apple.


> aesthetics are a type of philosophy.

What philosophy is that?


It's literally called aesthetics; the philosophical discipline is the original meaning of the word - https://en.wikipedia.org/wiki/Aesthetics

Properly, focusing on aesthetics as an ethic would be practicing the philosophy of aestheticism - https://en.wikipedia.org/wiki/Aestheticism



How can Cursor be worth more than a few billion? Claude/Codex are already better autonomous SWE-lite replacements. Cognition surely has a better internal harness. Cursor does have a lot of users, I'll give it that.


I like Cursor a lot more than Claude Code. It works better for me overall. I like the way they integrate it into the IDE so the agent is my tool rather than a 'partner' or something like that. I'm pretty sad that they lost some engineers, I hope these folks weren't integral to Cursor in any way.


Distribution is also important. Cursor is a great normie tool (I’m one of them), with probably more enterprise deals than the competition.


Moats are weird right now… but Cursor doesn’t have one at all so I agree it can’t really be worth much.


I think closer to tens of thousands of dollars, by my napkin math!


A lot of the value of tests is confirming that the system hasn't regressed beyond the behavior at the original release. It's bad if the original release is wrong, but it's a separate issue if the system later accidentally stops behaving the way it did originally.


The issue I see is that the high test coverage created by having LLMs write tests results in almost all non-trivial changes breaking tests, even if they don't change behavior in ways that are visible from the outside. In one project I work on, we require 100% test coverage, so people just have LLMs write tons of tests, and now every change I make to the code base breaks tests.

So now people just ignore broken tests.

> Claude, please implement this feature.

> Claude, please fix the tests.

The only thing we've gained from this is that we can brag about test coverage.


My best unit tests are 3 lines, one of them whitespace, and they assert one single thing that's in the requirements.

These are the only tests I've witnessed people delete outright when the requirements change. Anything more complex than this, they'll worry that there's some secondary assertion being implied by a test so they can't just delete it.

Which, really, is just experience telling them that the code smells they see in the tests are actually part of the test.

meanwhile:

    it("only has one shipping address", ...
is demonstrably a dead test when the story is "allow users to have multiple shipping addresses", as is a test that makes sure balances can't go negative once we decide to allow a 5-day grace period on account balances. But if it's just one of six asserts in the same massive test, then people get nervous and start losing time.
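Here's a hypothetical sketch of such a minimal, single-assertion test (pytest style; `Customer` and its one-address rule are made up to mirror the example above):

```python
class Customer:
    """Toy model of the current business rule: exactly one shipping address."""
    def __init__(self):
        self.shipping_addresses = []

    def add_shipping_address(self, address):
        # Current rule: a new address replaces the old one.
        self.shipping_addresses = [address]

def test_only_one_shipping_address():
    c = Customer()
    c.add_shipping_address("home"); c.add_shipping_address("work")
    assert len(c.shipping_addresses) == 1
```

When the story changes to "allow multiple shipping addresses", it's obvious this entire test is dead, so it gets deleted outright instead of lingering as a nervous half-edited assert.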


Unit tests vs acceptance tests. You shouldn't be afraid to throw away unit tests if the implementation changes, and acceptance tests should verify behavior at API boundaries, ignoring implementation details.


BDD helps with this, as it can move the setup out of the tests, making it even cheaper for someone to yeet a defunct test.


I feel it ends up being a massive drag on development velocity and makes refactoring to simpler designs incredibly painful.

But hey, we're just supposed to let the AIs run wild and rewrite everything on every change, so maybe that's a heretical view.


>simpler designs

Some complex design might just be hacks on hacks, but some are chesterton's fences


I think the first enlightenment is that software engineers should be able to abstract away these algorithms to reliable libraries.

The second enlightenment is that if you don't understand what the libraries are doing, you will probably ship things that assemble the libraries in unreasonably slow/expensive ways, lacking the intuition for how "hard" the overall operation should be.
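A hypothetical illustration of that second point: two assemblies of the same stdlib primitives for "filter items against a blocklist", with very different costs, because one ignores how `in` behaves on a list.

```python
import time

def filter_slow(items, blocklist):
    # O(len(items) * len(blocklist)): list membership is a linear scan per item.
    return [x for x in items if x not in blocklist]

def filter_fast(items, blocklist):
    # O(len(items) + len(blocklist)): build a set once, then O(1) lookups.
    blocked = set(blocklist)
    return [x for x in items if x not in blocked]

blocklist = list(range(5_000))
items = list(range(2_500, 7_500))

t0 = time.perf_counter(); slow = filter_slow(items, blocklist); t_slow = time.perf_counter() - t0
t0 = time.perf_counter(); fast = filter_fast(items, blocklist); t_fast = time.perf_counter() - t0

assert slow == fast  # identical results, wildly different scaling
print(f"list scan: {t_slow:.4f}s, set lookup: {t_fast:.4f}s")
```

Both versions "use the library correctly"; only the intuition for how hard membership testing should be separates them, and the gap widens quadratically with input size.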

