By some measures, the software industry is where steel production was around 1880-1900. By the end of the 19th century we were quite able to produce largish steel constructions, but the Siemens-Martin furnace, aka the Open Hearth Furnace (https://en.wikipedia.org/wiki/Open-hearth_furnace), was new and allowed us to produce large quantities of quality steel. At the same time, the industrial processes were still quite imprecise and manual, and people got hurt regularly at work. The LLMs we use today are an improvement in the process, just like the Open Hearth Furnace, but there are much quicker, more precise technologies that we don't yet know about or don't use in the mainstream. (The electric arc furnace, electro-slag remelting, vacuum arc remelting and the oxygen converter process would be equivalent advancements over the OHF, for example.)
So where is the meaning in all this?
We can look at the steel-making revolution and try to learn from it. Software is in most places, and so is steel. In my experience, a steel mill employee generally gets paid. There is dignity in the struggle, because the work tends to be demanding. Perhaps we will have machines working on standardized components for us to put together, after QA/conformance testing, into a larger system. Maybe software engineering will really become more like work planning or a machine engineering studio. What I am confident about is that we will get more standardization and everything will get a lot more complex, yet we will have tools to cope with that.
In effect, if the operating system knew about the DRAM layout, it could for instance duplicate critical data structures and race the accesses. Maybe this would be helpful in the networking area.
On the other hand this can maybe get fixed in hardware by just copying the page that's being refreshed to the side somewhere, eliminating the whole waiting problem. Last but not least, AFAIK writes to a row already recharge the capacitors so there shouldn't be a need to refresh it. What am I missing?
To me and many other people the syntax looks like Lego, but that's a taste thing and arguing about taste isn't very productive. What is more objective is that there are fewer syntactic patterns to care about to cover at least 90% of pretty complex systems, including concurrency. The rest can usually be confined to a few namespaces that are rarely touched later because they just work. Compare that with Python...
If you want to calcify something and add robustness, use clojure.spec or Malli. Clojure encourages writing testable code, and in general there is less code to test. Smaller problem, easier to tackle well.
The JVM is a beast for serious things because of its performance and tooling. If you need something small or with a quick start, you can use GraalVM or some of the dialects like ClojureScript or Babashka to do what needs to be done. There is ongoing work on ClojureCLR, Jank, Janet, Basilisp, Hy and other dialects or inspired languages. Usually these are pretty close to Clojure, or try to follow its behavior, so that stuff written against clojure.core just works the same. Clojure is turning out to be the actual lingua franca.
For me, programming in Clojure is the nearest thing to fun that I've ever had doing programming. There seems to be less ceremony about things, especially on bigger projects. For the little things, Babashka tends to be even more straightforward.
And yes, there are things about Clojure that can make life harder. Usually it has to do with laziness, e.g. when you just try to get a data structure written to a file, or when you want restartable, stateful components such as database connections, web servers, etc. and need to start them in a certain order. There are some functions that are unexpectedly slow, and things like that could be somewhat more predictable. All this would be more approachable if there were real documents for beginners, with a little more explanation than the terse descriptions that senior developers with 20+ years of experience find sufficient.
Many people assume that companies need or want global, enterprise-level management of infrastructure or 24/7 support. That's simply not the case. Many small and mid-sized companies just need their applications to run. There is no CTO on the board, and nobody else really cares where the stuff runs if it fits a certain budget, is available enough to not cause major disruptions, and is responsive enough to not cause complaints. Some companies may care about a certain level of compliance/security and whether their admins/DevOps people seem to be in agony most of the time, but there aren't many of those. That's also a reason why the EU introduced directives such as NIS2, DORA, CRA, CER, even the now 10-year-old GDPR, and more.
Most companies I have seen have never updated the BIOS of their servers, nor the firmware on their switches. Some of them have production applications on Windows XP or older, and you can still see VMware ESXi < 6.5 in the wild. The same goes for all kinds of other systems, including Oracle Linux 5.5 with some ancient Oracle DB like 10g; that was the case about 5 years ago, and I don't think the company has completely migrated away to this day.
Any sufficiently old company will accrete systems and approaches of various vintages over time, only very slowly ripping some of them out. Usually what happens is that parts of old systems or old workarounds live on for decades after they have supposedly been decommissioned. I had a colleague who was still using CRT monitors in 2020, with computers of similar vintage (probably Pentium III or early Pentium 4), because he had everything set up there and it just worked for what he was doing. I don't admire it, yet that stuff works, and I do respect that people don't want to replace expensive systems just because they are out of support, when they actually work and there are people taking care of them.
Totally, but then you probably don’t want SREs. If you’re okay with 99% availability (~7 hours of downtime a month assuming 24x7 goal), you can get by with much cheaper staffing and won’t have to deal with the turnover from SREs who get bored.
I hope they expand it in such a way that anybody could uncover the ships of the Russian "shadow fleet" and put more pressure on politicians and officials. Suspicious draught, erratic position changes, or incorrect data upon leaving/entering a port would be key to detecting possible circumvention of sanctions.
Many are just not that diligent with proper dental hygiene. Interdental brushes/superfloss are used only occasionally, if at all, and not every day. There are people who brush for 2 minutes and call it a day, because they heard it's enough in some advert or because the electric toothbrush stops. Well, it turns out you need a lot longer than that, and a reasonable technique, if you want to keep your teeth clean and healthy.
Acidic, sugary drinks and food don't help at all. Drinking mineral water (no sugar, no extra acids), or rinsing the mouth with regular water or a low-concentration sodium bicarbonate (baking soda) solution to balance pH after eating/drinking something else, would probably help. Of course, if not dissolved completely, the baking soda could act as an abrasive, which wouldn't be great for tooth health, so probably just use regular water.
You don't need to invest much money to keep good oral health; it certainly is much cheaper than fixing the problems that will arise if you don't, if they can be fixed at all. It does, however, cost effort and time.
Correct. The worst thing you can do with a toothbrush is to drink something acidic or extremely sugary (like a sports gel) and then immediately brush your teeth aggressively.
There are three main tooth diseases: cavities caused by bacteria, loose teeth through gum disease, and (permanent) erosion of enamel.
The last one is easy but annoying to prevent: only brush your teeth 30 minutes after you have gotten rid of food debris/sugar/acid in your mouth, using your tongue and by drinking water.
If you brush for only 2 minutes, I can guarantee you have not cleaned your teeth properly, assuming you have 28-32 of them and use interdental brushes (TePe, Curaprox)/superfloss plus a regular toothbrush.
If you are worried about brushing off your enamel, you should get proper toothbrushes, not use abrasive toothpastes, and not brush immediately after drinking acids, as another commenter has written. Some people have soft enamel as an effect of some medication/sickness/malnutrition during childhood, but that is relatively rare. If in doubt, consult a dental hygiene specialist or a dentist.
Source: my wife is an established dental hygienist who keeps up with the newest approaches, attends advanced courses/master classes, and visits conferences.
Consult a passionate dental hygienist or get a second opinion from a different dentist. Either you are doing yourself no favors by forcing your bite over the top, and should probably get some kind of retainer like boxers have to prevent overloading your teeth.
Or you are one of the many people who have been told how great their teeth are, yet who have periodontitis/gum inflammation. (Source: my wife is an established dental hygienist who keeps up with the newest approaches, attends advanced courses, and visits conferences.) If your gums are reddish instead of light pink, that's a good indication. If you bleed on regular use of interdental brushes/flossing, that's another hint something might be off.
Many people seem to be running OpenCode and similar tools on their laptops with basically no privilege separation, sandboxing, or fine-grained permission settings in the tool itself. This tendency is also reflected in how many plugins are designed, where the default assumption is that the tool is running unrestricted on the computer next to some kind of IDE: many authentication callbacks go to some port on localhost, and the fallback is to parse the right parameter out of the callback URL. Also, for some reason these tools tend to be relative resource hogs even when waiting for a reply from a remote provider. I mean, I am glad they exist, but it all seems very rough around the edges compared to how much attention these tools get nowadays.
Please run at least a dev container or a VM for these tools. You can use RDP/VNC/Spice, or even just the terminal with tmux, to work within the confines of the container/machine. You can mirror some things into the container/machine with SSHFS, Samba/NFS, or 9p. You can use all the traditional tools, filesystems and such for reliable snapshots. Push the results separately, or don't give the agent direct unrestricted git access.
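For the mirroring part, here is a minimal SSHFS sketch; the host name and paths are made up for illustration:

```shell
# Mount the agent's work directory from the sandbox VM onto the host,
# read-only, so you can inspect results without handing the agent a way out.
mkdir -p ~/mnt/agent-work
sshfs -o ro agent-vm:/home/agent/work ~/mnt/agent-work

# ... browse, diff, cherry-pick what you want to keep ...

# Unmount when done (Linux/FUSE).
fusermount -u ~/mnt/agent-work
```

The read-only mount is deliberate: the host pulls results out, rather than the sandbox pushing anything in.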
It's not that hard. If you are super lazy, you can also pay ~$5/month for a VPS or something like that and run the workload there.
I have a pretty non-standard setup built from very standard tools; I didn't follow any specific guide. I have ZFS as the filesystem, a ZVOL or a dataset + raw image for each VM, and libvirt/KVM on top. This can be done in a fairly straightforward way using e.g. Debian GNU/Linux. You can probably do something like it in WSL2 on Windows (although that doesn't really sandbox much), or with Docker/Podman, or with VirtualBox.
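As a rough sketch of the ZFS + libvirt part: the pool name, sizes, VM name and ISO path below are placeholders, not from my actual setup:

```shell
# Create a 20G zvol as the VM's disk and snapshot it while still clean.
zfs create -V 20G tank/agent-vm
zfs snapshot tank/agent-vm@clean

# Define and boot a VM on top of the zvol with libvirt.
virt-install \
  --name agent-sandbox \
  --memory 4096 --vcpus 2 \
  --disk path=/dev/zvol/tank/agent-vm \
  --cdrom /var/lib/libvirt/images/debian-12.iso \
  --os-variant debian12

# After a risky session, roll the disk back to the clean state:
# virsh destroy agent-sandbox && zfs rollback tank/agent-vm@clean
```

The snapshot/rollback pair is what I mean by "reliable snapshots": the whole VM disk reverts in seconds.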
If you want a dedicated virtual host, Proxmox seems to be pretty easy to install even for relative newcomers, and it has a GUI that's decent for new people and seasoned admins alike.
For the remote connection I just use SSH and tmux, so I can comfortably detach and reattach without killing the tool that's running inside the terminal on the remote machine.
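Concretely, it's a one-liner; the host and session name here are placeholders:

```shell
# "tmux new-session -A" attaches to the session if it exists,
# and creates it otherwise.
ssh -t vm.example.com 'tmux new-session -A -s agent'

# Detach with Ctrl-b d; the tool keeps running inside the remote session.
# Rerunning the same ssh command later reattaches to it.
```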
I hope this helps even though I didn't provide a step-by step guide.
If you are using VSCode against WSL2 or Linux and you have installed Docker, managing devcontainers is very straightforward. What I usually do is to execute "Connect to host" or "Connect to WSL", then create the project directory and ask VSCode to "Add Dev Container Configuration File". Once the configuration file is created, VSCode itself will ask you if you want to start working inside the container. I'm impressed with the user experience of this feature, to be honest.
Working with devcontainers from CLI wasn't very difficult [0], but I must confess that I only tested it once.
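For reference, with the official CLI (the npm package @devcontainers/cli) the basic flow boils down to two commands, run from your project root:

```shell
# Build the image and start the container described by .devcontainer/
devcontainer up --workspace-folder .

# Run a command (here: a shell) inside the running dev container
devcontainer exec --workspace-folder . bash
```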
Note that while containers can be leveraged to run processes at lower privilege levels, they are not secure by default, and actually run at elevated privileges compared to normal processes.
Make sure the agent cannot launch containers and that you are switching users and dropping privileges.
On a Mac you are running a VM, which helps, but on Linux it is the user who is responsible for constraints, and by default it is trivial to bypass.
Containers have been fairly successful for security because the most popular images have been leveraging traditional co-hosting methods, like nginx dropping root, etc.
By themselves, without actively doing the same, they are not a security feature.
While there are some sensible defaults, Docker places the responsibility for dropping privileges on the user and the image. Just launching a container is security through obscurity.
It can be a powerful tool to improve security posture, but don’t expect it by default.
I checked with Gemini 3 Fast, and it provided instructions on how to set up a Dev Container or VM. It recommended a Dev Container and gave step-by-step instructions. It also mentioned VMs like VirtualBox and VMware and recommended best practices.
This is exactly what I would have expected from an expert. Is this not what you are getting?
My broader question is: if someone is asking for instructions for setting up a local agent system, wouldn't it be fair to assume that they should try using an LLM to get instructions? Can't we assume that they are already bought in to the viewpoint that LLMs are useful?
The LLM will comment on the average case. When we ask a person for a favourite tool, we expect anecdotes about their own experience: I liked x, but when I tried to do y, it gave me z issues because y is an unusual requirement.
When the question is asked on an open forum, we expect to get n such answers, and sometimes we'll recognise our own needs in one or two of them that wouldn't be covered by the median case.
I think you're focusing too much on the word 'favourite' and not enough on the fact that they didn't actually ask for a favourite tool. They asked for a favourite how-to for using the suggested options, a Dev Container or a VM. I think before asking this question, if a person is (demonstrably in this case) into LLMs, it should be reasonable for them to ask an LLM first. The options are already given. It's not difficult to form a prompt that can make a reasonable LLM give a reasonable answer.
There aren't that many ways to run a Dev Container or VM. Not everyone is special and different; just follow the recommended and common security best practices.
I've started a project [1] recently that tries to implement this sandbox idea. It's very new and extremely alpha, but it mostly works as a proof of concept (except that I haven't figured out how to get Shelley working yet), and I'm sure there are a ton of bugs and things to work through, but it could be fun to test and experiment with in a VPS and report back any issues.
That's why you run with "dangerously allow all." What's the point of LLMs if I have to manually approve everything? IME you only get half-decent results if the agent can run tests, run builds and iterate. I'm not going to look at the walls of text it produces on every iteration; they are mostly convincing bullshit. I'll review the code it wrote once the tests pass, but I don't want to be "in the loop".
I really like fly.io's product https://sprites.dev/, which is effectively sandboxes for AIs. I feel like it's really apt here (not sponsored, lmao, wish I was).
Oh, btw, if someone wants to run servers via QEMU, I highly recommend quickemu. It provides default SSH access, SSHFS, VNC, SPICE and all such ports bound to just your local device, of course, and also lets you install Debian or any of many, many distros using quickget.
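The whole flow is basically two commands; the Ubuntu release below is just an example, and quickget supports many other distros and versions:

```shell
# Download the install image and generate a VM config file
quickget ubuntu 24.04

# Boot the VM; quickemu exposes SSH/SPICE/VNC ports on localhost only
quickemu --vm ubuntu-24.04.conf
```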
I personally really like Zed with SSH remote. I can always open terminals in it and use Claude Code or OpenCode or anything, and they provide AI as well (I don't use much AI this way; I make simple scripts for myself, so I just copy-paste for free from the websites), but I can recommend Zed for what it's worth.
I would assume that for certain problems LLMs have a solution readily available for JavaScript/TypeScript or similarly popular languages, but not for Clojure/Script. Therefore my thinking was that the process of getting to a workable solution would be longer and more expensive in terms of tokens. I don't have any relevant data on this, however, so I may just be wrong.