I write a lot of tools that depend on the TypeScript compiler API, and they run in a lot of JS environments, including Node and the browser. The current CJS codebase is even a little tricky to load into environments that support standard JS modules, like browsers, so I've been _really_ looking forward to what Jake and others have said will be an upcoming standard-modules-based version.
Is that still happening, and how will the native compiler be distributed for us tools authors? I presume WASM? Will the compiler API be compatible? Transforms, the AST, LanguageService, Program, SourceFile, Checker, etc.?
I'm quite concerned that the migration path for tools could be extremely difficult.
[edit] To add to this as I think about it: I maintain libraries that build on top of the TS API, and are then in turn used by other libraries that still access the TS APIs. Things like framework static analysis, then used by various linters, compilers, etc. Some linters are integrated with eslint via typescript-eslint. So the dependency chain is somewhat deep and wide.
Is the path forward going to be that just the TS compiler has a JS interop layer and the rest stays the same, or are all TS ecosystem tools going to have to port to Go to run well?
If I got it correctly, they created a Node native module that allows synchronous communication over standard I/O between external processes.
So this Node module will make it possible for a client-side JavaScript process to communicate with the TypeScript compiler's Go process, which will expose an "API compiler server".
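To make the shape of that concrete, here is a hedged sketch of a synchronous stdio round-trip. All names here are invented for illustration, and the "server" is a stand-in one-liner Node echo process, not the actual Go compiler binary:

```typescript
import { spawnSync } from "node:child_process";

// Send one JSON request over stdin and synchronously read one JSON
// response from stdout. (Hypothetical protocol; the real one may differ.)
function requestSync(command: string, args: string[], payload: unknown): unknown {
  const result = spawnSync(command, args, {
    input: JSON.stringify(payload) + "\n",
    encoding: "utf8",
  });
  if (result.status !== 0) throw new Error(result.stderr);
  return JSON.parse(result.stdout);
}

// Stand-in "server": reads the whole request and echoes it back with a
// field added, standing in for the Go process answering a query.
const serverScript =
  'const s = require("fs").readFileSync(0, "utf8");' +
  'process.stdout.write(JSON.stringify({ ...JSON.parse(s), ok: true }));';

const reply = requestSync("node", ["-e", serverScript], {
  method: "getDiagnostics",
  file: "a.ts",
});

console.log(reply);
```

Spawning a fresh process per request is obviously not what a real client would do; a long-lived child with a framed stream is the more plausible design, but the request/response shape is the same.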
They don’t think it will be possible to port all APIs, and some/most of them will be different from today.
I really wonder how tough that is going to be to migrate to.
I have API use that falls into a few categories that aren't just LSP-ish type cases:
- Transforms, which presumably there has to be some solution for, even if it's porting to Go.
- Linters, which integrate with typescript-eslint and need the type-checker.
- Codemods, which create and modify AST nodes and re-emit them.
- Static analyzers, which build up app-specific models of the code and rely on AST traversal and the type-checker.
- Analyzer libraries that offer tools to other libraries and apps that expose the TypeScript AST and functions that operate on AST nodes.
Traversing the AST over IPC is going to be too chatty, so I presume there will have to be some sort of way to get a whole SourceFile in one call, but then I wonder about traversal. You'll need a visitor library on your side of the IPC at least, but that's simple. But then you also need all the predicates. You don't want to be calling ts.isTemplateExpression() on every node via IPC.
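A minimal sketch of what that client-side layer could look like, assuming the IPC can hand back a whole SourceFile as one serialized tree (the node shape here is made up for illustration, not a real API). Traversal and the predicates then run locally, with zero IPC calls per node:

```typescript
// Hypothetical serialized node shape: a string kind instead of ts.SyntaxKind.
interface SerializedNode {
  kind: string;
  text?: string;
  children: SerializedNode[];
}

// Local visitor: no round-trips once the tree is in hand.
function visit(node: SerializedNode, cb: (n: SerializedNode) => void): void {
  cb(node);
  for (const child of node.children) visit(child, cb);
}

// Local predicate, the moral equivalent of ts.isTemplateExpression().
const isTemplateExpression = (n: SerializedNode) =>
  n.kind === "TemplateExpression";

// Toy tree standing in for a deserialized SourceFile.
const file: SerializedNode = {
  kind: "SourceFile",
  children: [
    { kind: "TemplateExpression", text: "`hi ${name}`", children: [] },
    { kind: "StringLiteral", text: "'bye'", children: [] },
  ],
};

const templates: SerializedNode[] = [];
visit(file, n => { if (isTemplateExpression(n)) templates.push(n); });
console.log(templates.length); // 1
```

The open question is everything that *can't* be answered from a serialized snapshot, type-checker queries in particular, which still have to cross the boundary.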
And I do all this stuff in web workers too, so whatever this IPC is has to work there.
In the post, they specifically talk about two points that seem to address some of your doubts:
1. “We expect to have a more curated API that is informed by critical use-cases (e.g. linting, transforms, resolution behavior, language service embedding, etc.).”
2. “We also can imagine opportunities to optimize, use other underlying IPC strategies, and provide batch-style APIs to minimize call overhead.”
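A "batch-style API" could plausibly take a shape like the following sketch (every name here is invented for illustration): collect type-checker queries on the JS side, then flush them across the boundary in a single round-trip instead of one IPC call per query.

```typescript
// Hypothetical batching client over an opaque transport function.
interface BatchQuery { method: string; args: unknown[] }

class BatchClient {
  private pending: BatchQuery[] = [];
  constructor(private transport: (batch: BatchQuery[]) => unknown[]) {}

  // Record a query; results arrive positionally after flush().
  queue(method: string, ...args: unknown[]): number {
    this.pending.push({ method, args });
    return this.pending.length - 1; // index of the eventual result
  }

  // One round-trip for N queued queries.
  flush(): unknown[] {
    const batch = this.pending;
    this.pending = [];
    return this.transport(batch);
  }
}

// Fake transport standing in for the real IPC boundary.
const client = new BatchClient(batch =>
  batch.map(q => `${q.method}:${q.args.length}`)
);
client.queue("getTypeAtLocation", "node-1");
client.queue("getSymbolAtLocation", "node-2");
console.log(client.flush()); // ["getTypeAtLocation:1", "getSymbolAtLocation:1"]
```

The awkward part for existing callers is that today's API is synchronous and fine-grained; restructuring linter rules and analyzers to queue and flush is exactly the kind of migration cost being worried about upthread.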
Anyway, I’ve used the compiler API a lot too, and I really enjoy its huge capabilities: it makes practically anything possible with the source code (EDIT: and hijacking the build process too). I hope we won’t miss too much.
Do a n00b a favour... would you ever run wasm outside of a client browser? Are you suggesting that wasm is a viable platform for local services or commands?
Or do you mean that there's a use case for a compilation in the browser?
In my experience it is pretty difficult to make WASM faster than JS unless your JS is really crappy and inefficient to begin with. LLVM-generated WASM is your best bet to surpass vanilla JS, but even then it's not a guarantee, especially once you add JS interop overhead. It sort of depends on the specific thing you are doing.
I've found that as of 2025, Go's WASM generator isn't as good as LLVM's, and it has been very difficult for me to even reach parity with vanilla JS performance. There is supposedly a way to use a subset of Go with LLVM for faster WASM, but I haven't tried it (https://tinygo.org/).
I'm hoping that Microsoft might eventually use some of their WASM chops to improve Go's native WASM compiler. Their .NET WASM compiler is pretty darn good, especially if you enable AOT.
I think the Wasm backends for both Golang and LLVM have yet to support the Wasm GC extension, which would likely be needed for anything like real parity with JS. The present approach is effectively including a full GC implementation alongside your actual Golang code and running that within the Wasm linear memory array, which is not a very sensible approach.
The major roadblocks for WasmGC in Golang at the moment are (A) Go expects a non-moving GC which WasmGC is not obligated to provide; and (B) WasmGC does not support interior pointers, which Go requires.
These are no different than the issues you'd have in any language that compiles to WasmGC, because the new GC'd types are (AIUI) completely unrelated to the linear "heap" of ordinary WASM - they are pointed to via separate "reference" types that are not 'pointers' as normally understood. That whole part of the backend has to be reworked anyway, no matter what your source language is.
Go exposes raw pointers to the programmer, so from your description I think those semantics are too rudimentary to implement Go's; there would need to be a WasmGC 2.0 to make this work.
It sounds like it would be a great fit for e.g. Lua though.
That's not the base language, it's an unsafe superset. There's no reason why a Wasm-GC backend for Golang should be expected to support that by default.
If it is part of the language reference it is part of the language.
Back when language reference books were printed, or for ISO-standardized languages, what is on paper is the language.
We are only arguing semantics: whether something is a hardcoded primitive or is made available via the standard library, especially in the case of blessed packages like unsafe, which aren't really implemented as libraries but rather as magical types known to the compiler.
Nothing new, either: since the 1960s there have been systems languages with some way to mark code unsafe; the C lineage of languages is the one that decided to ignore this approach.
The standard library uses unsafe for syscalls, for higher-performance primitives like strings.Builder, etc., so its support is mandatory to run any non-trivial Go program.
For a while the GOOS=nacl port and the Google App Engine ports of Go disallowed unsafe pointer manipulation too, so there is some precedent. Throughout some of the ecosystem you can see pieces of "nounsafe" build tag support (e.g. in easyjson).
Most programming languages that offer unsafe, either as a language keyword or a meta package (unsafe/SYSTEM/UNSAFE, whatever the name), have a similar option; that doesn't make it less of a feature.
Somehow I don't think Wasm-GC is going to support bare metal syscalls anytime soon. That stuff all has to be rewritten anyway if you want to target WASM.
It's not just system calls. E.g. reflection package uses unsafe too: https://github.com/golang/go/blob/master/src/reflect/value.g... .
Many packages from the Go standard library use unsafe one way or another, so it's not fair to say that the unsafe package is separate from the rest of the language.
The GC extension is supported within browsers and other WASM runtimes these days - it's effectively part of the standard. Compiler developers are dropping the ball.
I did some perf benchmarks a few years ago on some JS code vs C code compiled to WASM using clang and running on V8 vs the same C code compiled to x64 using clang.
The few cases that performed significantly better than the JS version (like >2x speed) were integer-heavy math and tail-call-optimized recursive code; some cases were actually slower than the JS version.
What surprised me was that the JS version had performance similar to the x64 version compiled with -O3 in some of my benchmarks (like float64 performance).
This was a while ago though when WASM support had just landed in browsers, so probably things got better now.
Interop with a WASM-compiled Go binary from JS will be slower but the WASM binary itself might be a lot faster than a JS implementation, if that makes sense. So it depends on how chatty your interop is. The main place you get bogged down is typically exchanging strings across the boundary between WASM and JS. Exchanging buffers (file data, etc) can also be a source of slowdown.
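The string-exchange cost comes down to one encode-and-copy per crossing, so batching pays off here too. A rough illustration (no real WASM module involved; the encode step is the part that would cross the boundary): pack many strings into one length-prefixed buffer so N strings cost one transfer instead of N.

```typescript
const strings = ["foo.ts", "bar.ts", "baz.ts"];

// Batched: one length-prefixed buffer, one copy across the boundary.
function pack(items: string[]): Uint8Array {
  const enc = new TextEncoder();
  const parts = items.map(s => enc.encode(s));
  // 4-byte little-endian length prefix per string.
  const total = parts.reduce((n, p) => n + 4 + p.length, 0);
  const out = new Uint8Array(total);
  const view = new DataView(out.buffer);
  let offset = 0;
  for (const p of parts) {
    view.setUint32(offset, p.length, true);
    out.set(p, offset + 4);
    offset += 4 + p.length;
  }
  return out;
}

const packed = pack(strings);
console.log(packed.length); // 3 * (4 + 6) = 30 bytes in one buffer
```

The WASM side then walks the buffer once, which is the "batch-style" counterpart to exchanging each filename or diagnostic string individually.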