AI autocomplete is a feature, not a product (to paraphrase Steve Jobs).
I can understand Windsurf getting the valuation as they had their own Codeium model
$B for a VSCode fork? Lol
I always forget the syntax for things like ssh port forwarding. Now I just describe it at the shell:
$ ssh (take my local port 80 and forward it to 8080 on the machine betsy) user@betsy
or maybe:
$ ffmpeg -ss 0:10:00 -i somevideo.mp4 -t 1:00 (speed it up 2x) out.webm
I press ctrl+x x and it will replace the English with a suggested command. It's been a total game changer for git, jq, rsync, ffmpeg, regex...
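For reference, plausible expansions of the two prompts above (the exact command the model suggests may differ, and note that binding local port 80 typically needs root):

$ ssh -L 80:localhost:8080 user@betsy
$ ffmpeg -ss 0:10:00 -i somevideo.mp4 -t 1:00 -vf "setpts=0.5*PTS" -af "atempo=2.0" out.webm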
For more involved stuff there's screen-query: confusing crashes, strange terminal errors, weird config scripts. It allows a joint investigation, whereas aider and friends just feel like I'm asking an AI to fuck around.
For extra data, it sends the uname and the procname at capture time, such as "nvim" or "ipython", and that's it.
All this IDE churn makes me glad to have settled on Emacs a decade ago. I have adopted LLMs into my workflow via the excellent gptel, which stays out of my way but is there when I need it. I couldn't imagine switching to another editor because of some fancy LLM integration I have no control over. I have tried Cursor and VSCodium with extensions, and wasn't impressed. I'd rather use an "inferior" editor that's going to continue to work exactly how I want 50 years from now.
Emacs and Vim are editors for a lifetime. Very few software projects have that longevity and reliability. If a tool is instrumental to the work that you do, those features should be your highest priority. Not whether it works well with the latest tech trends.
Are you getting irrelevant suggestions? Those autocompletes are meant to predict the things you are about to type.
I'm sure it's initially slower than vibe-coding the whole thing, but at least I end up with a maintainable code base, and I know how it works and how to extend it in the future.
Old-fashioned variable-name / function-name autocomplete is not affected.
I considered a small macropad to enable/disable it, with a status light, but honestly I don't do enough work to justify avoiding work by finding/building/configuring/rebuilding such a solution. If the future is this sort of extreme autocomplete in everything I do on a computer, I would probably go to the effort.
I have largely disabled it now, which is a shame, because there are also times it feels like magic, and I can see how it could be a massive productivity lever if it had a tighter confidence threshold before kicking in.
The thing that bugs me is when I'm trying to use Tab to indent with spaces, but I get a suggestion instead.
I tried disabling Caps Lock, then remapping Tab to Caps Lock, but no joy.
But I found once it was optional I hardly ever used it.
I use DeepSeek or others as a conversation partner or rubber duck, but I'm perfectly happy writing all my code myself.
Maybe this approach needs a trendy name to counter the "vibe coding" hype.
Fortunately, alien space magic seems immune, so far at least. I assume they do not like the taste, and no wonder.
Sure, you might not like it and think you as a human should write all the code, but the frequent experience across the industry in recent months is that productivity in teams using tools like this has greatly increased.
It is not unreasonable to think that someone deciding not to use tools like this will not be competitive in the market in the near future.
I don’t think the point was “don’t use LLM tools”. I read the argument here as about the best way to integrate these tools into your workflow.
Similar to the parent, I find interfacing with a chat window sufficiently productive and prefer that to autocomplete, which is just too noisy for me.
Went back to VSCode with a tuned-down Copilot, and I use the chat or the inline prompt for generating specific bits of code.
I was converting a bash script to Bun/TypeScript the other day. I was doing it the way I am used to… working on one file at a time, only bringing in the AI when helpful, reviewing every diff, and staying in overall control.
Out of curiosity, I threw the whole task over to Gemini 2.5 Pro in agentic mode, and it was able to refine its way to a working solution. The point I'm trying to make here is that it uses MCP to interact with the TS compiler and linters, iterating automatically until it has eliminated all errors and warnings. The MCP integrations go further: I am able to use tools like Console Ninja to give the model visibility into the contents of any data structure at any line of code at runtime too. The combination of these makes me think that TypeScript and its tooling are particularly suitable for agentic LLM-assisted development.
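To make that concrete, here is a minimal sketch of the kind of check step such an agent runs between edits (an illustration, not my actual MCP setup; it assumes npx, tsc, and eslint are available in the project):

    // check.ts: run the compiler and linter, collect anything they complain about
    import { execSync } from "node:child_process"

    function check(cmd: string): string | null {
      try {
        execSync(cmd, { stdio: "pipe" })
        return null // clean
      } catch (err: any) {
        // execSync throws on a nonzero exit; diagnostics are on stdout/stderr
        return (err.stdout?.toString() ?? "") + (err.stderr?.toString() ?? "")
      }
    }

    const diagnostics = [check("npx tsc --noEmit"), check("npx eslint .")]
      .filter((d): d is string => d !== null)

    // An agent feeds `diagnostics` back to the model and re-edits
    // until this list is empty.
    console.log(diagnostics.length === 0 ? "clean" : diagnostics.join("\n"))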
Quite unsettling times, and I suppose it’s natural to feel disconcerted about how our roles will become different, and how we will participate in the development process. The only thing I’m absolutely sure about is that these things won’t be uninvented with the genie going back in the bottle.
Sometimes it autocompletes nonsense, but sometimes I think I'm about to tab-complete a method like FooABC and it actually completes it to FoodACD; both return the same type but are completely wrong.
I have to really be paying attention to catch it selecting the wrong one. I really, really hate this. When it works it's great, but every day I'm closer to just turning it off out of frustration.
A lot of people are against change because it endangers their routine, way of working, or livelihood, which might be a normal reaction. But just as accountants switched to calculators and Excel sheets, we will also switch to new tools.
I was exploring using andyk/ht (discussed on HN a few months back) to sit as a proxy that my LLM can call while I control the same terminal via xterm.js. I still need to figure out how to train the LLM to output keybindings, special keys, etc., but it's a promising start nonetheless: I can parse a lot more info than just a command. Just imagine if the AI could use all of the shell's autocomplete features and have them fed back into it...
Maybe I should revisit/clean up that repo and make it public. It feels like with just some training data on special key bindings etc., an LLM should be able to type, even if char by char, faster than a human, to control TUIs.
Any library that breaks backwards compatibility in major version releases will likely befuddle these models. That's why I have seen them pin dependencies to older versions and, more egregiously, default to using the same stack to generate any basic frontend code. This ignores innovations and improvements made in other frameworks.
For example, in TypeScript there is now a new(ish) validation library called arktype. Gemini 2.5 Pro straight up produces garbage code for this. The type generation function accepts an object/value, but Gemini keeps insisting that it consumes a type.
So Gemini defines an optional property as `a?: string`, which is what you would write in plain TypeScript. But this will fail in arktype, because it needs the input as `'a?': 'string'`. Asking Gemini to check again is a waste of time, and you will need enough familiarity with JS/TS to understand the error and move ahead.
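For reference, a minimal working sketch of the syntax in question (assuming arktype 2.x; the User type here is just an illustration):

    import { type } from "arktype"

    // Optional keys are marked inside the key string, not TS-style on the name.
    const User = type({
      name: "string",
      "age?": "number"
    })

    const out = User({ name: "Alice" })
    if (out instanceof type.errors) {
      console.error(out.summary) // human-readable validation errors
    } else {
      console.log(out.name) // out is fully typed here
    }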
Forcing development into an AI friendly paradigm seems to me a regressive move that will curb innovation in return for boosts in junior/1x engineer productivity.
In the short term. Have fun debugging that mess in a year while your customers are yelling at you! I'll be available for hire to fix the mess you made, which you clearly don't have the capability to understand :-)
That said, coding agents can indeed save some time writing well-defined code and be of great help when debugging. But when they don't work on the first prompt, I would likely just write the thing in Vim myself instead of trying to convince the agent.
My point being: I find agent coding quite helpful really, if you don't go overzealous with it.
Where is this 2x, 10x, or even 1.5x increase in output? I don't see more products, more features, fewer bugs, or anything along those lines since this "AI revolution".
I keep seeing this being repeated ad nauseam without any real backing of hard evidence. It's all copium.
Surely if everyone is so much more productive, a single-person startup is now equivalent to 1 + X people, right?
Please enlighten me as I'm very eager to see this impact in the real world.
Additionally, what you are failing to realise is that not everyone is just vibe coding, blindly accepting what the LLM suggests and deploying it to prod. There are actually people with a decade+ of experience who use these tools and have found them to be an accelerator in many areas, from writing boilerplate code to assisting with styling changes.
In any case, thanks for the heads up, definitely will not be hiring you with that snarky attitude. Your assumption that I have no capability to understand something without any context tells more about you than me, and unfortunately there is no AI to assist you with that.
I simply cannot see how I can tell an agent to implement anything I have to do in a real day job, unless it's a feature so simple I could do it in a few minutes. Even then, the AI will likely screw it up, since it sucks at dealing with existing code, best practices, library versions, etc.
I said this in another comment but I'll repeat the question: where are these 2x, 10x, or even 1.5x increases in output? I don't see more products, more features, fewer bugs, or anything related to that since this "AI revolution".
I keep seeing this being repeated ad nauseam without any real backing of hard evidence.
If this was true and every developer had even a measly 30% increase in productivity, it would be like a team of 10 is now 13. The amount of code being produced would be substantially more and as a result we should see an absolute boom in new... everything.
New startups, new products, new features, bugs fixed and so much more. But I see absolutely nothing but more bullshit startups that use APIs to talk to these models with a few instructions.
Please someone show me how I'm wrong because I'd absolutely love to magically become way more productive.
Or if I'm working on a full-stack feature and I need some boilerplate to process a new endpoint or new resource type on the frontend, I have the AI build the API call that's similar to the other calls and process the data while I work on business logic in the backend. Then when I'm done, the frontend API call is mostly set up already.
I found this works rather well, because it's a list of things in my head that are "todo / in progress" but parallelizable, so I can easily verify what it's doing; a sketch of the kind of boilerplate I mean is below.
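As an illustration (the resource and endpoint here are made up):

    // Hypothetical resource; the shape mirrors whatever the backend returns.
    interface Widget {
      id: string
      name: string
    }

    // Boilerplate call written in the same style as the app's other endpoints.
    async function fetchWidgets(): Promise<Widget[]> {
      const res = await fetch("/api/widgets")
      if (!res.ok) throw new Error(`GET /api/widgets failed: ${res.status}`)
      return (await res.json()) as Widget[]
    }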
I am not a professional SWE; I am not fluent in C or Rust or bash (or even TypeScript), and I don't use Emacs as my editor or tmux in the terminal.
I am just a nerdy product guy who knows enough to code dangerously. I run my own small business, and the software that I've written powers the entire business (and our website).
I have probably gotten AT LEAST a 500-1000% speedup in my personal software productivity over the past year that I've really leaned into using Claude/Gemini (amazing that GPT isn't on that list anymore, but that's another topic...). I am able to spec out new features and get them live in production in hours vs. days and, for bigger stuff, days vs. weeks (or even months). It has changed the pace and way in which I'm able to build stuff. I literally wrote an entire image editing workflow to go from RAW camera shot to fully processed product image on our ecommerce store, and it's cut out dozens of actual, real hours of time spent previously.
Is the code I'm producing perfect? Absolutely not. Do I have 100% test coverage? Nope. Would it pass muster if I were a software engineer at Google? Probably not.
Is it working, getting to production faster, and helping my business perform better and insanely more efficiently? Absolutely.
1) Stops me overthinking the solution.
2) Lets me ask it for the pros and cons of different solutions.
3) The multi-x speedup means less worry about throwing away a solution or code I don't like and rewriting/refactoring.
4) Really good at completing certain kinds of "boilerplate-y" code.
5) Removes the need to know the specific language's implementation of something, as long as I know the principle (for example pointers, structs, types, mutexes, generics, etc.). My go-to rule now is that I won't use it if I'm not familiar with the principle itself; the language's implementation of it is what I'm happy to delegate.
6) An absolute beast when it comes to debugging simple-to-medium-complexity bugs.
I just noticed CLion moved to a community license, so I re-installed it and set up Copilot integration.
It's really noisy, and somehow the same binding (tab-complete) for built-in autocomplete "collides" with LLM suggestions (with varying latency). It's totally unusable in this state; you'll attempt to populate a single local variable or something and end up with 12 lines of unrelated code.
I've had much better success with VSCode in this area, but the completion suggestions via LLM in either are usually pretty poor; not sure if it's related to the model choice differing for autocomplete or what, but it's not very useful and often distracting, although it looks cool.
I would take care. Emacs has no internal boundaries by design and it comes with the ability to access files and execute commands on remote systems using your configured SSH credentials. Handing the keys to an enthusiastically helpy and somewhat cracked robot might prove so bad an idea you barely even have time to put your feet up on the dash before you go sailing through the windshield.
All that to say that the base of your argument is still correct: AI really isn't saving all that much time, since everyone has to proofread its output so heavily in order not to increase the number of bugs in PRs in the first place.
git cloen blahalalhah
I did a ctrl+x x and it fixed it. I'm using openrouter/google/gemma-3-27b-it:free via chutes. Not a frontier model in the slightest.
If I want to, let's say, create some code in a language I've never worked in, an LLM will definitely make me more "productive" by spewing out code way faster than I could write it. Same if I try to quickly learn about a topic I'm not familiar with. Especially if you don't care too much about quality, maintainability, etc.
But if I'm already a software developer with 15 years of experience dealing with technology I use every day, it's not going to increase my productivity in any meaningful way.
This is the dissonance I see with AI talk here. If you're not a software developer, the things LLMs enable you to do are game-changers. But if you are a good software developer, on its best days it's a smarter autocomplete, a rubber-duck substitute (when you can't talk to a smart person), or a mildly faster Google search that can be very inaccurate.
If you go from 0 to 1, that's literally infinitely better, but if you go from 100 to 105, it's barely noticeable. Maybe everyone with these absurd productivity gains is coming from zero or very little knowledge, but for someone who's past that point, I can't believe these claims.
The impact in the real world isn't more product output; it's fewer developers needed for the same output.