Conversations over the years have shown me that DDD was a great inverse marketing tool, ironically pushing developers towards the embedded debugger UI in their favorite IDEs... despite DDD itself being indeed very powerful. But even "usefulness over aesthetics" has its limits!
The only thing which takes time is debuginfod downloads.
There's also gdbgui that I know of, a web-based UI for GDB:
Always good to see more movement in the debug tooling space.
It’s very odd. It’s like it doesn’t cache something and ends up doing some strange expensive symbol search every time it hits a breakpoint or something.
Curious if anyone has a good solution to this also
I am not saying that you haven't; just trying to make sure that your argument is backed by data rather than hearsay.
[1] https://sourceware.org/gdb/current/onlinedocs/gdb.html/TUI.h...
I am curious: does your project have large external dependencies, or is it self-contained?
This is one area where I believe a GUI tool is so much better: I can hover over variable names to view their values, expand and collapse parts of a nested structure, edit values easily, and follow execution in the same environment I write my code in.
Sure, it doesn't help much for some scenarios (one I've heard people mention is multithreaded code, where logs are better?), but for most people it's not that far from a superpower.
If polished a bit it could be useful, though of all the frontends I've tried, the one I disliked the least (none are great) is Gede[0] (which I just noticed had a new release a few hours ago), as it has a very simple and straightforward UI, and while it doesn't expose much functionality, what it does expose seems to work fine without bugs.
I complain about gdb all the time, speed is just one aspect. Step-by-step debugging is just terrible on Linux. Maybe that's actually the reason few people complain about it, they just don't use gdb, instead relying on other tools, especially printf(). I am not in the video game industry, but they seem to be way, way ahead of everyone else, especially Linux (non-game) developers. Maybe some collaboration is in order.
As for your specific problem, I don't know. Do you have optimization turned on when debugging? gcc/gdb and the LLVM equivalents let you debug optimized builds, but it is not ideal as knowing which instruction corresponds to which line is complicated, and maybe gdb is working extra hard for it. The "-Og" flag is supposed to only do "debugger friendly" optimizations, also "-ggdb" or "-ggdb3" is supposed to be better than plain "-g" for use with gdb.
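For what it's worth, a debug-friendly build along those lines might look like this (file names are made up):

# -Og keeps only debugger-friendly optimizations; -ggdb3 emits the richest gdb debug info (including macros)
gcc -Og -ggdb3 -o myprog myprog.c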
I was debugging something earlier this week that was hit like a hundred times in a tight loop. After the first dozen or so times I told gdb to continue, I realized, wait, this will be faster if I just fprintf some relevant information to a file. Sure enough the file pointed me in the right direction, and I was able to go back and get fancy with "disp" and "cond" and hit that breakpoint only when I needed to.
Running inside Docker, multithreaded, multiprocessed: all can be debugged with a little effort. Most often with much less effort than repeatedly printf debugging.
My experience is the opposite: I see developers waste hours stepping through their code a line at a time when a few judiciously placed logs (printf()s are fine, but we can do better) would have told them exactly what they needed in a jiffy.
If you have a fairly shallow bug, that is a single point in your code that always behaves incorrectly, then I find debuggers reasonably effective.
But most of the bugs that I see aren't that shallow, with code misbehaving when the context is just so and perfectly fine otherwise. In those cases, I need to see lots of different invocations and their context. The debugger is like trying to drink the information ocean I need through a straw. A mostly plugged straw.
I wonder what makes our experiences so different? Do you unit test a lot? Particularly with TDD? I am guessing that this practice means I just don't get to see a lot of the bugs that a debugger would help me with.
(And it doesn't mean I never fire up the debugger. But it is fairly rare).
I just put:
layout src
set confirm off
in my $XDG_CONFIG_HOME/gdb/gdbinit

It's immediately obvious you're deadlocked, which is actually kind of tricky to suss out with log-style debugging.
Modern debuggers can do so much, being able to lay down conditions to only break when certain values are set, etc. etc. Some can even "rewind" programs. I'd say most people (including myself) are using only 25% of their debugger's capabilities.
Aside: One of the reasons I despise working with async Rust code is the mess it makes of working with a debugger.
set prompt \001\033[01;36m\002(gdb)\001\033[0m\002
and I save history with set history save on
set history size 500000
set history filename ~/.cache/gdb/history
I’ve tried debuggers and see the appeal but I find it less useful than print debugging / logging.
I also rely heavily on unit tests when writing new code, so that also reduces the surface that I need to look for bugs based on the log. Moreover, most of my projects have 1-3 programmers and can largely “fit in my head” (<10,000 lines of code), so it’s probably different if you work at a FAANG company or something.
> programmers to master the debugging tools of their ecosystem. I've seen countless experienced developers use printf-based debugging and waste hours debugging something which could've been easily figured out by setting a breakpoint and stepping through your code.
If you're wasting hours with printf-based debugging, I don't think you've 'mastered the debugging tools of the ecosystem'.
There are multiple ways to debug - step debugger tools, printf, logging to a file, etc. Each have their place.
If you're spending hours on any one approach, and perhaps that's the only approach you know, that's a red flag.
If you've spent hours going through printf, logging and step debugging and STILL don't have a good answer... bring in external eyes.
I've found/fixed bugs in a few minutes because of adding some log stuff first, because in those cases, it's the easiest approach. In other cases, running a debugger and setting a couple breakpoints is indeed the easier approach to start with, and I've done that.
Sometimes you find it with the first approach, sometimes you need to try the next approach.
Oh come off it, debuggers shine the brightest when there are lots of unknown unknowns. With printf debugging you can peel back exactly one layer at a time (oops, need to log one more thing) whereas with a debugger you can slice through the Gordian knot.
Being able to change breakpoints at runtime helps a lot when tracking down something more complex. Visual Studio breakpoints are great, and they’ve added conditional breakpoints which are even better. Previously I would approximate this by having code specifically branch to hit a breakpoint, ‘if (X) { breakHere();}’
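For reference, gdb can hang the condition off the breakpoint itself, so the code doesn't need that extra branch; a minimal sketch (file, line, and variable are made up):

break Widget.cpp:123 if frame_count > 100
# or attach a condition to an existing breakpoint number:
condition 1 frame_count > 100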
I write a fair amount of native C++ code but only call it from either Python or dotnet, so when I make a mistake there it's usually a segfault / memory-access issue which kills the process. There might be a way to debug the C++ from dotnet or Python, but logging to stdout helps me isolate the location of the issue, which is sufficient. It's not a big enough problem, and I doubt that either writing tests in C++ or learning a native debugger would pay off in time saved.
ps: I wish I could work on a porcelain layer to manage breakpoints in a more logical manner. For a given problem you'd like to create different sets of breakpoints, run various tests, and gather the results to review, with the ability to add or remove layers rapidly. It's probably not too hard to do.
Essentially you set it like a breakpoint (attaching a printf style string to a code location) and then just "continue" until you've gathered what you want.
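If the feature being described is gdb's, that's the dprintf command; a minimal sketch with a made-up location and variables:

dprintf parser.c:88, "token=%s depth=%d\n", token->name, depth
continue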
Debuggers can be great for understanding multithreaded code - and you can potentially freeze threads and continue others in order to provoke a particular race condition.
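In gdb, for instance, scheduler locking gives you that kind of control; a minimal sketch:

set scheduler-locking on    # only the current thread runs when you step/continue
info threads
thread 3                    # switch to the thread you want to advance
next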
However they're potentially quite weak at stepping through a concurrency bug - stopping after each line to understand the sequence of events has a good chance of making your bug go away.
I'd say you want Time Travel Debugging if you need to capture and step through a rare event: you get to record the bug happening (without interrupting it) and then step through the recording.
On Linux, Undo.io (disclaimer: where I work) and rr (open source) are good at this.
On Windows, you have Microsoft's own Time Travel Debug solution: https://learn.microsoft.com/en-us/windows-hardware/drivers/d...
(nb. there's also GDB's built-in process record technology but I'd recommend against that for any non-trivial software as the overheads are very high)
I've found it a very powerful yet compact way to visualize the state of a program when debugging.
A while ago there was a project to port it to GTK3 but I think that went away. I'm glad the mainline project is still going.
The one nice thing about GUD is that the interface is consistent across debuggers, so I don’t need to refresh myself on the keyboard shortcuts when switching between debugging Python with pdb and C++ with lldb.
[1] https://www.gnu.org/software/emacs/manual/html_node/emacs/GD...
[2] https://www.gnu.org/software/emacs/manual/html_node/emacs/St...
The default toString method I've found to be useless almost every time I wanted to inspect an object in our codebase, since it just prints the type + "id" for the object.
Some ancient version of NetBeans leaking ram like a sieve until it brings down the machine, or a decade old version of Eclipse that can't pull in a newer CDT, running on a fork of OpenOCD with nothing customized for the CPU architecture running dog slow.
Sadly, it can be faster to reserve a GPIO, bitbang a TX-only UART, and get on with it.
You can also make the dashboard display on another or across multiple terminals, letting you create a much nicer window layout. I've scripted this up with tmux before to have it automatically create the terminal layout and connect them to gdb, you can create really nice layouts that way (though it can be a lot of effort).
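Assuming this refers to the gdb-dashboard project, the redirection looks roughly like this (the pts device is just an example):

# in a spare terminal: note its device, then keep it idle
tty          # prints e.g. /dev/pts/5
sleep 1d
# inside gdb: send one module (or the whole dashboard) to that terminal
dashboard source -output /dev/pts/5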
Nice one, I will add it to my notes to use next time I need to do some debugging. The last thing I want when looking for a bug in my own code is to have to deal with bugs in the debugging tools.
Of course - it's not like a GDB GUI is a novelty in itself, there are quite a few. But a GDB-GUI-only utility is a meaningful and important niche to consider.
Absolutely. I wrote about its features here https://begriffs.com/posts/2022-07-17-debugging-gdb-ddd.html
Since the article was written, the maintainers fixed the issues I pointed out, so many of those workarounds are no longer needed. Versions 3.4.0 and 3.4.1 are substantial releases.
On the other hand, I'm working on an interactive application, and when I see a problem with it, I add more logging statements until I figure out what the problem is. Any time the logs have excessive detail as a consequence, I gate them behind an 'extra' flag on a per-unit basis, only removing the ones which amount to "got here".
If I had to pick one technique, it would be logs. I naturally think in terms of a trace through the execution pathway, rather than a step-by-step examination of the state of a small window into the code. It clearly works the other way around for some people.
One thing that makes this approach better for me is that debug logging is literally free: Zig uses a lazy compilation model, so logging code which doesn't apply to a release compilation isn't even analyzed, let alone compiled, let alone included. In a language which doesn't work that way, there's a motive to use printf-only debugging and clean up after yourself afterwards, and that's extra work compared to firing up a debugger. So it shifts the balance.
- Remote debug (see the sketch after this list).
- Use conditional breakpoints.
- Use breakpoints to trigger commands, e.g. log values, enable other breakpoints, etc., instead of stopping execution.
- Debug multi-threaded code.
- Disassemble a fragment.
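As a small illustration of the first item, remote debugging with gdbserver roughly looks like this (host name and binary are made up):

# on the target machine or container:
gdbserver :2345 ./myapp
# on the development machine, with the same binary and sources at hand:
gdb -ex "target remote target-host:2345" ./myapp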
In IntelliJ with Java, you can set conditional breakpoints with complex evaluations, you can set filters (only hit a breakpoint depending on where it is being called from), use exception breakpoints that only trigger on certain exceptions instead of a specific line of code, and you can also use logging breakpoints, which act like printf debugging but without scattering print statements all over your code.
You can group, add descriptions, disable, enable, and add temporary breakpoints; they are pretty powerful! I just wish IntelliJ had a time travel debugger like Visual Studio Pro.
https://www.jetbrains.com/help/idea/2024.3/using-breakpoints...
https://learn.microsoft.com/en-us/windows-hardware/drivers/d...
https://github.com/ocornut/imgui/blob/master/misc/debuggers/...
You can also script breakpoints to output the info you want and continue, giving you your information ocean.
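For gdb, a minimal sketch of that (file, line, and variables are made up):

break parse.c:88
commands
silent
printf "state=%d input=%s\n", state, input
continue
end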
Basically, a debugger is a more efficient and powerful tool. In the odd situation where you're not yet skilled with a particular debugger feature, a printf can be quicker than having to learn it, but it's objectively worse.
Slow stepping is a surprise; there's no OS reason for that to be slower. Possibly if your types are really large and complicated, the debugger has to fetch a lot of data to refresh its view of state each time?
Edit: Logging helps me look at what is going on in prod as well. I can trace messages/transactions completely through the path and if there's an issue, I'll see it.
Has anyone found a reliable way to use a debugger when you have a) multi-process b) multi-threaded c) async d) timeouts? I would love to use a debugger but printf and logs “just work”
In general, print-based debugging requires a greater degree of specificity. If you know exactly what you're looking for it's great.
If you are performing a more exploratory sort of debugging, a decent graphical debugger will save you a ton of time.
Like why should I keep trying this month's new editor with a couple new gimmicky features, when I can just pop a plugin onto Emacs that adds that exact feature set, while maintaining everything else how I like it.
I first really got into coding when Atom was a thing, and then that died off and became VS Code and I was pretty sad about it, because while VS Code is good, it doesn't follow the same philosophy as Atom. But then I took the time to learn Emacs ~4 years ago, and nothing new ever comes close to convincing me it's outdated tech that I need to move on from.
That was a random rant, but I just really appreciate Emacs, and I'm glad it's stuck around.
> If you master printf
The skill ceiling is low. Printf only does so much.
You could rope in environmental optimization to the skill discussion -- the ability to isolate areas of functionality, replicate problems, reason about unknown state, and do the legwork so that you can quickly spin through the increased amount of iteration required by a simpler debugging tool -- but by then you have thoroughly sacrificed both simplicity and portability and are far past the skill floor of a debugger.
If we assess this by looking for problems created by overcommitting to one approach or another, overcommitting to a debugger looks like burning time trying to get tooling to work on a problem that doesn't really need it while overcommitting to printf looks like spending way too much time iterating on tiny steps that could have been jumped over given better visibility. I've seen both, of course, but I tend to see more of the latter and more denial about the latter. When you're burning time fighting tools it's obvious. When you're burning time because you don't know how a tool could have saved you time, it's less obvious.
YMMV.
In general, anything you would want to debug should probably be exposed as a unit test, and the area of concern should have test cases that trigger the behavior you are worried about.
The entire process of debugging essentially mirrors what you would need to do to create a unit test. While it is faster, the work is lost once you're done, making the entire process one-shot.
This is the key. You need to be able to narrow down where the bug is.
During my long career, I’ve always been told “You should know your code well enough that a few well placed printfs is the most you’ll need to understand a bug”.
But, most of my career has been spent debugging large volumes of code written by other people. Code I’ve never seen before and usually will never see again.
A debugger making a 10X productivity difference for me is no joke.
In some languages, such as Python, it's fairly easy to write a debug-print function that prints all the local variables (as well as the function name and line number it was called in).
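A minimal sketch of such a helper using the standard inspect module (the name dump_locals is made up):

import inspect

def dump_locals():
    # Report the caller's function name and line number, then its local variables.
    frame = inspect.currentframe().f_back
    info = inspect.getframeinfo(frame)
    print(f"{info.function}:{info.lineno}")
    for name, value in frame.f_locals.items():
        print(f"  {name} = {value!r}")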
It may be somewhat cultural in that influence from functional programming eventually changed the way I think about state and state transitions, leading me to design my code differently, reducing the amount of debugging I have to do and making it easier to do via logging.
Sometimes the problem doesn't show up immediately in data and the code is too complex or uses a lot of wormhole techniques like particular forms of exception abuse, that's when I might fire up the debugger and browse frames instead.
That’s not just not the same league, it’s playing a whole different game.
You're also talking about debugging apps running comfortably in the idyllic world created by the OS. It's much harder to debug foundational pieces with printfs when the program immediately panics or early printing isn't available.
In my opinion it's good to build habits that can be generalized to all sorts of software and not limit oneself to writing code in a highly structured environment where most of the work is done for you. I can trace through a program faster than someone can insert/remove printfs and recompile their program, and I don't need to think about what to print. I can look at everything at that point in time, convert data to strings, look at the stack, registers, etc. Very powerful stuff.
A long time ago, I worked on a port of a game from PC to PlayStation 1. Since we had the Yaroze "devkit" (not really a devkit, rather an amateur kit for making games), printf debugging was the only thing available.
Things kind of worked, but when we #ifdef'd out the printfs it crashed (and no debugger). We somehow discovered that one of printf's side effects was clearing the math errors.
You can have the source code debugger log messages to the output window without having to add logging statements and recompile the affected code.
This is the 21st century.
see visual studio's tracepoint functionality - works in native and .Net languages, https://devblogs.microsoft.com/visualstudio/tracepoints/
sure, if you want the logging messages available for perusal when deployed in production, then this won't help.
Even better - use Hot Reload and tweak your code in the debugger - https://learn.microsoft.com/en-us/visualstudio/debugger/hot-...
[edit] GDB's equivalent to tracepoint is mentioned elsewhere in this thread - https://news.ycombinator.com/item?id=42147372
To me the beauty of print debugging is that you can see the flow, and see it quickly, in contrast to the debugger, where a lot of the time is simply spent stepping past (at that moment) superfluous breakpoints.
Step, step, step, step, …, step, step, BANG!
Versus a quick BANG preceded by a trail of debris I can then postmortem. I use both, naturally, but prefer the crashes with a debris field than walking on eggs to potential disaster.
- You're using a dynamically typed language.
Something like Rust can eliminate most bugs that come from incorrect data types. For me, a lot of bugs used to come from types that were different from what I expect.
- It is super easy to run your program with a debugger attached.
If your code needs to run on a K8s cluster or a dedicated test machine instead of locally, logs are much easier to get hold of than a remote debug session. Some people aren't even aware that they can attach a debugger to a computer over the network, or inside a Docker container.
- Your environment.
If you don't use an IDE that supports a debugger, it's another friction point. I'm not sure if Vim has something similar to, say, PyCharm's debugger.
Similarly, if you're a junior, and you reach out to a senior and they tell you to debug using logs, you probably will never switch to using a debugger yourself.
I don't find debuggers all that useful, because I often find I'm spending more time thinking about how to use the debugger rather than how to fix the bug; since debugging is hard I want tools that I don't have to think about at all, as they distract me from thinking about the bug.
Maybe that's because I don't have enough experience with a particular tool; if I used a debugger more often it would come naturally to me. But I find most of my bugs are simple enough that that doesn't happen, because I write modular code and practice TDD.
You might find our Java product interesting, it adds Time Travel Debug to IntelliJ - https://undo.io/products/java/
Undo captures everything the process does, below the JVM level, so you can reproduce / rewind any problem you record as many times as you want (and copy the recording out of production onto a dev machine to debug, etc etc).
Please get in touch if you'd like a free trial.
Then they'll do the same thing when you replay.
Non-idempotent system calls are tricky because they interact with the outside world - but that's still OK.
In Time Travel Debug, the process you're debugging is essentially in the Matrix. When it's being recorded everything acts as normal (and it'll see the real results of those non-idempotent calls).
When it's being debugged, any interaction with the outside system is prevented and replaced with the behaviour we saw at record time. It'll still think it's doing the non-idempotent calls, they just won't change (or depend upon) the state of the rest of the system.
The DDD website ( https://www.gnu.org/software/ddd/ ) points to the source tar.gz and the full manual, but nothing that says "What's New" in recent versions.
https://ftp.gnu.org/old-gnu/Manuals/gdb/html_mono/gdb.html#S...
Another trick: for rare circumstances, code whatever complicated logic is needed to isolate the bug in order to issue a print statement, then use the debugger to break on that print statement.
One thing I also noticed is that using "problem-oriented" languages like Python or Java changes where you spend your time troubleshooting: ironically, not where the problem is (business logic) anymore, since those parts of the code indeed tend to work better, but instead you waste time with libraries (Java: CLASSPATH, Python: packages, all: version conflicts). In contrast, in C/C++ it was mostly memory management errors and bugs in the actual business logic (the former is also a great distraction, somewhat diminished by the introduction of smart pointers).
But then later it got scrapped, or something like it.
Cloud "debugging" when you have multiple instances is one of the cases where there is no suitable enough debugger (yet).
And you don't need a full debugger setup on the target machine, just the recorder binary.
It gives you the possibility of a proper debug experience without having to set up debugging that somehow works in a live k8s pod, or connects through special firewall holes or some such.
Not to mention that it actually takes extra time to do so, when people are used to debugging Python/JavaScript/Go code with one single click these days.
There was even a story that (at least for Common Lisp) you can start from an almost blank state, but with an exception handler installed (one that can continue), so as you go you live-edit and add the missing pieces, or change the code when it crashes.
This is all good, until nowadays, when you really want to know what's deployed in production, and not just the last stuff I've live-fixed.
I mean, I guess both have value tbh, but it's hard to pull in two models like this and use them... a bit like the debugger-or-printf-statements question (or both!).
- I can get a good idea of the temporal behavior of the program, and I can just scroll up to see earlier state, rather than having to restart the program in the debugger. (I know that "time travel" debuggers exist, but I've found them finicky.) I can scrub back and forth through time just by scrolling.
- I can compare runs by diffing the logs. Sometimes that alone is enough to show where things start going amiss. Or I can keep instrumented logs from baseline runs.
- If there's a personally useful set of printf statements in an area that I'm in a lot, I can save those off to a patch file or a local branch. I don't have to reapply my breakpoints / watchpoints in the debugger each time. Easy persistence.
(That said, I do like to start with a debugger when tackling reproducible crashes.)
¹ https://www.gnu.org/prep/standards/standards.html#NEWS-File
² https://svn.savannah.gnu.org/viewvc/ddd/trunk/doc/NEWS?view=...
I don't really understand why people would use a separate debugger to the one that their IDE has. Most IDEs have solid debugger interfaces.
Just writing `test_x.c`, then `gcc -o test_x test_x.c -g` and `gdb --args ./test_x --blah --blah`, allows for much faster iteration on things.
PLUS the gdb commands end up being far easier and more powerful for probing things than messing with the GUI most of the time.
I don't know, it seems very common to me. Have you not seen anything like that?
A debugger helps a lot when doing TDD. You will enjoy setting a breakpoint to check whether a line is hit as expected by a test case.
To each their own, but I wholeheartedly recommend learning about debuggers. It should be one of the core tools of every software engineer.
Last time I tried to build GDB from source, which was some two months ago, it wasn't in any way simple. GDB comes embedded in a shared repository with GNU binutils, and instructions to build it in isolation weren't obvious.
I ended up creating a new VM with a more recent Linux distro, that came with newer GDB, and migrating everything I'm working on to it, because that was much easier than building GDB from source.
break SourceFile.cpp:123
command
pp var1
pp var2
continue
end
With that `continue` at the end there, this breakpoint will not pause execution (except to run the commands).

That has no bearing on whether we should have good tools for skilled developers, including debuggers & makefiles & source distributions. Eventually, students become moderately skilled. And we used gdb in class when I was a student.
Thanks for your honest review. Can you explain more about not being able to set the editor font? My tests show that as working. Make sure to "Save Configuration..." to make things permanent.
Also, for the "show variable value on hover", I'll test it.
Can you (or others) create an issue on my github page for any bugs or feature requests?
Thanks.
Unfortunately, it's also the best explanation as for why we don't have them in practice, and why all software seems perpetually developed by fresh juniors (because it is).
I saw this comment yesterday and thought of this discussion though, maybe it helps illustrate what I was trying to say? https://news.ycombinator.com/item?id=42170536
The point being made in the comment linked to, briefly, is how non-SW engineers "turn their nose up at open-source solutions". It's a bit broader than what I was saying, but was roughly the kind of trend I was trying to allude to.
It's next to impossible to properly source any claim in this area; as for a lighter standard of substantiation, C wasn't a big deal in university courses 15 years ago, so it's unlikely to have suddenly become one now.
N=1, but back then, between C++ and Java courses jumping straight to IDEs (to focus on language instead of build steps), and Unix/C course sticking to direct calls to GCC, at least my year at my uni managed to go through 5 years without much exposure to make and Makefiles...
Anyway,
> Sure, a lot of software is written by junior devs but that has no bearing on whether gdb exists, or even how hard it is to build.
But it does have a bearing on GDB evolution and GDB GUIs, which almost universally expose less than the bare minimum of useful GDB features; of those that expose more, I've yet to find one that works.
> We have lots of good debuggers, and luckily it only takes a few experts to write them, which is why we do have them in practice.
Well, yes. WinDbg, that debugger in Visual Studio, etc. :). GDB, too.
My point here is that, for better or worse, the size and type of the target audience determines how much and what kind of attention a project gets. "Skilled, experienced developers" are a small niche; the vast majority of developers are fresh juniors (per the growth argument), creating pressure to satisfy them at their level. Which, for GDB, I guess means the build steps remain arcane even relative to other GNU projects, and powerful GUI frontends for it are not a thing.