WSJT-X 3.1 improved  ·  FT8  ·  Decoder Analysis

Understanding FT8 Decoder Settings
in WSJT-X 3.1 improved

What "Decode Start" really does — and why serious FT8 optimization matters more than ever today. A long-form technical essay on timing strategy, staged decoding, CPU allocation, and practical operating philosophy.

Author: Yoshiharu Tsukuura / JP1LRT
Callsign: JP1LRT  ·  QRZ: JP1LRT
Format: Independent public HTML edition
Article type: Technical long-form essay
Primary topic: FT8 decoder timing strategy
Language: English

JTDX has long offered helpful explanations of how its decoder settings work. Those guides have always been valuable, because they make one important point very clear:

decoder performance is not simply a matter of enabling every option that looks “stronger.” It is always a balance among processing time, missed decodes, and false decodes.

That is the key principle.

But in 2026, if your focus is performance in FT8/FT4 operation, the conversation has changed.

WSJT-X 3.1 improved is no longer just “standard WSJT-X with a few extra features.” A substantial amount of practical decoder thinking — including ideas that will feel very familiar to JTDX users — has been meaningfully incorporated into it. At this point, comparing it casually with WSJT-X 2.7 or earlier is, frankly, no longer very informative.

To put it more bluntly: for the average operator, WSJT-X 3.1 improved now deserves to be taken very seriously as a first-choice option.

Waiting for the next GA release of JTDX is, of course, one possible stance. But if your priority is practical decoder performance, available features, update activity, and what you can actually install and use today, then using what is already here and already strong is the more rational path.

None of this means that JTDX has no place. If someone is deeply attached to the JTDX interface, that is a perfectly valid reason to stay with it. But that is a different question from decoder capability and practical optimization.

This article is not meant to be just another list of recommended settings. What I want to examine instead is a deeper question:

What does FT8 decoder optimization actually mean?
And more specifically, what does the “Decode Start” setting in WSJT-X 3.1 improved really do?


The practical conclusion first

Start with Normal. Try 3-Stage if you have CPU headroom. Use Early if timing margin is more important.

Let me begin with the practical conclusion.

The most important rule in FT8 decoder tuning is not:

“Use the heaviest settings available.”

It is:

“Use the most aggressive settings your system can finish reliably within the 15-second cycle.”

That is a very different mindset.

As a baseline, something like the following is sensible for many stations:

  • Multithreaded FT8 decoder: ON
  • Number of decoding threads: Auto
  • Number of decode passes: 2
  • QSO Rx Frequency Sensitivity: Medium
  • Decoder Sensitivity: Low thresholds
  • Decode Start: Normal
  • Reduce False Decodes: ON
  • Wideband DX Call Search: ON

That is not necessarily the final answer, but it is a sound starting point. From there, you can move in a more aggressive direction if your CPU has headroom. If you begin to see lag, late completion, or instability, you step back.

As for Decode Start, a practical summary:

Normal
The baseline reference point. Start here — it makes comparison with other modes much easier.
3-Stage
Especially interesting if CPU headroom is available and missed decodes matter.
Early
Useful when you want to protect timing margin rather than extract every last bit of signal.
Late
Wait a little longer, gather slightly more signal, then commit later.
2-Stage
A strong compromise between responsiveness and processing load.

That is the practical summary. But the reason this setting matters is more interesting than the summary alone suggests.

Decode Start is not merely a “start earlier or later” preference. It is closely tied to how many stages the decoder uses, where those stages are placed in time, and how decoding effort is distributed across the FT8 cycle.

That is the real heart of the matter.


Why WSJT-X 3.1 improved deserves serious attention now

To understand why this matters, it helps to step back and look at the bigger picture.

Over the last few years, WSJT-X improved has become much more than a side branch with extra options. It has evolved into a practical platform for real-world decoder ideas, UI flexibility, and operating-oriented refinements.

Once you spend time with current improved builds, one thing becomes obvious:

the old question “How does this compare with standard WSJT-X 2.7?” is no longer the most useful one.

The more relevant questions are now:

  • How good is WSJT-X 3.1 improved as a practical FT8/FT4 operating tool today?
  • How much real-world decoder thinking has been built into it?
  • And how should an operator tune it for their own station, CPU, and operating priorities?

That is why this article is not primarily about comparison tables. It is about how to use the software intelligently.

And once the discussion shifts from “what settings exist?” to “how should they be used?”, it becomes necessary to ask what those settings actually mean inside a 15-second FT8 cycle.


The role of each setting

“More aggressive” does not automatically mean “better”

Before focusing on Decode Start, it is worth clarifying how FT8 decoder settings should be understood in general.

A very common misunderstanding is this:

heavier settings must mean better performance.

That sounds plausible, but FT8 does not work that way.

Because FT8 runs in 15-second cycles, what really matters is a balance among four things:

  1. How much signal you wait for before decoding
  2. How deeply you search for candidate messages
  3. Whether all of that finishes in time
  4. How many false decodes you are willing to tolerate

Seen that way, FT8 optimization is not about “maximum power.” It is about how to allocate limited processing time.

That perspective changes how each setting should be viewed.

Number of threads

For most operators, Auto is still the best starting point. Simply forcing more threads does not guarantee a better result. OS scheduling, background load, core topology, and overall system behavior all matter.

Number of decode passes

More passes can increase the chance of recovering weaker or more ambiguous signals. But they also cost CPU time. If your system is strong, 3 passes may be worthwhile. For many stations, 2 is the most sensible default. On limited hardware, 1 may be the safer choice.

Decoder Sensitivity

A practical reading is straightforward:

  • Minimum is lighter
  • Low thresholds is balanced
  • Subpass is more aggressive, but heavier

Subpass is not free performance. It is extra work in exchange for the possibility of recovering weaker or more difficult signals.

A note on Fast / Normal / Deep versus the Multithreaded FT8 Decoder

This is one part that is particularly easy to misunderstand. Because Fast / Normal / Deep appears on the same screen as the Multithreaded FT8 decoder, it is natural to assume that they are simply different versions of the same control. But the source suggests that they are not the same kind of setting.

First, Fast / Normal / Deep is the traditional decode-depth setting: in other words, a way of telling the decoder how deeply to search. In the conventional single-threaded FT8 decode path, this depth setting is directly tied to how heavy the decode becomes and how aggressively the search is pursued.

The Multithreaded FT8 decoder, by contrast, is a different axis altogether. It is fundamentally a question of which decoder engine family is being used. In the code, decode depth and multithreaded FT8 are handled as separate parameters, not as two names for the same thing.

The important nuance comes next. When FT8 is actually running with the multithreaded decoder enabled, the program enters a dedicated MTD path rather than the older conventional FT8 decode route. In that path, the settings that really control decoder behavior are the MTD-specific ones — rather than the old Fast / Normal / Deep levels:

  • Decoder Sensitivity
  • Decode Start
  • Number of decode passes
  • QSO Rx Frequency Sensitivity

So the most accurate way to think about it is this: Fast / Normal / Deep still exists, and it still belongs to the program's broader decode-depth logic, but in FT8 MTD operation it is not the primary knob that defines how aggressively the decoder behaves. In practical tuning, the controls that really matter are the MTD-oriented ones — especially Decoder Sensitivity, Decode Start, Number of decode passes, and QSO Rx Frequency Sensitivity.

In other words, when seriously tuning WSJT-X 3.1 improved for FT8 with the multithreaded decoder enabled, think in terms of the MTD-specific behavior controls rather than assuming that Fast / Normal / Deep remains the main determinant of decoder character. The old depth setting is still part of the design; it is simply no longer the most important practical lever once you are inside the FT8 MTD path.

QSO Rx Frequency Sensitivity

This is not merely a speed setting. It affects how aggressively the decoder examines activity around the QSO-related frequency region and nearby candidates.

  • Low is conservative
  • Medium is balanced
  • High is aggressive

And, as expected, greater aggressiveness can also bring more questionable candidates and more decoding noise.

Reduce False Decodes

This deserves more attention than it sometimes gets. Finding more is not automatically better if the additional output is increasingly unreliable. On crowded bands and under ambiguous conditions, false-decode control matters a great deal.

Taken together, these settings are not competing in isolation to see which one is “strongest.” Each of them expresses a different answer to the same underlying question:

inside a very limited 15-second window, what should the decoder prioritize?

And among all of them, Decode Start is one of the clearest and most revealing examples.


The main question

What does “Decode Start” actually do?

This is where the subject becomes genuinely interesting.

In the user interface, Decode Start offers five choices:

  • 2-Stage
  • 3-Stage
  • Early
  • Normal
  • Late

At first glance, this looks like a simple timing control:

start decoding earlier, or start decoding later.

But that is not the full story.

Looking at the actual WSJT-X 3.1 improved implementation, these modes are handled internally as:

  • 0 = 2-Stage
  • 1 = 3-Stage
  • 2 = Early
  • 3 = Normal
  • 4 = Late

And during FT8 operation, the program uses values such as m_hsymStop, m_earlyDecode, and m_earlyDecode2 to determine when decoding stages are triggered.
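The five-way mapping can be written out explicitly. This is an illustrative sketch only, not actual WSJT-X source code; the index values follow the internal mapping described above, and the helper reflects the role of the two staged modes discussed later in this article:

```python
# Illustrative sketch only, not actual WSJT-X source code. The index values
# follow the internal mapping the article describes for Decode Start.
DECODE_START = {
    "2-Stage": 0,
    "3-Stage": 1,
    "Early": 2,
    "Normal": 3,
    "Late": 4,
}

def uses_std_prepasses(mode: str) -> bool:
    """True for the staged modes, which run lightweight STD passes first."""
    return DECODE_START[mode] in (0, 1)
```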

This immediately tells us:

Decode Start is not simply a “when” setting. It is also a “how” setting.

To make that easier to understand, let's start with the simpler three modes — Early, Normal, and Late — and then move on to the more revealing 2-Stage and 3-Stage modes.


Early / Normal / Late

The simpler three modes first

Let's start with the more straightforward options.

When the multithreaded FT8 decoder is enabled, the internal stop points are approximately:

  • Early → 48
  • Normal → 49
  • Late → 50

In rough timing terms, that corresponds to approximately:

  • Early → about 13.8 seconds
  • Normal → about 14.1 seconds
  • Late → about 14.4 seconds
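Those approximate times follow directly from the stop points if one assumes a fixed analysis step. A minimal sketch, assuming a step of 3456 samples at 12000 Hz (0.288 s per step); that step size is an inference consistent with the rounded times quoted here, not a value the article states:

```python
# Convert nzhsym stop points to approximate times within the 15-second cycle.
# Assumption: one step = 3456 samples at 12000 Hz = 0.288 s (inferred from
# the rounded times quoted in the text, not stated there explicitly).
STEP_SECONDS = 3456 / 12000  # 0.288 s per step (assumed)

def hsym_to_seconds(nzhsym: int) -> float:
    """Approximate elapsed time within the 15-second FT8 cycle."""
    return nzhsym * STEP_SECONDS

for nz in (48, 49, 50):
    print(f"nzhsym={nz} -> ~{hsym_to_seconds(nz):.1f} s")
# prints ~13.8 s, ~14.1 s, ~14.4 s respectively
```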

The basic meaning is intuitive:

  • Early gives up a little signal collection in exchange for more CPU margin
  • Late waits longer to gather slightly more information before decoding
  • Normal sits in the middle

If Decode Start consisted only of these three modes, it would still be a useful setting — but not a particularly unusual one.

What makes the feature much more interesting is what comes next:

2-Stage and 3-Stage.

Fig. 1  —  Which decode pass fires at which point (15-second cycle)

  • 2-Stage (nd=0): STD pre-pass ① (nz=41, 11.8 s) → STD pre-pass ② (nz=46, 13.2 s) → final MTD (nz=49, 14.1 s)
  • 3-Stage (nd=1): STD pre-pass ① (nz=41, 11.8 s) → STD pre-pass ② (nz=46, 13.2 s) → final MTD (nz=50, 14.4 s)
  • Early (nd=2): MTD only (nz=48, 13.8 s)
  • Normal (nd=3): MTD only (nz=49, 14.1 s)
  • Late (nd=4): MTD only (nz=50, 14.4 s)

Legend: STD pre-pass = single-threaded, lightweight early decode · MTD = multithreaded, full final decode · nz = nzhsym boundary

Additional technical note

What 2-Stage and 3-Stage really are

Not merely “earlier” or “later,” but a staged decoding strategy that combines STD and MTD

The WSJT-X improved change log describes the 2-stage and 3-stage modes as ones that “intelligently combine both decoders” — resulting in what it calls the best FT8 decoding performance to date.

In other words, the software explicitly presents them as a way of combining the traditional STD decoder (single-threaded) and the newer MTD decoder (multithreaded).

What the change log does not explain is the exact execution order or the timing relationship between the two. That only becomes clear when you look at the implementation.

And once you do, it becomes obvious that 2-Stage and 3-Stage are not merely different start times. They are multi-stage decoding strategies, and the different stages do not all play the same role.

The division of labor between STD and MTD

Here, STD refers to the traditional single-threaded FT8 decoder. MTD refers to the multithreaded FT8 decoder introduced in the improved line.

Looking at the implementation, the staged modes work roughly like this:

  • earlier stages use STD
  • the final stage uses MTD

That is the key design idea.

In other words, this is best understood as a deliberate separation between:

  • an earlier, lighter, faster look
  • and a later, more complete decode pass

That is a very important distinction.

A simpler design might have waited until the end of the receive interval and then performed one heavy decode pass. WSJT-X improved does not do that. Instead, it performs intermediate candidate searches before the final stage, then completes a more serious decode later.

This makes it clear that the design is paying close attention to one central problem:

how to allocate CPU time intelligently inside a very short FT8 cycle.

What 2-Stage really means

This is one of the places where the name can be slightly misleading. In the implementation, 2-Stage is not literally just a simple "two-stage" structure.

If you follow the source directly, 2-Stage proceeds like this:

  1. An early STD pre-pass at nzhsym=41
  2. A second STD pre-pass at nzhsym=46
  3. A final MTD pass at nzhsym=49

In rough timing terms, that corresponds to approximately:

  • Stage 1: about 11.8 seconds
  • Stage 2: about 13.2 seconds
  • Final stage: about 14.1 seconds

So, in implementation terms, 2-Stage is a STD → STD → MTD staged mode — not a simple two-step structure. What matters here is that 2-Stage is not simply "Normal, but slightly earlier." Instead, it inserts two lightweight STD pre-passes during the cycle, then fires the final MTD pass at nzhsym=49, which is Normal-equivalent timing.

Another way to say it is this:

2-Stage is not a "wait until the end and bet everything on one decode" mode. It scouts twice during the cycle with STD, then commits to a final MTD pass at Normal-equivalent timing. The practical difference from 3-Stage is simply that 3-Stage holds the final MTD pass one step longer — at nzhsym=50 instead of 49. Both modes share the same two STD pre-passes.

That design has a very clear practical meaning.

FT8 is a 15-second mode, and the amount of processing time available inside each cycle is limited. That means it can be advantageous to examine candidate signals before the final moment, especially when certain signals have already revealed enough structure to become useful candidates earlier in the cycle.

For that reason, 2-Stage is best understood as a very practical operating mode for situations where you want:

  • better responsiveness
  • limited extra load
  • and a lower chance of missing recoverable signals

What 3-Stage really means

3-Stage uses the same overall structure, but delays the final MTD pass by one additional step.

In implementation terms, it works like this:

  1. An early STD pre-pass at nzhsym=41
  2. A second STD pre-pass at nzhsym=46
  3. A final MTD pass at nzhsym=50

The approximate timing is:

  • Stage 1: about 11.8 seconds
  • Stage 2: about 13.2 seconds
  • Final stage: about 14.4 seconds

So compared with 2-Stage, 3-Stage does not add another STD scouting pass. Both modes already share the same two STD pre-passes. The real difference is simply that 3-Stage holds the final MTD pass one step later.

The intent remains quite clear:

  • look early for what can already be found
  • look again when more signal has accumulated
  • then perform the full final decode with MTD at the end

In that sense, 3-Stage is still the most ambitious staged mode in the set. It shares the same two early STD scouting passes as 2-Stage, but waits longer before committing to the final MTD decode.

Naturally, that also means more CPU load. But on a system with enough headroom, that additional ambition can be a real advantage.
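Put side by side, the two staged schedules differ only in their final stop point. A sketch of the schedules as described above; the 0.288 s per step conversion is an assumed constant, not a value the article states:

```python
# Staged-mode schedules as described in the text: two lightweight STD
# pre-passes, then one final MTD pass. Times assume 0.288 s per nzhsym step.
SCHEDULES = {
    "2-Stage": [("STD", 41), ("STD", 46), ("MTD", 49)],
    "3-Stage": [("STD", 41), ("STD", 46), ("MTD", 50)],
}

def schedule_with_times(mode: str):
    """Return (decoder, nzhsym, approx_seconds) for each stage of a mode."""
    return [(dec, nz, round(nz * 0.288, 1)) for dec, nz in SCHEDULES[mode]]
```

Both modes share the first two entries; only the final MTD stop point moves, from 49 (Normal-equivalent) to 50 (Late-equivalent).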

It should be understood as “STD → STD → MTD,” not “MTD → STD”

This is one of the easiest points to misunderstand.

Since the change log says that 2-Stage and 3-Stage combine STD and MTD, some readers may imagine a model like this:

  • MTD runs first
  • then STD fills in what MTD missed

But that is not what the implementation suggests.

The more accurate interpretation is the opposite:

  • STD performs the earlier, lighter scouting passes
  • MTD performs the final, more serious decode

That distinction matters.

If one imagines “MTD first, STD later,” the relationship sounds like a fallback or repair step. What the code suggests instead is a much more elegant model:

  • lighter-weight work earlier in time
  • heavier-weight work later in time

That is not just a matter of decoder order. It reflects a very deliberate and very reasonable time-allocation strategy.

Why use STD in the earlier stages at all?

This, too, is revealing.

If the goal were simply “run the decoder more times,” then one might expect the software to run MTD repeatedly at every stage. But that is not what it does.

A likely reason is straightforward:

the earlier stages are meant to be light and fast.

At those earlier points, reception is not yet complete. The amount of information available is still lower than it will be at the final stage. Running the heaviest possible decode logic at full strength every time would not necessarily be the most efficient use of limited CPU time.

So instead, the software uses STD to look quickly and economically at earlier points, then saves MTD for the final and more consequential pass.

That is not merely an implementation detail. It reflects a decoding philosophy that is extremely well suited to FT8 as a short, burst-like, timing-sensitive workload.

The code also suggests that earlier stages use somewhat constrained processing compared with the final stage, which further supports the idea that these modes are not just repeated passes, but a staged progression in which each pass has a different role.

The essence of 2-Stage and 3-Stage is time allocation

At bottom, 2-Stage and 3-Stage are really about one thing:

how to allocate computational effort inside a 15-second cycle.

  • Should the decoder take an earlier look?
  • Should it take an additional look in the middle?
  • Should it wait until the end for the most complete final attempt?

That is what the staged modes are deciding.

So 2-Stage and 3-Stage should not be thought of as simple timing adjustments. They are better understood as different ways of unfolding the FT8 decoder across time.

Fig. 2  —  Internal structure of 2-Stage / 3-Stage

Light · Fast · Early scouting: STD runs early — a lightweight candidate scan before reception is complete.
Heavy · Thorough · Final processing: MTD fires last — full multi-core decode with maximum signal available.

  • 2-Stage (balanced — responsiveness with missed-decode reduction):
    STD pre-pass ① (11.8 s) → STD pre-pass ② (13.2 s) → MTD final decode (14.1 s).
    STD pre-pass × 2 → MTD final decode. Final MTD at nzhsym=49 (Normal-equivalent). Practical balance across most CPU setups.
  • 3-Stage (most aggressive — maximum missed-decode reduction):
    STD pre-pass ① (11.8 s) → STD pre-pass ② (13.2 s) → MTD final decode (14.4 s).
    STD pre-pass × 2 → MTD final decode. Final MTD at nzhsym=50 (Late-equivalent). Best missed-decode reduction when CPU headroom allows.

How should they be interpreted in actual operation?

Once translated into operating terms, the distinction becomes quite practical.

2-Stage

  • preserves responsiveness fairly well
  • adds an earlier look
  • keeps the final stage around Normal timing
  • improves candidate recovery without pushing CPU load to extremes

In other words, it is a balanced staged strategy.

3-Stage

  • looks early
  • looks again in the middle
  • still keeps a final MTD pass at roughly Late timing
  • pushes harder for missed-decode reduction
  • but increases CPU load accordingly

So 3-Stage is the most ambitious and most aggressive staged mode available.

If the CPU has sufficient headroom, it can be extremely attractive. If the CPU is already close to its timing limits, then the theoretical advantage matters less than simply finishing cleanly.

At that point, the meaning of these modes becomes much clearer:

they are not cosmetic options, but different philosophies for how to spend the 15 seconds of an FT8 cycle.

One more important point

The staged modes are not just “multiple decodes”; they assign different roles to different stages

Seen from a slightly wider perspective, what makes 2-Stage and 3-Stage so interesting is not merely that they trigger decoding more than once.

The deeper point is that each stage plays a different role.

  • earlier stages are lighter and faster
  • the final stage is more complete
  • CPU effort is distributed across the cycle instead of being spent all at once

This fits FT8 remarkably well.

One could, in principle, wait until the end and make one final decision. But there is real value in looking for recoverable candidates earlier, then revisiting the problem later with more information and more serious processing.

That is why understanding the staged modes is not just about understanding one setting. It is really about understanding what FT8 optimization itself means.


Practical interpretation

How should each mode actually be used?

After all the technical detail, it is worth bringing the discussion back to operating language.

2-Stage

Highly practical. It preserves responsiveness while still taking an earlier look. The final stage is not as late as Late, so it avoids becoming unnecessarily demanding.

3-Stage

The most ambitious mode — in a good sense.

It looks early, looks again later, and still holds on for a strong final pass. If CPU headroom is available and reducing missed decodes is a priority, this is one of the most interesting settings in the entire panel.

Early

This should not be dismissed as merely a weak-CPU fallback. It is also a very rational stability strategy. If avoiding spillover into the next cycle matters more than extracting every last bit of signal, Early can be exactly the right choice.

Normal

The reference point. For most users, this is where testing should begin. It makes comparison with the other modes much easier.

Late

Wait longer, decode later, and make the final decision with a bit more information. In theory, that can help. In practice, it is only useful if the system still finishes reliably.

Taken together, the Decode Start modes should not be understood as “personal taste.” They are better thought of as different timing strategies.

Fig. 3  —  Mode selection guide

  Mode      Internal index     Decode pass structure       CPU load   Best suited for
  2-Stage   ndecoderstart=0    STD(41)→STD(46)→MTD(49)     Med        Balance between responsiveness and missed-decode reduction
  3-Stage   ndecoderstart=1    STD(41)→STD(46)→MTD(50)     High       Ample CPU headroom; maximize missed-decode reduction
  Early     ndecoderstart=2    MTD only (nzhsym=48)        Low–Med    Stability first; prevent spillover into next cycle
  Normal    ndecoderstart=3    MTD only (nzhsym=49)        Med        Start here; the reference point for all comparisons
  Late      ndecoderstart=4    MTD only (nzhsym=50)        Med–High   Ample CPU headroom; decode with maximum signal collected
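The selection guide above can be condensed into a small decision helper. This is only a heuristic restatement of the table for illustration; the function name and category labels are invented here, not part of the program:

```python
def suggest_decode_start(cpu_headroom: str, priority: str) -> str:
    """Heuristic restatement of the mode selection guide (illustrative only).

    cpu_headroom: "low", "medium", or "high"
    priority:     "stability", "balance", or "missed-decodes"
    """
    if cpu_headroom == "low" or priority == "stability":
        return "Early"    # finish reliably; avoid spillover into the next cycle
    if priority == "missed-decodes":
        # the staged modes push hardest against missed decodes
        return "3-Stage" if cpu_headroom == "high" else "2-Stage"
    return "Normal"       # the reference point; start comparisons here
```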

So what are we really optimizing?

Not average performance, but the use of CPU time inside the 15-second cycle

At this point, the larger principle should be clear.

FT8 optimization is not simply about making things “more powerful.”

It is about deciding:

  • how long to wait
  • how often to look
  • where to make lighter passes
  • where to make the heavier final pass
  • and whether the whole process remains stable from cycle to cycle

In other words, this is really about how CPU performance is used, not just how much of it exists.

From that perspective, Decode Start is one of the most important settings in the entire decoder panel. What looks like a small UI option is, in reality, a direct expression of time strategy.

And for that reason, understanding Decode Start is more than a matter of setting explanation. It is a way of understanding the underlying philosophy of computational allocation in FT8.


A more personal note

Why I am writing this at all

Up to this point, I have tried to keep the discussion general and technical. But for context, I should make my own position clear.

I am a JTDX enthusiast. I genuinely love the JTDX user interface. I am also one of the beta testers officially authorized by the JTDX development team, and I am responsible for the Japanese localization of JTDX.

So I am not writing this as an outsider taking casual shots at JTDX from a distance.

Quite the opposite.

I know JTDX well. I value it highly. And precisely for that reason, I can say this clearly: for the average amateur radio operator, WSJT-X 3.1 improved is now an entirely rational recommendation.

This is not because JTDX lacks value. It is because software should also be judged by what is practically available, mature enough to test, and useful right now.

If someone stays with JTDX because they truly prefer its interface, that is completely understandable. But if the question is present-day decoder capability and practical operating value, WSJT-X 3.1 improved deserves serious attention.


My own operating philosophy

Up to this point, I have focused on general principles. But it may be useful to explain where my own thinking leads in practice.

My main operating PC uses a Core i9-9900K. That detail matters — but not simply because it is a relatively strong CPU.

It also matters because of how the machine is configured.

In Windows power settings, I set the minimum processor state to 100%. In other words, I do not wait for the CPU to raise clocks after the decoding workload appears. I prefer to have it already running at high clock speed, ready and waiting. In my case, it effectively sits at 4.7 GHz.
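For reference, the same minimum-processor-state change can be made from an elevated command prompt using powercfg's documented setting aliases. A sketch for the AC profile (use `/setdcvalueindex` for battery operation, and verify against your own power plan before applying):

```shell
:: Set the minimum processor state to 100% on the active power plan (AC),
:: then re-apply the plan so the change takes effect immediately.
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 100
powercfg /setactive SCHEME_CURRENT
```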

The reason is simple.

FT8 decoding is not like long-duration video rendering, where a steady load runs for extended periods. It is much closer to a short burst of concentrated work, repeated at predictable intervals.

In that kind of workload, average benchmark performance is not the whole story. Initial response matters.

If the CPU is sitting in a lower-power state, then the system has to:

  • detect the load
  • change performance state
  • raise clocks
  • adjust voltage
  • and let the scheduler react

Those delays may be insignificant in long-running workloads. But in short, timing-sensitive bursts, they can matter more than people expect.

That is why I believe this:

for FT8, it is often better to have the CPU already at attention than to wait for it to wake up after the work has arrived.

Of course, that choice comes with trade-offs.

  • higher power consumption
  • more heat
  • lower efficiency
  • less elegance from an energy-saving standpoint

But on a station PC, I consider decode responsiveness and stability more important than electrical tidiness.


Why my own settings are deliberately aggressive

Because 50 MHz matters to me

My settings reflect that philosophy rather clearly.

  • Decoder Sensitivity: Subpass
  • QSO Rx Frequency Sensitivity: High
  • CPU already waiting at high clock speed

This is not a conservative setup, and I do not pretend that it is.

But there is a reason for it:

50 MHz matters a great deal to me.

On 6 meters:

  • conditions can change quickly
  • activity can rise suddenly
  • weak and strong signals often coexist
  • and a brief opening can make missed opportunities especially frustrating

Because of that, I am willing to spend CPU resources in order to reduce missed decodes.

That is why I use:

  • Subpass, to search more deeply
  • High sensitivity, to be more aggressive in candidate recovery
  • and a power configuration that keeps the CPU ready before the burst workload arrives

This is not simply “turn everything up because more must be better.”

It is a deliberate operating strategy built around:

  • emphasis on 50 MHz
  • sufficient CPU headroom
  • importance of response speed
  • and a willingness to trade efficiency for opportunity

Final thoughts

The real question is not “Which program is right?” but “Which tuning philosophy fits your station?”

If I had to reduce this entire discussion to a single sentence, it would be this:

FT8 decoder optimization is not about average CPU performance. It is about how effectively CPU performance is available at the moments when it matters most.

Seen from that perspective, WSJT-X 3.1 improved is a very interesting piece of software. Not because it merely offers more options, but because it gives the operator meaningful control over decoder timing strategy.

And Decode Start is one of the clearest examples of that.

2-Stage, 3-Stage, Early, Normal, and Late are not just cosmetic labels. They represent different ways of distributing decoding effort across the FT8 cycle.

And if one looks more closely at the staged modes, one can see an especially elegant idea underneath them:

STD and MTD are not simply both “used.” They are given different roles, and CPU time is distributed across the cycle accordingly. That is a sophisticated design choice — and understanding it changes how you approach optimization entirely.

And finally, let me say this as someone who genuinely values JTDX:

if I were recommending software to the average amateur radio operator today, WSJT-X 3.1 improved would be very near the top of the list.

If your attachment to JTDX is rooted in its interface, that is one thing. But if your concern is present-day decoder capability in real operating conditions, there is little reason to hesitate.

Install it. Try it. And evaluate it on the air.

That will answer the question more honestly than any abstract argument ever could.

Author

Yoshiharu Tsukuura (JP1LRT)
Amateur radio operator, JTDX enthusiast, beta tester authorized by the JTDX development team, and Japanese localization contributor.

Website / blog: https://www.qrz.com/db/JP1LRT

73,
Yoshiharu Tsukuura / JP1LRT