[PyTorch] Guard/document single parameter feature for grouped linear #2955

Merged
ksivaman merged 6 commits into NVIDIA:main from ksivaman:single_param_better_doc
May 4, 2026

Conversation

@ksivaman (Member) commented May 1, 2026

Description

Guard/document single parameter feature for grouped linear.

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

  • Guard/document single parameter feature for grouped linear

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
@ksivaman ksivaman requested review from ptrendx and timmoon10 May 1, 2026 22:28
@ksivaman ksivaman added the 2.15.0 label May 1, 2026
@ksivaman (Member Author) commented May 1, 2026

/te-ci pytorch L0

@greptile-apps bot (Contributor) commented May 1, 2026

Greptile Summary

This PR guards the experimental single_grouped_weight / single_grouped_bias feature in GroupedLinear behind the NVTE_GROUPED_LINEAR_SINGLE_PARAM environment variable, emitting a UserWarning when the flags are requested without the env var or when the experimental path is enabled. CI scripts are updated to run three test suites with the feature enabled so the code path is exercised in automation. The implementation is straightforward and correct; the only nit is a slightly misleading warning message (see inline comment).
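
For concreteness, a hedged usage sketch of that behavior: the flag and env-var names come from this PR, while the positional constructor arguments (num_gemms, in_features, out_features) and a CUDA-capable Transformer Engine installation are assumptions.

import os
import warnings

import transformer_engine.pytorch as te

# Without the env var, requesting the flag emits a UserWarning and the
# module falls back to per-GEMM weights/biases.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    layer = te.GroupedLinear(4, 256, 256, single_grouped_weight=True)
assert any(issubclass(w.category, UserWarning) for w in caught)

# With the env var enabled before construction, the experimental
# single-parameter path is taken and a different UserWarning notes
# that the feature is experimental.
os.environ["NVTE_GROUPED_LINEAR_SINGLE_PARAM"] = "1"
layer = te.GroupedLinear(4, 256, 256, single_grouped_weight=True)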

Confidence Score: 5/5

Safe to merge; the gating logic is correct, warnings are informative, and CI coverage of the new path is added.

Only P2 findings (minor warning-message wording). No logic bugs, no data-corruption risk, no security concerns. The function correctly short-circuits when neither flag is set, resolves at construction time, and the stacklevel is accurate for both direct-instantiation call sites.

No files require special attention.

Important Files Changed

  • transformer_engine/pytorch/utils.py: Adds the resolve_grouped_linear_single_param_flags gating logic. Nit: the warning message says "is not set" even when the env var is explicitly set to "0", and the experimental-enabled warning fires on every construction that requests the flags.
  • transformer_engine/pytorch/module/grouped_linear.py: Calls resolve_grouped_linear_single_param_flags early in __init__, before the instance attributes are assigned; docs updated consistently.
  • transformer_engine/pytorch/ops/basic/grouped_linear.py: Same gating call added to the ops-level GroupedLinear.__init__; docs updated; single_grouped_weight/single_grouped_bias are resolved before the weight/bias registration loops.
  • qa/L0_pytorch_unittest/test.sh: Adds NVTE_GROUPED_LINEAR_SINGLE_PARAM=1 to three test invocations to exercise the single-param code path in CI.
  • qa/L0_pytorch_debug_unittest/test.sh: Adds NVTE_GROUPED_LINEAR_SINGLE_PARAM=1 to the debug sanity test run.

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A["GroupedLinear.__init__(single_grouped_weight, single_grouped_bias)"]
    A --> B["resolve_grouped_linear_single_param_flags()"]
    B --> C{Either flag True?}
    C -- No --> D["Return flags unchanged (no warning)"]
    C -- Yes --> E{NVTE_GROUPED_LINEAR_SINGLE_PARAM > 0?}
    E -- No --> F["Warn: env var not enabled\nForce both flags to False"]
    E -- Yes --> G["Warn: experimental feature active\nReturn flags unchanged"]
    F --> H["self.single_grouped_weight = False\nself.single_grouped_bias = False"]
    G --> I["self.single_grouped_weight = requested\nself.single_grouped_bias = requested"]
    D --> J["self.single_grouped_weight = False\nself.single_grouped_bias = False"]
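For readers who prefer code to the diagram, a minimal sketch of the helper implementing the same logic. The structure follows the flowchart and the quoted diff lines below; the warning texts and stacklevel are illustrative, and the real implementation in transformer_engine/pytorch/utils.py may word things differently.

import os
import warnings
from typing import Tuple


def resolve_grouped_linear_single_param_flags(
    single_grouped_weight: bool,
    single_grouped_bias: bool,
) -> Tuple[bool, bool]:
    # Fast path: neither flag requested, so nothing to resolve or warn about.
    if not (single_grouped_weight or single_grouped_bias):
        return single_grouped_weight, single_grouped_bias

    env_enabled = int(os.environ.get("NVTE_GROUPED_LINEAR_SINGLE_PARAM", "0")) > 0
    if not env_enabled:
        # Flags requested without the env var: warn and force both off.
        warnings.warn(
            "single_grouped_weight/single_grouped_bias require "
            "NVTE_GROUPED_LINEAR_SINGLE_PARAM to be enabled; falling back "
            "to per-GEMM parameters.",
            UserWarning,
            stacklevel=3,  # assumed: points at the GroupedLinear call site
        )
        return False, False

    # Env var set and at least one flag requested: experimental path active.
    warnings.warn(
        "The single grouped parameter feature is experimental and may change.",
        UserWarning,
        stacklevel=3,
    )
    return single_grouped_weight, single_grouped_bias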

Reviews (4): Last reviewed commit: "Merge branch 'main' into single_param_be..."

if not (single_grouped_weight or single_grouped_bias):
    return single_grouped_weight, single_grouped_bias

env_enabled = int(os.environ.get("NVTE_GROUPED_LINEAR_SINGLE_PARAM", "0")) > 0

P2 Non-integer env var value will raise ValueError

int(os.environ.get("NVTE_GROUPED_LINEAR_SINGLE_PARAM", "0")) will throw an uncaught ValueError if the variable is set to a non-numeric string (e.g. "true", "yes"). Wrapping in a try/except would give a cleaner error message. This is consistent with other similar env-var checks in the file, so this is a pre-existing pattern rather than a new regression — flagging for awareness.

Suggested change
-env_enabled = int(os.environ.get("NVTE_GROUPED_LINEAR_SINGLE_PARAM", "0")) > 0
+try:
+    env_enabled = int(os.environ.get("NVTE_GROUPED_LINEAR_SINGLE_PARAM", "0")) > 0
+except ValueError:
+    env_enabled = False
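
To make the failure mode concrete: int() accepts only integer strings, so a value like "true" raises at module construction. An alternative to the try/except above is a small parser that tolerates common truthy spellings; the helper name and accepted values here are illustrative, not part of this PR.

import os


def _env_flag(name: str, default: str = "0") -> bool:
    # Hypothetical helper, not part of this PR: tolerate common truthy
    # spellings instead of raising ValueError on e.g. "true" or "yes".
    return os.environ.get(name, default).strip().lower() in ("1", "true", "yes", "on")


os.environ["NVTE_GROUPED_LINEAR_SINGLE_PARAM"] = "true"
# int("true") raises ValueError, which is the failure mode flagged above;
# the helper returns a plain bool instead.
assert _env_flag("NVTE_GROUPED_LINEAR_SINGLE_PARAM") is True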

timmoon10 previously approved these changes May 1, 2026
@timmoon10 (Collaborator) left a comment


This logic breaks backward compatibility since we can no longer rely on the module kwargs to configure single grouped params. I guess we've already been treating single grouped params as an experimental feature. We should keep this instability in mind whenever we use this feature externally, e.g. in Mcore.

Comment on lines +86 to +87
    single_grouped_weight: bool,
    single_grouped_bias: bool,

While we are breaking backward compatibility, we might consider consolidating these options together. Do we really want to take on the burden of supporting the case with a single grouped weight and discrete bias, or discrete weights and single grouped bias?
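
One hedged reading of this suggestion, with an illustrative kwarg name that is not part of this PR:

class _ConsolidatedGroupedLinearSketch:
    # Hypothetical API sketch: a single kwarg drives both layouts, ruling
    # out the mixed cases questioned above (single grouped weight with
    # discrete biases, or discrete weights with a single grouped bias).
    def __init__(self, num_gemms: int, single_grouped_params: bool = False):
        self.num_gemms = num_gemms
        self.single_grouped_weight = single_grouped_params
        self.single_grouped_bias = single_grouped_params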

if not (single_grouped_weight or single_grouped_bias):
    return single_grouped_weight, single_grouped_bias

env_enabled = int(os.environ.get("NVTE_GROUPED_LINEAR_SINGLE_PARAM", "0")) > 0

If we only respect the kwargs when an envvar is set, it doesn't really make sense to keep the kwargs rather than just checking the envvar. I guess we're half-heartedly maintaining/preparing the stable API for this feature.

ksivaman added 2 commits May 3, 2026 17:49
Signed-off-by: ksivamani <ksivamani@nvidia.com>
@ksivaman (Member Author) commented May 3, 2026

/te-ci pytorch

timmoon10 previously approved these changes May 4, 2026
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
@ksivaman (Member Author) commented May 4, 2026

/te-ci pytorch

@ksivaman ksivaman merged commit 528f16c into NVIDIA:main May 4, 2026
22 of 24 checks passed
@ksivaman ksivaman deleted the single_param_better_doc branch May 4, 2026 22:06
ksivaman added a commit that referenced this pull request May 4, 2026
…2955)

* Better documentation for single param and envvar guard

Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>

* fix doc

Signed-off-by: ksivamani <ksivamani@nvidia.com>

* Fix test envvar

Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>

---------

Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
Signed-off-by: ksivamani <ksivamani@nvidia.com>