* Re: x86-64: fix ABI incompatibilities in parameter and return value passing
[not found] <s0f258b5.078@emea1-mh.id2.novell.com>
@ 2004-07-12 23:21 ` Richard Henderson
0 siblings, 0 replies; 5+ messages in thread
From: Richard Henderson @ 2004-07-12 23:21 UTC (permalink / raw)
To: Jan Beulich; +Cc: gcc-patches
On Mon, Jul 12, 2004 at 10:11:02AM +0200, Jan Beulich wrote:
> The point is that parameters of all these types get
> passed through %mm registers
In whose abi? The x86-64 abi passes __m64 values in sse registers.
r~
* Re: x86-64: fix ABI incompatibilities in parameter and return value passing
@ 2004-07-13 15:33 Jan Beulich
0 siblings, 0 replies; 5+ messages in thread
From: Jan Beulich @ 2004-07-13 15:33 UTC (permalink / raw)
To: rth; +Cc: gcc-patches
In the icc x86 abi. Since the patterns are shared, they must match both
(and they also can). However, as I just noted, this change really doesn't
belong here, but rather in the cpu type patch that I also submitted
(it's actually duplicated there). So it's fine that you left it out
here. Thanks, Jan
>>> Richard Henderson <rth@redhat.com> 12.07.04 21:31:55 >>>
On Mon, Jul 12, 2004 at 10:11:02AM +0200, Jan Beulich wrote:
> The point is that parameters of all these types get
> passed through %mm registers
In whose abi? The x86-64 abi passes __m64 values in sse registers.
r~
* Re: x86-64: fix ABI incompatibilities in parameter and return value passing
@ 2004-07-12 9:50 Jan Beulich
0 siblings, 0 replies; 5+ messages in thread
From: Jan Beulich @ 2004-07-12 9:50 UTC (permalink / raw)
To: rth; +Cc: gcc-patches
I think it's rather an omission on my part not to have consolidated
VALID_MMX_REG_MODE_3DNOW with VALID_MMX_REG_MODE; I simply didn't
realize there was such a distinction, whereas for SSE vs. SSE2 the
former includes the latter. The point is that parameters of all these
types get passed through %mm registers, and you can't even store them
to memory (in order to then operate on them one element at a time)
without the move patterns. Jan
>>> Richard Henderson <rth@redhat.com> 10.07.04 00:35:14 >>>
On Tue, Jul 06, 2004 at 04:33:40PM +0200, Jan Beulich wrote:
> * config/i386/i386.c (classify_argument): Treat V1xx modes the same as
> their base modes. CTImode, TCmode, and XCmode must be passed in memory.
> TFmode (__float128) must be in an SSE/SSEUP pair. V2SImode, V4HImode,
> and V8QImode are class SSE. All sufficiently small remaining vector modes
> must be passed in one or two integer registers.
> (ix86_libcall_value): TFmode must be returned in xmm0, XCmode must be
> returned in memory.
> (bdesc_2arg, ix86_init_mmx_sse_builtins): __builtin_ia32_pmuludq and
> __builtin_ia32_pmuludq128 have non-uniform argument and return types
> and must thus be handled explicitly.
> * config/i386/i386.md (*movdi_1_rex64): Add cases for moving between
> MMX and XMM regs.
> (movv8qi_internal, movv4hi_internal, movv2si_internal,
> movv2sf_internal): Permit moving between MMX and XMM registers (since
> MMX arguments and return values are passed in XMM registers).
> (sse2_umulsidi3): Correct type and mode.
Applied, mostly.
> (define_insn "movv2sf_internal"
> - [(set (match_operand:V2SF 0 "nonimmediate_operand" "=y,y,m")
> - (match_operand:V2SF 1 "vector_move_operand" "C,ym,y"))]
> - "TARGET_3DNOW
> + [(set (match_operand:V2SF 0 "nonimmediate_operand" "=y,y,m,!y,!*Y,?*x,?m")
> + (match_operand:V2SF 1 "vector_move_operand" "C,ym,y,*Y,y,*xm,*x"))]
> + "TARGET_MMX
I think the change from TARGET_3DNOW to TARGET_MMX is wrong.
V2SF is not in VALID_MMX_REG_MODE, only in VALID_MMX_REG_MODE_3DNOW.
I changed that back before applying the patch.
r~
* Re: x86-64: fix ABI incompatibilities in parameter and return value passing
2004-07-06 14:36 Jan Beulich
@ 2004-07-09 23:25 ` Richard Henderson
0 siblings, 0 replies; 5+ messages in thread
From: Richard Henderson @ 2004-07-09 23:25 UTC (permalink / raw)
To: Jan Beulich; +Cc: gcc-patches
On Tue, Jul 06, 2004 at 04:33:40PM +0200, Jan Beulich wrote:
> * config/i386/i386.c (classify_argument): Treat V1xx modes the same as
> their base modes. CTImode, TCmode, and XCmode must be passed in memory.
> TFmode (__float128) must be in an SSE/SSEUP pair. V2SImode, V4HImode,
> and V8QImode are class SSE. All sufficiently small remaining vector modes
> must be passed in one or two integer registers.
> (ix86_libcall_value): TFmode must be returned in xmm0, XCmode must be
> returned in memory.
> (bdesc_2arg, ix86_init_mmx_sse_builtins): __builtin_ia32_pmuludq and
> __builtin_ia32_pmuludq128 have non-uniform argument and return types
> and must thus be handled explicitly.
> * config/i386/i386.md (*movdi_1_rex64): Add cases for moving between
> MMX and XMM regs.
> (movv8qi_internal, movv4hi_internal, movv2si_internal,
> movv2sf_internal): Permit moving between MMX and XMM registers (since
> MMX arguments and return values are passed in XMM registers (since
> (sse2_umulsidi3): Correct type and mode.
Applied, mostly.
> (define_insn "movv2sf_internal"
> - [(set (match_operand:V2SF 0 "nonimmediate_operand" "=y,y,m")
> - (match_operand:V2SF 1 "vector_move_operand" "C,ym,y"))]
> - "TARGET_3DNOW
> + [(set (match_operand:V2SF 0 "nonimmediate_operand" "=y,y,m,!y,!*Y,?*x,?m")
> + (match_operand:V2SF 1 "vector_move_operand" "C,ym,y,*Y,y,*xm,*x"))]
> + "TARGET_MMX
I think the change from TARGET_3DNOW to TARGET_MMX is wrong.
V2SF is not in VALID_MMX_REG_MODE, only in VALID_MMX_REG_MODE_3DNOW.
I changed that back before applying the patch.
r~
* x86-64: fix ABI incompatibilities in parameter and return value passing
@ 2004-07-06 14:36 Jan Beulich
2004-07-09 23:25 ` Richard Henderson
0 siblings, 1 reply; 5+ messages in thread
From: Jan Beulich @ 2004-07-06 14:36 UTC (permalink / raw)
To: gcc-patches
This addresses various mismatches between the x86-64 ABI and the actual
implementation.
It fixes "gcc.dg/compat/vector-[12] c_compat_[xy]_tst.o compile".
Bootstrapped and tested on x86_64-unknown-linux-gnu.
Jan
2004-07-06 Jan Beulich <jbeulich@novell.com>
	* config/i386/i386.c (classify_argument): Treat V1xx modes the same as
	their base modes. CTImode, TCmode, and XCmode must be passed in memory.
	TFmode (__float128) must be in an SSE/SSEUP pair. V2SImode, V4HImode,
	and V8QImode are class SSE. All sufficiently small remaining vector modes
	must be passed in one or two integer registers.
	(ix86_libcall_value): TFmode must be returned in xmm0, XCmode must be
	returned in memory.
	(bdesc_2arg, ix86_init_mmx_sse_builtins): __builtin_ia32_pmuludq and
	__builtin_ia32_pmuludq128 have non-uniform argument and return types
	and must thus be handled explicitly.
	* config/i386/i386.md (*movdi_1_rex64): Add cases for moving between
	MMX and XMM regs.
	(movv8qi_internal, movv4hi_internal, movv2si_internal,
	movv2sf_internal): Permit moving between MMX and XMM registers (since
	MMX arguments and return values are passed in XMM registers).
	(sse2_umulsidi3): Correct type and mode.
--- /home/jbeulich/src/gcc/mainline/2004-07-05.10.09/gcc/config/i386/i386.c	2004-07-02 15:20:42.000000000 +0200
+++ 2004-07-05.10.09/gcc/config/i386/i386.c	2004-07-06 09:42:35.694498152 +0200
@@ -2261,6 +2261,11 @@
return 0;
}
+ /* for V1xx modes, just use the base mode */
+ if (VECTOR_MODE_P (mode)
+ && GET_MODE_SIZE (GET_MODE_INNER (mode)) == bytes)
+ mode = GET_MODE_INNER (mode);
+
/* Classification of atomic types. */
switch (mode)
{
@@ -2281,9 +2286,7 @@
classes[0] = classes[1] = X86_64_INTEGER_CLASS;
return 2;
case CTImode:
- classes[0] = classes[1] = X86_64_INTEGER_CLASS;
- classes[2] = classes[3] = X86_64_INTEGER_CLASS;
- return 4;
+ return 0;
case SFmode:
if (!(bit_offset % 64))
classes[0] = X86_64_SSESF_CLASS;
@@ -2298,21 +2301,19 @@
classes[1] = X86_64_X87UP_CLASS;
return 2;
case TFmode:
- case TCmode:
- return 0;
- case XCmode:
- classes[0] = X86_64_X87_CLASS;
- classes[1] = X86_64_X87UP_CLASS;
- classes[2] = X86_64_X87_CLASS;
- classes[3] = X86_64_X87UP_CLASS;
- return 4;
- case DCmode:
- classes[0] = X86_64_SSEDF_CLASS;
- classes[1] = X86_64_SSEDF_CLASS;
+ classes[0] = X86_64_SSE_CLASS;
+ classes[1] = X86_64_SSEUP_CLASS;
return 2;
case SCmode:
classes[0] = X86_64_SSE_CLASS;
return 1;
+ case DCmode:
+ classes[0] = X86_64_SSEDF_CLASS;
+ classes[1] = X86_64_SSEDF_CLASS;
+ return 2;
+ case XCmode:
+ case TCmode:
+ return 0;
case V4SFmode:
case V4SImode:
case V16QImode:
@@ -2326,11 +2327,26 @@
case V2SImode:
case V4HImode:
case V8QImode:
- return 0;
+ classes[0] = X86_64_SSE_CLASS;
+ return 1;
case BLKmode:
case VOIDmode:
return 0;
default:
+ if (VECTOR_MODE_P (mode))
+ {
+ if (bytes > 16)
+ return 0;
+ if (GET_MODE_CLASS (GET_MODE_INNER (mode)) == MODE_INT)
+ {
+ if (bit_offset + GET_MODE_BITSIZE (mode) <= 32)
+ classes[0] = X86_64_INTEGERSI_CLASS;
+ else
+ classes[0] = X86_64_INTEGER_CLASS;
+ classes[1] = X86_64_INTEGER_CLASS;
+ return 1 + (bytes > 8);
+ }
+ }
abort ();
}
}
@@ -2960,11 +2976,11 @@
case SCmode:
case DFmode:
case DCmode:
+ case TFmode:
return gen_rtx_REG (mode, FIRST_SSE_REG);
case XFmode:
- case XCmode:
return gen_rtx_REG (mode, FIRST_FLOAT_REG);
- case TFmode:
+ case XCmode:
case TCmode:
return NULL;
default:
@@ -12865,8 +12881,6 @@
{ MASK_SSE2, CODE_FOR_mulv8hi3, "__builtin_ia32_pmullw128", IX86_BUILTIN_PMULLW128, 0, 0 },
{ MASK_SSE2, CODE_FOR_smulv8hi3_highpart, "__builtin_ia32_pmulhw128", IX86_BUILTIN_PMULHW128, 0, 0 },
- { MASK_SSE2, CODE_FOR_sse2_umulsidi3, "__builtin_ia32_pmuludq", IX86_BUILTIN_PMULUDQ, 0, 0 },
- { MASK_SSE2, CODE_FOR_sse2_umulv2siv2di3, "__builtin_ia32_pmuludq128", IX86_BUILTIN_PMULUDQ128, 0, 0 },
{ MASK_SSE2, CODE_FOR_sse2_andv2di3, "__builtin_ia32_pand128", IX86_BUILTIN_PAND128, 0, 0 },
{ MASK_SSE2, CODE_FOR_sse2_nandv2di3, "__builtin_ia32_pandn128", IX86_BUILTIN_PANDN128, 0, 0 },
@@ -12904,6 +12918,9 @@
{ MASK_SSE2, CODE_FOR_umulv8hi3_highpart, "__builtin_ia32_pmulhuw128", IX86_BUILTIN_PMULHUW128, 0, 0 },
{ MASK_SSE2, CODE_FOR_sse2_psadbw, 0, IX86_BUILTIN_PSADBW128, 0, 0 },
+ { MASK_SSE2, CODE_FOR_sse2_umulsidi3, 0, IX86_BUILTIN_PMULUDQ, 0, 0 },
+ { MASK_SSE2, CODE_FOR_sse2_umulv2siv2di3, 0, IX86_BUILTIN_PMULUDQ128, 0, 0 },
+
{ MASK_SSE2, CODE_FOR_ashlv8hi3_ti, 0, IX86_BUILTIN_PSLLW128, 0, 0 },
{ MASK_SSE2, CODE_FOR_ashlv8hi3, 0, IX86_BUILTIN_PSLLWI128, 0, 0 },
{ MASK_SSE2, CODE_FOR_ashlv4si3_ti, 0, IX86_BUILTIN_PSLLD128, 0, 0 },
@@ -13299,9 +13316,15 @@
tree di_ftype_v8qi_v8qi
  = build_function_type_list (long_long_unsigned_type_node,
			      V8QI_type_node, V8QI_type_node, NULL_TREE);
+ tree di_ftype_v2si_v2si
+   = build_function_type_list (long_long_unsigned_type_node,
+			       V2SI_type_node, V2SI_type_node, NULL_TREE);
tree v2di_ftype_v16qi_v16qi
  = build_function_type_list (V2DI_type_node,
			      V16QI_type_node, V16QI_type_node, NULL_TREE);
+ tree v2di_ftype_v4si_v4si
+   = build_function_type_list (V2DI_type_node,
+			       V4SI_type_node, V4SI_type_node, NULL_TREE);
tree int_ftype_v16qi
  = build_function_type_list (integer_type_node, V16QI_type_node, NULL_TREE);
tree v16qi_ftype_pcchar
@@ -13597,6 +13620,9 @@
def_builtin (MASK_SSE, "__builtin_ia32_setzero128", v2di_ftype_void, IX86_BUILTIN_CLRTI);
+ def_builtin (MASK_SSE2, "__builtin_ia32_pmuludq", di_ftype_v2si_v2si, IX86_BUILTIN_PMULUDQ);
+ def_builtin (MASK_SSE2, "__builtin_ia32_pmuludq128", v2di_ftype_v4si_v4si, IX86_BUILTIN_PMULUDQ128);
+
def_builtin (MASK_SSE2, "__builtin_ia32_psllw128", v8hi_ftype_v8hi_v2di, IX86_BUILTIN_PSLLW128);
def_builtin (MASK_SSE2, "__builtin_ia32_pslld128", v4si_ftype_v4si_v2di, IX86_BUILTIN_PSLLD128);
def_builtin (MASK_SSE2, "__builtin_ia32_psllq128", v2di_ftype_v2di_v2di, IX86_BUILTIN_PSLLQ128);
--- /home/jbeulich/src/gcc/mainline/2004-07-05.10.09/gcc/config/i386/i386.md	2004-07-02 15:20:47.000000000 +0200
+++ 2004-07-05.10.09/gcc/config/i386/i386.md	2004-07-06 09:42:35.710495720 +0200
@@ -1963,14 +1963,19 @@
"ix86_split_long_move (operands); DONE;")
(define_insn "*movdi_1_rex64"
- [(set (match_operand:DI 0 "nonimmediate_operand" "=r,r,r,mr,!mr,!*y,!rm,!*y,!*Y,!rm,!*Y")
- (match_operand:DI 1 "general_operand" "Z,rem,i,re,n,*y,*y,rm,*Y,*Y,rm"))]
+ [(set (match_operand:DI 0 "nonimmediate_operand" "=r,r,r,mr,!mr,!*y,!rm,!*y,!*Y,!rm,!*Y,!*Y,!*y")
+ (match_operand:DI 1 "general_operand" "Z,rem,i,re,n,*y,*y,rm,*Y,*Y,rm,*y,*Y"))]
"TARGET_64BIT
&& (TARGET_INTER_UNIT_MOVES || optimize_size)
&& (GET_CODE (operands[0]) != MEM || GET_CODE (operands[1]) != MEM)"
{
switch (get_attr_type (insn))
{
+ case TYPE_SSECVT:
+ if (which_alternative == 11)
+ return "movq2dq\t{%1, %0|%0, %1}";
+ else
+ return "movdq2q\t{%1, %0|%0, %1}";
case TYPE_SSEMOV:
if (get_attr_mode (insn) == MODE_TI)
return "movdqa\t{%1, %0|%0, %1}";
@@ -2001,6 +2006,8 @@
(const_string "mmxmov")
(eq_attr "alternative" "8,9,10")
(const_string "ssemov")
+ (eq_attr "alternative" "11,12")
+ (const_string "ssecvt")
(eq_attr "alternative" "4")
(const_string "multi")
(and (ne (symbol_ref "flag_pic") (const_int 0))
@@ -2008,9 +2015,9 @@
(const_string "lea")
]
(const_string "imov")))
- (set_attr "modrm" "*,0,0,*,*,*,*,*,*,*,*")
- (set_attr "length_immediate" "*,4,8,*,*,*,*,*,*,*,*")
- (set_attr "mode" "SI,DI,DI,DI,SI,DI,DI,DI,TI,DI,DI")])
+ (set_attr "modrm" "*,0,0,*,*,*,*,*,*,*,*,*,*")
+ (set_attr "length_immediate" "*,4,8,*,*,*,*,*,*,*,*,*,*")
+ (set_attr "mode" "SI,DI,DI,DI,SI,DI,DI,DI,TI,DI,DI,DI,DI")])
(define_insn "*movdi_1_rex64_nointerunit"
[(set (match_operand:DI 0 "nonimmediate_operand" "=r,r,r,mr,!mr,!*y,!m,!*y,!*Y,!m,!*Y")
@@ -19701,52 +19708,68 @@
})
(define_insn "movv8qi_internal"
- [(set (match_operand:V8QI 0 "nonimmediate_operand" "=y,y,m")
- (match_operand:V8QI 1 "vector_move_operand" "C,ym,y"))]
+ [(set (match_operand:V8QI 0 "nonimmediate_operand" "=y,y,m,!y,!*Y,?*Y,?m")
+ (match_operand:V8QI 1 "vector_move_operand" "C,ym,y,*Y,y,*Ym,*Y"))]
"TARGET_MMX
&& (GET_CODE (operands[0]) != MEM || GET_CODE (operands[1]) != MEM)"
"@
pxor\t%0, %0
movq\t{%1, %0|%0, %1}
+ movq\t{%1, %0|%0, %1}
+ movdq2q\t{%1, %0|%0, %1}
+ movq2dq\t{%1, %0|%0, %1}
+ movq\t{%1, %0|%0, %1}
movq\t{%1, %0|%0, %1}"
- [(set_attr "type" "mmxmov")
+ [(set_attr "type" "mmxmov,mmxmov,mmxmov,ssecvt,ssecvt,ssemov,ssemov")
(set_attr "mode" "DI")])
(define_insn "movv4hi_internal"
- [(set (match_operand:V4HI 0 "nonimmediate_operand" "=y,y,m")
- (match_operand:V4HI 1 "vector_move_operand" "C,ym,y"))]
+ [(set (match_operand:V4HI 0 "nonimmediate_operand" "=y,y,m,!y,!*Y,?*Y,?m")
+ (match_operand:V4HI 1 "vector_move_operand" "C,ym,y,*Y,y,*Ym,*Y"))]
"TARGET_MMX
&& (GET_CODE (operands[0]) != MEM || GET_CODE (operands[1]) != MEM)"
"@
pxor\t%0, %0
movq\t{%1, %0|%0, %1}
+ movq\t{%1, %0|%0, %1}
+ movdq2q\t{%1, %0|%0, %1}
+ movq2dq\t{%1, %0|%0, %1}
+ movq\t{%1, %0|%0, %1}
movq\t{%1, %0|%0, %1}"
- [(set_attr "type" "mmxmov")
+ [(set_attr "type" "mmxmov,mmxmov,mmxmov,ssecvt,ssecvt,ssemov,ssemov")
(set_attr "mode" "DI")])
-(define_insn "movv2si_internal"
- [(set (match_operand:V2SI 0 "nonimmediate_operand" "=y,y,m")
- (match_operand:V2SI 1 "vector_move_operand" "C,ym,y"))]
+(define_insn "*movv2si_internal"
+ [(set (match_operand:V2SI 0 "nonimmediate_operand" "=y,y,m,!y,!*Y,?*Y,?m")
+ (match_operand:V2SI 1 "vector_move_operand" "C,ym,y,*Y,y,*Ym,*Y"))]
"TARGET_MMX
&& (GET_CODE (operands[0]) != MEM || GET_CODE (operands[1]) != MEM)"
"@
pxor\t%0, %0
movq\t{%1, %0|%0, %1}
+ movq\t{%1, %0|%0, %1}
+ movdq2q\t{%1, %0|%0, %1}
+ movq2dq\t{%1, %0|%0, %1}
+ movq\t{%1, %0|%0, %1}
movq\t{%1, %0|%0, %1}"
- [(set_attr "type" "mmxcvt")
+ [(set_attr "type" "mmxmov,mmxmov,mmxmov,ssecvt,ssecvt,ssemov,ssemov")
(set_attr "mode" "DI")])
(define_insn "movv2sf_internal"
- [(set (match_operand:V2SF 0 "nonimmediate_operand" "=y,y,m")
- (match_operand:V2SF 1 "vector_move_operand" "C,ym,y"))]
- "TARGET_3DNOW
+ [(set (match_operand:V2SF 0 "nonimmediate_operand" "=y,y,m,!y,!*Y,?*x,?m")
+ (match_operand:V2SF 1 "vector_move_operand" "C,ym,y,*Y,y,*xm,*x"))]
+ "TARGET_MMX
&& (GET_CODE (operands[0]) != MEM || GET_CODE (operands[1]) != MEM)"
"@
pxor\t%0, %0
movq\t{%1, %0|%0, %1}
- movq\t{%1, %0|%0, %1}"
- [(set_attr "type" "mmxcvt")
- (set_attr "mode" "DI")])
+ movq\t{%1, %0|%0, %1}
+ movdq2q\t{%1, %0|%0, %1}
+ movq2dq\t{%1, %0|%0, %1}
+ movlps\t{%1, %0|%0, %1}
+ movlps\t{%1, %0|%0, %1}"
+ [(set_attr "type" "mmxmov,mmxmov,mmxmov,ssecvt,ssecvt,ssemov,ssemov")
+ (set_attr "mode" "DI,DI,DI,DI,DI,V2SF,V2SF")])
(define_expand "movti"
[(set (match_operand:TI 0 "nonimmediate_operand" "")
@@ -23065,8 +23088,8 @@
(parallel [(const_int 0)])))))]
"TARGET_SSE2"
"pmuludq\t{%2, %0|%0, %2}"
- [(set_attr "type" "sseimul")
- (set_attr "mode" "TI")])
+ [(set_attr "type" "mmxmul")
+ (set_attr "mode" "DI")])
(define_insn "sse2_umulv2siv2di3"
[(set (match_operand:V2DI 0 "register_operand" "=x")