[TARGETS-PARSER] Added const reference for params with size >= 16 bytes #125083
base: main
Conversation
Thank you for submitting a Pull Request (PR) to the LLVM Project! This PR will be automatically labeled and the relevant teams will be notified.

If you wish to, you can add reviewers by using the "Reviewers" section on this page. If this is not working for you, it is probably because you do not have write permissions for the repository; in that case you can instead tag reviewers by name in a comment by using @ followed by their GitHub username.

If you have received no comments on your PR for a week, you can request a review by "ping"ing the PR by adding a comment "Ping". The common courtesy "ping" rate is once a week. Please remember that you are asking for valuable time from other developers.

If you have further questions, they may be answered by the LLVM GitHub User Guide. You can also ask questions in a comment on this PR, on the LLVM Discord, or on the forums.
@llvm/pr-subscribers-backend-arm @llvm/pr-subscribers-backend-hexagon

Author: Herman Semenoff (GermanAizek)

Changes

Reference: #125074

Patch is 23.10 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/125083.diff

17 Files Affected:
diff --git a/llvm/lib/Target/AArch64/AArch64ExpandPseudoInsts.cpp b/llvm/lib/Target/AArch64/AArch64ExpandPseudoInsts.cpp
index b44c48afe705ba..22424b11afb58f 100644
--- a/llvm/lib/Target/AArch64/AArch64ExpandPseudoInsts.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ExpandPseudoInsts.cpp
@@ -64,8 +64,8 @@ class AArch64ExpandPseudo : public MachineFunctionPass {
MachineBasicBlock::iterator &NextMBBI);
bool expandMultiVecPseudo(MachineBasicBlock &MBB,
MachineBasicBlock::iterator MBBI,
- TargetRegisterClass ContiguousClass,
- TargetRegisterClass StridedClass,
+ const TargetRegisterClass &ContiguousClass,
+ const TargetRegisterClass &StridedClass,
unsigned ContiguousOpc, unsigned StridedOpc);
bool expandFormTuplePseudo(MachineBasicBlock &MBB,
MachineBasicBlock::iterator MBBI,
@@ -1121,7 +1121,8 @@ AArch64ExpandPseudo::expandCondSMToggle(MachineBasicBlock &MBB,
bool AArch64ExpandPseudo::expandMultiVecPseudo(
MachineBasicBlock &MBB, MachineBasicBlock::iterator MBBI,
- TargetRegisterClass ContiguousClass, TargetRegisterClass StridedClass,
+ const TargetRegisterClass &ContiguousClass,
+ const TargetRegisterClass &StridedClass,
unsigned ContiguousOp, unsigned StridedOpc) {
MachineInstr &MI = *MBBI;
Register Tuple = MI.getOperand(0).getReg();
diff --git a/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp b/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp
index a082a1ebe95bf8..89a8c981a330d6 100644
--- a/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp
@@ -4252,7 +4252,7 @@ class TagStoreEdit {
}
// Add an instruction to be replaced. Instructions must be added in the
// ascending order of Offset, and have to be adjacent.
- void addInstruction(TagStoreInstr I) {
+ void addInstruction(const TagStoreInstr &I) {
assert((TagStores.empty() ||
TagStores.back().Offset + TagStores.back().Size == I.Offset) &&
"Non-adjacent tag store instructions.");
diff --git a/llvm/lib/Target/AArch64/AArch64SelectionDAGInfo.cpp b/llvm/lib/Target/AArch64/AArch64SelectionDAGInfo.cpp
index 17adda15d9fc8f..0edb5c436808f9 100644
--- a/llvm/lib/Target/AArch64/AArch64SelectionDAGInfo.cpp
+++ b/llvm/lib/Target/AArch64/AArch64SelectionDAGInfo.cpp
@@ -38,8 +38,8 @@ SDValue AArch64SelectionDAGInfo::EmitMOPS(unsigned Opcode, SelectionDAG &DAG,
SDValue Dst, SDValue SrcOrValue,
SDValue Size, Align Alignment,
bool isVolatile,
- MachinePointerInfo DstPtrInfo,
- MachinePointerInfo SrcPtrInfo) const {
+ const MachinePointerInfo &DstPtrInfo,
+ const MachinePointerInfo &SrcPtrInfo) const {
// Get the constant size of the copy/set.
uint64_t ConstSize = 0;
diff --git a/llvm/lib/Target/AArch64/AArch64SelectionDAGInfo.h b/llvm/lib/Target/AArch64/AArch64SelectionDAGInfo.h
index 7efe49c7206555..fe3fe7705def5d 100644
--- a/llvm/lib/Target/AArch64/AArch64SelectionDAGInfo.h
+++ b/llvm/lib/Target/AArch64/AArch64SelectionDAGInfo.h
@@ -26,8 +26,8 @@ class AArch64SelectionDAGInfo : public SelectionDAGTargetInfo {
SDValue EmitMOPS(unsigned Opcode, SelectionDAG &DAG, const SDLoc &DL,
SDValue Chain, SDValue Dst, SDValue SrcOrValue, SDValue Size,
Align Alignment, bool isVolatile,
- MachinePointerInfo DstPtrInfo,
- MachinePointerInfo SrcPtrInfo) const;
+ const MachinePointerInfo &DstPtrInfo,
+ const MachinePointerInfo &SrcPtrInfo) const;
SDValue EmitTargetCodeForMemcpy(SelectionDAG &DAG, const SDLoc &dl,
SDValue Chain, SDValue Dst, SDValue Src,
diff --git a/llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp b/llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp
index e2389145cf33f2..a45df57dfac447 100644
--- a/llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp
+++ b/llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp
@@ -121,7 +121,7 @@ class TailFoldingOption {
return Bits;
}
- void reportError(std::string Opt) {
+ void reportError(const std::string &Opt) {
errs() << "invalid argument '" << Opt
<< "' to -sve-tail-folding=; the option should be of the form\n"
" (disabled|all|default|simple)[+(reductions|recurrences"
diff --git a/llvm/lib/Target/AArch64/AsmParser/AArch64AsmParser.cpp b/llvm/lib/Target/AArch64/AsmParser/AArch64AsmParser.cpp
index d3eda48f3276e9..bae7c12e02a9e7 100644
--- a/llvm/lib/Target/AArch64/AsmParser/AArch64AsmParser.cpp
+++ b/llvm/lib/Target/AArch64/AsmParser/AArch64AsmParser.cpp
@@ -2455,7 +2455,7 @@ class AArch64Operand : public MCParsedAsmOperand {
}
static std::unique_ptr<AArch64Operand>
- CreateFPImm(APFloat Val, bool IsExact, SMLoc S, MCContext &Ctx) {
+ CreateFPImm(const APFloat &Val, bool IsExact, SMLoc S, MCContext &Ctx) {
auto Op = std::make_unique<AArch64Operand>(k_FPImm, Ctx);
Op->FPImm.Val = Val.bitcastToAPInt().getSExtValue();
Op->FPImm.IsExact = IsExact;
@@ -3837,7 +3837,7 @@ static const struct Extension {
{"sme-tmop", {AArch64::FeatureSME_TMOP}},
};
-static void setRequiredFeatureString(FeatureBitset FBS, std::string &Str) {
+static void setRequiredFeatureString(const FeatureBitset &FBS, std::string &Str) {
if (FBS[AArch64::HasV8_0aOps])
Str += "ARMv8a";
if (FBS[AArch64::HasV8_1aOps])
diff --git a/llvm/lib/Target/AArch64/Utils/AArch64BaseInfo.h b/llvm/lib/Target/AArch64/Utils/AArch64BaseInfo.h
index 9671fa3b3d92fa..49e823615b0032 100644
--- a/llvm/lib/Target/AArch64/Utils/AArch64BaseInfo.h
+++ b/llvm/lib/Target/AArch64/Utils/AArch64BaseInfo.h
@@ -373,7 +373,7 @@ struct SysAlias {
constexpr SysAlias(const char *N, uint16_t E, FeatureBitset F)
: Name(N), Encoding(E), FeaturesRequired(F) {}
- bool haveFeatures(FeatureBitset ActiveFeatures) const {
+ bool haveFeatures(const FeatureBitset &ActiveFeatures) const {
return ActiveFeatures[llvm::AArch64::FeatureAll] ||
(FeaturesRequired & ActiveFeatures) == FeaturesRequired;
}
@@ -634,7 +634,7 @@ struct PHint {
unsigned Encoding;
FeatureBitset FeaturesRequired;
- bool haveFeatures(FeatureBitset ActiveFeatures) const {
+ bool haveFeatures(const FeatureBitset &ActiveFeatures) const {
return ActiveFeatures[llvm::AArch64::FeatureAll] ||
(FeaturesRequired & ActiveFeatures) == FeaturesRequired;
}
@@ -753,7 +753,7 @@ namespace AArch64SysReg {
bool Writeable;
FeatureBitset FeaturesRequired;
- bool haveFeatures(FeatureBitset ActiveFeatures) const {
+ bool haveFeatures(const FeatureBitset &ActiveFeatures) const {
return ActiveFeatures[llvm::AArch64::FeatureAll] ||
(FeaturesRequired & ActiveFeatures) == FeaturesRequired;
}
diff --git a/llvm/lib/Target/AMDGPU/Utils/AMDGPUDelayedMCExpr.cpp b/llvm/lib/Target/AMDGPU/Utils/AMDGPUDelayedMCExpr.cpp
index ceb475d77cb322..789ec588458568 100644
--- a/llvm/lib/Target/AMDGPU/Utils/AMDGPUDelayedMCExpr.cpp
+++ b/llvm/lib/Target/AMDGPU/Utils/AMDGPUDelayedMCExpr.cpp
@@ -12,8 +12,8 @@
using namespace llvm;
-static msgpack::DocNode getNode(msgpack::DocNode DN, msgpack::Type Type,
- MCValue Val) {
+static msgpack::DocNode getNode(const msgpack::DocNode &DN, msgpack::Type Type,
+ const MCValue &Val) {
msgpack::Document *Doc = DN.getDocument();
switch (Type) {
default:
diff --git a/llvm/lib/Target/ARM/ARMInstructionSelector.cpp b/llvm/lib/Target/ARM/ARMInstructionSelector.cpp
index 2d3cb71fbc3fd4..1054ed45a41edf 100644
--- a/llvm/lib/Target/ARM/ARMInstructionSelector.cpp
+++ b/llvm/lib/Target/ARM/ARMInstructionSelector.cpp
@@ -44,13 +44,13 @@ class ARMInstructionSelector : public InstructionSelector {
struct CmpConstants;
struct InsertInfo;
- bool selectCmp(CmpConstants Helper, MachineInstrBuilder &MIB,
+ bool selectCmp(const CmpConstants &Helper, MachineInstrBuilder &MIB,
MachineRegisterInfo &MRI) const;
// Helper for inserting a comparison sequence that sets \p ResReg to either 1
// if \p LHSReg and \p RHSReg are in the relationship defined by \p Cond, or
// \p PrevRes otherwise. In essence, it computes PrevRes OR (LHS Cond RHS).
- bool insertComparison(CmpConstants Helper, InsertInfo I, unsigned ResReg,
+ bool insertComparison(const CmpConstants &Helper, InsertInfo I, unsigned ResReg,
ARMCC::CondCodes Cond, unsigned LHSReg, unsigned RHSReg,
unsigned PrevRes) const;
@@ -525,7 +525,7 @@ bool ARMInstructionSelector::validReg(MachineRegisterInfo &MRI, unsigned Reg,
return true;
}
-bool ARMInstructionSelector::selectCmp(CmpConstants Helper,
+bool ARMInstructionSelector::selectCmp(const CmpConstants &Helper,
MachineInstrBuilder &MIB,
MachineRegisterInfo &MRI) const {
const InsertInfo I(MIB);
@@ -572,7 +572,7 @@ bool ARMInstructionSelector::selectCmp(CmpConstants Helper,
return true;
}
-bool ARMInstructionSelector::insertComparison(CmpConstants Helper, InsertInfo I,
+bool ARMInstructionSelector::insertComparison(const CmpConstants &Helper, InsertInfo I,
unsigned ResReg,
ARMCC::CondCodes Cond,
unsigned LHSReg, unsigned RHSReg,
diff --git a/llvm/lib/Target/ARM/MCTargetDesc/ARMMachObjectWriter.cpp b/llvm/lib/Target/ARM/MCTargetDesc/ARMMachObjectWriter.cpp
index 357654615e0024..e2586926cefe48 100644
--- a/llvm/lib/Target/ARM/MCTargetDesc/ARMMachObjectWriter.cpp
+++ b/llvm/lib/Target/ARM/MCTargetDesc/ARMMachObjectWriter.cpp
@@ -27,13 +27,15 @@ class ARMMachObjectWriter : public MCMachObjectTargetWriter {
void recordARMScatteredRelocation(MachObjectWriter *Writer,
const MCAssembler &Asm,
const MCFragment *Fragment,
- const MCFixup &Fixup, MCValue Target,
+ const MCFixup &Fixup,
+ const MCValue &Target,
unsigned Type, unsigned Log2Size,
uint64_t &FixedValue);
void recordARMScatteredHalfRelocation(MachObjectWriter *Writer,
const MCAssembler &Asm,
const MCFragment *Fragment,
- const MCFixup &Fixup, MCValue Target,
+ const MCFixup &Fixup,
+ const MCValue &Target,
uint64_t &FixedValue);
bool requiresExternRelocation(MachObjectWriter *Writer,
@@ -130,7 +132,7 @@ static bool getARMFixupKindMachOInfo(unsigned Kind, unsigned &RelocType,
void ARMMachObjectWriter::recordARMScatteredHalfRelocation(
MachObjectWriter *Writer, const MCAssembler &Asm,
- const MCFragment *Fragment, const MCFixup &Fixup, MCValue Target,
+ const MCFragment *Fragment, const MCFixup &Fixup, const MCValue &Target,
uint64_t &FixedValue) {
uint32_t FixupOffset = Asm.getFragmentOffset(*Fragment) + Fixup.getOffset();
@@ -240,7 +242,7 @@ void ARMMachObjectWriter::recordARMScatteredHalfRelocation(
void ARMMachObjectWriter::recordARMScatteredRelocation(
MachObjectWriter *Writer, const MCAssembler &Asm,
- const MCFragment *Fragment, const MCFixup &Fixup, MCValue Target,
+ const MCFragment *Fragment, const MCFixup &Fixup, const MCValue &Target,
unsigned Type, unsigned Log2Size, uint64_t &FixedValue) {
uint32_t FixupOffset = Asm.getFragmentOffset(*Fragment) + Fixup.getOffset();
diff --git a/llvm/lib/Target/ARM/Utils/ARMBaseInfo.h b/llvm/lib/Target/ARM/Utils/ARMBaseInfo.h
index dc4f811e075c60..0d895e600b1050 100644
--- a/llvm/lib/Target/ARM/Utils/ARMBaseInfo.h
+++ b/llvm/lib/Target/ARM/Utils/ARMBaseInfo.h
@@ -196,12 +196,12 @@ namespace ARMSysReg {
FeatureBitset FeaturesRequired;
// return true if FeaturesRequired are all present in ActiveFeatures
- bool hasRequiredFeatures(FeatureBitset ActiveFeatures) const {
+ bool hasRequiredFeatures(const FeatureBitset &ActiveFeatures) const {
return (FeaturesRequired & ActiveFeatures) == FeaturesRequired;
}
// returns true if TestFeatures are all present in FeaturesRequired
- bool isInRequiredFeatures(FeatureBitset TestFeatures) const {
+ bool isInRequiredFeatures(const FeatureBitset &TestFeatures) const {
return (FeaturesRequired & TestFeatures) == TestFeatures;
}
};
diff --git a/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.cpp b/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.cpp
index fbed25157a44e0..c392b13e1920d2 100644
--- a/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.cpp
+++ b/llvm/lib/Target/AVR/MCTargetDesc/AVRAsmBackend.cpp
@@ -32,7 +32,7 @@ namespace adjust {
using namespace llvm;
static void unsigned_width(unsigned Width, uint64_t Value,
- std::string Description, const MCFixup &Fixup,
+ const std::string &Description, const MCFixup &Fixup,
MCContext *Ctx) {
if (!isUIntN(Width, Value)) {
std::string Diagnostic = "out of range " + Description;
diff --git a/llvm/lib/Target/Hexagon/HexagonConstExtenders.cpp b/llvm/lib/Target/Hexagon/HexagonConstExtenders.cpp
index 86ce6b4e05ed27..3c95714ef78bae 100644
--- a/llvm/lib/Target/Hexagon/HexagonConstExtenders.cpp
+++ b/llvm/lib/Target/Hexagon/HexagonConstExtenders.cpp
@@ -202,7 +202,7 @@ namespace {
Pos = std::distance(B->begin(), It);
}
}
- bool operator<(Loc A) const {
+ bool operator<(const Loc &A) const {
if (Block != A.Block)
return Block->getNumber() < A.Block->getNumber();
if (A.Pos == -1)
diff --git a/llvm/lib/Target/Hexagon/HexagonISelDAGToDAGHVX.cpp b/llvm/lib/Target/Hexagon/HexagonISelDAGToDAGHVX.cpp
index db9aa7e18f5e7a..d5def5342d8de2 100644
--- a/llvm/lib/Target/Hexagon/HexagonISelDAGToDAGHVX.cpp
+++ b/llvm/lib/Target/Hexagon/HexagonISelDAGToDAGHVX.cpp
@@ -949,7 +949,7 @@ namespace llvm {
void selectRor(SDNode *N);
void selectVAlign(SDNode *N);
- static SmallVector<uint32_t, 8> getPerfectCompletions(ShuffleMask SM,
+ static SmallVector<uint32_t, 8> getPerfectCompletions(const ShuffleMask &SM,
unsigned Width);
static SmallVector<uint32_t, 8> completeToPerfect(
ArrayRef<uint32_t> Completions, unsigned Width);
@@ -966,22 +966,22 @@ namespace llvm {
None,
PackMux,
};
- OpRef concats(OpRef Va, OpRef Vb, ResultStack &Results);
+ OpRef concats(const OpRef &Va, const OpRef &Vb, ResultStack &Results);
OpRef funnels(OpRef Va, OpRef Vb, int Amount, ResultStack &Results);
OpRef packs(ShuffleMask SM, OpRef Va, OpRef Vb, ResultStack &Results,
MutableArrayRef<int> NewMask, unsigned Options = None);
- OpRef packp(ShuffleMask SM, OpRef Va, OpRef Vb, ResultStack &Results,
+ OpRef packp(const ShuffleMask &SM, const OpRef &Va, const OpRef &Vb, ResultStack &Results,
MutableArrayRef<int> NewMask);
- OpRef vmuxs(ArrayRef<uint8_t> Bytes, OpRef Va, OpRef Vb,
+ OpRef vmuxs(ArrayRef<uint8_t> Bytes, const OpRef &Va, const OpRef &Vb,
ResultStack &Results);
- OpRef vmuxp(ArrayRef<uint8_t> Bytes, OpRef Va, OpRef Vb,
+ OpRef vmuxp(ArrayRef<uint8_t> Bytes, const OpRef &Va, const OpRef &Vb,
ResultStack &Results);
- OpRef shuffs1(ShuffleMask SM, OpRef Va, ResultStack &Results);
- OpRef shuffs2(ShuffleMask SM, OpRef Va, OpRef Vb, ResultStack &Results);
- OpRef shuffp1(ShuffleMask SM, OpRef Va, ResultStack &Results);
- OpRef shuffp2(ShuffleMask SM, OpRef Va, OpRef Vb, ResultStack &Results);
+ OpRef shuffs1(ShuffleMask SM, const OpRef &Va, ResultStack &Results);
+ OpRef shuffs2(const ShuffleMask &SM, const OpRef &Va, const OpRef &Vb, ResultStack &Results);
+ OpRef shuffp1(const ShuffleMask &SM, const OpRef &Va, ResultStack &Results);
+ OpRef shuffp2(const ShuffleMask &SM, const OpRef &Va, const OpRef &Vb, ResultStack &Results);
OpRef butterfly(ShuffleMask SM, OpRef Va, ResultStack &Results);
OpRef contracting(ShuffleMask SM, OpRef Va, OpRef Vb, ResultStack &Results);
@@ -1048,7 +1048,7 @@ static bool isLowHalfOnly(ArrayRef<int> Mask) {
return llvm::all_of(Mask.drop_front(L / 2), [](int M) { return M < 0; });
}
-static SmallVector<unsigned, 4> getInputSegmentList(ShuffleMask SM,
+static SmallVector<unsigned, 4> getInputSegmentList(const ShuffleMask &SM,
unsigned SegLen) {
assert(isPowerOf2_32(SegLen));
SmallVector<unsigned, 4> SegList;
@@ -1068,7 +1068,7 @@ static SmallVector<unsigned, 4> getInputSegmentList(ShuffleMask SM,
return SegList;
}
-static SmallVector<unsigned, 4> getOutputSegmentMap(ShuffleMask SM,
+static SmallVector<unsigned, 4> getOutputSegmentMap(const ShuffleMask &SM,
unsigned SegLen) {
// Calculate the layout of the output segments in terms of the input
// segments.
@@ -1213,7 +1213,7 @@ void HvxSelector::materialize(const ResultStack &Results) {
DAG.RemoveDeadNodes();
}
-OpRef HvxSelector::concats(OpRef Lo, OpRef Hi, ResultStack &Results) {
+OpRef HvxSelector::concats(const OpRef &Lo, const OpRef &Hi, ResultStack &Results) {
DEBUG_WITH_TYPE("isel", {dbgs() << __func__ << '\n';});
const SDLoc &dl(Results.InpNode);
Results.push(TargetOpcode::REG_SEQUENCE, getPairVT(MVT::i8), {
@@ -1496,7 +1496,7 @@ OpRef HvxSelector::packs(ShuffleMask SM, OpRef Va, OpRef Vb,
// Va, Vb are vector pairs. If SM only uses two single vectors from Va/Vb,
// pack these vectors into a pair, and remap SM into NewMask to use the
// new pair instead.
-OpRef HvxSelector::packp(ShuffleMask SM, OpRef Va, OpRef Vb,
+OpRef HvxSelector::packp(const ShuffleMask &SM, const OpRef &Va, const OpRef &Vb,
ResultStack &Results, MutableArrayRef<int> NewMask) {
DEBUG_WITH_TYPE("isel", {dbgs() << __func__ << '\n';});
SmallVector<unsigned, 4> SegList = getInputSegmentList(SM.Mask, HwLen);
@@ -1533,7 +1533,7 @@ OpRef HvxSelector::packp(ShuffleMask SM, OpRef Va, OpRef Vb,
return concats(Out[0], Out[1], Results);
}
-OpRef HvxSelector::vmuxs(ArrayRef<uint8_t> Bytes, OpRef Va, OpRef Vb,
+OpRef HvxSelector::vmuxs(ArrayRef<uint8_t> Bytes, const OpRef &Va, const OpRef &Vb,
ResultStack &Results) {
DEBUG_WITH_TYPE("isel", {dbgs() << __func__ << '\n';});
MVT ByteTy = getSingleVT(MVT::i8);
@@ -1546,7 +1546,7 @@ OpRef HvxSelector::vmuxs(ArrayRef<uint8_t> Bytes, OpRef Va, OpRef Vb,
return OpRef::res(Results.top());
}
-OpRef HvxSelector::vmuxp(ArrayRef<uint8_t> Bytes, OpRef Va, OpRef Vb,
+OpRef HvxSelector::vmuxp(ArrayRef<uint8_t> Bytes, const OpRef &Va, const OpRef &Vb,
ResultStack &Results) {
DEBUG_WITH_TYPE("isel", {dbgs() << __func__ << '\n';});
size_t S = Bytes.size() / 2;
@@ -1555,7 +1555,7 @@ OpRef HvxSelector::vmuxp(ArrayRef<uint8_t> Bytes, OpRef Va, OpRef Vb,
return concats(L, H, Results);
}
-OpRef HvxSelector::shuffs1(ShuffleMask SM, OpRef Va, ResultStack &Results) {
+OpRef HvxSelector::shuffs1(ShuffleMask SM, const OpRef &Va, ResultStack &Results) {
DEBUG_WITH_TYPE("isel", {dbgs() << __func__ << '\n';});
unsigned VecLen = SM.Mask.size();
assert(HwLen == VecLen);
@@ -1598,7 +1598,7 @@ OpRef HvxSelector::shuffs1(ShuffleMask SM, OpRef Va, ResultStack &Results) {
return butterfly(SM, Va, Results);
}
-OpRef HvxSelector::shuffs2(ShuffleMask SM, OpRef Va, OpRef Vb,
+OpRef HvxSelector::shuffs2(const ShuffleMask &SM, const OpRef &Va, const OpRef &Vb,
...
[truncated]
@llvm/pr-subscribers-backend-aarch64 Author: Herman Semenoff (GermanAizek). Changes: Reference #125074; same patch summary as above (17 files affected, full version: https://github.com/llvm/llvm-project/pull/125083.diff).
@@ -1822,7 +1822,7 @@ void NVPTXAsmPrinter::printFPConstant(const ConstantFP *Fp, raw_ostream &O) {
   } else
     llvm_unreachable("unsupported fp type");

-  APInt API = APF.bitcastToAPInt();
+  const APInt &API = APF.bitcastToAPInt();
bitcastToAPInt returns a new APInt, not a reference. So this creates a reference to a temporary object, which works but seems like an unnecessary change.
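For context, here is a minimal standalone sketch of the point (Widget and makeWidget are hypothetical stand-ins, not LLVM types): binding the by-value return to a const reference compiles because the temporary's lifetime is extended to that of the reference, but it constructs exactly the same object a plain copy would.

```cpp
#include <iostream>

// Hypothetical stand-ins for APInt and APFloat::bitcastToAPInt(), which
// returns its result by value.
struct Widget {
  int Payload;
};

Widget makeWidget() { return Widget{42}; }

int main() {
  // Direct initialization from the returned prvalue; with guaranteed copy
  // elision (C++17) no extra copy is made.
  Widget A = makeWidget();

  // Also legal: the temporary's lifetime is extended to match the reference,
  // but the object being built is the same one as above, so nothing is saved.
  const Widget &B = makeWidget();

  std::cout << A.Payload << ' ' << B.Payload << '\n'; // prints "42 42"
  return 0;
}
```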
@@ -51,7 +51,7 @@ void NVPTXFloatMCExpr::printImpl(raw_ostream &OS, const MCAsmInfo *MAI) const {
     break;
   }

-  APInt API = APF.bitcastToAPInt();
+  const APInt &API = APF.bitcastToAPInt();
bitcastToAPInt returns a new APInt, not a reference.
const TargetRegisterClass &ContiguousClass,
const TargetRegisterClass &StridedClass,
TargetRegisterClass certainly should never be passed by value. The actual pointer value is significant.
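A standalone sketch of why that matters (RegClass is a hypothetical stand-in for TargetRegisterClass): when a type is identified by the address of a unique descriptor, passing it by value creates a copy with a different address and silently breaks identity checks, which is why a reference or pointer is the right parameter type here.

```cpp
#include <cassert>

// Hypothetical stand-in: one statically allocated descriptor per register
// class, compared by pointer identity.
struct RegClass {
  const char *Name;
};

static const RegClass GPR64 = {"GPR64"};

// Reference parameter: the caller's object is used, so its address is kept.
static bool isGPR64ByRef(const RegClass &RC) { return &RC == &GPR64; }

// Value parameter: RC is a copy with its own address, so the check fails.
static bool isGPR64ByValue(RegClass RC) { return &RC == &GPR64; }

int main() {
  assert(isGPR64ByRef(GPR64));    // holds: same object
  assert(!isGPR64ByValue(GPR64)); // the identity information is lost
  return 0;
}
```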
@@ -121,7 +121,7 @@ class TailFoldingOption {
     return Bits;
   }

-  void reportError(std::string Opt) {
+  void reportError(const std::string &Opt) {
This should probably be Twine or StringRef.
@@ -121,7 +121,7 @@ class TailFoldingOption {
     return Bits;
   }

-  void reportError(std::string Opt) {
+  void reportError(const std::string &Opt) {
     errs() << "invalid argument '" << Opt
This doesn't look like an OK error handling strategy, though.
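As a sketch of the suggested signature (written as a free function with a shortened message for brevity; StringRef, errs(), and raw_ostream are the real LLVM Support facilities), leaving aside the separate question of whether printing to errs() here is acceptable at all:

```cpp
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

// StringRef is a non-owning (pointer, length) view, so callers can pass a
// string literal, a std::string, or another StringRef with no allocation
// and no copy at the call site.
static void reportError(StringRef Opt) {
  errs() << "invalid argument '" << Opt << "' to -sve-tail-folding=\n";
}
```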
@@ -32,7 +32,7 @@ namespace adjust {
 using namespace llvm;

 static void unsigned_width(unsigned Width, uint64_t Value,
-                           std::string Description, const MCFixup &Fixup,
+                           const std::string &Description, const MCFixup &Fixup,
This should be using StringRef, and the code below should be using Twine instead of building the string itself.
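A sketch of that combination follows; the helper name, message text, and Width parameter are illustrative, while StringRef, Twine, and errs() are the actual LLVM facilities being suggested.

```cpp
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Twine.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

// A Twine records the pieces of the message lazily; no intermediate
// std::string is allocated before the sink consumes the whole expression.
static void reportOutOfRange(StringRef Description, unsigned Width) {
  errs() << "out of range " + Description + " (value does not fit in "
         << Width << " bits)\n";
}
```

The Twine is consumed within the same full expression that builds it, which is the intended usage pattern for the type.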
Reference: #125074